I want terminal keypresses to Just Work. What do I mean by Just Work? I mean that programs can, in a simple orthogonal way, determine the key that was pressed, and that this model easily maps to the user's expectations.
Why doesn't it currently? Currently, there are several classes of physically-distinct keypresses that terminals encode using the same bytes:

* Special names for some ASCII characters collide with Ctrl-modified letters; for example, Tab is indistinguishable from Ctrl-I.
* The Ctrl modifier encodes lower- and upper-case letters identically; Ctrl-i is indistinguishable from Ctrl-Shift-I.
* UTF-8 collides with characters whose high bit is set; e.g. Alt-C might be encoded as 0xc3, which is also the first byte of the UTF-8 encoding of many Unicode characters, such as é.
* Some programs rely on timing information to distinguish, for example, Alt-C from the two separate keypresses Escape then C. If the time limit is too long, this delays recognising a plain Escape key; if it is too short, or the user types quickly, Escape followed by a letter is erroneously recognised as an Alt-modified letter.

Even when keys produce unique, unambiguous byte sequences, some programs cannot recognise them. The way terminals encode modified keypresses is an extension to the original methods used in the 1970s, and some older code cannot parse the extended forms. Press Ctrl-Up on a modern xterm and watch programs fail and die in all sorts of interesting ways, because they don't have real CSI parsers.
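To make these collisions concrete, here is a small Python sketch of the actual byte sequences involved. The encodings themselves are the conventional terminal ones; the helper name `ctrl` and the variable names are mine:

```python
def ctrl(ch):
    """Encode Ctrl-<letter> the way a traditional terminal does:
    keep only the low five bits of the character code."""
    return bytes([ord(ch) & 0x1F])

# Tab and Ctrl-I are the same byte (0x09):
assert ctrl('i') == b'\t'

# Masking to five bits also strips the case bit,
# so Ctrl-i and Ctrl-Shift-I collide too:
assert ctrl('i') == ctrl('I')

# On a "meta sends 8-bit" terminal, Alt-C sets the high bit of 'C':
alt_c = bytes([ord('C') | 0x80])            # 0xc3
# ...which is also the first byte of UTF-8 for 'é' (0xc3 0xa9):
assert 'é'.encode('utf-8')[0:1] == alt_c

# On a "meta sends escape" terminal, Alt-C is ESC followed by 'C' --
# indistinguishable, except by timing, from pressing Escape then C:
alt_c_esc = b'\x1bC'

# Modified special keys use longer CSI sequences; e.g. on xterm,
# plain Up is CSI A, while Ctrl-Up adds a modifier parameter:
up      = b'\x1b[A'
ctrl_up = b'\x1b[1;5A'    # 5 = 1 + 4, where 4 is the Ctrl bit
```

A program without a real CSI parser sees `b'\x1b[1;5A'` and, expecting only the short 1970s-era forms, chokes partway through the sequence.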
How do we fix it? By having a sane and sensible model on BOTH ends of the terminal interaction, and a well-defined way of communicating between them. How exactly we go about this really depends on who you are: