unmdplyr's blog


Human-computer interaction is very different from computer-computer interaction. For a human communicating with a computer, English-like commands in a text shell, or a graphical user interface built around the point-and-click model, work great. But once two or more computers have to interact with each other, text protocols may not be all that helpful. Yet text-based computer-to-computer protocols are extremely common, often seemingly ubiquitous, even in places where it's impossible for an average human to interact.

To be clear, historically computers had different byte widths, ranging anywhere from 1 to 48 bits per byte across vendors and models. Even text itself was encoded differently on different computers: BCD, the IBM code pages, the ISO character sets, and hundreds more. If you have a recent Linux or BSD machine, run iconv -l to list the text encodings it knows about. ASCII, and eventually UTF-8/16/32, settled the encoding problem only relatively recently, in the late 80s and early 90s. Text was never as portable as it's believed to be.
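As a quick illustration of how un-portable text can be, here is a small Python 3 sketch (my own example, not from the post) that encodes the same string under a few of the encodings iconv -l would list and compares the resulting bytes:

```python
# Encode one string under several encodings and compare the raw bytes.
text = "naïve résumé"

for encoding in ("ascii", "latin-1", "cp500", "utf-8", "utf-16-le"):
    try:
        data = text.encode(encoding)
        print(f"{encoding:10} {len(data):2} bytes  {data.hex()}")
    except UnicodeEncodeError as err:
        print(f"{encoding:10} cannot represent this text: {err.reason}")
```

The same dozen characters come out as byte sequences of different lengths and values, and plain ASCII cannot represent them at all.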

It was only in the early 90s that ISO/IEC 2382-1:1993 came along, suggesting powers of 2 for measuring data widths and fixing the byte at 8 bits. And nearly all CPUs in use today represent numbers in two's complement, making binary messages a simpler alternative to text formats.
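To make the contrast concrete, here is a hypothetical Python sketch (again my own illustration, not from the post) comparing the decimal-text encoding of an integer with a fixed-width two's-complement encoding:

```python
import struct

value = -123456

# Text protocol: variable-length decimal digits plus a sign, as ASCII bytes.
as_text = str(value).encode("ascii")

# Binary protocol: a fixed 4-byte two's-complement integer, little-endian.
as_binary = struct.pack("<i", value)

print(f"text:   {len(as_text)} bytes  {as_text.hex()}")     # 7 bytes, must be parsed
print(f"binary: {len(as_binary)} bytes  {as_binary.hex()}")  # always 4 bytes

# The binary form comes back with a fixed-width read, no digit parsing needed.
(decoded,) = struct.unpack("<i", as_binary)
assert decoded == value
```

The binary form is fixed-size and matches how the CPU already represents the number, while the text form varies in length and has to be parsed digit by digit on the receiving end.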
