Making AI Better Than Us – What Could Possibly Go Wrong? - National Academy of Public Administration


It was not until the end of 2022 that ChatGPT entered the public domain, meaning we all had to learn what generative AI was about. Its overnight success was remarkable: it reached a million users in just five days. The model behind ChatGPT version 3.5 had roughly 175 billion parameters, and version 4 was reportedly trained on roughly ten trillion words. However one looks at such incomprehensible numbers, they hint at the sheer volume of information humans have produced. Structured and unstructured data alike are processed and stored in what may be likened to a giant data vacuum cleaner, sucking up everything it can.

Generative AI continues to learn from us and our collected data every moment of every day. It learns our languages, our biases (conscious and unconscious), our nuances, our way of writing, our way of speaking, and everything there is to know about humans. But knowing and understanding are not necessarily the same.

As interest in generative AI and all its offerings increases, so does concern over bias and ethics, not to mention false and misleading information. Without proper filters, it is no wonder AI sometimes spits out some very scary stuff. Nor should it surprise us that much of AI's output is a true reflection of us, with all our faults and biases. Sometimes AI makes things worse by "speaking" with authority it has not earned.
