Are We Losing Our Minds to Artificial Intelligence?

Artificial Intelligence (AI) is everywhere. With the recent proliferation of Large Language Models (LLMs) in particular – like OpenAI’s GPT or Google’s Gemini – it seems like every app and service has integrated “AI” in the last few years. Maybe you’re someone who uses these AI tools weekly. Maybe you’re someone who uses AI in your workflows every day. Or maybe you are an LLM, reading this post to summarize it for a lazy human somewhere.

Regardless of which applies to you, the fact remains that improvements and changes in Artificial Intelligence cannot be ignored. Certain aspects of these developments are getting a lot of airtime. There is a growing cohort of AI researchers who spend most of their time worrying about “AI X-risk” (read: AI eXtinction risk).1 That is, the likelihood of humanity developing an Artificial Superintelligence that doesn’t like humans very much and kills us all (Terminator-style). That thought sometimes keeps me up at night.

But I want to write about a much less scary-sounding risk that comes with AI. I don’t have a catchy name for it like “x-risk”, so for now I’ve settled on “atrophy risk”. Specifically, we at Supernotes have been thinking a lot about what happens in the long term when you come to rely too heavily on AI (LLMs in particular) in your day-to-day workflows.
