Why Human Intelligence Thrives Where Machines Fail

Submitted by
Style Pass
2024-11-14 18:30:15

I’m increasingly concerned about where AI-driven automation is heading. Every week, I get bombarded with job offers on LinkedIn, WhatsApp, and email. Some are from big-name companies; others from startups with “the perfect role” for me. Lately, it’s harder to tell if the offers are genuine. Are real people behind this? Or AI?

Much of today’s discussion simply assumes that AI is smart and getting smarter in a way that will either replace us or make us superhuman. The problem is, well, that’s not what’s happening. While we worry about (very real) issues with trust and bias, we’re ceding huge philosophical and cognitive space to systems that we, after all, built. It’s frankly stupid. That’s why I’m writing this: to clear it up.

The concept of fat tails—significant outlier events that fall well outside the normal distribution—should be at the center of our conversation about AI. Yes, you’ve likely heard “bell curve” objections to machine learning-based AI before. But grasping the idea of statistical averages isn’t enough.
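To make the contrast concrete, here is a minimal simulation (my own sketch, not from the article) comparing a normal distribution with a fat-tailed one. Student’s t with 3 degrees of freedom stands in for the fat-tailed case; the point is how much more often it produces events beyond four of its own standard deviations.

```python
import random, math

random.seed(42)
N = 100_000

def t3():
    # Student's t with 3 degrees of freedom: Z / sqrt(chi2_3 / 3).
    # A classic fat-tailed distribution with finite variance.
    z = random.gauss(0, 1)
    chi2 = sum(random.gauss(0, 1) ** 2 for _ in range(3))
    return z / math.sqrt(chi2 / 3)

normal = [random.gauss(0, 1) for _ in range(N)]
heavy = [t3() for _ in range(N)]

# How often does each distribution produce an event beyond
# four of its own (true) standard deviations?
sd_normal = 1.0
sd_heavy = math.sqrt(3)  # variance of t_3 is df / (df - 2) = 3

p_normal = sum(abs(x) > 4 * sd_normal for x in normal) / N
p_heavy = sum(abs(x) > 4 * sd_heavy for x in heavy) / N
print(p_normal, p_heavy)
```

Under the bell curve, a 4-sigma event is roughly a 1-in-16,000 occurrence; in the fat-tailed run it shows up orders of magnitude more often. That gap is exactly what averages-based thinking misses.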
