Against intelligence

Submitted by Style Pass, 2021-06-08 13:00:04

Two years ago I wrote some pragmatic arguments that “human-like AI” is hard to develop and would be fairly useless. My focus was on the difficulty of defining a metric for evaluation and the cost-effectiveness of human brains.

But I think I failed to stress another fundamental point: “intelligence” as commonly conceived may not be critical to acquiring knowledge about, or power over, the reality external to our own bodies.

If I’m allowed to psychoanalyse just a tiny bit: the kind of people who think a lot about “AI” tend to overvalue conceptual intelligence, because it’s the strongest part of their own thinking.

A rough definition of conceptual thinking is: the kind of thinking that can easily be put into symbols, such as words, code or math. I’m making this distinction because there are many networks in the brain that accomplish unimaginably complex (intelligent) tasks which are very nonconceptual. Most people who think about “AI” don’t view these functions as worth imitating (rightfully so, in most cases).

There’s a ton of processing dedicated to going from “photochemical activation of retinal neurons” to “the conscious experience of seeing”. But we use digital cameras to record the world for computer vision, so we don’t have to imitate most of the brain processes involved in sight.
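To make that concrete, here is a minimal sketch (my own illustration, not from the original post, using NumPy) of what “starting from camera pixels” means in practice: a computer-vision pipeline begins with a ready-made grid of brightness values and can go straight to work on it, with no analogue of retinal phototransduction.

```python
import numpy as np

# A digital camera hands computer vision a grid of brightness values directly,
# so no model of retinal processing is needed. Here we fabricate a tiny 8x8
# "image" with a vertical edge and detect it with a simple gradient filter.
image = np.zeros((8, 8), dtype=float)
image[:, 4:] = 1.0  # right half bright, left half dark

# Horizontal gradient: absolute difference between neighbouring columns.
gradient = np.abs(np.diff(image, axis=1))

# The edge shows up as a column of large gradient values.
edge_columns = np.where(gradient.max(axis=0) > 0.5)[0]
print(edge_columns)  # → [3]: the jump sits between columns 3 and 4
```

The point of the sketch is how little stands between the sensor output and a useful result: one array operation already localises the edge, whereas the biological route runs through layers of nonconceptual neural processing before anything like “seeing an edge” occurs.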
