Moravec’s Paradox is the observation that high-level reasoning (e.g., chess, math) is relatively easy for computers to perform, while basic sensorimotor skills (e.g., perception, reflexes, mobility) are much harder.
As Moravec noted, this result is the opposite of what most people expected:
“It is comparatively easy to make computers exhibit adult-level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.” – Hans Moravec
He hypothesized that this is because high-level reasoning is a relatively recent development in human evolution, built on top of much older, low-level sensory and motor skills.
You can see Moravec’s Paradox everywhere today – from chess-playing AIs (easy) to self-driving cars (hard). It has implications for what we should expect from AI: tasks we find easy might not translate easily to machines (I think this is true even within reasoning tasks).
It’s another argument that AI won’t replace humans. Instead, it might complement us.
This is essentially the thesis of Meghan O'Gieblyn’s excellent book “God, Human, Animal, Machine”: in an inversion of centuries of Enlightenment thinking that humans have used to justify subjugating animals and the environment, it turns out that what might be our most human traits are those we actually share with animals.
We're about to cross the line Moravec described: robots are matching children's basic levels of perception and mobility. I don't see how this is an argument that computers won't replace us. In fact, this is an area where machines will vastly outperform us in the next 10 years. Moravec’s Paradox is about to be eliminated.
Self-driving cars are challenging because some rare driving situations require advanced reasoning, which has little to do with Moravec's example of basic perception and mobility. This does fall under "tasks we find easy might not translate easily to AI," but it's interesting to consider why: self-driving cars require full situational understanding. A driver might have to understand what a crossing guard is saying, for instance.
None of these cases really explains why computers will complement rather than replace us. They just set a higher bar, perhaps twice as high as what OpenAI is currently hitting. But is 2x really that high?