Moravec’s Paradox is the observation that high-level reasoning (e.g., chess, math) is relatively easy for computers to perform, while simple sensory tasks (e.g., perception, reflexes, mobility) are much harder.
This is essentially the thesis of Meghan O'Gieblyn’s excellent book “God, Human, Animal, Machine”: in an inversion of the centuries of Enlightenment thinking that humans have used to justify subjugating animals and the environment, it turns out that what might be our most human traits are the ones we actually share with animals.
We're about to cross the line Moravec described: robots are matching children's basic levels of perception and mobility. I don't see how this is an argument that computers won't replace us. In fact, this is an area where machines will clearly vastly outperform us in the next 10 years. Moravec’s Paradox is about to be eliminated.
Self-driving cars are challenging because some rare driving situations require advanced reasoning, which has nothing to do with Moravec's examples of basic perception or mobility. This does fall under "Tasks we find easy might not translate easily to AI," but it is interesting to consider why: self-driving cars require full situational understanding. A driver might have to understand what a traffic guard is saying, for instance.
None of these cases really explains why computers will complement rather than replace us. It just sets a high bar, say twice as high as what OpenAI is currently hitting. But is 2x really that high?
For the foreseeable future, we should start calling AI "Augmented Intelligence" and treat Artificial Intelligence, or AGI, as an aspirational goal. The current models will definitely beat us in several areas. However, when it comes to replacing us completely at work, they have a long way to go, even in fields like computer programming and writing. I believe that in the future a lot of good writers and poets will be creating training data for AI rather than writing books and poems for humans, since we are running out of really good data on the internet; the same will probably be true for the best programmers, who will be writing code so it can be used by the models. Synthetic data may not be good enough to keep enhancing models. These are lines of work no one thought we would ever need as recently as a year ago.
These models will augment humans, making us more productive and serving as a second mind to validate our ideas or identify risks and issues. But sometimes these models will hallucinate, which is why we should not outsource our thinking to them completely: use them as a tool, not as a replacement.
Excellent description of what we see out there in the world.