
Narrow AI vs. General AI

by @markkujantunen

Introduction

AI has advanced remarkably over the past 15-20 years. At the beginning of this century, self-driving cars were pure science fiction; today they exist, although no jurisdiction yet permits them to drive without human supervision. Speech recognition has become so reliable that its practical applications are everywhere, and AI is used to optimize processes across many domains. The best chess programs beat the human world champion as early as 1997, and in 2016 it became clear that humans would never again match machines at go. At countless tasks, AI already outperforms every human. Should we conclude that AI will soon outdo humans in every possible way?

Narrow AI driven by big data and processing power

It turns out this is unlikely to be the case. Machine learning can produce astonishingly good results on narrowly defined problems. Consider the go-playing program. In go, both the goal and the problem domain are well defined; the difficulty is the combinatorial explosion arising from the hundreds of legal moves the rules allow at each turn. This problem is amenable to self-play: modern supercomputers can generate massive amounts of game data, from which clever machine learning algorithms extract the relevant winning-probability distributions, allowing well-trained multi-layered convolutional neural networks to store efficient representations of them. Winning a game of go is a problem where raw number-crunching power can ultimately be used to create and run a program far beyond human capability. But compared to biological neural networks, AlphaZero is horrendously energy-inefficient, particularly during training.
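To get a feel for the combinatorial explosion, here is a back-of-envelope sketch. The figures are rough assumptions, not measurements: an average of about 250 legal moves per turn and a game length of about 150 moves are commonly quoted ballpark numbers for go.

```python
import math

branching = 250  # assumed average number of legal moves per turn
depth = 150      # assumed typical game length in moves

# The number of distinct move sequences grows as branching ** depth.
# Work in log10 to avoid constructing an astronomically large integer.
log10_sequences = depth * math.log10(branching)
print(f"roughly 10^{log10_sequences:.0f} possible move sequences")
```

This lands around 10^360 sequences, dwarfing any conceivable brute-force search, which is why self-play plus learned evaluation is used instead of exhaustive enumeration.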

Building general AI requires a theory of intelligence

It is often said that human imagination is what separates us from machines. This is not the case. Narrow AIs are perfectly capable not only of modeling problem domains but of generating outputs based on their models. For example, it is entirely possible to train an AI on a musical genre and have it compose new music in that genre. Most human creativity isn't any more advanced than that.

But what separates general AI from narrow AI is that the former is an integrated whole capable of independent goal setting. We are nowhere near creating an AI system with such capabilities. It took billions of years for human-level intelligence to arise in nature through evolution. For engineers and scientists to build the first general AI, they would have to understand the architecture of the human brain, and how it gives rise to intelligence, in much greater detail than they do now. Even a single neuron is computationally heavy to simulate. In 2018, the largest neuromorphic supercomputer in the world, the Spiking Neural Network Architecture (SpiNNaker) at the University of Manchester in the United Kingdom, was built to simulate brain tissue. It has a million cores on 1,200 circuit boards, and each core can simulate approximately a million synapses. The human brain has about 100 billion neurons, and the average neuron has about 1,000 synapses. The supercomputer is used for studying disease models; it is simply many orders of magnitude too small to even begin to model the brain as a whole in real time.
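The figures quoted above can be turned into a quick sanity check. This is a sketch using only the numbers in the paragraph; it counts raw synapse capacity and deliberately ignores the much larger cost of simulating neuron dynamics in real time.

```python
# SpiNNaker capacity, per the figures quoted in the text.
cores = 1_000_000
synapses_per_core = 1_000_000
machine_synapses = cores * synapses_per_core  # ~1e12

# Human brain, per the same figures.
neurons = 100_000_000_000   # ~1e11
synapses_per_neuron = 1_000
brain_synapses = neurons * synapses_per_neuron  # ~1e14

shortfall = brain_synapses / machine_synapses
print(f"the brain has ~{shortfall:.0f}x more synapses than the machine can simulate")
```

Even on this generous count the machine falls short by a factor of about a hundred, and the gap widens further once the cost of faithfully simulating each neuron's dynamics is included.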

Conclusion

My guess is that creating the first general AI system will require multiple paradigm shifts in computational substrate, from integrated circuits of transistors patterned onto flat silicon wafers to something much more powerful. That would enable the building of computers powerful enough to simulate systems as complex as minds. AlphaZero and the systems that drive cars are nothing like minds. At best, they are like tapeworms whose simple brains have been repurposed to play a parlor game or to keep a vehicle on the road and away from obstacles. Building a mind through an evolutionary approach seems unlikely to succeed, which means a comprehensive theory of intelligence is required.

While the potential of narrow AI is hardly exhausted, general AI may prove an obstacle, at least temporarily. At some point the world will run out of smart enough people to fill all the tech and scientific positions if tech is to grow by an order of magnitude as a percentage of the economy.

Posted Using LeoFinance Beta