There are two problems, each based on a critical assumption.
The short-term problem: IF computers can do MOST of the work that MOST humans are capable of doing, and can do it better than MOST humans can, then what happens to human job seekers? How does society hold together?
The long-term problem: Computers keep getting more capable of intelligent behavior. It is likely (short of global disaster) that this trend will continue. IF so, then AT SOME point they will become more intelligent than any human. IF that happens, then how do we control something much smarter than we are?
The first problem is well explained by this video.
Humans Need Not Apply – YouTube
In the past, it took a lot of engineering and a lot of capital to produce a machine that could perform better than a human. Later, the capital costs went down, but it took a lot of programming.
But the new generation of systems doesn't need to be programmed or engineered by humans. These systems learn and can adapt.
In some cases the learning needs to be supervised–so there's still a person in the loop, supplying the correct outputs. But more and more it's unsupervised–show the system the raw data, with no labels at all, and IT figures out what to do.
Most humans are capable of learning things with supervision. Only a few can figure out how to do something that they have not been taught to do. So more and more human jobs are at risk.
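The distinction can be made concrete with a toy example. Below is a minimal sketch of the supervised case in plain Python; the task, data, and threshold rule are all hypothetical, invented purely for illustration. A person supplies inputs paired with the correct outputs, and the system works out the decision rule on its own:

```python
def train_threshold_classifier(examples):
    """Learn a single cutoff that best separates the labeled examples.

    examples: list of (value, label) pairs, where label is 0 or 1.
    Returns the threshold with the fewest misclassifications.
    """
    candidates = sorted(value for value, _ in examples)
    best_threshold, best_errors = None, len(examples) + 1
    for t in candidates:
        # Predict 1 for values at or above the candidate threshold.
        errors = sum(1 for value, label in examples
                     if (1 if value >= t else 0) != label)
        if errors < best_errors:
            best_threshold, best_errors = t, errors
    return best_threshold

# Supervised learning: the human provides inputs AND the right answers.
data = [(1, 0), (2, 0), (3, 0), (6, 1), (7, 1), (9, 1)]
threshold = train_threshold_classifier(data)
print(threshold)  # → 6: the rule the system inferred on its own
```

In the unsupervised case, by contrast, the system would be handed only the values–no labels–and would have to discover the grouping itself, for example by clustering.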
In the past, humans who lost their jobs to technology could move on to better jobs that humans could do and machines could not. But the set of jobs that machines can't do is rapidly shrinking.
The long-term problem is more daunting. It’s explained in this TED Talk:
Can we build AI without losing control over it? – TED
Now, this is often caricatured, as I have here, as a fear that armies of malicious robots will attack us. But that isn’t the most likely scenario. It’s not that our machines will become spontaneously malevolent. The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.
Just think about how we relate to ants. We don’t hate them. We don’t go out of our way to harm them. In fact, sometimes we take pains not to harm them. We step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, let’s say when constructing a building like this one, we annihilate them without a qualm. The concern is that we will one day build machines that, whether they’re conscious or not, could treat us with similar disregard.
So are these assumptions valid?
I would argue that the first problem (not enough jobs for job-seekers) is already here–but hidden.
And the second (computer systems smarter than humans) is inevitable, and close enough that we should be thinking seriously about it.