The rise of smart technology and the future of humanity

There are two problems, each based on a critical assumption.

The short-term problem: IF computers can do MOST of the work that MOST humans are capable of doing, and can do it better than MOST humans can, then what happens to human job seekers? How does society hold together?

The long-term problem: Computers keep getting more capable of intelligent behavior. It is likely (short of global disaster) that this trend will continue. IF so, then AT SOME point they will become more intelligent than any human. IF that happens, then how do we control something much smarter than we are?

The first problem is well explained by this video.

Humans Need Not Apply – YouTube

In the past, it took a lot of engineering and a lot of capital to produce a machine that could perform better than a human. Later, the capital costs went down, but it took a lot of programming.

But the new generation of systems doesn't need to be programmed or engineered by humans. It learns, and it can adapt.

In some cases a person still has to stay in the loop, labeling examples and correcting mistakes. But more and more, the human drops out: show the system examples of inputs and the outputs you want, and IT figures out the mapping on its own.
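A toy sketch of that idea (my illustration, not from the post or the video): the program below is never told the rule relating inputs to outputs. It is only shown example pairs, and it recovers the rule by repeatedly nudging two numbers to shrink its prediction error.

```python
# Toy "learning from examples": fit y = w*x + b by gradient descent.
# The true rule generating the examples is y = 2x + 1, but the code
# never sees that rule -- only the (input, output) pairs.

examples = [(x, 2 * x + 1) for x in range(-5, 6)]  # (input, desired output)

w, b = 0.0, 0.0        # the system starts knowing nothing
lr = 0.01              # learning rate: how big each nudge is

for _ in range(2000):              # many passes over the examples
    for x, y in examples:
        err = (w * x + b) - y      # how wrong is the current guess?
        w -= lr * err * x          # nudge w to reduce squared error
        b -= lr * err              # nudge b to reduce squared error

# After training, the recovered rule is very close to y = 2x + 1:
print(round(w, 2), round(b, 2))    # ≈ 2.0 and 1.0
```

Nothing about "multiply by 2, add 1" was ever written by a human here; the structure was recovered from data alone. Real systems do this with millions of parameters instead of two, which is what makes them so general.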

Most humans are capable of learning things with supervision. Only a few can figure out how to do something they have not been taught to do. So more and more human jobs are at risk.

In the past, humans who lost their jobs to technology could move into better jobs for which humans were qualified and machines were not capable. But the set of jobs that machines can't do is rapidly shrinking.

The long-term problem is more daunting. It’s explained in this TED Talk:

Can we build AI without losing control over it? – TED

Now, this is often caricatured, as I have here, as a fear that armies of malicious robots will attack us. But that isn’t the most likely scenario. It’s not that our machines will become spontaneously malevolent. The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.

Just think about how we relate to ants. We don’t hate them. We don’t go out of our way to harm them. In fact, sometimes we take pains not to harm them. We step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, let’s say when constructing a building like this one, we annihilate them without a qualm. The concern is that we will one day build machines that, whether they’re conscious or not, could treat us with similar disregard.

So are these assumptions valid?

I would argue that the first problem (not enough jobs for job-seekers) is already here–but hidden.

And the second (computer systems smarter than humans) is inevitable, and close enough that we should be thinking seriously about it.

One thought on “The rise of smart technology and the future of humanity”

  1. Last week, when we discussed smart technology, I heard concerns that machines might put many people out of work, and also that there is a risk of machines getting so smart that they become our masters. I was not much worried about those challenges, but I didn’t articulate why very well, so I’ll give it another try.

    In the march toward increasingly sophisticated technology there are always two major stories: the technology itself, and sustaining the flow of resources to support it. In any big technological advance, putting together the funds and the talent over long stretches of time is often harder than the technical work itself. In every major advance–transcontinental railroads, the Brooklyn Bridge, radio networks, automobiles and highways, going to the moon just 66 years after the Wright Brothers’ first flight–getting hold of capital was at least as challenging as the technical aspects of the job. In fact, in the summer of 1969, when humans first walked on the moon, most people would probably have been skeptical of anyone predicting that 38 years later not only would humans not have set foot anywhere else off this planet, but we wouldn’t even be going to the moon any more. It wasn’t lack of technology that kept our horizons so low!

    In the case of smart technology, if lots of people are displaced and the situation appears to be worsening, who’s going to keep coming up with funds to put more of their own customers out of a job–or worse, to give themselves a non-human boss? Of course, if the motivation is defense, as in the Manhattan Project, then government can put up the money. But wouldn’t that mean we’d be taking soldiers and sailors out of harm’s way? That doesn’t sound like a societal threat much worse than the ones we’ve already grappled with!
