Beyond Labels

A 360° Discussion of Foreign, National and Local Policy Issues

The rise of smart technology and the future of humanity

There are two problems, each based on a critical assumption.

The short-term problem: IF computers can do MOST of the work that MOST humans are capable of doing, and can do it better than MOST humans can, then what happens to human job seekers? How does society hold together?

The long-term problem: Computers keep getting more capable of intelligent behavior, and it is likely (short of global disaster) that this trend will continue. IF so, then at some point they will become more intelligent than any human. IF that happens, then how do we control something much smarter than we are?

The first problem is well explained by this video.

Humans Need Not Apply – YouTube

In the past, it took a lot of engineering and a lot of capital to produce a machine that could perform better than a human. Later, the capital costs went down, but it took a lot of programming.

But the new generation of systems needs far less hand-programming and engineering. These systems learn from data and can adapt.

In some cases the learning needs to be supervised–a person stays in the loop, labeling example inputs with the correct outputs, and the system figures out the mapping on its own. Increasingly, though, systems can also learn unsupervised–given only the raw inputs, they find the structure themselves.

Most humans are capable of learning things with supervision. Only a few can figure out how to do something they have not been taught to do. So more and more human jobs are at risk.
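The "show it the inputs and outputs and it figures out what to do" idea can be made concrete with a toy sketch (my own illustration, not from the video): given labeled example pairs, a few lines of code can learn a numeric rule by least squares, with no human ever writing the rule itself.

```python
# Toy supervised learning: fit the single parameter w in the rule
# y = w * x from (input, output) examples. The "supervision" is just
# the labeled pairs; no human codes the rule directly.
# (Hypothetical data, chosen to roughly follow y = 2x.)

def learn_rule(examples):
    """One-parameter least-squares fit: returns w minimizing sum((y - w*x)^2)."""
    num = sum(x * y for x, y in examples)
    den = sum(x * x for x, _ in examples)
    return num / den

# Training examples: inputs paired with (slightly noisy) outputs.
examples = [(1, 2.0), (2, 4.1), (3, 5.9), (4, 8.0)]
w = learn_rule(examples)  # close to 2.0

# The learned rule then generalizes to inputs it has never seen.
predict = lambda x: w * x
```

Real systems fit millions of parameters rather than one, but the principle is the same: the program is induced from examples, not written by hand.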

In the past, humans who lost their jobs to technology were able to find better jobs for which humans were qualified and machines were not capable. But the set of jobs that machines can't do is rapidly shrinking.

The long-term problem is more daunting. It’s explained in this TED Talk:

Can we build AI without losing control over it?

Now, this is often caricatured, as I have here, as a fear that armies of malicious robots will attack us. But that isn’t the most likely scenario. It’s not that our machines will become spontaneously malevolent. The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.

Just think about how we relate to ants. We don’t hate them. We don’t go out of our way to harm them. In fact, sometimes we take pains not to harm them. We step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, let’s say when constructing a building like this one, we annihilate them without a qualm. The concern is that we will one day build machines that, whether they’re conscious or not, could treat us with similar disregard.

So are these assumptions valid?

I would argue that the first problem (not enough jobs for job-seekers) is already here–but hidden.

And the second (computer systems smarter than humans) is inevitable, and close enough that we should be thinking seriously about it.

Fiscal Status of US Social Programs—The Long Version

During last week’s session, we spent a bit of time on the above topic, but agreed that we weren’t in a position to discuss this complex topic “off the cuff,” without the benefit of some background reading.

I offered to provide some evidence of why I’m concerned about the fiscal status of the various social programs, and some backup for my statement that, without changes, those programs are not sustainable—in the sense that young workers contributing today cannot reasonably expect to receive benefits comparable to what today’s eligible recipients get.

Here is some reading for the holidays (remember, the Library is closed for the next two Mondays–and we have another topic already selected for our January 9, 2017 meeting).

For Tomorrow (Dec. 12)

As Mike noted, we plan to continue our discussion about the value of art tomorrow, at least for a while, in the hope that Sarah and Marion will be able to attend.

After that, we plan to discuss two “blog” articles (and I’m adding an op-ed piece I read yesterday that reinforces one of the likely discussion tangents):

Start by reading “Why is the ‘Decimation of Public Schools’ a Bad Thing?,” which provides (at least in my reading) a pretty cogent explication of how important being specific in political discussion can be—rather than sound-bite slogans, which frequently don’t advance the dialog (or change anyone’s mind) at all. But the main subject of the article is expressing skepticism about “school choice” in the Trump-DeVos era. It’s not very long and an easy read.

Then read “Contra Robinson on Schooling” by Scott Alexander, Mike’s friend (and I like his writing as well). As usual, he takes a relatively deep dive. What I like about his written arguments is that 1) they’re pretty cogent and 2) they’re well “sourced” with links, so you can click through to examine the basis for many of his statements.

If you have lots of time, you might want to read the comments to his “Contra” blog article. Fair warning: there are a lot of them. If you don’t have that much time, consider his “Highlights from the Comment Thread on School Choice” article. It singles out the comments he thinks are worthy of note and, in some cases, a bit of debate.

Lastly, in the spirit of the “Decimation” description of the Liberal-Conservative language divide and our recent discussion on Identity Politics, you might be interested in Nicholas Kristof’s op-ed on “Echo Chambers on Campus.” It’s similar to the piece we discussed two weeks ago in the sense that it seems like a thoughtful self-critique of liberal behavior/platform/rhetoric/you name it. I don’t often agree with him, but I respect his views. And I do agree with many of the observations he makes in this piece. Good fodder for discussion.

See you tomorrow!
