Technically Incorrect offers a slightly twisted take on the tech that's taken over our lives.
Greece and Puerto Rico are teetering on the verge of disaster and perhaps that's symbolic of mankind as a whole.
We're quaint schizophrenics, on the one hand believing we're very clever, but on the other not being able to charge a phone for more than a day.
And now we're delighting in making machines that will soon be far smarter than we are.
Some can't wait for the day. Others fear that we'll be destroyed in the process.
Now another British professor has professed his deep fears. Dr. Stuart Armstrong spends all his days worried about the future. He is, indeed, part of Oxford University's Future of Humanity Institute.
And the thing he's worried about most is that he'll soon be shoved into a coffin and placed on a heroin drip.
No, I'm not revealing anything about his personal proclivities. It's just that, as the Telegraph reports, he was speaking at a debate in London organized by research company Gartner and he painted a picture of the future in shades of black.
He said that humans have always had the upper hand because they'd always been smarter. However: "When machines become smarter than humans, we'll be handing them the steering wheel."
The question is how they will decide to drive. What principles of thought will dominate their decision-making?
Armstrong fears, for example, that being told to keep humans "safe and happy," the robots might "entomb everyone in concrete coffins on heroin drips."
Yes, we'd be permanently trapped in the 1960s. (At least the music would be good.)
His fear revolves around the concept of Artificial General Intelligence. This is when robots aren't merely task-specific in their actions, but are given a more general power over life.
They might think like super-nerds, rather than humans with, say, a heart. For example, being asked to "prevent human suffering" they might see the optimal decision as putting everyone out of their misery.
Yes, literally. By killing everyone.
The way Armstrong defines the problem is quite similar to the way some humans are already confused by the decision-making of the techie crowd.
He said: "You can give AI controls, and it will be under the controls it was given. But these may not be the controls that were meant."
We often know the meaning of things without articulating them. But in communicating with machines, we will have to find the best ways we have of articulating optimal behavior. Those articulations may not themselves be optimal.
Armstrong fears that these changes are coming rather more quickly than people realize. This, for me at least, makes the idea of disappearing to a remote island increasingly attractive.
Perhaps one shouldn't worry. Perhaps we'll run out of natural resources to power the robots before the robots have the resources to have complete power over us.