Tech Leaders Warn of the Dangers of AI

A human hand touching a robotic hand.

Image credit: Shutterstock

In one of Elon Musk’s ever-quotable interviews, he mentioned something that has spurred quite a bit of debate online. Should we be afraid of the development of artificial intelligence, also known as AI?

Speaking at the MIT Aeronautics and Astronautics department’s Centennial Symposium, Musk warned that we should tread carefully when it comes to AI.

“I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish,” Musk stated. “With artificial intelligence we are summoning the demon.”

Yes, Elon Musk compared working in AI to summoning a demon.

But it’s not just Musk. Stephen Hawking and Bill Gates have issued similarly dire warnings. What’s interesting about all of these tech leaders, though, is that none of them actually works in AI; they’re reacting to the theoretical danger of AI without doing any of the practical work.

While there is a tendency to associate AI with sci-fi, real-world AI is nothing close to the sentient computers of the blockbusters. And while we might eventually reach that stage, it’s still a long way off.

Some are so spooked by the idea that they propose federal regulation on this type of technology. But we have to remember that such regulation can often have a chilling effect. Look at the effect that making marijuana a Schedule I drug had on testing its medical capabilities, for example. For a fledgling technology that isn’t anywhere close to being a real danger yet, putting undue restrictions on it could cause the entire industry to be stillborn.

Should we worry? Maybe. But let’s not panic about our space elevators until they’re funded, okay?


Breakthroughs in Understanding Social Hierarchies Lead to Advanced AI

A graphic that illustrates computer chips in a human brain.

Image credit: Shutterstock

Social hierarchies are important, especially in the workplace, where understanding the chain of command is crucial. Workers need to know who they can turn to for help, who they have to watch out for, and who they need to take orders from. This is a learning process that can take a while, but a study by researchers from University College London and DeepMind has found that it is a process that makes significant use of the brain’s prefrontal cortex.

Researchers had participants undergo an fMRI scan while they imagined themselves as employees at a fictional company. The participants then watched video interactions between “coworkers” to determine who “won” each interaction; whoever won was judged to have more power in the hierarchy. Participants also watched similar videos, but this time they were asked to imagine a friend as an employee there. The findings show that we’re better at understanding the hierarchies we belong to than those of others, which makes sense.
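The core inference task the participants faced, working out a pecking order from who beats whom, is easy to sketch in code. The following is a minimal illustration (not code from the study): given observed (winner, loser) pairs, it ranks people by how many others they dominate directly or transitively, assuming the observations are consistent (no cycles).

```python
from collections import defaultdict

def infer_hierarchy(interactions):
    """Rank people from a list of (winner, loser) pairs by counting
    how many others each person dominates, directly or transitively."""
    beats = defaultdict(set)  # winner -> set of people they beat
    people = set()
    for winner, loser in interactions:
        beats[winner].add(loser)
        people.update((winner, loser))

    def dominated(person, seen=None):
        # Collect everyone reachable by following "beats" edges.
        seen = set() if seen is None else seen
        for other in beats[person] - seen:
            seen.add(other)
            dominated(other, seen)
        return seen

    # Most dominant first.
    return sorted(people, key=lambda p: len(dominated(p)), reverse=True)

# Example: C beats B, B beats A  =>  hierarchy C > B > A
print(infer_hierarchy([("C", "B"), ("B", "A")]))  # ['C', 'B', 'A']
```

Note that the transitive step does the interesting work: even though C and A are never seen interacting, C still outranks A, which is the kind of inference the study’s participants had to make from watching pairwise videos.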

So what good is this research? Knowing what part of the brain is used in learning something we pick up more or less “by instinct” may not sound immediately useful, but it’s one piece of a long-term effort to develop better artificial intelligence. That, after all, is what DeepMind works on.

DeepMind is trying to develop AI that can be applied to “some of the world’s most intractable problems.” If you’ve ever seen a movie about a robot, you know how hard it is for them to understand humans. By having a better idea of how our brains process human interactions, we can develop AI systems that better understand human interactions. Along the way, perhaps future research in this area will help us to better understand how we interact and maybe get a head start on fixing those problems before the robots are ready to help.
