Tech Leaders Warn of the Dangers of AI

A human hand touching a robotic hand.

Image credit: Shutterstock

In one of Elon Musk’s ever-quotable interviews, he mentioned something that has spurred quite a bit of debate online. Should we be afraid of the development of artificial intelligence, also known as AI?

Speaking at the MIT Aeronautics and Astronautics department’s Centennial Symposium, Musk warned that we should tread carefully when it comes to AI.

“Increasingly scientists think there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish,” Musk stated. “With artificial intelligence we are summoning the demon.”

Yes, Elon Musk compared working in AI to summoning a demon.

But it’s not just Musk. Stephen Hawking and Bill Gates have issued similarly dire warnings. What’s interesting about all of these tech leaders, though, is that none of them actually works in AI; they’re reacting to the theoretical danger of AI without doing any of the practical work themselves.

While there is a tendency to associate AI with science fiction, real-world AI is nothing close to the sentient computers we see in blockbuster movies. We might eventually reach that stage, but it’s still a long way off.

Some are so spooked by the idea that they propose federal regulation of this type of technology. But we have to remember that such regulation can have a chilling effect; look at what classifying marijuana as a Schedule I drug did to research into its medical uses. Putting undue restrictions on a fledgling technology that isn’t anywhere close to being a real danger yet could cause the entire industry to be stillborn.

Should we worry? Maybe. But let’s not panic about our space elevators until they’re funded, okay?

Breakthroughs in Understanding Social Hierarchies Lead to Advanced AI

A graphic that illustrates computer chips in a human brain.

Image credit: Shutterstock

Social hierarchies are important, especially in the workplace, where understanding the chain of command is crucial. Workers need to know who they can turn to for help, who they have to watch out for, and who they need to take orders from. Learning all of this can take a while, and a study by researchers from University College London and DeepMind has found that the process makes significant use of the brain’s prefrontal cortex.

Researchers had participants undergo fMRI scans while imagining themselves as employees at a fictional company. The participants then watched video interactions between “coworkers” and judged who “won” each exchange; whoever won was deemed to have more power in the hierarchy. Participants also watched similar videos, but this time they were asked to imagine a friend as an employee at the company. The findings show that we’re better at understanding the hierarchies we belong to than those of others, which makes sense.

So what good is this research? Knowing which part of the brain we use to learn something we pick up more or less “by instinct” may not sound immediately useful, but it feeds into a long-term project to develop better artificial intelligence, which is exactly what DeepMind works on.

DeepMind is trying to develop AI that can be applied to “some of the world’s most intractable problems.” If you’ve ever seen a movie about a robot, you know how hard it is for machines to understand humans. With a better picture of how our brains process social interactions, we can build AI systems that understand those interactions too. Along the way, perhaps future research in this area will help us better understand how we interact with one another, and give us a head start on fixing our own problems before the robots are ready to help.

Americans Are Apprehensive About “Enhancing” Human Abilities

A computer generated image of an x-ray of a human head. Inside the head is a computer chip that is transmitting waves of information. There is a galaxy in the background.

Image credit: Shutterstock

For years, there’s been speculation about scientists being able to enhance human abilities through advanced technology. What was once dismissed as pure science fiction may become a reality sooner than we think.

For fans of science fiction, the idea of humans artificially enhancing their abilities, whether by implanting computer chips or by genetically modifying embryos to protect against diseases and disorders, is a familiar one. Some of those technologies are likely to arrive within the next few decades, but with them come some serious concerns.

According to a recent survey conducted by the Pew Research Center, while many Americans believe we’ll be able to transplant artificial organs, cure most cancers, or implant computer chips into our bodies within the next fifty years, they aren’t quite sold on whether we should actually do those things. The survey asked people how likely they would be to have a computer chip implanted in their brain, receive a synthetic blood transfusion, or edit their babies’ genes. About a third of respondents said they would consider the brain chip or the synthetic blood transfusion, while roughly half said they would consider editing their babies’ genes.

Among the findings, the Pew Research Center concluded that Americans with strong religious identities were less likely to want such procedures and more likely to think they were a bad idea in general, saying that they cross a line by interfering with nature. People were more likely to consider an enhancement if it were controllable or reversible, or if it would bring about a kind of health equality. They were less comfortable with synthetic blood making people faster or stronger than they naturally are, or with computer chips boosting cognitive abilities beyond the norm.

The survey tells us that, overall, Americans are confident science will keep expanding human capabilities, but our fears about such procedures might outweigh the perceived benefits. Curing cancer seems like an easy sell; implanting computer chips into our brains looks like a much harder one.