Google Announces Plans To Open Artificial Intelligence Lab in China

Google’s Beijing office.
Photo credit: testing / Shutterstock

Google’s business has been largely absent from China since 2010, when the company pulled many of its core services out of the country. Now, though, Google is returning to China with the announcement of a new center devoted to artificial intelligence. It is a small gesture but a symbolically strong one, marking the tech giant’s return to the world’s most populous nation.

Google largely pulled out of China seven years ago, citing government controls and surveillance initiatives that ran counter to its guiding philosophy of a free and open internet. Since then, however, China has resurged as one of the world’s leading tech powers. That is why, as The New York Times reported, Google is assembling a team of experts in Beijing for further research and development of AI. The office will be led by Fei-Fei Li, who currently runs the AI lab at Stanford, alongside Jia Li, the head of AI research and development for Google Cloud.

“We have 600-plus employees in China, and we had a similar number in 2010,” Google spokesman Taj Meadows told the Times. “Roughly half of them are engineers working on global products. Work on A.I. will be in a similar vein.”

Google is not the only tech company to capitalize on the progress China has made in recent years. Microsoft and IBM are also hard at work hiring Chinese staff members. This movement coincides with efforts by the Chinese government to upgrade the country’s tech infrastructure and move away from foreign-made hardware and software.

The relationship between Google and China continues to evolve. In 2010, the company said that it could no longer tolerate China’s stance on censorship, as well as the government’s alleged hacking of some human rights activists’ Gmail accounts. Google never left China entirely, though, and it now looks poised to rebuild its presence among the world’s largest population of internet users.

Tech Leaders Warn of the Dangers of AI

A human hand touching a robotic hand.

Image credit: Shutterstock

In one of Elon Musk’s ever-quotable interviews, he mentioned something that has spurred quite a bit of debate online. Should we be afraid of the development of artificial intelligence, also known as AI?

Speaking at the MIT Aeronautics and Astronautics department’s Centennial Symposium, Musk warned that we should tread carefully when it comes to AI.

“Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish,” Musk stated. “With artificial intelligence we are summoning the demon.”

Yes, Elon Musk compared working in AI to summoning a demon.

But it’s not just Musk. Stephen Hawking and Bill Gates have also issued dire warnings on the topic. What’s interesting about all of these tech leaders, though, is that none of them is actually doing work in AI; they’re reacting to the theoretical danger of AI without doing any of the practical work.

While there is a tendency to associate AI with sci-fi movies, real-world AI is nothing close to the sentient computers shown in blockbusters. And while there’s a possibility that we might eventually reach that stage, it’s still a long way off.

Some are so spooked by the idea that they propose federal regulation of this type of technology. But we have to remember that such regulation can often have a chilling effect. Look at the effect that making marijuana a Schedule I drug had on testing its medical capabilities, for example. For a fledgling technology that isn’t anywhere close to being a real danger yet, undue restrictions could cause the entire industry to be stillborn.

Should we worry? Maybe. But let’s not panic about our space elevators until they’re funded, okay?

Breakthroughs in Understanding Social Hierarchies Lead to Advanced AI

A graphic that illustrates computer chips in a human brain.

Image credit: Shutterstock

Social hierarchies are important, especially in the workplace, where understanding the chain of command is crucial. Workers need to know who they can turn to for help, who they have to watch out for, and who they need to take orders from. This learning process can take a while, but a study by researchers from University College London and DeepMind has found that it relies heavily on the brain’s prefrontal cortex.

Researchers had participants undergo an fMRI scan while imagining themselves as employees at a fictional company. The participants then watched video interactions between “coworkers” to determine who “won” each interaction; whoever won was judged to have more power in the hierarchy. Participants also watched similar videos, but this time they were asked to imagine a friend as an employee there. The findings show that we’re better at understanding the hierarchies we belong to than those of others, which makes sense.

So what good is this research? Knowing what part of the brain is used in learning something that we pick up more or less “by instinct” may not sound immediately useful, but that’s because it’s part of a long-term project to help develop better artificial intelligence. That’s what DeepMind works on, actually.

DeepMind is trying to develop AI that can be applied to “some of the world’s most intractable problems.” If you’ve ever seen a movie about a robot, you know how hard it is for robots to understand humans. By gaining a better idea of how our brains process human interactions, we can develop AI systems that understand those interactions better. Along the way, perhaps future research in this area will help us better understand how we interact with one another, and maybe get a head start on fixing those problems before the robots are ready to help.
