At Google, robots go to school and learn using AI algorithms

MOUNTAIN VIEW, Calif. — Researchers here at the Google lab recently asked a robot to build a burger out of various plastic toy ingredients.

The mechanical arm knew enough to add ketchup after the meat and before the lettuce, but saw fit to put the entire bottle inside the burger.

While this robot won’t be working as a line cook anytime soon, it is representative of a major breakthrough announced by Google engineers on Tuesday. Using recently developed artificial intelligence software known as large language models, researchers have been able to design robots that can help humans with a wider range of everyday tasks.

Instead of following a long list of instructions that guide each movement one at a time, the robots can now respond to full requests, more like a human.

At a demonstration last week, a researcher said to a robot, “I’m hungry, can you get me a snack?” The robot then searched a cafeteria, opened a drawer, found a bag of chips and brought it to the human.

It’s the first time language models have been built into robots, Google executives and researchers say.

“This is a fundamentally different paradigm,” said Brian Ichter, a research scientist at Google and one of the authors of a new paper published Tuesday describing the progress the company has made.

Robots are already commonplace. Millions of them work in factories around the world, but they follow specific instructions and typically focus on just one or two tasks, such as moving a product along an assembly line or welding two pieces of metal together. Building a robot that can perform a variety of mundane tasks and learn as it works is far more complex. Technology companies big and small have been working for years to build such general-purpose robots.


Language models work by taking massive amounts of text uploaded to the internet and using it to train artificial intelligence software to guess what types of answers might come after certain questions or comments. The models have gotten so good at predicting the right response that interacting with them often feels like talking to a knowledgeable human. Google and other companies, including OpenAI and Microsoft, have invested resources in building better models and training them on ever-increasing amounts of text in multiple languages.
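The core idea — learn from existing text which words tend to follow which, then predict what comes next — can be illustrated with a toy sketch. This is not Google’s model; real language models use neural networks trained on billions of documents, but the objective is the same. The tiny corpus below is invented for illustration.

```python
from collections import Counter, defaultdict

# A tiny invented training corpus, split into words.
corpus = (
    "i am hungry can you get me a snack "
    "can you get me a drink "
    "you get me a snack"
).split()

# Build a bigram table: for each word, count which words follow it.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("a"))  # "snack" follows "a" more often than "drink"
```

A real model replaces the lookup table with a neural network that considers the whole preceding context rather than just one word, which is what makes its responses feel fluent.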

The work is controversial. In July, Google fired one of its employees, who had claimed he believed the software was sentient. The consensus among AI experts is that the models aren’t sentient, but many worry they’re biased because they’ve been trained on massive amounts of unfiltered human-generated text.

Some language models have proven to be racist or sexist, or easily manipulated into spreading hate speech or lies when prompted with the right statements or questions.

In general, language models can give robots knowledge of high-level planning steps, said Deepak Pathak, an assistant professor at Carnegie Mellon who studies AI and robotics and was commenting on the field generally, not on Google’s work specifically. But these models don’t give robots all the information they need — such as how much force to use when opening a refrigerator. That knowledge has to come from somewhere else.

“It just solves the high-level planning problem,” he said.

Still, Google is moving forward and has now merged language models with some of its robots. Instead of coding specific technical instructions for each task a robot can perform, researchers can now simply speak to them in everyday language. More importantly, the new software helps the robots parse complex multi-step instructions on their own. Now the robots can interpret instructions they’ve never heard before and come up with meaningful responses and actions.
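The division of labor described above — a language model handling the high-level plan while the robot supplies the low-level skills — can be sketched as follows. This is a hypothetical illustration, not Google’s system: their robots use a large language model to map a request onto skills, whereas a simple keyword lookup stands in for it here, and the skill names are invented.

```python
# A fixed library of low-level skills the robot already knows how to perform.
SKILLS = ["go to the cafeteria", "open the drawer",
          "pick up the chips", "bring them to the person"]

# Toy "plans" keyed by words that might appear in a spoken request.
PLANS = {
    "snack": SKILLS,              # full fetch-a-snack routine
    "drawer": ["open the drawer"],
}

def plan(request):
    """Return the first skill sequence whose keyword appears in the request."""
    for keyword, steps in PLANS.items():
        if keyword in request.lower():
            return steps
    return []

for step in plan("I'm hungry, can you get me a snack?"):
    print(step)
```

The point of the sketch is the interface: the planner turns one free-form sentence into an ordered list of primitive actions, each of which the robot executes with its own separately trained control software.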


Robots that can use language models could transform the way manufacturing and distribution facilities operate, said Zac Stewart Rogers, an assistant professor of supply chain management at Colorado State University.

“A human and a robot working together is always the most productive right now,” he said. “Robots can do manual heavy lifting. Humans can do the nuanced troubleshooting.”

If robots were able to solve complex tasks, it could mean distribution centers could get smaller, with fewer people and more robots. That could mean fewer jobs for people, although Rogers points out that when there is a decline in one area due to automation, generally jobs are created in other areas.

There is probably still a long way to go. Artificial intelligence techniques such as neural networks and reinforcement learning have been used to train robots for years. It has led to some breakthroughs, but progress is still slow. Google’s robots are far from ready for the real world, and in interviews, Google researchers and executives have repeatedly said that they are merely running a research lab and have no plans to commercialize the technology just yet.

But it’s clear that Google and other big tech companies have a serious interest in robotics. Amazon uses many robots in its warehouses, is experimenting with drone delivery and earlier this month agreed to buy the maker of the Roomba robot vacuum for $1.7 billion. (Amazon founder Jeff Bezos owns The Washington Post.)


Tesla, which has developed autonomous driving features for its cars, is also working on general-purpose robots.

In 2013, Google went on a buying spree, acquiring several robotics companies including Boston Dynamics, maker of the robot dogs that often go viral on social media. But the executive responsible for the program was accused of sexual misconduct and left the company soon after, and in 2017 Google sold Boston Dynamics to SoftBank, the Japanese telecom and technology investment giant. The hype around ever-smarter robots designed by the most powerful tech companies faded.

For the language model project, Google researchers collaborated with researchers at Everyday Robots, a separate but wholly owned company within Google that works specifically on building robots that can perform a range of “repetitive” and “tedious” tasks. The bots are already in use in various Google cafeterias, wiping down counters and throwing out trash.


