Google is training its robots to be more like humans

Researchers at Google's lab recently asked a robot to build a burger out of various plastic toy ingredients.

The mechanical arm knew enough to add ketchup after the meat and before the lettuce but thought the right way to do so was to put the entire bottle inside the burger.

Using recently developed artificial-intelligence software known as "large language models," the researchers say they've been able to design robots that can help humans with a broader range of everyday tasks. 

Google is infusing LLMs into home robots

Google has dubbed the resulting system PaLM-SayCan, a name that captures how it combines the language-understanding skills of LLMs with the affordance grounding of its robots.
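
In rough terms, the combination works like a weighted vote. The short Python sketch below is a hypothetical illustration of that scoring idea, with made-up skill names and stand-in scoring functions rather than Google's actual code: the language model rates how useful each low-level skill would be for the instruction (the "say" part), a separate model of the robot's abilities rates how likely that skill is to succeed in the current scene (the "can" part), and the robot runs the skill with the best combined score.

```python
from typing import Callable

# Hypothetical low-level skills the robot already knows how to perform.
SKILLS = ["pick up the sponge", "go to the table", "wipe the table"]

def choose_skill(
    instruction: str,
    llm_score: Callable[[str, str], float],    # "say": how useful is this skill for the instruction?
    affordance_score: Callable[[str], float],  # "can": how likely is the skill to succeed right now?
) -> str:
    """Pick the skill with the highest combined score."""
    return max(SKILLS, key=lambda s: llm_score(instruction, s) * affordance_score(s))

# Stubbed-in scores for the instruction "clean up the spill".
best = choose_skill(
    "clean up the spill",
    llm_score=lambda inst, s: {"pick up the sponge": 0.6,
                               "go to the table": 0.3,
                               "wipe the table": 0.8}[s],
    affordance_score=lambda s: {"pick up the sponge": 0.9,
                                "go to the table": 0.9,
                                "wipe the table": 0.2}[s],
)
print(best)  # "pick up the sponge": wiping scores well with the LLM but isn't feasible yet
```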

Language models work by ingesting huge amounts of text posted to the internet and using it to train AI software to predict what kinds of responses are likely to follow a given question or comment.

The models have become so good at predicting the right response that engaging with one often feels like having a conversation with a knowledgeable human. 
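
That prediction task can be shown with a deliberately tiny example. The Python snippet below builds a toy word-counting model from a few made-up sentences; real language models replace the counting with enormous neural networks trained on far more text, but the goal of guessing a likely continuation is the same.

```python
from collections import Counter, defaultdict

# A few toy training sentences (real models train on billions of words).
corpus = (
    "the robot picks up the sponge . "
    "the robot wipes the table . "
    "the robot picks up the can ."
).split()

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word`."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("robot"))  # "picks", because it followed "robot" most often
```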

Google and other companies, including OpenAI and Microsoft, have poured resources into building better models and training them on ever-bigger sets of text, in multiple languages. 

The work is controversial. In July, Google fired an employee who claimed the software was sentient.

The consensus among AI experts is that the models are not sentient, but many are concerned that they exhibit biases because they were trained on huge amounts of unfiltered, human-generated text.