
The latest research from Google

Talking to Robots in Real Time

A grand vision in robot learning, going back to the SHRDLU experiments in the late 1960s, is that of helpful robots that inhabit human spaces and follow a wide variety of natural language commands. Over the last few years, there have been significant advances in the application of machine learning (ML) to instruction following, both in simulation and in real-world systems. Recent PaLM-SayCan work has produced robots that leverage language models to plan long-horizon behaviors and reason about abstract goals. Code as Policies has shown that code-generating language models combined with pre-trained perception systems can produce language-conditioned policies for zero-shot robot manipulation. Despite this progress, an important missing property of current "language in, actions out" robot learning systems is real-time interaction with humans.
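To make the Code as Policies idea concrete, the following is a minimal sketch of how a code-generating language model can be wired to perception and control primitives to form a language-conditioned policy. Everything here is a hypothetical stand-in for illustration: the function names (detect_objects, pick, place, request_code_from_llm) and the canned model output are assumptions, not the actual Code as Policies API.

```python
# Minimal sketch (assumed names, not the real Code as Policies API):
# a language model turns a natural-language command into robot code that
# calls perception and control primitives, which is then executed.

from typing import Dict, Tuple


# --- Hypothetical perception / control primitives ---------------------------

def detect_objects() -> Dict[str, Tuple[float, float]]:
    """Pretend perception: map object names to 2D workspace positions."""
    return {"red block": (0.2, 0.5), "blue bowl": (0.6, 0.4)}


def pick(position: Tuple[float, float]) -> None:
    print(f"picking at {position}")


def place(position: Tuple[float, float]) -> None:
    print(f"placing at {position}")


# --- Hypothetical code-generating language model ----------------------------

def request_code_from_llm(command: str) -> str:
    """Stand-in for a code-generating language model.

    A real system would prompt an LLM with the command plus the available
    API signatures; here we return a canned snippet purely for illustration.
    """
    return (
        "objects = detect_objects()\n"
        "pick(objects['red block'])\n"
        "place(objects['blue bowl'])\n"
    )


# --- Language-conditioned policy: generate code, then execute it ------------

def run_command(command: str) -> None:
    policy_code = request_code_from_llm(command)
    # Execute the generated code against the exposed robot primitives only.
    exec(policy_code, {"detect_objects": detect_objects, "pick": pick, "place": place})


if __name__ == "__main__":
    run_command("put the red block in the blue bowl")
```

In this sketch the generated program is produced once and then run to completion, which illustrates the "language in, actions out" pattern and why real-time interaction, such as correcting or redirecting the robot mid-task, is the missing piece the post highlights.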

Making a Traversable Wormhole with a Quantum Computer

Better Language Models Without Massive Compute

Google at NeurIPS 2022

Conversation Summaries in Google Chat

The Data Cards Playbook: A Toolkit for Transparency in Dataset Documentation

Mixture-of-Experts with Expert Choice Routing

Characterizing Emergent Phenomena in Large Language Models

Multi-layered Mapping of Brain Tissue via Segmentation Guided Contrastive Learning

ReAct: Synergizing Reasoning and Acting in Language Models

Infinite Nature: Generating 3D Flythroughs from Still Photos