Better Language Models Without Massive Compute

In recent years, language models (LMs) have become more prominent in natural language processing (NLP) research and are also becoming increasingly impactful in practice. Scaling up LMs has been shown to improve performance across a range of NLP tasks. For instance, scaling up language models can improve perplexity across seven orders of magnitude of model size, and new abilities such as multi-step reasoning have been observed to emerge as a result of model scale. However, one of the challenges of continued scaling is that training new, larger models requires substantial computational resources. Moreover, new models are often trained from scratch and do not leverage the weights of previously trained models.
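
The scaling behavior referenced above is often summarized as a power law in which loss (or perplexity) falls smoothly as the parameter count grows. The snippet below is a minimal sketch of such a power-law curve; the constants n_c and alpha are placeholder values chosen purely for illustration, not the fit from any particular study.

import numpy as np

# Toy power-law relating model size (parameters) to language-modeling loss.
# n_c and alpha are illustrative placeholders, not published fit values.
def predicted_loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    return (n_c / n_params) ** alpha

# Sweep model sizes across seven orders of magnitude (1M to 10T parameters).
for n in np.logspace(6, 13, num=8):
    print(f"{n:10.0e} params -> predicted loss {predicted_loss(n):.3f}")

Under this kind of fit, each tenfold increase in model size yields a steady multiplicative reduction in loss, which is why performance keeps improving, but only at the cost of ever-larger training runs.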
