The Google Brain Team’s Approach to Research
Wednesday, September 13, 2017
Posted by Jeff Dean, Google Senior Fellow
About a year ago, the Google Brain team first shared our mission "Make machines intelligent. Improve people's lives." In that time, we've shared updates on our work to infuse machine learning across Google products that hundreds of millions of users access every day, including Translate, Maps, and more. Today, I'd like to share more about how we approach this mission both through advancement in the fundamental theory and understanding of machine learning, and through research in the service of product.
Five years ago, our colleagues Alfred Spector, Peter Norvig, and Slav Petrov published a blog post and paper explaining Google's hybrid approach to research, an approach that always allowed for varied balances between curiosity-driven and application-driven research. The biggest challenges in machine learning that the Brain team is focused on require the broadest exploration of new ideas, which is why our researchers set their own agendas, with much of our team focusing specifically on advancing the state-of-the-art in machine learning. In doing so, we have published hundreds of papers over the last several years in conferences such as NIPS, ICML and ICLR, with acceptance rates significantly above conference averages.
Critical to achieving our mission is contributing new and fundamental research in machine learning. To that end, we’ve built a thriving team that conducts long-term, open research to advance science. In pursuing research across fields such as visual and auditory perception, natural language understanding, art and music generation, and systems architecture and algorithms, we regularly collaborate with researchers at external institutions, with fully 1/3rd of our papers in 2017 having one or more cross-institutional authors. Additionally, we host collaborators from academic institutions to enhance our own work and strengthen our connection to the external scientific community.
We also believe in the importance of clear and understandable explanations of the concepts in modern machine learning. Distill.pub is an online technical journal providing a forum for this purpose, launched by Brain team members Chris Olah and Shan Carter. TensorFlow Playground is an in-browser experimental venue created by the Google Brain team's visualization experts to give people insight into how neural networks behave on simple problems, and PAIR's deeplearn.js is an open source WebGL-accelerated JavaScript library for machine learning that runs entirely in your browser, with no installations and no backend.
In addition to working with the best minds in academia and industry, the Brain team, like many other teams at Google, believes in fostering the development of the next generation of scientists. Our team hosts more than 50 interns every year, with the goal of publishing their work in top machine learning venues (roughly 25% of our group's publications so far in 2017 have intern co-authors, usually as primary authors). Additionally, in 2016, we welcomed the first cohort of the Google Brain Residency Program, a one-year program for people who want to learn to do machine learning research. In its inaugural year, 27 residents conducted research alongside and under the mentorship of Brain team members, and authored more than 40 papers that were accepted in top research conferences. Our second group of 36 residents started their one-year residency in our group in July, and are already involved in a wide variety of projects.
Along with other teams within Google Research, we enjoy the freedom to both contribute fundamental advances in machine learning, and separately conduct product-focused research. Both paths are important in ensuring that advances in machine learning have a significant impact on the world.
Highlights from the Annual Google PhD Fellowship Summit, and Announcing the 2017 Google PhD Fellows
Tuesday, September 12, 2017
Posted by Susie Kim, Program Manager, University Relations
In 2009, Google created the PhD Fellowship Program to recognize and support outstanding graduate students doing exceptional research in Computer Science and related disciplines. Now in its ninth year, our Fellowships have helped support over 300 graduate students in Australia, China and East Asia, India, and North America, Europe and the Middle East who seek to shape and influence the future of technology.
Recently, Google PhD Fellows from around the globe converged on our Mountain View campus for the second annual Global PhD Fellowship Summit. VP of Education and University Programs Maggie Johnson welcomed the Fellows and went over Google's approach to research and its impact across our products and services. The students heard talks from researchers like Ed Chi, Douglas Eck, Úlfar Erlingsson, Dina Papagiannaki, Viren Jain, Ian Goodfellow, Kevin Murphy and Galen Andrew, and got a glimpse into some of the state-of-the-art research pursued across Google.
Google Fellows attending the 2017 Global PhD Fellowship Summit
The event included a panel discussion with Domagoj Babic, Kathryn McKinley, Nina Taft, Roy Want and Sunny Consolvo about their unique career paths in academia and industry. Fellows also had the chance to connect one-on-one with Googlers to discuss their research, as well as receive feedback from leaders in their fields in smaller deep dives and a poster event.
Fellows share their work with Google researchers during the poster session
Our PhD Fellows represent some of the best and brightest young researchers around the globe in Computer Science, and it is our ongoing goal to support them as they make their mark on the world.
We’d additionally like to announce the complete list of our 2017 Google PhD Fellows, including the latest recipients from China and East Asia, India, and Australia. We look forward to seeing each of them at next year’s summit!
2017 Google PhD Fellows

Algorithms, Optimizations and Markets
Chiu Wai Sam Wong, University of California, Berkeley
Eric Balkanski, Harvard University
Haifeng Xu, University of Southern California

Human-Computer Interaction
Motahhare Eslami, University of Illinois, Urbana-Champaign
Sarah D'Angelo, Northwestern University
Sarah Mcroberts, University of Minnesota - Twin Cities
Sarah Webber, The University of Melbourne

Machine Learning
Aude Genevay, Fondation Sciences Mathématiques de Paris
Dustin Tran, Columbia University
Jamie Hayes, University College London
Jin-Hwa Kim, Seoul National University
Ling Luo, The University of Sydney
Martin Arjovsky, New York University
Sayak Ray Chowdhury, Indian Institute of Science
Song Zuo, Tsinghua University
Taco Cohen, University of Amsterdam
Yuhuai Wu, University of Toronto
Yunhe Wang, Peking University
Yunye Gong, Cornell University

Machine Perception, Speech Technology and Computer Vision
Avijit Dasgupta, International Institute of Information Technology - Hyderabad
Franziska Müller, Saarland University - Saarbrücken GSCS and Max Planck Institute for Informatics
George Trigeorgis, Imperial College London
Iro Armeni, Stanford University
Saining Xie, University of California, San Diego
Yu-Chuan Su, University of Texas, Austin

Mobile Computing
Sangeun Oh, Korea Advanced Institute of Science and Technology
Shuo Yang, Shanghai Jiao Tong University

Natural Language Processing
Bidisha Samanta, Indian Institute of Technology Kharagpur
Ekaterina Vylomova, The University of Melbourne
Jianpeng Cheng, The University of Edinburgh
Kevin Clark, Stanford University
Meng Zhang, Tsinghua University
Preksha Nama, Indian Institute of Technology Madras
Tim Rocktaschel, University College London

Privacy and Security
Romain Gay, ENS - École Normale Supérieure
Xi He, Duke University
Yupeng Zhang, University of Maryland, College Park

Programming Languages, Algorithms and Software Engineering
Christoffer Quist Adamsen, Aarhus University
Muhammad Ali Gulzar, University of California, Los Angeles
Oded Padon, Tel-Aviv University

Structured Data and Database Management
Amir Shaikhha, EPFL CS
Jingbo Shang, University of Illinois, Urbana-Champaign

Systems and Networking
Ahmed M. Said Mohamed Tawfik Issa, Georgia Institute of Technology
Khanh Nguyen, University of California, Irvine
Radhika Mittal, University of California, Berkeley
Ryan Beckett, Princeton University
Samaneh Movassaghi, Australian National University
Build your own Machine Learning Visualizations with the new TensorBoard API
Monday, September 11, 2017
Posted by Chi Zeng and Justine Tunney, Software Engineers, Google Brain Team
When we open-sourced TensorFlow in 2015, it included TensorBoard, a suite of visualizations for inspecting and understanding your TensorFlow models and runs. TensorBoard included a small, predetermined set of visualizations that are generic and applicable to nearly all deep learning applications, such as observing how loss changes over time or exploring clusters in high-dimensional spaces. However, in the absence of reusable APIs, adding new visualizations to TensorBoard was prohibitively difficult for anyone outside of the TensorFlow team, leaving out a long tail of potentially creative, beautiful and useful visualizations that could be built by the research community.
To allow the creation of new and useful visualizations, we're announcing the release of a consistent set of APIs that allows developers to add custom visualization plugins to TensorBoard. We hope that developers use this API to extend TensorBoard and ensure that it covers a wider variety of use cases.
We have updated the existing dashboards (tabs) in TensorBoard to use the new API, so they serve as examples for plugin creators. For the current listing of plugins included within TensorBoard, you can explore the tensorboard/plugins directory on GitHub. For instance, observe the new plugin that generates precision-recall curves:
The plugin demonstrates the three parts of a standard TensorBoard plugin:
1. A TensorFlow summary op used to collect data for later visualization. [GitHub]
2. A Python backend that serves custom data. [GitHub]
3. A dashboard within TensorBoard built with TypeScript and Polymer. [GitHub]
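To give a feel for what the summary op in part 1 collects, here is a sketch in plain Python of the computation underlying a precision-recall curve: precision and recall evaluated at a series of score thresholds. This is an illustration of the idea only, not the actual TensorBoard `pr_curves` API, and `pr_curve` here is a hypothetical helper name.

```python
def pr_curve(labels, scores, num_thresholds=5):
    """Return (precision, recall) pairs at evenly spaced score thresholds.

    labels: 1 for positive examples, 0 for negatives.
    scores: predicted probabilities in [0, 1], aligned with labels.
    """
    points = []
    for i in range(num_thresholds):
        t = i / (num_thresholds - 1)  # thresholds from 0.0 to 1.0
        tp = sum(1 for l, s in zip(labels, scores) if s >= t and l)
        fp = sum(1 for l, s in zip(labels, scores) if s >= t and not l)
        fn = sum(1 for l, s in zip(labels, scores) if s < t and l)
        # By convention, precision is 1.0 when nothing is predicted positive.
        precision = tp / (tp + fp) if tp + fp else 1.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        points.append((precision, recall))
    return points

curve = pr_curve([1, 0, 1, 1, 0], [0.9, 0.8, 0.6, 0.3, 0.1])
```

The real summary op does this with TensorFlow ops over tensors so the per-threshold counts are logged at each step; the dashboard then draws one curve per run.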
Additionally, like other plugins, the "pr_curves" plugin provides a demo that (1) users can look over in order to learn how to use the plugin and (2) the plugin author can use to generate example data during development. To further clarify how plugins work, we've also created a barebones TensorBoard "Greeter" plugin. This simple plugin collects greetings (simple strings preceded by "Hello, ") during model runs and displays them. We recommend starting by exploring (or forking) the Greeter plugin as well as other existing plugins.
A notable example of how contributors are already using the TensorBoard API is Beholder, which was recently created by Chris Anderson while working on his master's degree. Beholder shows a live video feed of data (e.g. gradients and convolution filters) as a model trains. You can watch the demo video here.
We look forward to seeing what innovations will come out of the community. If you plan to contribute a plugin to TensorBoard's repository, you should get in touch with us first through the issue tracker with your idea so that we can help out and possibly guide you.
Acknowledgements
Dandelion Mané and William Chargin played crucial roles in building this API.
Seminal Ideas from 2007
Wednesday, September 6, 2017
Posted by Anna Ukhanova, Technical Program Manager, Google Research Europe
It is not every day we have the chance to pause and think about how previous work has led to current successes, how it influenced other advances, and to reinterpret it in today's context. That's what the ICML Test-of-Time Award is meant to achieve, and this year it was given to the work of Sylvain Gelly, now a researcher on the Google Brain team in our Zurich office, and David Silver, now at DeepMind and lead researcher on AlphaGo, for their 2007 paper Combining Online and Offline Knowledge in UCT. This paper presented new approaches to incorporate knowledge, learned offline or created online on the fly, into a search algorithm to augment its effectiveness.
The Game of Go is an ancient Chinese board game with tremendous popularity among millions of players worldwide. Since the success of Deep Blue in the game of Chess in the late '90s, Go has been considered the next benchmark for machine learning and games. Indeed, it has simple rules, can be efficiently simulated, and progress can be measured objectively. However, due to the vast search space of possible moves, making an ML system capable of playing Go well represented a considerable challenge. Over the last two years, DeepMind's AlphaGo has pushed the limit of what is possible with machine learning in games, bringing many innovations and technological advances in order to successfully defeat some of the best players in the world [1], [2], [3].
A little more than 10 years before the success of AlphaGo, the classical tree search techniques that were so successful in Chess were reigning in computer Go programs, but only reaching weak amateur level for human Go players. Thanks to Monte-Carlo Tree Search, a (then) new type of search algorithm based on sampling possible outcomes of the game from a position and incrementally improving the search tree from the results of those simulations, computers were able to search much deeper in the game. This is important because it made it possible to incorporate less human knowledge in the programs, a task which is very hard to do right. Indeed, any missing knowledge that a human expert either cannot express or did not think about may create errors in the computer evaluation of the game position, and lead to blunders*.
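The core loop of Monte-Carlo Tree Search can be sketched in a few lines. The following is an illustration only, on a toy Nim game (take 1 or 2 stones; whoever takes the last stone wins) rather than Go, and it applies bandit-style selection only at the root with purely random playouts below it, whereas a real program grows a deep tree.

```python
import math
import random

def legal_moves(pile):
    return [m for m in (1, 2) if m <= pile]

def random_playout(pile):
    """Play randomly from `pile`; return 1 if the player to move wins."""
    player = 0
    while True:
        pile -= random.choice(legal_moves(pile))
        if pile == 0:
            return 1 if player == 0 else 0
        player ^= 1

def mcts_best_move(pile, n_iter=2000, c=1.4):
    stats = {m: [0, 0] for m in legal_moves(pile)}  # move -> [wins, visits]
    for i in range(1, n_iter + 1):
        # Selection: UCB1 balances the observed win rate (exploitation)
        # against trying under-sampled moves (exploration).
        def ucb1(m):
            wins, visits = stats[m]
            if visits == 0:
                return float("inf")
            return wins / visits + c * math.sqrt(math.log(i) / visits)
        move = max(stats, key=ucb1)
        # Simulation: after our move the opponent is to play, so a playout
        # win for them is a loss for us.
        if pile - move == 0:
            reward = 1
        else:
            reward = 1 - random_playout(pile - move)
        # Backpropagation: update the statistics for the chosen move.
        stats[move][0] += reward
        stats[move][1] += 1
    # Play the most-visited move, as is conventional in MCTS.
    return max(stats, key=lambda m: stats[m][1])
```

Because the playouts run to the end of the game, the evaluation needs no hand-crafted position knowledge, which is exactly the property that made the approach attractive for Go.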
In 2007, Sylvain and David augmented the Monte Carlo Tree Search techniques by exploring two types of knowledge incorporation: (i) online, where the decision for the next move is taken from the current position, using compute resources at the time when the next move is needed, and (ii) offline, where the learning process happens entirely before the game starts, and is summarized into a model that can be applied to all possible positions of a game (even though not all possible positions have been seen during the learning process). This ultimately led to the computer program MoGo, which showed an improvement in performance over previous Go algorithms.
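The UCT algorithm named in the paper's title is Monte-Carlo Tree Search driven by the UCB1 bandit rule. In standard textbook notation (not quoted from the paper itself), at state s the search descends to the child action maximizing

```latex
a^{*} = \arg\max_{a} \left[ Q(s,a) + c \sqrt{\frac{\ln N(s)}{N(s,a)}} \right]
```

where Q(s,a) is the average reward of simulations through (s,a), N(s) and N(s,a) are visit counts, and c trades off exploration against exploitation. Offline knowledge can then be injected by initializing Q(s,a) and the counts from a learned evaluator, one of the directions the paper explores.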
For the online part, they adapted the simple idea that some actions don’t necessarily depend on each other. For example, if you need to book a vacation, the choice of the hotel, flight and car rental is obviously dependent on the choice of your destination. However, once given a destination, these things can be chosen (mostly) independently of each other. The same idea can be applied to Go, where some moves can be estimated partially independently of each other to get a very quick, albeit imprecise, estimate. Of course, when time is available, the exact dependencies are also analysed.
For offline knowledge incorporation, they explored the impact of learning an approximation of the position value with the computer playing against itself using reinforcement learning, adding that knowledge to the tree search algorithm. They also looked at how expert play patterns, based on human knowledge of the game, can be used in a similar way. That offline knowledge was used in two places: first, it helped focus the program on moves that looked similar to good moves it learned offline; second, it helped simulate more realistic games when the program tried to estimate a given position value.
These improvements led to good success on the smaller version of the game of Go (9x9), even beating one professional player in an exhibition game, and also reaching a stronger amateur level on the full game (19x19). And in the years since 2007, we’ve seen many rapid advances (almost on a monthly basis) from researchers all over the world that have allowed the development of algorithms culminating in AlphaGo (which itself introduced many innovations).
Importantly, these algorithms and techniques are not limited to applications towards games, but also enable improvements in many domains. The contributions introduced by David and Sylvain in their collaboration 10 years ago were an important piece to many of the improvements and advancements in machine learning that benefit our lives daily, and we offer our sincere congratulations to both authors on this well-deserved award.
* As a side note, that's why machine learning as a whole is such a powerful tool: replacing expert knowledge with algorithms that can more fully explore potential outcomes.