Market Algorithms and Optimization Meeting
Wednesday, January 28, 2009
Posted by Vahab S. Mirrokni and Muthu Muthukrishnan
Google auctions ads, and enables a market with millions of advertisers and users. This market presents a unique opportunity to test and refine economic principles as applied to a very large number of interacting, self-interested parties with a myriad of objectives. Researchers in economics, computer science, operations research, marketing and business are increasingly involved in defining, understanding and influencing this market.
On January 7th, a group of computer science researchers from Google and various universities held a workshop to discuss key research directions in this area, specifically "Market Algorithms and Optimization." Corinna Cortes (Head, Google Research, NY) and Alfred Spector (VP of Research, Google) gave short talks about research groups at Google, academic collaborations, and research awards. The morning session comprised talks by Google researchers including Noam Nisan, Jon Feldman, Vahab Mirrokni, Yishay Mansour, and Muthu Muthukrishnan. These talks explored the following topics and key issues:
Role of budgets. Identify suitable properties of mechanisms for repeated auctions in the presence of budgets. Truthfulness is impossible, and may not be the suitable property anyway.
Advertisers' bidding strategies. While bidding equally on all keywords has certain desirable properties, identify other bidding strategies for advertisers who want to maximize different utility functions and use rich bidding features.
Dynamics of ad games. Understand the dynamics of sophisticated advertiser bidding strategies under various ad allocation rules. For proportional allocation rules, the dynamics of simple bidder strategies are described here; a minimal simulation sketch follows this list.
Online reservations. Design suitable mechanisms for selling ad inventory in the future rather than on the spot. Models and methods for what happens when reservations cannot be honored are explored in WWW08, WINE08, and SODA09 papers.
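To make the proportional-allocation dynamic concrete, here is a minimal simulation sketch in Python (our own illustration, not code from the talks). In a Kelly-style proportional mechanism each bidder submits a bid, receives a share of the item proportional to that bid, and pays the bid; the best response to the other bids has a simple closed form, so repeated best-response play can be simulated directly:

```python
# Best-response dynamics under proportional allocation (illustrative sketch).
# Bidder i gets share b_i / sum(b) and pays b_i, so its utility is
# v_i * b_i / (b_i + others) - b_i. The maximizer over b_i >= 0 is
# sqrt(v_i * others) - others, clamped at zero.
import math

def best_response(v_i, others):
    if others <= 0:
        return 1e-9  # with no competition, an epsilon bid wins everything
    return max(math.sqrt(v_i * others) - others, 0.0)

def simulate(values, rounds=100):
    bids = [1.0] * len(values)  # arbitrary starting bids
    for _ in range(rounds):
        for i, v in enumerate(values):
            others = sum(bids) - bids[i]
            bids[i] = best_response(v, others)
    return bids

bids = simulate([10.0, 6.0, 3.0])
total = sum(bids)
print([round(b, 3) for b in bids])          # equilibrium bids
print([round(b / total, 3) for b in bids])  # resulting allocation shares
```

In this toy run the bids settle quickly, illustrating the kind of convergence question studied for simple bidder strategies under proportional allocation.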
The research presented in these talks can be found at the Google publications site. Some general research directions and open problems in ad auctions are described here.
The post-lunch (sushi included, of course) session comprised talks by researchers from academia. Bobby Kleinberg (Cornell) went first and took the audience far into projective geometry in an attempt to understand the equilibria that result when selfish players repeatedly use specific learning algorithms. Silvio Micali (MIT) described how difficult it is to model or propose mechanisms in the presence of collusion, and then went on to show how to correctly design such mechanisms for combinatorial auctions. Kamesh Munagala (Duke) and Anna Karlin (U. Washington) discussed algorithmic problems related to ad auctions, including how to run auctions in the presence of advertisers with mixed utilities and tight remnant budgets. Other participants -- Richard Cole (NYU), Amos Fiat (Tel Aviv), Uri Feige (Weizmann), Michel Goemans (MIT), Anupam Gupta (CMU), Nicole Immorlica (Northwestern), Michael Rabin (Harvard), and Eva Tardos (Cornell) -- kept it lively with questions and discussions.
Part of what makes Google ad systems exciting is the challenges they pose and overcome, with others yet on the horizon. It is remarkable how some of the fundamental problems Google ad systems grapple with are also some of the hardest research problems in the market algorithms community. Joint Google-academia meetings like this help researchers begin to attack these problems, and may be a model for future research collaborations.
Google University Research Awards
Wednesday, January 28, 2009
Posted by Juan Vargas, University Relations
For most people, the word "Google" evokes associations of Internet search, free apps, and advertising--a far cry from our roots in research and academia. Google, however, has not lost sight of that fundamental relationship. Many of our offices (or "campuses") are located near local universities, and we maintain a working environment many consider to be collegiate: informal, collaborative, and home to expert lectures not only about science and technology, but also about literature, the economy, world peace, green energy, fitness, etc.
But these aspects are only one component of our enduring partnership with academia. Given the unique technical challenges we face, from the beginning we've recognized the need for a strong relationship with the research community. One of the key ways we interact with this community is through the Google Research Awards Program.
Started in 2005, the program seeks to identify and support leading-edge research in strategic areas of engineering and computer science. Three times per year, professors from universities worldwide submit proposals, which are evaluated by a team of Google engineers and scientists.
Like Google itself, the program is global in scope: in the most recent round of submissions, nearly a third came from outside of the United States. During that round, we received 149 proposals and we ultimately decided to fund about a third of the projects.
A challenging aspect of running an awards program is making the funding decisions. We want to ensure that every proposal is reviewed by the best experts in the field, and we're fortunate to have many people at Google with the right expertise who help us make those decisions.
As Alfred Spector, VP for Research, explains, "The winning proposals not only get financial support from Google, but also receive the extra benefit of being assigned a Google liaison who maintains a special relation with the professor during the life of the award, contributes to the research, and ensures that the outcomes are valuable to Google and academia." One example of this is the recent paper featured on the Google Research home page, by Phil Long and Rocco Servedio from Columbia University, titled "Random Classification Noise Defeats All Convex Potential Boosters."
The following are some highlights from other recently-funded projects:
Social Networks Research Through CourseRank
Hector Garcia-Molina, Stanford University
Hector Garcia-Molina and his team at Stanford are pursuing research on social networks and web usability by using and expanding CourseRank, a course evaluation and recommendation system for the Stanford community. As of June 2008, CourseRank had over 6,700 users and over 134,000 course evaluations. In addition to providing a service for Stanford students, it provides useful data for learning about social networks. Dr. Garcia-Molina plans to use CourseRank to investigate problems such as spam, trust, recommendations with complex objects, and the nature of social interactions. You can see a video with student testimonials and a demo here.
Energy-Efficient Storage Architectures for Data Centers
Sudhanva Gurumurthi, Mircea Stan, University of Virginia
Drs. Gurumurthi and Stan from the University of Virginia have developed a new disk drive architecture called "Intra-Disk Parallelism" that they hope will reduce the energy consumption of data center storage systems by over 60% while providing high performance to applications. This architecture extends conventional disk drives by allowing a single drive to handle multiple I/O requests in parallel and by provisioning additional hardware resources to enhance these parallel capabilities even further. Details of their research were published at the 2008 International Symposium on Computer Architecture (ISCA). The conference paper has also been selected to appear in the IEEE Micro special issue on Top Picks from Computer Architecture Conferences.
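The post doesn't detail the drive internals, but a toy queueing model shows why handling I/O requests in parallel helps; everything here (the two-unit drive, the 5 ms requests) is our own illustration, not the authors' simulator:

```python
# Toy FIFO queueing model: k parallel service units inside one drive.
# With k = 1 this is a conventional drive; with k > 1, queued requests
# overlap and mean completion time drops.
import heapq

def mean_completion(service_times_ms, k):
    units = [0.0] * k          # time at which each unit becomes free
    heapq.heapify(units)
    total = 0.0
    for s in service_times_ms:
        free_at = heapq.heappop(units)   # earliest-available unit
        done = free_at + s
        heapq.heappush(units, done)
        total += done
    return total / len(service_times_ms)

requests = [5.0] * 8  # eight queued requests, 5 ms apiece
print(mean_completion(requests, 1))  # serial service: 22.5 ms on average
print(mean_completion(requests, 2))  # two parallel units: 12.5 ms on average
```

The actual architecture must of course also weigh the power cost of the extra hardware against these latency gains.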
Finding Better Spoken Dialog System Metrics
Maxine Eskenazi, Carnegie Mellon University
When the number of calls to spoken dialog systems mounts into the thousands or millions, it is impossible to listen to every call. Prof. Eskenazi and her team at CMU are studying new metrics that could improve the performance of those systems. Dr. Eskenazi hopes this research will help illuminate how best to cope with advances in tuned speech recognition and new synthetic voices.
Interested in learning more about Google University Relations and our Research Awards Program? Please visit our web site, where scholars can learn more and submit proposals.
Smart Thumbnails on YouTube
Monday, January 19, 2009
Posted by Tomáš Ižo (Software Engineer) and Jay Yagnik (Head of Computer Vision Research)
One of our favorite aspects of Google web search is the informative snippets that appear with each search result. The more relevant they are, the quicker we can find what we're looking for. As members of the computer vision team, we're pleased to say that we now apply a similar philosophy to choosing thumbnails for videos on YouTube. After all, the thumbnail is the first visual piece of information our users get when searching or browsing YouTube videos. As we recently announced in a post on the YouTube Blog, our previous system of choosing thumbnails from the 25, 50 and 75% marks in the video, which often led to arbitrary, uninformative or sometimes even misleading images, is now a thing of the past. When a new video comes to YouTube, we now analyze it with an algorithm whose aim is to pick a set of images that are visually representative of the content of the video. As with the ground-breaking video identification tools that we launched on YouTube last year, this system is another example of how looking inside the video can lead to a safer, more relevant experience for our users.
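The post doesn't reveal the algorithm's internals, but to make "visually representative" concrete, here is a toy sketch of one common approach: sample frames, summarize each with a color histogram, cluster the histograms, and keep the frame nearest each cluster center. This is purely our illustration, not the production system:

```python
# Toy thumbnail candidate selection (illustrative only): cluster sampled
# frames by color histogram and keep the most central frame per cluster.
import cv2
import numpy as np

def candidate_thumbnails(video_path, num_thumbs=4, sample_every=30):
    cap = cv2.VideoCapture(video_path)
    frames, feats = [], []
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % sample_every == 0:
            hist = cv2.calcHist([frame], [0, 1, 2], None,
                                [8, 8, 8], [0, 256] * 3)
            feats.append(cv2.normalize(hist, hist).flatten())
            frames.append(frame)
        idx += 1
    cap.release()
    feats = np.float32(feats)
    # k-means over histograms: each cluster is one "visual theme" of the video
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1e-3)
    _, labels, centers = cv2.kmeans(feats, num_thumbs, None, criteria,
                                    5, cv2.KMEANS_PP_CENTERS)
    picks = []
    for k in range(num_thumbs):
        dists = np.linalg.norm(feats - centers[k], axis=1)
        dists[labels.ravel() != k] = np.inf   # restrict to cluster k
        picks.append(frames[int(np.argmin(dists))])
    return picks
```

A production system would also have to score candidates for quality and do so at YouTube's ingestion rate, which is where the real engineering lies.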
Launching a computer vision algorithm on YouTube comes with a unique set of challenges, many of which have to do with the incredible diversity of the content and the awe-inspiring rate at which it pours into our servers: each minute YouTube receives as much as 13 hours of video content in a wide variety of genres, styles, formats and resolutions. And even setting those technical challenges aside, our work is far from finished. We'll keep studying how our users interact with thumbnails and, more generally, with video content on the web, and we'll continue thinking of innovative ways to use computer vision and machine learning to improve the experience.
Maybe your computer just needs a hug
Monday, January 12, 2009
Posted by Anthony Francis
I'm Anthony Francis, an artificial intelligence researcher working in Google's Search Quality group. One of the things I like about Google is that we give back to the world in ways that make sense for our business, both as a company and as individuals. Google's search engine runs on electrical power, and so Google as a company invests in renewable energy and encourages Googlers as individuals to conserve energy. Google's search engine runs on open source software, and so Google as a company open sources its software and encourages Googlers to participate in existing open source projects.
What may be less obvious is that Google's search engine runs on ideas. Google as a company has a great Research department, but it also encourages Googlers to participate in the research community, not only in visible, obvious ways like writing papers and attending conferences, but also through the less visible but still necessary chores that keep the research community running, like reviewing papers and organizing conferences. What I'd like to tell you about today is how Google helped me give back to the research community by letting me take time to write a paper on work I did before I came to Google.
The Personal Pet Project
Ashwin Ram's Cognitive Computing Lab at Georgia Tech has been working on emotional agents for over ten years, first with robots, and more recently in computer games and virtual environments. I worked on one of our earliest efforts in this area, the PEPE (Personal Pet) project, a joint effort with Yamaha to develop a robotic pet. Dr. Ram noticed that many consumer electronics require configuration steps that are frustrating, even baffling: everything from VCRs to toasters has blinking clocks and unused features. Pets, on the other hand, don't have to be "configured": they understand us on an emotional level, noticing what makes us happy or angry, remembering what brings reward and punishment, and learning to respond appropriately without us reading a manual or pushing a button.
PEPE emulated this intuitive understanding with an emotional long term memory module, which I developed. This module had three parts: basic emotions, emotional memory, and emotional reminding. Basic emotions gave the robot realistic behavior "out of the box": for example, the robot interpreted being petted on the head as being pleasant, which made it want to socialize; it interpreted a kick to the rear as unpleasant, which made it want to hide. The emotional memory associated these primitive emotions with the people and objects the robot saw in its environment. Emotional reminding closed the loop: when the robot saw a person or object again, it was reminded of the past emotion, which influenced but did not determine the robot's ultimate emotional expression. So the robot naturally wanted to play with people who petted it and fled people who had kicked it in the past, but it would be possible to overcome that past learning by ignoring the robot (if you didn't want to play with it as much anymore) or by petting it (if you had kicked it but no longer wanted it to be scared of you).
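To make the three-part design concrete, here is a minimal sketch of how basic emotions, emotional memory, and emotional reminding might fit together. The names, events, and weights are invented for illustration; this is our reconstruction from the description above, not PEPE's actual code:

```python
# Sketch of a PEPE-style emotional long-term memory (illustrative only).
from collections import defaultdict

# Basic emotions: innate valence of raw events, available "out of the box".
BASIC_EMOTIONS = {"petted": +1.0, "kicked": -1.0, "ignored": -0.1}

class EmotionalMemory:
    def __init__(self, blend=0.5):
        self.valence = defaultdict(float)  # person/object -> stored feeling
        self.blend = blend  # how strongly memory colors the current emotion

    def experience(self, who, event):
        """Emotional memory: associate an event's innate emotion with its source."""
        # running average, so repeated petting can overcome a past kick
        self.valence[who] = 0.7 * self.valence[who] + 0.3 * BASIC_EMOTIONS[event]

    def remind(self, who, current=0.0):
        """Emotional reminding: the past feeling influences, but does not
        determine, the current emotional expression."""
        return (1 - self.blend) * current + self.blend * self.valence[who]

mem = EmotionalMemory()
mem.experience("alice", "kicked")        # the robot learns to fear alice
for _ in range(6):
    mem.experience("alice", "petted")    # ...but petting rebuilds trust
print(mem.remind("alice"))               # positive: approach, don't hide
```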
I worked with a team of engineers at Yamaha in Japan to implement this emotional long term memory on a small robot they were building. Our job was easier because PEPE's basic architecture enabled creating many different kinds of behaviors that could easily interact with each other, in parallel or in sequence. On top of this architecture, we added basic emotions, emotional learning and emotional reminding in stages. Our initial tests were positive, and upon my return to the United States, we began porting this software back to Georgia Tech's PEPE robot. PEPE even had a brief moment of fame, appearing on a local TV station; ultimately, however, Georgia Tech and Yamaha decided to move on to new projects.
Characters with Personality
But the dream of emotional agents didn't die. Manish Mehta, a graduate student at Georgia Tech researching interactive games, began working with Ram to develop more sophisticated models of emotional change. Mehta realized that emotional events can act as a trigger for behavioral change: if we humans try something that works really poorly, or spectacularly well, the emotions generated by that experience can prompt us to change our behavior in the future, like remembering to bring an umbrella so we no longer have to walk home wet in the rain.
Mehta used ABL, an agent language developed by Michael Mateas and Andrew Stern for the Facade project. Like the fundamental architecture of PEPE, ABL enabled the author of an agent to design complicated behaviors that could interact in sequence, in parallel, or through complex triggering. Mehta extended ABL to allow an agent to rewrite its own behaviors, and triggered that rewriting based on charged emotional events. This worked: using two agents playing tag, Mehta was able to make them revise their behaviors based on how well they played the game. However, it had an unintended consequence: one of the agents decided it was less stressed out when it was "IT". When it was "IT", it wasn't being chased and wasn't feeling stressed, so it just let itself get tagged, enabling it to "chase" the other player at a leisurely pace.
While this result was surprising, even amazing, it failed to achieve the original objective. Mehta realized this had exposed one more property of human behavioral change: emotional events can prompt change, but we also have a self image, a model of what we think our personality should be like. As our personalities change, we audit the changes to make sure they fit our self image and apply further corrections. For a computer game character, that "personality" is really the designer's intent, so Mehta augmented his emotional learning system to check potential behavioral changes and make sure that they did not violate the original intended design. With this change, both characters were able to learn from their good and bad experiences playing tag, and neither "quit playing the game" just because it didn't want the stress of being chased.
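A minimal sketch of that loop, with invented names and thresholds (our illustration, not Mehta's ABL extension): an emotionally charged event proposes a behavior rewrite, and a designer-intent check audits the proposal before it is adopted:

```python
# Sketch of emotion-triggered behavior revision with a self-image audit.
DESIGNER_INTENT = {"must_chase_when_not_it": True}  # the intended personality

def violates_intent(behavior):
    """Self-image audit: reject revisions that contradict the design."""
    return (DESIGNER_INTENT["must_chase_when_not_it"]
            and not behavior["chase_when_not_it"])

def propose_revision(behavior, stress):
    """A charged emotional event triggers a candidate behavior rewrite."""
    candidate = dict(behavior)
    if stress > 0.8:
        # the tempting but degenerate fix: stop chasing to avoid stress
        candidate["chase_when_not_it"] = False
    return candidate

behavior = {"chase_when_not_it": True}
candidate = propose_revision(behavior, stress=0.9)  # a very stressful round
if not violates_intent(candidate):
    behavior = candidate
print(behavior)  # the audit rejects "quit playing"; the agent keeps chasing
```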
Putting It All Together
The paper that Ram, Mehta, and I wrote for the Handbook of Synthetic Emotions and Sociable Robotics reports this work and ties together all of these threads. We were not alone in the challenges we faced in developing intelligent agents: others faced them too, in robotics, computer games, and academic AI. Many people use flexible behavior systems like PEPE's architecture or ABL, but building large systems out of these can be challenging and extremely labor intensive. What Mehta and I found, however, is that emotion models acted like a force multiplier: adding basic emotional responses to an existing behavior system radically increased the flexibility and apparent realism of an intelligent agent, whether it was a robotic puppy or a character in a computer game.
Moreover, the more components of the emotion stack you add to an agent, the easier it becomes to extend. Adding emotion to PEPE extended the behaviors it could perform; modifying those emotions using memories changed the character of its behavior, making it seem more lifelike. Mehta's work in extending ABL took this further, enabling creative behavior changes in response to agent frustration. Adding personality models makes these changes stable, enabling the agent to adapt to new situations while staying consistent with its designer's intent. Often, just adding more behaviors to an existing agent makes it more brittle; we found, in contrast, that adding new layers inspired by research in human and animal emotion made the agent more sophisticated and robust.
Giving Something Back
Writing this paper did not take much time from Google. Most of it was done in the evening on my own time, with the occasional email exchange with my coauthors and editors during the day while I was waiting on compiles. And that is how it should be: it may be a long time before this work directly benefits Google, since we do not currently develop robots or computer games. But Google still benefited because the advances that we need to improve our search engine often start in academia.
By encouraging our staff to follow up on their research work and to contribute to the research community, Google supports the growth of the next big idea. Knowing that we can follow up on our past work makes researchers at Google feel more empowered to pursue new ideas and continually exposes us to sources of inspiration. And that inspiration, which makes Google a better place to work, is exactly what I need to keep trying new ideas, which in the end will help make Google a better place to search.