
Third Market Algorithms and Optimization Workshop at Google NYC



There are fascinating algorithmic and game-theoretic challenges in designing both Google's internal systems and the core products that face hundreds of millions of users. For example, Google AdWords and the Ad Exchange together run billions of auctions a day; showing the right ad to every user requires simple mechanisms that align incentives while simultaneously optimizing efficiency and revenue.
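
To make the incentive-alignment point concrete, here is a minimal Python sketch of a single-item second-price (Vickrey) auction, in which bidding one's true value is a dominant strategy; the bidder names and values are purely hypothetical, and real ad auctions are considerably more involved.

```python
# Minimal single-item second-price (Vickrey) auction: the highest bidder wins
# but pays the second-highest bid, which makes truthful bidding a dominant strategy.

def second_price_auction(bids):
    """bids: dict mapping bidder name -> bid. Returns (winner, price paid)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

# Hypothetical advertisers bidding their true values for one ad slot.
bids = {"advertiser_a": 2.50, "advertiser_b": 1.75, "advertiser_c": 3.10}
print(second_price_auction(bids))  # ('advertiser_c', 2.5)
```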

We think that research in these areas benefits from close cooperation between academia and industry. To this end, last week we held the Third Market Algorithms and Optimization Workshop at Google, immediately after STOC 2012. We invited several leading academics in these fields to meet with researchers and engineers at Google for a day of talks and discussions.

Éva Tardos from Cornell, a recent winner of the Gödel Prize, led off with a discussion of how to achieve efficiency in sequential auctions, where bidders arrive and depart one at a time instead of all bidding simultaneously.
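
As a toy illustration of the sequential setting (the model only, not the efficiency analysis from the talk), the sketch below sells one item per round by a second-price auction among the bidders present in that round; the bidders, values, and arrival times are hypothetical.

```python
# Toy model of a sequential auction: one item is sold per round via a second-price
# auction, and only bidders present in that round (arrive <= round <= depart) and
# who have not yet won may bid. A sketch of the setting, not of its analysis.

def sequential_auctions(bidders, num_rounds):
    """bidders: list of dicts with keys 'name', 'value', 'arrive', 'depart'."""
    results, winners = [], set()
    for t in range(num_rounds):
        present = [b for b in bidders
                   if b["arrive"] <= t <= b["depart"] and b["name"] not in winners]
        if not present:
            results.append((t, None, 0.0))
            continue
        present.sort(key=lambda b: b["value"], reverse=True)
        winner = present[0]["name"]
        price = present[1]["value"] if len(present) > 1 else 0.0
        winners.add(winner)
        results.append((t, winner, price))
    return results

# Hypothetical bidders arriving and departing at different times.
bidders = [{"name": "b1", "value": 5.0, "arrive": 0, "depart": 1},
           {"name": "b2", "value": 4.0, "arrive": 0, "depart": 2},
           {"name": "b3", "value": 6.0, "arrive": 1, "depart": 2}]
print(sequential_auctions(bidders, num_rounds=3))
```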

Eyal Manor, Google engineering director for the Ad Exchange, gave an overview of the design and functioning of the exchange. This was an opportunity to have questions answered by the absolute expert, and the participants took full advantage of it!

Costis Daskalakis and Pablo Azar from MIT and Tim Roughgarden from Stanford talked about different aspects of optimal auctions in Bayesian settings. Costis discussed efficient implementations of optimal auctions for a class of combinatorial auctions, while Tim and Pablo both considered optimal auctions in Bayesian settings with limited information. Tim, our other Gödel Prize winner, promoted the idea of designing simple auction rules that are independent of the distributions of buyers' valuations, and Pablo presented optimal auction rules that use only the mean and standard deviation of buyers' valuations.
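
For readers new to the area, the sketch below shows the classical single-bidder version of the problem with a fully known prior: the revenue-optimal posted price (the Myerson reserve) maximizes r·(1 − F(r)). It is only background for these talks, not the limited-information rules Tim and Pablo presented.

```python
# Textbook single-bidder illustration: with a known value distribution F, the
# revenue-optimal posted price (Myerson reserve) maximizes r * (1 - F(r)).
# Numerical sketch for a uniform[0, 1] prior, where the optimum is r = 0.5.

def optimal_reserve(cdf, grid_size=10000):
    best_r, best_rev = 0.0, 0.0
    for i in range(grid_size + 1):
        r = i / grid_size
        rev = r * (1.0 - cdf(r))        # expected revenue from posting price r
        if rev > best_rev:
            best_r, best_rev = r, rev
    return best_r, best_rev

def uniform_cdf(x):                      # values drawn uniformly from [0, 1]
    return min(max(x, 0.0), 1.0)

print(optimal_reserve(uniform_cdf))      # approximately (0.5, 0.25)
```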

Bobby Kleinberg from Cornell and Gagan Goel from Google NYC presented recent work on pricing with budget constraints. Bobby’s talk was about procurement auctions where the auctioneer acts as a buyer with a budget constraining her procurements. Gagan, on the other hand, discussed Pareto-optimal ascending auctions where the auctioneer is selling to budget-constrained buyers. This has direct applications in Google AdWords auctions as advertisers aim to increase performance while staying within budget constraints.
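
The sketch below is only a bare-bones ascending-price process for a divisible good, showing how a budget caps how many units a buyer can demand at a given price; it is not the Pareto-optimal clinching auction from Gagan's talk, and the buyers and numbers are hypothetical.

```python
# Bare-bones ascending-price process for a divisible good with budget-constrained
# buyers: at price p, a buyer with per-unit value v > p demands up to budget / p
# units, and the price rises until total demand fits the supply. Illustrates only
# how budgets cap demand; NOT the Pareto-optimal clinching auction from the talk.

def ascending_price(buyers, supply, price=0.01, step=0.01):
    """buyers: list of (value_per_unit, budget) pairs. Returns (price, demands)."""
    max_value = max(v for v, _ in buyers)
    while True:
        demands = [min(b / price, supply) if v > price else 0.0 for v, b in buyers]
        if sum(demands) <= supply or price > max_value:
            return price, demands
        price += step

# Hypothetical buyers (value per unit, budget) competing for 10 units.
print(ascending_price([(3.0, 12.0), (2.5, 8.0), (1.0, 5.0)], supply=10.0))
```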

With our mission of organizing all the world's information, Google needs superior algorithmic techniques to analyze extremely large data sets. We had two talks on new algorithmic ideas for Big Data. From academia, Andrew McGregor gave an introduction to the new field of graph sketching. Though a graph on n nodes is O(n^2)-dimensional, Andy described how to recover interesting properties of the graph (such as connectivity and approximate minimum spanning trees) using only O(n polylog(n)) bits of information. These algorithms are based on a clever use of the homomorphic properties of random projections of the graph's adjacency matrix. In the next talk, Mohammad Mahdian from Google MTV explained a new model for evolving data: even a 'simple' problem like sorting becomes interesting when the order of elements changes over time. Mohammad showed that even if element swaps occur at the same rate as comparisons, one can compute an ordering with Kendall tau distance O(n ln ln n) from the true ordering at any time, very close to the optimal Ω(n).
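
The key property Andy's algorithms exploit is that random-projection sketches are linear (homomorphic): the sketch of a sum of vectors equals the sum of their sketches, so sketches can be merged without revisiting the raw data. The toy snippet below demonstrates just that property; the actual graph-connectivity sketches are far more structured.

```python
# A random-projection sketch S(x) = Ax is linear: S(x + y) = S(x) + S(y), so
# sketches of pieces of the data can be combined without revisiting the raw data.
# Toy demonstration of the homomorphic property only.
import random

def make_sketcher(dim, sketch_dim, seed=0):
    rng = random.Random(seed)
    # Fixed random +/-1 projection matrix shared across everything being sketched.
    A = [[rng.choice((-1, 1)) for _ in range(dim)] for _ in range(sketch_dim)]
    return lambda x: [sum(a * xj for a, xj in zip(row, x)) for row in A]

sketch = make_sketcher(dim=8, sketch_dim=3)
x = [1, 0, 2, 0, 0, 1, 0, 0]
y = [0, 3, 0, 0, 1, 0, 0, 2]
combined = [a + b for a, b in zip(x, y)]
# Homomorphic property: sketching the sum equals summing the sketches.
assert sketch(combined) == [a + b for a, b in zip(sketch(x), sketch(y))]
print(sketch(x), sketch(y), sketch(combined))
```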

Later, Mukund Sundararajan from Google MTV discussed algorithmic problems in interpreting and presenting sales data to advertisers. He challenged us to design flexible, human-friendly optimization algorithms that can be adopted and tuned by people. Toward the end of the workshop, Varun Gupta, a postdoctoral researcher at Google NYC, gave a short presentation on the use of primal-dual techniques for online stochastic bin packing, with applications to assigning jobs to data centers.
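
For readers unfamiliar with the underlying problem, here is the classical first-fit heuristic for online bin packing, where each arriving item (job) is placed in the first bin (machine) that still has room; this is only a baseline sketch, not the primal-dual stochastic approach from Varun's talk.

```python
# Classical first-fit heuristic for online bin packing: each arriving item goes
# into the first open bin with enough remaining capacity, otherwise a new bin is
# opened. A baseline only, not the primal-dual stochastic approach from the talk.

def first_fit(items, bin_capacity=1.0):
    bins = []          # remaining capacity of each open bin
    assignment = []    # bin index chosen for each item, in arrival order
    for size in items:
        for i, remaining in enumerate(bins):
            if size <= remaining:
                bins[i] -= size
                assignment.append(i)
                break
        else:
            bins.append(bin_capacity - size)
            assignment.append(len(bins) - 1)
    return assignment, len(bins)

# Hypothetical job sizes arriving online, packed onto unit-capacity machines.
print(first_fit([0.5, 0.7, 0.3, 0.4, 0.2, 0.6]))
```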

We also discussed some of the main activities of the algorithms research group in New York, such as the use of primal-dual techniques in online stochastic display ad allocation at Google and large-scale graph mining based on MapReduce and Pregel (a toy sketch of the Pregel model follows below). Corinna Cortes, Director of Research in New York, and Alfred Spector, VP of Research and Special Projects, gave short talks: Corinna described our statistics, machine learning, and NLP research groups in New York, and Alfred challenged us to design mechanisms that take fairness into account in allocations and pricing. For more details, see the blog post by our colleague 'Muthu' Muthukrishnan.
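
As a toy illustration of the vertex-centric model behind Pregel, the sketch below computes connected components by min-label propagation, where in each "superstep" every vertex adopts the smallest component id among itself and its neighbors; it only mimics the programming model in plain Python and has nothing to do with Google's actual infrastructure.

```python
# Toy, single-machine simulation of the vertex-centric idea behind Pregel:
# connected components via min-label propagation, where in each "superstep" every
# vertex adopts the smallest component id among itself and its neighbors.

def connected_components(edges):
    adj = {}
    for u, v in edges:                       # build an undirected adjacency list
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    labels = {v: v for v in adj}             # each vertex starts as its own component
    changed = True
    while changed:                           # one iteration ~ one Pregel superstep
        changed = False
        for v in adj:
            best = min([labels[v]] + [labels[u] for u in adj[v]])
            if best < labels[v]:
                labels[v], changed = best, True
    return labels

# Hypothetical graph with two components, {1, 2, 3} and {4, 5}.
print(connected_components([(1, 2), (2, 3), (4, 5)]))
```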

Part of what makes Google a fascinating place to work is the wealth of algorithmic and economic research challenges posed by Google advertising and large-scale data analysis systems. These challenges define research directions for the computer science and economics research communities. Workshops like this and our weekly research seminars help us continue collaborations between Google and academia. We hope to post videos of this workshop shortly, and look forward to organizing many more such events in the future.