Building resources to syntactically parse the web

March 10, 2011

Posted by Slav Petrov and Ryan McDonald, Research Team



One major hurdle in organizing the world’s information is building computer systems that can understand natural, or human, language. Such understanding would advance significantly if systems could automatically determine the syntactic and semantic structure of text.

This analysis is an extremely complex inferential process. Consider, for example, the sentence, "A hearing is scheduled on the issue today." A syntactic parser needs to determine that "is scheduled" is a verb phrase, that "hearing" is its subject, that the prepositional phrase "on the issue" modifies "hearing", and that "today" is an adverb modifying the verb phrase. Humans, of course, do this all the time without realizing it. For computers it is non-trivial, because it requires a fair amount of background knowledge, typically encoded in a rich statistical model.

Consider "I saw a man with a jacket" versus "I saw a man with a telescope". In the former, we know that a "jacket" is something that people wear, not a mechanism for viewing people. So syntactically, the "jacket" must be a property associated with the "man" and not with the verb "saw", i.e., I did not see the man by using a jacket to view him. In the latter, we know that a telescope is something with which we can view people, so it can also be a property of the verb. Of course, the sentence remains ambiguous: maybe the man is carrying the telescope.
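To make this concrete, here is a minimal sketch of how a modern dependency parser exposes exactly this kind of analysis. It uses the open-source spaCy library and its "en_core_web_sm" English model, both of which are our assumptions for illustration rather than anything used in the work described here; note also that where the model attaches each "with" phrase depends on its training data.

import spacy

# Assumes the small English model has been installed first:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

sentences = [
    "A hearing is scheduled on the issue today.",
    "I saw a man with a jacket.",
    "I saw a man with a telescope.",
]

for sentence in sentences:
    doc = nlp(sentence)
    print(sentence)
    for token in doc:
        # Every word points at its syntactic head through a labeled
        # relation, e.g. the passive-subject relation between
        # "hearing" and "scheduled".
        print(f"  {token.text:<10} {token.dep_:<10} -> {token.head.text}")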


[Figure: a parse tree for the sentence "A hearing is scheduled on the issue today." Caption: Linguistically inclined readers will of course notice that this parse tree has been simplified by omitting empty clauses and traces.]
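Since the figure itself is not reproduced here, a rough text rendering of the analysis described above (an approximation, not the original figure) would look like this:

scheduled
├── hearing     (subject)
│   ├── A
│   └── on      (the phrase "on the issue" modifies "hearing")
│       └── issue
│           └── the
├── is          (auxiliary verb)
└── today       (adverb modifying the verb phrase)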

Computer programs that can analyze the syntactic structure of language are fundamental to improving the quality of many tools millions of people use every day, including machine translation, question answering, information extraction, and sentiment analysis. Google itself is already using syntactic parsers in many of its projects. For example, this paper describes a system in which a syntactic dependency parser is used to make translations more grammatical between languages with different word orders. This paper uses the output of a syntactic parser to help determine the scope of negation within sentences, which is then used downstream to improve a sentiment analysis system.
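As an illustration of the second use case, a dependency parse gives a natural, if crude, handle on negation scope: everything in the subtree of a negated word can be treated as falling under the negation. The sketch below shows the idea with spaCy; the library, model, and heuristic are our assumptions, not the method of the paper cited above, and real systems refine the scope considerably.

import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The plot was not engaging.")

for token in doc:
    if token.dep_ == "neg":  # a negation marker such as "not"
        head = token.head
        # Crude heuristic: treat the entire subtree of the negated
        # head as the scope of the negation.
        scope = " ".join(t.text for t in head.subtree)
        print(f"'{token.text}' negates '{head.text}'; scope: {scope}")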

To further this work, Google is pleased to announce a gift to the Linguistic Data Consortium (LDC) to create new annotated resources that can facilitate research progress in the area of syntactic parsing. The primary purpose of the gift is to generate data sets that language technology researchers can use to evaluate the robustness of new parsing methods in several web domains, such as blogs and discussion forums. The goal is to move parsing beyond its current focus on carefully edited text such as print news (for which annotated resources already exist) to domains with larger stylistic and topical variability (where spelling errors and grammatical mistakes are more common).

The Linguistic Data Consortium is a non-profit organization that produces and distributes linguistic data to researchers, technology developers, universities and university libraries. The LDC is hosted by the University of Pennsylvania and directed by Mark Liberman, Christopher H. Browne Distinguished Professor of Linguistics.

The LDC is the leader in building linguistic data resources and will annotate several thousand sentences with syntactic parse trees like the one shown in the figure above. The annotation will be done manually by specially trained linguists, who will also have access to machine analyses and can correct the errors those systems make. Once the annotation is complete, the corpus will be released to the research community through the LDC catalog. We look forward to seeing what the LDC produces and what the natural language processing research community can do with this rich annotated resource.