
An Interactive, Automated 3D Reconstruction of a Fly Brain



The goal of connectomics research is to map the brain’s "wiring diagram" in order to understand how the nervous system works. A primary target of recent work is the brain of the fruit fly (Drosophila melanogaster), a long-established model organism in biology. Eight Nobel Prizes have been awarded for fruit fly research that has led to advances in molecular biology, genetics, and neuroscience. An important advantage of flies is their size: a Drosophila brain is relatively small (roughly one hundred thousand neurons) compared to, for example, a mouse brain (roughly one hundred million neurons) or a human brain (roughly one hundred billion neurons). This makes the fly brain far more tractable to study as a complete circuit.

Today, in collaboration with the Howard Hughes Medical Institute (HHMI) Janelia Research Campus and Cambridge University, we are excited to publish “Automated Reconstruction of a Serial-Section EM Drosophila Brain with Flood-Filling Networks and Local Realignment”, a new research paper that presents the automated reconstruction of an entire fruit fly brain. We are also making the full results available for anyone to download or to browse online using an interactive, 3D interface we developed called Neuroglancer.
A 40-trillion pixel fly brain reconstruction, open to anyone for interactive viewing. Bottom right: smaller datasets that Google AI analyzed in publications in 2016 and 2018.
Automated Reconstruction of 40 Trillion Pixels
Our collaborators at HHMI sectioned a fly brain into thousands of ultra-thin 40-nanometer slices, imaged each slice using a transmission electron microscope (producing over forty trillion pixels of brain imagery), and then aligned the 2D images into a coherent 3D image volume of the entire fly brain. Using thousands of Cloud TPUs, we then applied Flood-Filling Networks (FFNs), which automatically traced each individual neuron in the fly brain.
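To make the flood-filling idea concrete, here is a toy sketch of the seed-and-grow strategy. In the actual FFN, a 3D convolutional network decides which voxels in the current field of view belong to the seeded neuron; in this illustration a simple intensity threshold stands in for that prediction, so treat it as a conceptual sketch rather than the published implementation.

```python
# A toy sketch of the seed-and-grow idea behind flood-filling segmentation.
# In the real FFN, a 3D convolutional network predicts which voxels in the
# current field of view belong to the seeded neuron; here a simple intensity
# threshold stands in for that prediction, purely for illustration.
import numpy as np
from collections import deque

def flood_fill_segment(volume, seed, threshold=0.5):
    """Grow a single object outward from `seed` over 6-connected voxels."""
    mask = np.zeros(volume.shape, dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    neighbors = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                 (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in neighbors:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not mask[nz, ny, nx]
                    and volume[nz, ny, nx] > threshold):  # ConvNet stand-in
                mask[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return mask

# Example: segment a bright synthetic blob starting from a seed inside it.
vol = np.zeros((32, 32, 32), dtype=np.float32)
vol[10:20, 10:20, 10:20] = 1.0
print(flood_fill_segment(vol, seed=(15, 15, 15)).sum())  # 1000 voxels
```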

While the algorithm generally performed well, performance degraded where the alignment was imperfect (image content in consecutive sections was not stable) or where multiple consecutive slices were missing due to difficulties in the sectioning and imaging process. To compensate for these issues, we combined FFNs with two new procedures. First, we estimated the slice-to-slice consistency everywhere in the 3D image and then locally stabilized the image content as the FFN traced each neuron. Second, we used a “Segmentation-Enhanced CycleGAN” (SECGAN) to computationally “hallucinate” the missing slices in the image volume. SECGANs are a type of generative adversarial network specialized for image segmentation. We found that the FFN traced through locations with multiple missing slices much more robustly when using the SECGAN-hallucinated image data.
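As an illustration of the first procedure, the sketch below measures how consistent two consecutive sections are: it estimates their best translational offset with phase correlation and then scores their agreement after realignment. This whole-slice version is a simplified, assumption-laden stand-in for the local realignment used in the paper, which operates on local patches throughout the volume.

```python
# An illustrative sketch of measuring slice-to-slice consistency: estimate the
# best translational offset between consecutive sections via phase correlation,
# then score how well they agree after realignment. The paper's local
# realignment works on patches throughout the volume; this whole-slice version
# is only a simplified stand-in.
import numpy as np

def estimate_shift(slice_a, slice_b):
    """Return the (dy, dx) translation that realigns slice_b to slice_a."""
    cross_power = np.fft.fft2(slice_a) * np.conj(np.fft.fft2(slice_b))
    cross_power /= np.abs(cross_power) + 1e-8
    correlation = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Convert wrapped peak coordinates into signed shifts.
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, correlation.shape))

def consistency_score(slice_a, slice_b):
    """Correlation of the two sections after realignment (1.0 = identical)."""
    dy, dx = estimate_shift(slice_a, slice_b)
    realigned = np.roll(slice_b, shift=(dy, dx), axis=(0, 1))
    a = slice_a - slice_a.mean()
    b = realigned - realigned.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

# Example: a section and a translated copy of it should score as consistent.
rng = np.random.default_rng(0)
a = rng.random((128, 128))
b = np.roll(a, shift=(3, -5), axis=(0, 1))
print(estimate_shift(a, b))      # (-3, 5): the shift that realigns b to a
print(consistency_score(a, b))   # ~1.0
```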
Interactive Visualization of the Fly Brain with Neuroglancer
When working with 3D images that contain trillions of pixels and objects with complicated shapes, visualization is both essential and difficult. Inspired by Google’s history of developing new visualization technologies, we designed a tool that is scalable and powerful, but also accessible to anybody with a web browser that supports WebGL. The result is Neuroglancer, an open-source project (available on GitHub) that enables viewing of petabyte-scale 3D volumes and supports many advanced features, such as arbitrary-axis cross-sectional reslicing, multi-resolution meshes, and the ability to develop custom analysis workflows via integration with Python. The tool is now heavily used by collaborators at the Allen Institute for Brain Science, Harvard University, HHMI, the Max Planck Institute, MIT, Princeton University, and elsewhere.
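For readers who want to script the viewer, the snippet below shows roughly how the neuroglancer Python package can serve an image layer and a segmentation layer from a script or notebook. The layer names and data source URLs are placeholders, and the calls reflect the package's API as we understand it; consult the project's GitHub documentation for the authoritative interface.

```python
# A rough sketch of driving Neuroglancer from Python (pip install neuroglancer).
# The 'precomputed://...' sources below are placeholders, not real dataset URLs.
import neuroglancer

viewer = neuroglancer.Viewer()
with viewer.txn() as state:
    # Raw EM imagery served in Neuroglancer's precomputed chunk format.
    state.layers['em'] = neuroglancer.ImageLayer(
        source='precomputed://gs://<your-bucket>/<em-volume>')
    # An automated segmentation, rendered as colored objects and 3D meshes.
    state.layers['segmentation'] = neuroglancer.SegmentationLayer(
        source='precomputed://gs://<your-bucket>/<segmentation-volume>')

# Prints a local URL; open it in a WebGL-capable browser to explore the data.
print(viewer)
```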
A recorded demonstration of Neuroglancer. Interactive version available here.
Next Steps
Our collaborators at HHMI and Cambridge University have already begun using this reconstruction to accelerate their studies of learning, memory, and perception in the fly brain. However, the results described above are not yet a true connectome since establishing a connectome requires the identification of synapses. We are working closely with the FlyEM team at Janelia Research Campus to create a highly verified and exhaustive connectome of the fly brain using images acquired with “FIB-SEM” technology.

Acknowledgements
We would like to acknowledge core contributions from Tim Blakely, Viren Jain, Michal Januszewski, Laramie Leavitt, Larry Lindsey, Mike Tyka (Google), as well as Alex Bates, Davi Bock, Greg Jefferis, Feng Li, Mathew Nichols, Eric Perlman, Istvan Taisz, and Zhihao Zheng (Cambridge University, HHMI Janelia, Johns Hopkins University, and University of Vermont).