TOP SCIENCE

Neuroscientists roll out first comprehensive atlas of brain cells: BRAIN Initiative consortium takes census of motor cortex cells in mice, marmosets and humans


When you clicked to read this story, a band of cells across the top of your brain sent signals down your spine and out to your hand to tell the muscles in your index finger to press down with just the right amount of pressure to activate your mouse or track pad.

A slew of new studies now shows that the area of the brain responsible for initiating this action — the primary motor cortex, which controls movement — has as many as 116 different types of cells that work together to make this happen.

The 17 studies, appearing online Oct. 6 in the journal Nature, are the result of five years of work by a huge consortium of researchers supported by the National Institutes of Health’s Brain Research Through Advancing Innovative Neurotechnologies (BRAIN) Initiative to identify the myriad cell types in one portion of the brain. It is the first step in a long-term project to generate an atlas of the entire brain, helping to explain how the neural networks in our head control our body and mind and how they are disrupted in mental and physical disorders.

“If you think of the brain as an extremely complex machine, how could we understand it without first breaking it down and knowing the parts?” asked cellular neuroscientist Helen Bateup, a University of California, Berkeley, associate professor of molecular and cell biology and co-author of the flagship paper that synthesizes the results of the other papers. “The first page of any manual of how the brain works should read: Here are all the cellular components, this is how many of them there are, here is where they are located and who they connect to.”

Individual researchers have previously identified dozens of cell types based on their shape, size, electrical properties and which genes are expressed in them. The new studies identify about five times more cell types, though many are subtypes of well-known cell types. For example, cells that release specific neurotransmitters, like gamma-aminobutyric acid (GABA) or glutamate, each have more than a dozen subtypes distinguishable from one another by their gene expression and electrical firing patterns.

While the current papers address only the motor cortex, the BRAIN Initiative Cell Census Network (BICCN) — created in 2017 — endeavors to map all the different cell types throughout the brain, which consists of more than 160 billion individual cells, both neurons and support cells called glia. The BRAIN Initiative was launched in 2013 by then-President Barack Obama.


“Once we have all those parts defined, we can then go up a level and start to understand how those parts work together, how they form a functional circuit, how that ultimately gives rise to perceptions and behavior and much more complex things,” Bateup said.

Together with former UC Berkeley professor John Ngai, Bateup and UC Berkeley colleague Dirk Hockemeyer have already used CRISPR-Cas9 to create mice in which a specific cell type is labeled with a fluorescent marker, allowing them to track the connections these cells make throughout the brain. For the flagship journal paper, the Berkeley team created two strains of “knock-in” reporter mice that provided novel tools for illuminating the connections of the newly identified cell types, she said.

“One of our many limitations in developing effective therapies for human brain disorders is that we just don’t know enough about which cells and connections are being affected by a particular disease and therefore can’t pinpoint with precision what and where we need to target,” said Ngai, who led UC Berkeley’s Brain Initiative efforts before being tapped last year to direct the entire national initiative. “Detailed information about the types of cells that make up the brain and their properties will ultimately enable the development of new therapies for neurologic and neuropsychiatric diseases.”

Ngai is one of 13 corresponding authors of the flagship paper, which has more than 250 co-authors in all.

Bateup, Hockemeyer and Ngai collaborated on an earlier study to profile all the active genes in single dopamine-producing cells in the mouse’s midbrain, a region with structures similar to those in the human brain. This same profiling technique, which involves identifying all the specific messenger RNA molecules and their levels in each cell, was employed by other BICCN researchers to profile cells in the motor cortex. This type of analysis, using a technique called single-cell RNA sequencing, or scRNA-seq, is referred to as transcriptomics.


The scRNA-seq technique was one of nearly a dozen separate experimental methods used by the BICCN team to characterize the different cell types in three different mammals: mice, marmosets and humans. Four of these involved different ways of identifying gene expression levels and determining the genome’s chromatin architecture and DNA methylation status, which is called the epigenome. Other techniques included classical electrophysiological patch clamp recordings to distinguish cells by how they fire action potentials, categorizing cells by shape, determining their connectivity, and looking at where the cells are spatially located within the brain. Several of these used machine learning or artificial intelligence to distinguish cell types.

“This was the most comprehensive description of these cell types, and with high resolution and different methodologies,” Hockemeyer said. “The conclusion of the paper is that there’s remarkable overlap and consistency in determining cell types with these different methods.”

A team of statisticians combined data from all these experimental methods to determine how best to classify or cluster cells into different types and, presumably, different functions based on the observed differences in expression and epigenetic profiles among these cells. While there are many statistical algorithms for analyzing such data and identifying clusters, the challenge was to determine which clusters were truly different from one another — truly different cell types — said Sandrine Dudoit, a UC Berkeley professor and chair of the Department of Statistics. She and biostatistician Elizabeth Purdom, UC Berkeley associate professor of statistics, were key members of the statistical team and co-authors of the flagship paper.

“The idea is not to create yet another new clustering method, but to find ways of leveraging the strengths of different methods and combining methods and to assess the stability of the results, the reproducibility of the clusters you get,” Dudoit said. “That’s really a key message about all these studies that look for novel cell types or novel categories of cells: No matter what algorithm you try, you’ll get clusters, so it is key to really have confidence in your results.”
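Dudoit’s point about reproducibility can be illustrated with a simple pair-counting measure. The sketch below is a generic illustration, not the consortium’s actual pipeline: it computes the Rand index between two clusterings of the same cells, say one derived from gene expression and one from electrophysiology, where values near 1 mean the two methods largely agree on which cells belong together.

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Fraction of cell pairs on which two clusterings agree:
    both place the pair in the same cluster, or both separate it."""
    assert len(labels_a) == len(labels_b)
    agree = total = 0
    for i, j in combinations(range(len(labels_a)), 2):
        same_a = labels_a[i] == labels_a[j]
        same_b = labels_b[i] == labels_b[j]
        agree += (same_a == same_b)
        total += 1
    return agree / total

# Hypothetical labels for six cells, assigned by two different methods
by_rna = [0, 0, 0, 1, 1, 2]    # e.g. scRNA-seq clusters
by_ephys = [0, 0, 1, 1, 1, 2]  # e.g. electrophysiology clusters
print(rand_index(by_rna, by_rna))  # identical clusterings -> 1.0
print(rand_index(by_rna, by_ephys))
```

In practice, stability analyses of this kind repeat the comparison across many resampled datasets and algorithms, and only clusters that keep reappearing are promoted to candidate cell types.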

Bateup noted that the number of individual cell types identified in the new study depended on the technique used and ranged from dozens to 116. One finding, for example, was that humans have about twice as many different types of inhibitory neurons as excitatory neurons in this region of the brain, while mice have five times as many.

“Before, we had something like 10 or 20 different cell types that had been defined, but we had no idea if the cells we were defining by their patterns of gene expression were the same ones as those defined based on their electrophysiological properties, or the same as the neuron types defined by their morphology,” Bateup said.

“The big advance by the BICCN is that we combined many different ways of defining a cell type and integrated them to come up with a consensus taxonomy that’s not just based on gene expression or on physiology or morphology, but takes all of those properties into account,” Hockemeyer said. “So, now we can say this particular cell type expresses these genes, has this morphology, has these physiological properties, and is located in this particular region of the cortex. So, you have a much deeper, granular understanding of what that cell type is and its basic properties.”

Dudoit cautioned that future studies could show that the number of cell types identified in the motor cortex is an overestimate, but the current studies are a good start in assembling a cell atlas of the whole brain.

“Even among biologists, there are vastly different opinions as to how much resolution you should have for these systems, whether there is this very, very fine clustering structure or whether you really have higher level cell types that are more stable,” she said. “Nevertheless, these results show the power of collaboration and pulling together efforts across different groups. We’re starting with a biological question, but a biologist alone could not have solved that problem. To address a big challenging problem like that, you want a team of experts in a bunch of different disciplines that are able to communicate well and work well with each other.”

Other members of the UC Berkeley team included postdoctoral scientists Rebecca Chance and David Stafford, graduate student Daniel Kramer, research technician Shona Allen of the Department of Molecular and Cell Biology, doctoral student Hector Roux de Bézieux of the School of Public Health and postdoctoral fellow Koen Van den Berge of the Department of Statistics. Bateup is a member of the Helen Wills Neuroscience Institute, Hockemeyer is a member of the Innovative Genomics Institute, and both are investigators funded by the Chan Zuckerberg Biohub.


Charge your laptop in a minute or your EV in 10? Supercapacitors can help



Imagine if your dead laptop or phone could charge in a minute or if an electric car could be fully powered in 10 minutes.

While such charging speeds are not possible yet, new research by a team of CU Boulder scientists could lead to such advances.

In a paper published today in the Proceedings of the National Academy of Sciences, researchers in Ankur Gupta’s lab describe how tiny charged particles, called ions, move within a complex network of minuscule pores. The breakthrough could lead to the development of more efficient energy storage devices, such as supercapacitors, said Gupta, an assistant professor of chemical and biological engineering.

“Given the critical role of energy in the future of the planet, I felt inspired to apply my chemical engineering knowledge to advancing energy storage devices,” Gupta said. “It felt like the topic was somewhat underexplored and as such, the perfect opportunity.”

Gupta explained that several chemical engineering techniques are used to study flow in porous materials such as oil reservoirs and water filtration, but they have not been fully utilized in some energy storage systems.

The discovery is significant not only for storing energy in vehicles and electronic devices but also for power grids, where fluctuating energy demand requires efficient storage to avoid waste during periods of low demand and to ensure rapid supply during high demand.

Supercapacitors, energy storage devices that rely on ion accumulation in their pores, have rapid charging times and longer life spans compared to batteries.

“The primary appeal of supercapacitors lies in their speed,” Gupta said. “So how can we make their charging and release of energy faster? By the more efficient movement of ions.”

Their findings modify Kirchhoff’s law, which has governed current flow in electrical circuits since 1845 and is a staple in high school students’ science classes. Unlike electrons, ions move due to both electric fields and diffusion, and the researchers determined that their movements at pore intersections are different from what was described in Kirchhoff’s law.

Prior to the study, the literature described ion movement only in a single straight pore. With this research, ion movement in a complex network of thousands of interconnected pores can be simulated and predicted in a few minutes.

“That’s the leap of the work,” Gupta said. “We found the missing link.”
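For context, a classical Kirchhoff analysis balances currents at every junction of a network, and the same bookkeeping can be applied to ionic flux in a pore network. The sketch below is an illustrative classical baseline, not the modified intersection rule derived in the paper: it relaxes node potentials in a tiny hypothetical pore network until the net flux at each interior junction is zero.

```python
def solve_network(conductance, fixed, nodes, iters=2000):
    """Relax node potentials until Kirchhoff's current law
    (zero net flux) holds at every non-fixed junction.
    conductance: dict mapping (i, j) edges to edge conductances."""
    nbrs = {n: [] for n in nodes}
    for (i, j), g in conductance.items():
        nbrs[i].append((j, g))
        nbrs[j].append((i, g))
    v = {n: fixed.get(n, 0.0) for n in nodes}
    for _ in range(iters):
        for n in nodes:
            if n in fixed:
                continue
            num = sum(g * v[m] for m, g in nbrs[n])
            den = sum(g for _, g in nbrs[n])
            v[n] = num / den  # zeroes the net current into node n
    return v

# Hypothetical 4-node pore network: two boundary reservoirs (A held at
# 1.0, D at 0.0) joined through interior junctions B and C.
cond = {("A", "B"): 1.0, ("B", "C"): 1.0, ("C", "D"): 1.0, ("B", "D"): 0.5}
v = solve_network(cond, fixed={"A": 1.0, "D": 0.0}, nodes=list("ABCD"))
print(round(v["B"], 3), round(v["C"], 3))  # -> 0.5 0.25
```

The paper’s contribution, by contrast, is that ions at pore intersections do not follow this simple balance, because diffusion acts alongside the electric field.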




AI headphones let wearer listen to a single person in a crowd, by looking at them just once



Noise-canceling headphones have gotten very good at creating an auditory blank slate. But letting certain sounds from a wearer’s environment back through that erasure still challenges researchers. The latest edition of Apple’s AirPods Pro, for instance, automatically adjusts sound levels for wearers, sensing when they’re in conversation, but the user has little control over whom to listen to or when this happens.

A University of Washington team has developed an artificial intelligence system that lets a user wearing headphones look at a person speaking for three to five seconds to “enroll” them. The system, called “Target Speech Hearing,” then cancels all other sounds in the environment and plays just the enrolled speaker’s voice in real time even as the listener moves around in noisy places and no longer faces the speaker.

The team presented its findings May 14 in Honolulu at the ACM CHI Conference on Human Factors in Computing Systems. The code for the proof-of-concept device is available for others to build on. The system is not commercially available.

“We tend to think of AI now as web-based chatbots that answer questions,” said senior author Shyam Gollakota, a UW professor in the Paul G. Allen School of Computer Science & Engineering. “But in this project, we develop AI to modify the auditory perception of anyone wearing headphones, given their preferences. With our devices you can now hear a single speaker clearly even if you are in a noisy environment with lots of other people talking.”

To use the system, a person wearing off-the-shelf headphones fitted with microphones taps a button while directing their head at someone talking. The sound waves from that speaker’s voice then should reach the microphones on both sides of the headset simultaneously; there’s a 16-degree margin of error. The headphones send that signal to an on-board embedded computer, where the team’s machine learning software learns the desired speaker’s vocal patterns. The system latches onto that speaker’s voice and continues to play it back to the listener, even as the pair moves around. The system’s ability to focus on the enrolled voice improves as the speaker keeps talking, giving the system more training data.
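The enrollment step can be pictured as a time-difference-of-arrival check. The sketch below is a toy geometric illustration; the mic spacing, speed of sound, and acceptance rule are assumptions for illustration, not the published system’s parameters. A speaker directly ahead produces nearly identical arrival times at the two ear-mounted microphones, and a candidate is “enrolled” only if the implied bearing falls inside a 16-degree cone.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at room temperature
MIC_SPACING = 0.18      # m, assumed ear-to-ear microphone distance

def arrival_delay(bearing_deg):
    """Inter-microphone time difference for a far-field source
    at the given bearing (0 degrees = straight ahead)."""
    return MIC_SPACING * math.sin(math.radians(bearing_deg)) / SPEED_OF_SOUND

def can_enroll(bearing_deg, cone_deg=16.0):
    """Accept a speaker whose implied bearing lies inside the cone."""
    limit = arrival_delay(cone_deg / 2)
    return abs(arrival_delay(bearing_deg)) <= limit

print(can_enroll(0))   # looking right at the speaker -> True
print(can_enroll(30))  # speaker well off to the side -> False
```

In the real device, the signal accepted by a check like this is what seeds the machine learning model, which then tracks the speaker’s vocal patterns rather than their direction.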

The team tested its system on 21 subjects, who rated the clarity of the enrolled speaker’s voice nearly twice as high as the unfiltered audio on average.

This work builds on the team’s previous “semantic hearing” research, which allowed users to select specific sound classes — such as birds or voices — that they wanted to hear and canceled other sounds in the environment.

Currently the TSH system can enroll only one speaker at a time, and it’s only able to enroll a speaker when there is not another loud voice coming from the same direction as the target speaker’s voice. If a user isn’t happy with the sound quality, they can run another enrollment on the speaker to improve the clarity.

The team is working to expand the system to earbuds and hearing aids in the future.

Additional co-authors on the paper were Bandhav Veluri, Malek Itani and Tuochao Chen, UW doctoral students in the Allen School, and Takuya Yoshioka, director of research at AssemblyAI. This research was funded by a Moore Inventor Fellow award, a Thomas J. Cabel Endowed Professorship and a UW CoMotion Innovation Gap Fund.




Theory and experiment combine to shine a new light on proton spin



Nuclear physicists have long been working to reveal how the proton gets its spin. Now, a new method that combines experimental data with state-of-the-art calculations has revealed a more detailed picture of spin contributions from the very glue that holds protons together. It also paves the way toward imaging the proton’s 3D structure.

The work was led by Joseph Karpie, a postdoctoral associate in the Center for Theoretical and Computational Physics (Theory Center) at the U.S. Department of Energy’s Thomas Jefferson National Accelerator Facility.

He said that this decades-old mystery began with measurements of the sources of the proton’s spin in 1987. Physicists originally thought that the proton’s building blocks, its quarks, would be the main source of the proton’s spin. But that’s not what they found. It turned out that the proton’s quarks only provide about 30% of the proton’s total measured spin. The rest comes from two other sources that have so far proven more difficult to measure.

One is the mysterious but powerful strong force. The strong force is one of the four fundamental forces in the universe. It’s what “glues” quarks together to make up other subatomic particles, such as protons or neutrons. The carriers of this strong force are particles called gluons, which are thought to contribute to the proton’s spin. The last bit of spin is thought to come from the movements of the proton’s quarks and gluons.

“This paper is sort of a bringing together of two groups in the Theory Center who have been working toward trying to understand the same bit of physics, which is how do the gluons that are inside of it contribute to how much the proton is spinning around,” he said.

He said this study was inspired by a puzzling result that came from initial experimental measurements of the gluons’ spin. The measurements were made at the Relativistic Heavy Ion Collider, a DOE Office of Science user facility based at Brookhaven National Laboratory in New York. The data at first seemed to indicate that the gluons may be contributing to the proton’s spin. They showed a positive result.

But as the data analysis was improved, a further possibility appeared.

“When they improved their analysis, they started to get two sets of results that seemed quite different, one was positive and the other was negative,” Karpie explained.

While the earlier positive result indicated that the gluons’ spins are aligned with that of the proton, the improved analysis allowed for the possibility that the gluons’ spins have an overall negative contribution. In that case, more of the proton spin would come from the movement of the quarks and gluons, or from the spin of the quarks themselves.

This puzzling result was published by the Jefferson Lab Angular Momentum (JAM) collaboration.

Meanwhile, the HadStruc collaboration had been addressing the same measurements in a different way. They were using supercomputers to calculate the underlying theory that describes the interactions among quarks and gluons in the proton, Quantum Chromodynamics (QCD).

To make this intense calculation tractable on supercomputers, theorists simplify some aspects of the theory. This simplified version for computers is called lattice QCD.

Karpie led the work to bring together the data from both groups. He started with the combined data from experiments taken in facilities around the world. He then added the results from the lattice QCD calculation into his analysis.

“This is putting everything together that we know about quark and gluon spin and how gluons contribute to the spin of the proton in one dimension,” said David Richards, a Jefferson Lab senior staff scientist who worked on the study.

“When we did, we saw that the negative things didn’t go away, but they changed dramatically. That meant that there’s something funny going on with those,” Karpie said.

Karpie is lead author on the study that was recently published in Physical Review D. He said the main takeaway is that combining the data from both approaches provided a more informed result.

“We’re combining both of our datasets together and getting a better result out than either of us could get independently. It’s really showing that we learn a lot more by combining lattice QCD and experiment together in one problem analysis,” said Karpie. “This is the first step, and we hope to keep doing this with more and more observables as well as we make more lattice data.”
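The payoff of pooling datasets can be seen in the standard inverse-variance weighting used to merge independent determinations of the same quantity. The sketch below is a generic statistics illustration with made-up numbers, not the JAM or HadStruc results: the combined uncertainty always comes out smaller than either input’s.

```python
import math

def combine(measurements):
    """Inverse-variance weighted average of independent (value, sigma)
    measurements; returns the combined value and its uncertainty."""
    weights = [1.0 / s**2 for _, s in measurements]
    value = sum(w * x for w, (x, _) in zip(weights, measurements)) / sum(weights)
    sigma = 1.0 / math.sqrt(sum(weights))
    return value, sigma

# Hypothetical determinations of the same quantity, one "experimental"
# and one from "lattice" calculations (value, one-sigma uncertainty)
experiment = (0.20, 0.10)
lattice = (0.40, 0.20)
value, sigma = combine([experiment, lattice])
print(round(value, 3), round(sigma, 3))  # combined error beats both inputs
```

The actual analysis is far more involved, fitting many observables simultaneously, but the same principle applies: each dataset constrains what the other leaves uncertain.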

The next step is to further improve the datasets. As more powerful experiments provide more detailed information on the proton, these data begin painting a picture that goes beyond one dimension. And as theorists learn how to improve their calculations on ever-more powerful supercomputers, their solutions also become more precise and inclusive.

The goal is to eventually produce a three-dimensional understanding of the proton’s structure.

“So, we learn our tools do work on the simpler one-dimension scenario. By testing our methods now, we hopefully will know what we need to do when we want to move up to do 3D structure,” Richards said. “This work will contribute to this 3D image of what a proton should look like. So it’s all about building our way up to the heart of the problem by doing this easier stuff now.”


