
TOP SCIENCE

Study explains why the brain can robustly recognize images, even without color

Even though the human visual system has sophisticated machinery for processing color, the brain has no problem recognizing objects in black-and-white images. A new study from MIT offers a possible explanation for how the brain comes to be so adept at identifying both color and color-degraded images.

Using experimental data and computational modeling, the researchers found evidence suggesting the roots of this ability may lie in development. Early in life, when newborns receive severely limited color information, the brain is forced to learn to distinguish objects based on their luminance, or the intensity of the light they emit, rather than their color. Later in life, when the retina and cortex are better equipped to process colors, the brain incorporates color information as well, but also maintains its previously acquired ability to recognize images without critical reliance on color cues.

The findings are consistent with previous work showing that initially degraded visual and auditory input can actually be beneficial to the early development of perceptual systems.

“This general idea, that there is something important about the initial limitations that we have in our perceptual system, transcends color vision and visual acuity. Some of the work that our lab has done in the context of audition also suggests that there’s something important about placing limits on the richness of information that the neonatal system is initially exposed to,” says Pawan Sinha, a professor of brain and cognitive sciences at MIT and the senior author of the study.

The findings also help to explain why children who are born blind but have their vision restored later in life, through the removal of congenital cataracts, have much more difficulty identifying objects presented in black and white. Those children, who receive rich color input as soon as their sight is restored, may develop an overreliance on color that makes them much less resilient to changes or removal of color information.

MIT postdocs Marin Vogelsang and Lukas Vogelsang, and Project Prakash research scientist Priti Gupta, are the lead authors of the study, which appears today in Science. Sidney Diamond, a retired neurologist who is now an MIT research affiliate, and additional members of the Project Prakash team are also authors of the paper.

Seeing in black and white

The researchers’ exploration of how early experience with color affects later object recognition grew out of a simple observation from a study of children who had their sight restored after being born with congenital cataracts. In 2005, Sinha launched Project Prakash (the Sanskrit word for “light”), an effort in India to identify and treat children with reversible forms of vision loss.

Many of those children suffer from blindness due to dense bilateral cataracts. This condition often goes untreated in India, which has the world’s largest population of blind children, estimated between 200,000 and 700,000.

Children who receive treatment through Project Prakash may also participate in studies of their visual development, many of which have helped scientists learn more about how the brain’s organization changes following restoration of sight, how the brain estimates brightness, and other phenomena related to vision.

In this study, Sinha and his colleagues gave children a simple test of object recognition, presenting both color and black-and-white images. For children born with normal sight, converting color images to grayscale had no effect at all on their ability to recognize the depicted object. However, when children who underwent cataract removal were presented with black-and-white images, their performance dropped significantly.

This led the researchers to hypothesize that the nature of visual inputs children are exposed to early in life may play a crucial role in shaping resilience to color changes and the ability to identify objects presented in black-and-white images. In normally sighted newborns, retinal cone cells are not well-developed at birth, resulting in babies having poor visual acuity and poor color vision. Over the first years of life, their vision improves markedly as the cone system develops.

Because the immature visual system receives significantly reduced color information, the researchers hypothesized that during this time, the baby brain is forced to gain proficiency at recognizing images with reduced color cues. Additionally, they proposed, children who are born with cataracts and have them removed later may learn to rely too heavily on color cues when identifying objects because, as the paper experimentally demonstrates, their retinas are already mature at the time of surgery, so they begin their post-operative visual journeys with good color vision.

To rigorously test that hypothesis, the researchers used a standard convolutional neural network, AlexNet, as a computational model of vision. They trained the network to recognize objects, giving it different types of input during training. As part of one training regimen, they initially showed the model grayscale images only, then introduced color images later on. This roughly mimics the developmental progression of chromatic enrichment as babies’ eyesight matures over the first years of life.

Another training regimen comprised only color images. This approximates the experience of the Project Prakash children, because they can process full color information as soon as their cataracts are removed.
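The two training regimens described above can be sketched as a simple data pipeline. The sketch below is illustrative only: the switch epoch, the Rec. 601 luminance weights, and the function names are assumptions for the example, not details taken from the paper.

```python
import numpy as np

def to_grayscale(images):
    """Collapse RGB images (N, H, W, 3) to luminance using the common
    Rec. 601 weights (an assumption; the paper does not specify the
    conversion), replicated across three channels so the network's
    input shape is unchanged."""
    weights = np.array([0.299, 0.587, 0.114])
    luminance = images @ weights                      # shape (N, H, W)
    return np.repeat(luminance[..., None], 3, axis=-1)

def curriculum(images, epoch, switch_epoch=10):
    """Developmentally inspired regimen: grayscale-only inputs before
    `switch_epoch`, full-color inputs afterwards. The Prakash-proxy
    regimen would simply return `images` at every epoch."""
    return to_grayscale(images) if epoch < switch_epoch else images

# Smoke test on a tiny random batch.
batch = np.random.rand(1, 2, 2, 3)
early = curriculum(batch, epoch=0)    # grayscale phase
late = curriculum(batch, epoch=20)    # color phase
assert np.allclose(early[..., 0], early[..., 1])  # channels identical
assert late is batch                              # color input untouched
```

The key design point mirrored here is that only the *order* of the data changes, not the model: the same network sees luminance-only inputs first and color later, which is what the study varies across its regimens.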

The researchers found that the developmentally inspired model could accurately recognize objects in either type of image and was also resilient to other color manipulations. However, the Prakash-proxy model trained only on color images did not show good generalization to grayscale or hue-manipulated images.

“What happens is that this Prakash-like model is very good with colored images, but it’s very poor with anything else. When not starting out with initially color-degraded training, these models just don’t generalize, perhaps because of their over-reliance on specific color cues,” Lukas Vogelsang says.

The robust generalization of the developmentally inspired model is not merely a consequence of it having been trained on both color and grayscale images; the temporal ordering of these images makes a big difference. Another object-recognition model that was trained on color images first, followed by grayscale images, did not do as well at identifying black-and-white objects.

“It’s not just the steps of the developmental choreography that are important, but also the order in which they are played out,” Sinha says.

The advantages of limited sensory input

By analyzing the internal organization of the models, the researchers found that those that begin with grayscale inputs learn to rely on luminance to identify objects. Once they begin receiving color input, they don’t change their approach very much, since they’ve already learned a strategy that works well. Models that began with color images did shift their approach once grayscale images were introduced, but could not shift enough to make them as accurate as the models that were given grayscale images first.

A similar phenomenon may occur in the human brain, which has more plasticity early in life, and can easily learn to identify objects based on their luminance alone. Early in life, the paucity of color information may in fact be beneficial to the developing brain, as it learns to identify objects based on sparse information.

“As a newborn, the normally sighted child is deprived, in a certain sense, of color vision. And that turns out to be an advantage,” Diamond says.

Researchers in Sinha’s lab have observed that limitations in early sensory input can also benefit other aspects of vision, as well as the auditory system. In 2022, they used computational models to show that early exposure to only low-frequency sounds, similar to those that babies hear in the womb, improves performance on auditory tasks that require analyzing sounds over a longer period of time, such as recognizing emotions. They now plan to explore whether this phenomenon extends to other aspects of development, such as language acquisition.

The research was funded by the National Eye Institute of NIH and the Intelligence Advanced Research Projects Activity.





Early dark energy could resolve cosmology’s two biggest puzzles


A new study by MIT physicists proposes that a mysterious force known as early dark energy could solve two of the biggest puzzles in cosmology and fill in some major gaps in our understanding of how the early universe evolved.

One puzzle in question is the “Hubble tension,” which refers to a mismatch in measurements of how fast the universe is expanding. The other involves observations of numerous early, bright galaxies that existed at a time when the early universe should have been much less populated.

Now, the MIT team has found that both puzzles could be resolved if the early universe had one extra, fleeting ingredient: early dark energy. Dark energy is an unknown form of energy that physicists suspect is driving the expansion of the universe today. Early dark energy is a similar, hypothetical phenomenon that may have made only a brief appearance, influencing the expansion of the universe in its first moments before disappearing entirely.

Some physicists have suspected that early dark energy could be the key to solving the Hubble tension, as the mysterious force could accelerate the early expansion of the universe by an amount that would resolve the measurement mismatch.

The MIT researchers have now found that early dark energy could also explain the baffling number of bright galaxies that astronomers have observed in the early universe. In their new study, reported in the Monthly Notices of the Royal Astronomical Society, the team modeled the formation of galaxies in the universe’s first few hundred million years. When they incorporated a dark energy component only in that earliest sliver of time, they found the number of galaxies that arose from the primordial environment bloomed to fit astronomers’ observations.

“You have these two looming open-ended puzzles,” says study co-author Rohan Naidu, a postdoc in MIT’s Kavli Institute for Astrophysics and Space Research. “We find that in fact, early dark energy is a very elegant and sparse solution to two of the most pressing problems in cosmology.”

The study’s co-authors include lead author and Kavli postdoc Xuejian (Jacob) Shen, and MIT professor of physics Mark Vogelsberger, along with Michael Boylan-Kolchin at the University of Texas at Austin, and Sandro Tacchella at the University of Cambridge.

Big city lights

Based on standard cosmological and galaxy formation models, the universe should have taken its time spinning up the first galaxies. It would have taken billions of years for primordial gas to coalesce into galaxies as large and bright as the Milky Way.

But in 2023, NASA’s James Webb Space Telescope (JWST) made a startling observation. With an ability to peer farther back in time than any observatory to date, the telescope uncovered a surprising number of bright galaxies as large as the modern Milky Way within the first 500 million years, when the universe was just 3 percent of its current age.

“The bright galaxies that JWST saw would be like seeing a clustering of lights around big cities, whereas theory predicts something like the light around more rural settings like Yellowstone National Park,” Shen says. “And we don’t expect that clustering of light so early on.”

For physicists, the observations imply that there is either something fundamentally wrong with the physics underlying the models or a missing ingredient in the early universe that scientists have not accounted for. The MIT team explored the possibility of the latter, and whether the missing ingredient might be early dark energy.

Physicists have proposed that early dark energy is a sort of antigravitational force that is turned on only at very early times. This force would counteract gravity’s inward pull and accelerate the early expansion of the universe, in a way that would resolve the mismatch in measurements. Early dark energy, therefore, is considered the most likely solution to the Hubble tension.

Galaxy skeleton

The MIT team explored whether early dark energy could also be the key to explaining the unexpected population of large, bright galaxies detected by JWST. In their new study, the physicists considered how early dark energy might affect the early structure of the universe that gave rise to the first galaxies. They focused on the formation of dark matter halos — regions of space where gravity happens to be stronger, and where matter begins to accumulate.

“We believe that dark matter halos are the invisible skeleton of the universe,” Shen explains. “Dark matter structures form first, and then galaxies form within these structures. So, we expect the number of bright galaxies should be proportional to the number of big dark matter halos.”

The team developed an empirical framework for early galaxy formation, which predicts the number, luminosity, and size of galaxies that should form in the early universe, given some measures of “cosmological parameters.” Cosmological parameters are the basic ingredients, or mathematical terms, that describe the evolution of the universe.

Physicists have determined that there are at least six main cosmological parameters, one of which is the Hubble constant — a term that describes the universe’s rate of expansion. Other parameters describe density fluctuations in the primordial soup, immediately after the Big Bang, from which dark matter halos eventually form.
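The Hubble constant’s role as an expansion-rate parameter can be made concrete with a back-of-the-envelope calculation. The sketch below converts the two discrepant measurements behind the Hubble tension (roughly 67.4 km/s/Mpc inferred from the cosmic microwave background and about 73 km/s/Mpc from local distance-ladder measurements; these widely cited figures are assumptions here, not values from this article) into the corresponding Hubble times, a rough age scale for the universe.

```python
# 1/H0, the Hubble time, for the two discrepant H0 measurements.
KM_PER_MPC = 3.0857e19      # kilometres in one megaparsec
SECONDS_PER_GYR = 3.156e16  # seconds in a billion years

def hubble_time_gyr(h0_km_s_mpc):
    """Convert H0 in km/s/Mpc to the Hubble time 1/H0 in gigayears."""
    h0_per_second = h0_km_s_mpc / KM_PER_MPC   # expansion rate in 1/s
    return 1.0 / h0_per_second / SECONDS_PER_GYR

print(f"CMB-inferred H0 = 67.4  -> Hubble time {hubble_time_gyr(67.4):.1f} Gyr")
print(f"Local-measured H0 = 73.0 -> Hubble time {hubble_time_gyr(73.0):.1f} Gyr")
```

The roughly one-gigayear gap between the two results is one face of the Hubble tension: a larger H0 implies a faster expansion and hence a younger apparent universe.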

The MIT team reasoned that if early dark energy affects the universe’s early expansion rate, in a way that resolves the Hubble tension, then it could affect the balance of the other cosmological parameters, in a way that might increase the number of bright galaxies that appear at early times. To test their theory, they incorporated a model of early dark energy (the same one that happens to resolve the Hubble tension) into an empirical galaxy formation framework to see how the earliest dark matter structures evolve and give rise to the first galaxies.

“What we show is, the skeletal structure of the early universe is altered in a subtle way where the amplitude of fluctuations goes up, and you get bigger halos, and brighter galaxies that are in place at earlier times, more so than in our more vanilla models,” Naidu says. “It means things were more abundant, and more clustered in the early universe.”

“A priori, I would not have expected the abundance of JWST’s early bright galaxies to have anything to do with early dark energy, but their observation that EDE pushes cosmological parameters in a direction that boosts the early-galaxy abundance is interesting,” says Marc Kamionkowski, professor of theoretical physics at Johns Hopkins University, who was not involved with the study. “I think more work will need to be done to establish a link between early galaxies and EDE, but regardless of how things turn out, it’s a clever — and hopefully ultimately fruitful — thing to try.”

“We demonstrated the potential of early dark energy as a unified solution to the two major issues faced by cosmology. This might be evidence for its existence if the observational findings of JWST get further consolidated,” Vogelsberger concludes. “In the future, we can incorporate this into large cosmological simulations to see what detailed predictions we get.”

This research was supported, in part, by NASA and the National Science Foundation.





Plant-derived secondary organic aerosols can act as mediators of plant-plant interactions



A new study published in Science reveals that plant-derived secondary organic aerosols (SOAs) can act as mediators of plant-plant interactions. This research was conducted through the cooperation of chemical ecologists, plant ecophysiologists and atmospheric physicists at the University of Eastern Finland.

It is well known that plants release volatile organic compounds (VOCs) into the atmosphere when damaged by herbivores. These VOCs play a crucial role in plant-plant interactions, whereby undamaged plants may detect warning signals from their damaged neighbours and prepare their defences. “Reactive plant VOCs undergo oxidative chemical reactions, resulting in the formation of secondary organic aerosols (SOAs). We wondered whether the ecological functions mediated by VOCs persist after they are oxidized to form SOAs,” said Dr. Hao Yu, formerly a PhD student at UEF, but now at the University of Bern.

The study showed that Scots pine seedlings, when damaged by large pine weevils, release VOCs that activate defences in nearby plants of the same species. Interestingly, the biological activity persisted after VOCs were oxidized to form SOAs. The results indicated that the elemental composition and quantity of SOAs likely determines their biological functions.

“A key novelty of the study is the finding that plants adopt subtly different defence strategies when receiving signals as VOCs or as SOAs, yet they exhibit similar degrees of resistance to herbivore feeding,” said Professor James Blande, head of the Environmental Ecology Research Group. This observation opens up the possibility that plants have sophisticated sensing systems that enable them to tailor their defences to information derived from different types of chemical cue.

“Considering the formation rate of SOAs from their precursor VOCs, their longer lifetime compared to VOCs, and the atmospheric air mass transport, we expect that the ecologically effective distance for interactions mediated by SOAs is longer than that for plant interactions mediated by VOCs,” said Professor Annele Virtanen, head of the Aerosol Physics Research Group. This could be interpreted as plants being able to detect cues representing close versus distant threats from herbivores.

The study is expected to open up a whole new complex research area to environmental ecologists and their collaborators, which could lead to new insights on the chemical cues structuring interactions between plants.





Folded or cut, this lithium-sulfur battery keeps going



Most rechargeable batteries that power portable devices, such as toys, handheld vacuums and e-bikes, use lithium-ion technology. But these batteries can have short lifetimes and may catch fire when damaged. To address stability and safety issues, researchers reporting in ACS Energy Letters have designed a lithium-sulfur (Li-S) battery that features an improved iron sulfide cathode. One prototype remains highly stable over 300 charge-discharge cycles, and another provides power even after being folded or cut.

Sulfur has been suggested as a material for lithium-ion batteries because of its low cost and potential to hold more energy than lithium-metal oxides and other materials used in traditional ion-based versions. To make Li-S batteries stable at high temperatures, researchers have previously proposed using a carbonate-based electrolyte to separate the two electrodes (an iron sulfide cathode and a lithium metal-containing anode). However, as the sulfide in the cathode dissolves into the electrolyte, it forms an impenetrable precipitate, causing the cell to quickly lose capacity. Liping Wang and colleagues wondered if they could add a layer between the cathode and electrolyte to reduce this corrosion without reducing functionality and rechargeability.

The team coated iron sulfide cathodes in different polymers and found in initial electrochemical performance tests that polyacrylic acid (PAA) performed best, retaining the electrode’s discharge capacity after 300 charge-discharge cycles. Next, the researchers incorporated a PAA-coated iron sulfide cathode into a prototype battery design, which also included a carbonate-based electrolyte, a lithium metal foil as an ion source, and a graphite-based anode. They produced and then tested both pouch cell and coin cell battery prototypes.

After more than 100 charge-discharge cycles, Wang and colleagues observed no substantial capacity decay in the pouch cell. Additional experiments showed that the pouch cell still worked after being folded and cut in half. The coin cell retained 72% of its capacity after 300 charge-discharge cycles. They next applied the polymer coating to cathodes made from other metals, creating lithium-molybdenum and lithium-vanadium batteries. These cells also had stable capacity over 300 charge-discharge cycles. Overall, the results indicate that coated cathodes could produce not only safer Li-S batteries with long lifespans, but also efficient batteries with other metal sulfides, according to Wang’s team.
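The reported 72 percent capacity retention over 300 cycles implies a very small average per-cycle fade. As a purely illustrative calculation (assuming constant exponential fade per cycle, which real cells only approximate):

```python
# If a constant fraction r of capacity survives each cycle, then after
# n cycles r**n of the capacity remains. Solving r**300 = 0.72 gives
# the average per-cycle retention for the coin cell described above.
def per_cycle_retention(retained_fraction, cycles):
    """Average per-cycle capacity retention under exponential fade."""
    return retained_fraction ** (1.0 / cycles)

r = per_cycle_retention(0.72, 300)
loss_percent = (1.0 - r) * 100
print(f"average retention per cycle: {r:.5f} (~{loss_percent:.2f}% loss)")
```

That works out to roughly a tenth of a percent of capacity lost per cycle, which conveys how gradual the degradation is even over hundreds of cycles.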

The authors acknowledge funding from the National Natural Science Foundation of China; the Natural Science Foundation of Sichuan, China; and the Beijing National Laboratory for Condensed Matter Physics.


