
TOP SCIENCE

New research reveals the genetic basis for daytime napping — ScienceDaily


How often a person takes daytime naps, if at all, is partly regulated by their genes, according to new research led by investigators at Massachusetts General Hospital (MGH) and published in Nature Communications. In this study, the largest of its kind ever conducted, the MGH team collaborated with colleagues at the University of Murcia in Spain and several other institutions to identify dozens of gene regions that govern the tendency to take naps during the day. They also uncovered preliminary evidence linking napping habits to cardiometabolic health.

“Napping is somewhat controversial,” says Hassan Saeed Dashti, PhD, RD, of the MGH Center for Genomic Medicine, co-lead author of the report with Iyas Daghlas, a medical student at Harvard Medical School (HMS). Dashti notes that some countries where daytime naps have long been part of the culture (such as Spain) now discourage the habit. Meanwhile, some companies in the United States now promote napping as a way to boost productivity. “It was important to try to disentangle the biological pathways that contribute to why we nap,” says Dashti.Previously, co-senior author Richa Saxena, PhD, principal investigator at the Saxena Lab at MGH, and her colleagues used massive databases of genetic and lifestyle information to study other aspects of sleep. Notably, the team has identified genes associated with sleep duration, insomnia, and the tendency to be an early riser or “night owl.” To gain a better understanding of the genetics of napping, Saxena’s team and co-senior author Marta Garaulet, PhD, of the department of Physiology at the University of Murcia, performed a genome-wide association study (GWAS), which involves rapid scanning of complete sets of DNA, or genomes, of a large number of people. The goal of a GWAS is to identify genetic variations that are associated with a specific disease or, in this case, habit.For this study, the MGH researchers and their colleagues used data from the UK Biobank, which includes genetic information from 452,633 people. All participants were asked whether they nap during the day “never/rarely,” “sometimes” or “usually.” The GWAS identified 123 regions in the human genome that are associated with daytime napping. A subset of participants wore activity monitors called accelerometers, which provide data about daytime sedentary behavior, which can be an indicator of napping. This objective data indicated that the self-reports about napping were accurate. 
“That gave an extra layer of confidence that what we found is real and not an artifact,” says Dashti.
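The core GWAS idea — testing each variant, one at a time, for a statistical association with the trait — can be sketched with a toy calculation. All counts and the variant itself are hypothetical, not from the study; a real GWAS repeats a test like this at millions of variants and corrects for the enormous number of comparisons.

```python
def chi_square(table):
    """Pearson chi-square statistic for a contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical allele counts (variant carriers vs. non-carriers) among
# people who "usually" nap and people who "never/rarely" nap. A large
# statistic means the variant's frequency differs between the groups.
nappers     = [620, 380]
non_nappers = [540, 460]

stat = chi_square([nappers, non_nappers])
print(f"chi-square statistic: {stat:.2f}")
```

In practice an association is only declared genome-wide significant when its p-value clears a very strict threshold (conventionally 5 × 10⁻⁸), precisely because so many variants are tested at once.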

Several other features of the study bolster its results. For example, the researchers independently replicated their findings in an analysis of the genomes of 541,333 people collected by 23andMe, the consumer genetic-testing company. Also, a significant number of the genes near or at regions identified by the GWAS are already known to play a role in sleep. One example is KSR2, a gene that the MGH team and collaborators had previously found plays a role in sleep regulation.

Digging deeper into the data, the team identified at least three potential mechanisms that promote napping:

  • Sleep propensity: Some people need more shut-eye than others.
  • Disrupted sleep: A daytime nap can help make up for poor quality slumber the night before.
  • Early morning awakening: People who rise early may “catch up” on sleep with a nap.

“This tells us that daytime napping is biologically driven and not just an environmental or behavioral choice,” says Dashti. Some of these subtypes were linked to cardiometabolic health concerns, such as large waist circumference and elevated blood pressure, though more research on those associations is needed. “Future work may help to develop personalized recommendations for siestas,” says Garaulet.

Furthermore, several gene variants linked to napping were already associated with signaling by a neuropeptide called orexin, which plays a role in wakefulness. “This pathway is known to be involved in rare sleep disorders like narcolepsy, but our findings show that smaller perturbations in the pathway can explain why some people nap more than others,” says Daghlas.

Saxena is the Phyllis and Jerome Lyle Rappaport MGH Research Scholar at the Center for Genomic Medicine and an associate professor of Anesthesia at HMS.

The work was supported by the National Institute of Diabetes and Digestive and Kidney Diseases, the National Heart, Lung, and Blood Institute, MGH Research Scholar Fund, Spanish Government of Investigation, Development and Innovation, the Autonomous Community of the Region of Murcia through the Seneca Foundation, Academy of Finland, Instrumentarium Science Foundation, Yrjö Jahnsson Foundation, and Medical Research Council.


AI systems are already skilled at deceiving and manipulating humans



Many artificial intelligence (AI) systems have already learned how to deceive humans, even systems that have been trained to be helpful and honest. In a review article published May 10 in the journal Patterns, researchers describe the risks of deception by AI systems and call for governments to develop strong regulations to address this issue as soon as possible.

“AI developers do not have a confident understanding of what causes undesirable AI behaviors like deception,” says first author Peter S. Park, an AI existential safety postdoctoral fellow at MIT. “But generally speaking, we think AI deception arises because a deception-based strategy turned out to be the best way to perform well at the given AI’s training task. Deception helps them achieve their goals.”

Park and colleagues analyzed literature focusing on ways in which AI systems spread false information — through learned deception, in which they systematically learn to manipulate others.

The most striking example of AI deception the researchers uncovered in their analysis was Meta’s CICERO, an AI system designed to play the game Diplomacy, which is a world-conquest game that involves building alliances. Even though Meta claims it trained CICERO to be “largely honest and helpful” and to “never intentionally backstab” its human allies while playing the game, the data the company published along with its Science paper revealed that CICERO didn’t play fair.

“We found that Meta’s AI had learned to be a master of deception,” says Park. “While Meta succeeded in training its AI to win in the game of Diplomacy — CICERO placed in the top 10% of human players who had played more than one game — Meta failed to train its AI to win honestly.”

Other AI systems demonstrated the ability to bluff in a game of Texas hold ’em poker against professional human players, to fake attacks during the strategy game Starcraft II in order to defeat opponents, and to misrepresent their preferences in order to gain the upper hand in economic negotiations.

While it may seem harmless if AI systems cheat at games, such cheating can lead to “breakthroughs in deceptive AI capabilities” that can spiral into more advanced forms of AI deception in the future, Park added.

Some AI systems have even learned to cheat tests designed to evaluate their safety, the researchers found. In one study, AI organisms in a digital simulator “played dead” in order to trick a test built to eliminate AI systems that rapidly replicate.

“By systematically cheating the safety tests imposed on it by human developers and regulators, a deceptive AI can lead us humans into a false sense of security,” says Park.

The major near-term risks of deceptive AI include making it easier for hostile actors to commit fraud and tamper with elections, warns Park. Eventually, if these systems can refine this unsettling skill set, humans could lose control of them, he says.

“We as a society need as much time as we can get to prepare for the more advanced deception of future AI products and open-source models,” says Park. “As the deceptive capabilities of AI systems become more advanced, the dangers they pose to society will become increasingly serious.”

While Park and his colleagues do not think society has the right measures in place yet to address AI deception, they are encouraged that policymakers have begun taking the issue seriously through measures such as the EU AI Act and President Biden’s AI Executive Order. But it remains to be seen, Park says, whether policies designed to mitigate AI deception can be strictly enforced, given that AI developers do not yet have the techniques to keep these systems in check.

“If banning AI deception is politically infeasible at the current moment, we recommend that deceptive AI systems be classified as high risk,” says Park.

This work was supported by the MIT Department of Physics and the Beneficial AI Foundation.




ONe nova stellar explosions may be the source of our phosphorus



Astronomers have proposed a new theory to explain the origin of phosphorus, one of the elements important for life on Earth. The theory suggests a type of stellar explosion known as ONe novae as a major source of phosphorus.

After the Big Bang, almost all of the matter in the Universe consisted of hydrogen. Other elements were formed later, by nuclear reactions inside stars or when stars exploded in events known as novae or supernovae. But there are a variety of stars and a variety of ways they can explode. Astronomers are still trying to figure out which processes were important in creating the abundances of elements we see in the Universe.

In this study, Kenji Bekki, at The University of Western Australia, and Takuji Tsujimoto, at the National Astronomical Observatory of Japan, proposed a new model based on oxygen-neon novae, denoted as “ONe novae,” to explain the abundance of phosphorus. An ONe nova occurs when matter builds up on the surface of an oxygen-neon-magnesium-rich white dwarf star and is heated to the point of igniting explosive runaway nuclear fusion.

The model predicts that a large amount of phosphorus will be released in an ONe nova and that the number of novae will depend on the chemical composition, specifically the iron content, of the stars. The researchers estimate that the rate of ONe novae peaked around 8 billion years ago, meaning that phosphorus would have been readily available when the Solar System started to form around 4.6 billion years ago.

The model also predicts that ONe novae will produce a chlorine enhancement similar to the phosphorus enhancement. There is not yet enough observational data on chlorine to confirm this, so the prediction provides a testable hypothesis for checking the validity of the ONe nova model. Future observations of stars in the outer part of the Milky Way Galaxy will provide the data needed to see if the predicted iron dependency and chlorine enhancement match reality, or if a rethink is needed.




How the brain is flexible enough for a complex world (without being thrown into chaos)



Every day our brains strive to optimize a trade-off: With lots of things happening around us even as we also harbor many internal drives and memories, somehow our thoughts must be flexible yet focused enough to guide everything we have to do. In a new paper in Neuron, a team of neuroscientists describes how the brain achieves the cognitive capacity to incorporate all the information that’s relevant without becoming overwhelmed by what’s not.

The authors argue that the flexibility arises from a key property observed in many neurons: “mixed selectivity.” While many neuroscientists used to think each cell had just one dedicated function, more recent evidence has shown that many neurons can instead participate in a variety of computational ensembles, each working in parallel. In other words, when a rabbit considers nibbling on some lettuce in a garden, a single neuron might be involved in not only assessing how hungry it feels but also whether it can hear a hawk overhead or smell a coyote in the trees and how far away the lettuce is.

The brain does not multitask, said paper co-author Earl K. Miller, Picower Professor in The Picower Institute for Learning and Memory at MIT and a pioneer of the mixed selectivity idea, but many cells do have the capacity to be roped into multiple computational efforts (essentially “thoughts”). In the new paper the authors describe specific mechanisms the brain employs to recruit neurons into different computations and to ensure that those neurons represent the right number of dimensions of a complex task.

“These neurons wear multiple hats,” Miller said. “With mixed selectivity you can have a representational space that’s as complex as it needs to be and no more complex. That’s what flexible cognition is all about.”

Co-author Kay Tye, Professor at The Salk Institute and the University of California at San Diego, said mixed selectivity among neurons particularly in the medial prefrontal cortex is key to enabling many mental abilities.

“The mPFC is like a hum of whispers that represents so much information through highly flexible and dynamic ensembles,” Tye said. “Mixed selectivity is the property that endows us with our flexibility, cognitive capacity, and ability to be creative. It is the secret to maximizing computational power which is essentially the underpinnings of intelligence.”

Origins of an idea

The idea of mixed selectivity germinated in 2000 when Miller and colleague John Duncan defended a surprising result from a study of cognition in Miller’s lab. As animals sorted images into categories, about 30 percent of the neurons in the prefrontal cortex of the brain seemed to be involved. Skeptics who believed that every neuron had a dedicated function scoffed that the brain would devote so many cells to just one task. Miller and Duncan’s answer was that perhaps cells had the flexibility to be involved in many computations. The ability to serve on one cerebral task force, as it were, did not preclude them from being able to serve many others.

But what benefit does mixed selectivity convey? In 2013 Miller teamed up with two co-authors of the new paper, Mattia Rigotti of IBM Research and Stefano Fusi of Columbia University, to show how mixed selectivity endows the brain with powerful computational flexibility. Essentially, an ensemble of neurons with mixed selectivity can accommodate many more dimensions of information about a task than a population of neurons with invariant functions.
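That dimensionality gain can be checked with a toy calculation (a sketch under simplified assumptions, not the 2013 paper's analysis). With two binary task variables there are four task conditions, and a linear readout can separate every possible grouping of those conditions only when the population's response matrix has full rank across them. Neurons with fixed or purely additive tuning leave the matrix rank-deficient; a conjunction-tuned, nonlinearly mixed neuron supplies the missing dimension:

```python
def matrix_rank(rows, tol=1e-9):
    """Matrix rank via Gaussian elimination (no external libraries)."""
    m = [list(r) for r in rows]
    rank = 0
    for col in range(len(m[0])):
        pivot = next((r for r in range(rank, len(m)) if abs(m[r][col]) > tol), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for r in range(len(m)):
            if r != rank and abs(m[r][col]) > tol:
                f = m[r][col] / m[rank][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

# Four task conditions: (lettuce, hunger) in {0,1} x {0,1}.
conditions = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]

# Rows = conditions, columns = neurons: a constant-baseline cell plus
# one cell tuned to each single variable (pure/linear selectivity).
linear_population = [[1.0, x, y] for x, y in conditions]

# Append a nonlinear mixed-selectivity neuron tuned to the conjunction.
mixed_population = [row + [row[1] * row[2]] for row in linear_population]

print(matrix_rank(linear_population))  # 3: an XOR-like grouping is unreachable
print(matrix_rank(mixed_population))   # 4: every grouping is linearly separable
```

The rank-4 matrix means a downstream linear readout can pull out any combination of the task variables, which is the computational flexibility the 2013 work attributed to mixed selectivity.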

“Since our original work, we’ve made progress understanding the theory of mixed selectivity through the lens of classical machine learning ideas,” Rigotti said. “On the other hand, questions dear to experimentalists about the mechanisms implementing it at a cellular level had been comparatively under-explored. This collaboration and this new paper set out to fill that gap.”

In the new paper the authors imagine a mouse who is considering whether to eat a berry. It might smell delicious (that’s one dimension). It might be poisonous (that’s another). Yet another dimension or two of the problem could come in the form of a social cue. If the mouse smells the berry scent on a fellow mouse’s breath, then the berry is probably OK to eat (depending on the apparent health of the fellow mouse). A neural ensemble with mixed selectivity would be able to integrate all that.

Recruiting neurons

While mixed selectivity has the backing of copious evidence — it has been observed across the cortex and in other brain areas such as the hippocampus and amygdala — there are still open questions. For instance, how are neurons recruited to tasks and how do neurons that are so “open-minded” remain tuned only to what really matters to the mission?

In the new study, the researchers, who also include Marcus Benna of UC San Diego and Felix Taschbach of The Salk Institute, define the forms of mixed selectivity that researchers have observed, and argue that when oscillations (also known as “brain waves”) and neuromodulators (chemicals such as serotonin or dopamine that influence neural function) recruit neurons into computational ensembles, they also help them “gate” what’s important for that purpose.

To be sure, some neurons are dedicated to a specific input, but the authors note they are an exception rather than the rule. The authors say these cells have “pure selectivity.” They only care if the rabbit sees lettuce. Some neurons exhibit “linear mixed selectivity,” which means their response predictably depends on multiple inputs adding up (the rabbit sees lettuce and feels hungry). The neurons that add the most dimensional flexibility are the “nonlinear mixed selectivity” ones that can account for multiple independent variables without necessarily summing them. Instead they might weigh a whole set of independent conditions (e.g. there’s lettuce, I’m hungry, I hear no hawks, I smell no coyotes, but the lettuce is far and I see a pretty sturdy fence).
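The three response types can be written as toy firing-rate functions. This is an illustrative sketch with made-up weights, not models from the paper, using the rabbit's "sees lettuce" and "feels hungry" signals as hypothetical 0-to-1 inputs:

```python
def pure_selectivity(lettuce, hunger):
    """Responds to a single input and ignores everything else."""
    return 2.0 * lettuce

def linear_mixed_selectivity(lettuce, hunger):
    """Response is a weighted sum: the inputs simply add up."""
    return 1.5 * lettuce + 0.8 * hunger

def nonlinear_mixed_selectivity(lettuce, hunger):
    """Responds to the conjunction of its inputs: strongly active only
    when both are present together, which no weighted sum reproduces."""
    return 3.0 * lettuce * hunger

# Compare the three cells across all four input combinations.
for lettuce in (0.0, 1.0):
    for hunger in (0.0, 1.0):
        print(f"lettuce={lettuce} hunger={hunger}  "
              f"pure={pure_selectivity(lettuce, hunger):.1f}  "
              f"linear={linear_mixed_selectivity(lettuce, hunger):.1f}  "
              f"nonlinear={nonlinear_mixed_selectivity(lettuce, hunger):.1f}")
```

The nonlinear cell is the one that adds dimensional flexibility: its response to lettuce-plus-hunger cannot be predicted from its responses to either input alone.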

So what brings neurons into the fold to focus on the salient factors, however many there are? One mechanism is oscillations, which are produced in the brain when many neurons all maintain their electrical activity at the same rhythm. This coordinated activity enables information sharing, essentially tuning them together like a bunch of cars all playing the same radio station (maybe the broadcast is about a hawk circling overhead). Another mechanism the authors highlight is neuromodulators. These are chemicals that upon reaching receptors within cells can influence their activity as well. A burst of acetylcholine, for instance, might similarly attune neurons with the right receptors to certain activity or information (like maybe that feeling of hunger).

“These two mechanisms likely work together to dynamically form functional networks,” the authors write.

Understanding mixed selectivity, they continue, is critical to understanding cognition.

“Mixed selectivity is ubiquitous,” they conclude. “It is present across species and across functions from high-level cognition to ‘automatic’ sensorimotor processes such as object recognition. The widespread presence of mixed selectivity underscores its fundamental role in providing the brain with the scalable processing power needed for complex thought and action.”


