Blueprints of self-assembly


Many biological structures of impressive beauty and sophistication arise through processes of self-assembly. Indeed, the natural world is teeming with intricate and useful forms that come together from many constituent parts, taking advantage of the built-in features of molecules.

Scientists hope to gain a better understanding of how this process unfolds and how such bottom-up construction can be used to advance technologies in computer science, materials science, medical diagnostics and other areas.

In new research, Arizona State University Assistant Professor Petr Sulc and his colleagues have taken a step closer to replicating nature’s processes of self-assembly. Their study describes the synthetic construction of a tiny, self-assembled crystal known as a “pyrochlore,” which bears unique optical properties.

The key to creating the crystal is the development of a new simulation method that can predict and guide the self-assembly process, avoiding unwanted structures and ensuring the molecules come together in just the right arrangement.

The advance provides a steppingstone to the eventual construction of sophisticated, self-assembling devices at the nanoscale — roughly the size of a single virus.

The new methods were used to engineer the pyrochlore nanocrystal, a special type of lattice that could eventually function as an optical metamaterial, “a special type of material that only transmits certain wavelengths of light,” Sulc says. “Such materials can then be used to produce so-called optical computers and more sensitive detectors, for a range of applications.”

Sulc is a researcher in the Biodesign Center for Molecular Design and Biomimetics, the School of Molecular Sciences and the Center for Biological Physics at Arizona State University.

The research appears in the current issue of the journal Science.

From chaos to complexity

Imagine placing a disassembled watch into a box, which you then shake vigorously for several minutes. When you open the box, you find an assembled, fully functional watch inside. Intuitively, we know that such an event is nearly impossible, as watches, like all other devices we manufacture, must be assembled progressively, with each component placed in its specific location by a person or a robotic assembly line.

Biological systems, such as bacteria, living cells or viruses, can construct highly ingenious nanostructures and nanomachines — complexes of biomolecules, like the protective shell of a virus or bacterial flagella that function similarly to a ship’s propeller, helping bacteria move forward.

These and countless other natural forms, comparable in size to a few dozen nanometers — one nanometer is equal to one-billionth of a meter, or roughly the length your fingernail grows in one second — arise through self-assembly. Such structures are formed from individual building blocks (biomolecules, such as proteins) that move chaotically and randomly within the cell, constantly colliding with water and other molecules, like the watch components in the box you vigorously shake.

Despite the apparent chaos, evolution has found a way to bring order to the unruly process.

Molecules interact in specific ways that lead them to fit together in just the right manner, creating functional nanostructures inside the cell or on its surface. These include various intricate complexes inside cells, such as the machinery that replicates the cell’s entire genetic material. Less intricate, but still quite complex, examples include the self-assembling tough outer shells of viruses, whose assembly process Sulc previously studied with his colleague Banu Ozkan from ASU’s Department of Physics.

Crafting with DNA

For several decades, the field of bionanotechnology has worked to craft tiny structures in the lab, replicating the natural assembly process seen in living organisms. The technique generally involves mixing molecular components in water, gradually cooling them and hoping that when the solution reaches room temperature, all the pieces will fit together correctly.

One of the most successful strategies, known as DNA bionanotechnology, uses artificially synthesized DNA as the basic building block. This molecule of life is not only capable of storing vast troves of genetic information — strands of DNA can also be designed in the lab to connect with each other in such a way that a clever 3D structure is formed.

The resulting nanostructures, known as DNA origami, have a range of promising applications, from diagnostics to therapy, where, for example, they are being tested as a new method of vaccine delivery.

A significant challenge lies in engineering molecule interactions to form only the specific, pre-designed nanostructures. In practice, unexpected structures often result due to the unpredictable nature of particle collisions and interactions. This phenomenon, known as a kinetic trap, is akin to hoping for an assembled watch after shaking a box of its parts, only to find a jumbled heap instead.

Maintaining order

To attempt to overcome kinetic traps and ensure the proper structure self-assembles from the DNA fragments, the researchers developed new statistical methods that can simulate the self-assembly process of nanostructures.

The challenges of usefully simulating such enormously complex processes are formidable. During assembly, the chaotic dance of molecules can last from several minutes to hours before the target nanostructure forms, yet even the most powerful simulations in the world can track only a few milliseconds.

“Therefore, we developed a whole new range of models that can simulate DNA nanostructures with different levels of precision,” Sulc says. “Instead of simulating individual atoms, as is common in protein simulations, for example, we represent 12,000 DNA bases as one complex particle.”

This approach allows researchers to pinpoint problematic kinetic traps by combining computer simulations with different degrees of accuracy. Using their optimization method, researchers can fine-tune the blizzard of molecular interactions, compelling the components to assemble correctly into the intended structure.
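To give a flavor of what a coarse-grained simulation of this kind looks like, here is a minimal sketch in Python. It is not the team’s actual model or code: the lattice, the number of building-block species and the interaction matrix J are all illustrative assumptions, but the Metropolis Monte Carlo loop and the tunable pair interactions capture the basic idea of steering coarse particles toward a designed structure.

```python
# A minimal sketch (not the authors' actual model or code): each building
# block is treated as a single coarse-grained particle on a small periodic
# lattice, and a simple Metropolis Monte Carlo loop explores how strongly
# different species stick together. All names and numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)

n_species = 4                     # hypothetical number of building-block types
n_particles = 64                  # particles on a small periodic lattice
L = 10                            # lattice side length
kT = 1.0                          # thermal energy scale (reduced units)

# Interaction matrix: negative entries mean two species attract when adjacent.
# In the design problem, these are the values one would tune to steer assembly
# toward the target structure and away from kinetic traps.
J = -1.0 * np.eye(n_species)      # start with like-binds-like attractions

species = rng.integers(0, n_species, size=n_particles)
pos = rng.permutation(L * L)[:n_particles]        # unique lattice sites

def neighbors(site):
    """4-connected neighbors on an L x L periodic lattice."""
    x, y = divmod(site, L)
    return [x * L + (y + 1) % L,
            x * L + (y - 1) % L,
            ((x + 1) % L) * L + y,
            ((x - 1) % L) * L + y]

def energy(pos, species):
    """Sum pair energies over occupied neighboring sites."""
    occupied = {p: s for p, s in zip(pos, species)}
    e = 0.0
    for p, s in occupied.items():
        for q in neighbors(p):
            if q in occupied and q > p:           # count each pair once
                e += J[s, occupied[q]]
    return e

e = energy(pos, species)
for _ in range(5000):
    i = rng.integers(n_particles)
    new_site = rng.integers(L * L)
    if new_site in pos:
        continue                                   # site already occupied
    trial = pos.copy()
    trial[i] = new_site
    e_new = energy(trial, species)
    # Metropolis criterion: always accept downhill moves, sometimes uphill ones.
    if e_new <= e or rng.random() < np.exp(-(e_new - e) / kT):
        pos, e = trial, e_new

print("final energy per particle:", e / n_particles)
```

In the real design problem, the entries of J (which species stick to which, and how strongly) are what the optimization would adjust, weakening the interactions responsible for off-target intermediates so the system avoids kinetic traps.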

The computational framework established in this research will guide the creation of more complex materials and the development of nanodevices with intricate functions, with potential uses in both diagnostics and treatment.

The research work was carried out in collaboration with researchers from Sapienza University of Rome, Ca’ Foscari University of Venice and Columbia University in New York.




New AI can ID brain patterns related to specific behavior



Maryam Shanechi, the Sawchuk Chair in Electrical and Computer Engineering and founding director of the USC Center for Neurotechnology, and her team have developed a new AI algorithm that can separate brain patterns related to a particular behavior. This work, which can improve brain-computer interfaces and discover new brain patterns, has been published in the journal Nature Neuroscience.

As you are reading this story, your brain is involved in multiple behaviors.

Perhaps you are moving your arm to grab a cup of coffee, while reading the article out loud for your colleague, and feeling a bit hungry. All these different behaviors, such as arm movements, speech and different internal states such as hunger, are simultaneously encoded in your brain. This simultaneous encoding gives rise to very complex and mixed-up patterns in the brain’s electrical activity. Thus, a major challenge is to dissociate those brain patterns that encode a particular behavior, such as arm movement, from all other brain patterns.

For example, this dissociation is key for developing brain-computer interfaces that aim to restore movement in paralyzed patients. When thinking about making a movement, these patients cannot communicate their thoughts to their muscles. To restore function in these patients, brain-computer interfaces decode the planned movement directly from their brain activity and translate that to moving an external device, such as a robotic arm or computer cursor.

Shanechi and her former Ph.D. student, Omid Sani, who is now a research associate in her lab, developed a new AI algorithm that addresses this challenge. The algorithm is named DPAD, for “Dissociative Prioritized Analysis of Dynamics.”

“Our AI algorithm, named DPAD, dissociates those brain patterns that encode a particular behavior of interest such as arm movement from all the other brain patterns that are happening at the same time,” Shanechi said. “This allows us to decode movements from brain activity more accurately than prior methods, which can enhance brain-computer interfaces. Further, our method can also discover new patterns in the brain that may otherwise be missed.”

“A key element in the AI algorithm is to first look for brain patterns that are related to the behavior of interest and learn these patterns with priority during training of a deep neural network,” Sani added. “After doing so, the algorithm can later learn all remaining patterns so that they do not mask or confound the behavior-related patterns. Moreover, the use of neural networks gives ample flexibility in terms of the types of brain patterns that the algorithm can describe.”
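As a rough illustration of the two-stage idea described in the quotes above, the sketch below (in PyTorch) first trains a small recurrent model to predict behavior from neural activity with priority, then freezes those behavior-related latents and trains a second set of latents to reconstruct the remaining neural activity. It is a hedged approximation, not the published DPAD implementation: the GRU modules, layer sizes and loss functions are assumptions for illustration.

```python
# A minimal two-stage sketch, not the published DPAD implementation.
# Module choices, sizes and losses are illustrative assumptions.
import torch
import torch.nn as nn

n_neurons, n_behavior = 30, 2      # hypothetical channel counts
d1, d2 = 8, 8                      # behavior-related / residual latent sizes

class Stage1(nn.Module):
    """Latent dynamics trained first, with priority, to predict behavior."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(n_neurons, d1, batch_first=True)
        self.readout = nn.Linear(d1, n_behavior)
    def forward(self, neural):
        latents, _ = self.rnn(neural)
        return latents, self.readout(latents)

class Stage2(nn.Module):
    """Extra latents trained afterwards to explain the remaining neural activity."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(n_neurons, d2, batch_first=True)
        self.recon = nn.Linear(d1 + d2, n_neurons)
    def forward(self, neural, latents1):
        latents2, _ = self.rnn(neural)
        return self.recon(torch.cat([latents1, latents2], dim=-1))

def train(neural, behavior, epochs=200):
    """neural: (batch, time, n_neurons); behavior: (batch, time, n_behavior)."""
    s1, s2, mse = Stage1(), Stage2(), nn.MSELoss()

    # Stage 1: learn behavior-related brain patterns first (with priority).
    opt1 = torch.optim.Adam(s1.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt1.zero_grad()
        _, pred_behavior = s1(neural)
        loss = mse(pred_behavior, behavior)
        loss.backward()
        opt1.step()

    # Stage 2: freeze the stage-1 latents, then model the rest of the neural
    # data so the remaining patterns cannot mask the behavior-related ones.
    opt2 = torch.optim.Adam(s2.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt2.zero_grad()
        with torch.no_grad():
            latents1, _ = s1(neural)
        loss = mse(s2(neural, latents1), neural)
        loss.backward()
        opt2.step()
    return s1, s2

# Example with random stand-in data (real use would pass recorded activity).
neural = torch.randn(4, 100, n_neurons)
behavior = torch.randn(4, 100, n_behavior)
train(neural, behavior, epochs=5)
```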

In addition to movement, this algorithm has the flexibility to potentially be used in the future to decode mental states such as pain or depressed mood. Doing so may help better treat mental health conditions by tracking a patient’s symptom states as feedback to precisely tailor their therapies to their needs.

“We are very excited to develop and demonstrate extensions of our method that can track symptom states in mental health conditions,” Shanechi said. “Doing so could lead to brain-computer interfaces not only for movement disorders and paralysis, but also for mental health conditions.”




Formation of super-Earths is limited near metal-poor stars



In a new study, astronomers report novel evidence regarding the limits of planet formation, finding that after a certain point, planets larger than Earth have difficulty forming near low-metallicity stars.

Using the sun as a baseline, astronomers can estimate when a star formed by determining its metallicity, the abundance of elements heavier than hydrogen and helium within it. Metal-rich stars or nebulas formed relatively recently, while metal-poor objects were likely present during the early universe.
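For reference (the article does not spell this out), astronomers usually quote metallicity as the logarithmic iron-to-hydrogen abundance ratio relative to the Sun:

$$[\mathrm{Fe}/\mathrm{H}] = \log_{10}\!\left(\frac{N_{\mathrm{Fe}}}{N_{\mathrm{H}}}\right)_{\mathrm{star}} - \log_{10}\!\left(\frac{N_{\mathrm{Fe}}}{N_{\mathrm{H}}}\right)_{\odot}$$

On this scale the Sun sits at 0, and the values of negative 2.5 and negative 0.5 cited below correspond to roughly 1/300th and one-third of the Sun’s iron abundance, respectively.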

Previous studies found a weak connection between metallicity rates and planet formation, noting that as a star’s metallicity goes down, so, too, does planet formation for certain planet populations, like sub-Saturns or sub-Neptunes.

Yet this work is the first to observe that, near metal-poor stars, the formation of super-Earths becomes significantly more difficult than current theories predict, suggesting a strict cutoff in the conditions needed for one to form, said lead author Kiersten Boley, who recently received a PhD in astronomy at The Ohio State University.

“When stars cycle through life, they enrich the surrounding space until you have enough metals or iron to form planets,” said Boley. “But even for stars with lower metallicities, it was widely thought that the number of planets it could form would never reach zero.”

Other studies posited that planet formation in the Milky Way should begin once stars fall between negative 2.5 and negative 0.5 metallicity, but until now that prediction had not been tested.

To test this prediction, the team developed and then searched a catalog of 10,000 of the most metal-poor stars observed by NASA’s Transiting Exoplanet Survey Satellite (TESS) mission. Had the known trends held, extrapolating them to a search for small, short-period planets around a sample of 85,000 metal-poor stars would have led them to discover about 68 super-Earths.

Surprisingly, researchers in this work detected none, said Boley. “We essentially found a cliff where we expected to see a slow or a gradual slope that keeps going,” she said. “The expected occurrence rates do not match up at all.”

The study was published in The Astronomical Journal.

This cliff, which provides scientists with a time frame during which metallicity was too low for planets to form, extends to about half the age of the universe, meaning that super-Earths did not form early in its history. “Seven billion years ago is probably the sweet spot where we begin to see a decent bit of super-Earth formation,” Boley said.

Moreover, because the majority of stars formed before that era have low metallicities, and the Milky Way first had to be enriched by generations of dying stars before the right conditions for planet formation existed, the results place an upper limit on the number and distribution of small planets in our galaxy.

“In a similar stellar type as our sample, we now know not to expect planet formation to be abundant once you pass a negative 0.5 metallicity region,” said Boley. “That’s kind of striking because we actually have data to show that now.”

Also striking are the study’s implications for those searching for life beyond Earth, as a more precise grasp of the intricacies of planet formation can tell scientists where in the universe life might have flourished.

“You don’t want to search areas where life wouldn’t be conducive or in areas where you don’t even think you’re going to find a planet,” Boley said. “There’s just a plethora of questions that you can ask if you know these things.”

Such inquiries could include determining whether these exoplanets hold water, how large their cores are and whether they have developed strong magnetic fields, all conditions considered conducive to life.

To apply their work to other types of planet formation processes, the team will likely need to study different types of super-Earths for longer periods than they can today. Fortunately, future observations could be attained with the help of upcoming projects like NASA’s Nancy Grace Roman Space Telescope and the European Space Agency’s PLATO mission, both of which will widen the search for terrestrial planets in habitable zones like our own.

“Those instruments will be really vital in terms of figuring out how many planets are out there and getting as many follow-up observations as we can,” said Boley.

Other co-authors include Ji Wang from Ohio State; Jessie Christiansen, Philip Hopkins and Jon Zink from The California Institute of Technology; Kevin Hardegree-Ullman and Galen Bergsten from The University of Arizona; Eve Lee from McGill University; Rachel Fernandes from The Pennsylvania State University; and Sakhee Bhure from the University of Southern Queensland. This study was supported by the National Science Foundation and NASA.




New research sinks an old theory for the doldrums, a low-wind equatorial region that stranded sailors for centuries



During the Age of Sail, sailors riding the trade winds past the equator dreaded becoming stranded in the doldrums, a meteorologically distinct region in the deep tropics. For at least a century, scientists have thought that the doldrums’ lack of wind was caused by converging and rising air masses. Now, new research suggests that the opposite may be true.

“The idea of what causes the doldrums came from a time where we didn’t know a lot about how air actually moves in the tropics,” said Julia Windmiller, an atmospheric scientist at the Max Planck Institute for Meteorology and the study’s author. “We have forgotten about the doldrums to such a degree that nobody has taken the trouble of thinking through this original argument again.”

Instead, Windmiller proposes that low wind speeds throughout the doldrums are created by large areas of sinking air that diverge at the surface, creating clear and windless days. Her explanation challenges the conventional explanation for the tropical, oceanic phenomenon that has stranded sailors, inspired poets and largely slipped out of scientific literature.

Traditionally, areas of low to no wind around the equator have been explained by converging and rising air masses. And while those air masses do create low-pressure, slow-wind areas at the surface, that idea can explain the doldrums’ extended regions of low winds only when many areas of convergence are averaged together over days or weeks. On shorter timescales, converging air masses do not cover enough area to create large windless regions that can last for days — the doldrums.

The research was published in Geophysical Research Letters, an open-access AGU journal that publishes high-impact, short-format reports with immediate implications spanning all Earth and space sciences.

Deciphering the doldrums

The doldrums, also known as the Intertropical Convergence Zone, was named by early 19th century sailors marooned at sea by bouts of little or no wind. The term, originally defined as a period of despondency or depression, has come to describe the sometimes-stormy, sometimes-calm equatorial region. The oceanic area was even referenced in Samuel Taylor Coleridge’s 1834 poem, “The Rime of the Ancient Mariner”:

Day after day, day after day, We stuck, nor breath nor motion; As idle as a painted ship Upon a painted ocean.

The Intertropical Convergence Zone is usually characterized as a region of converging trade winds and rising air masses near the equator. The air masses, warmed by equatorial heat, float up like balloons, form clouds and whip up storms over the equator. They then sink back down at approximately 30 degrees North and South of the equator, completing what is known as Hadley Cell circulation. This pattern of converging and rising air near the equator has traditionally been accepted as the cause for the doldrums, as pockets of low to no winds are generally created under rising air masses.

However, little modern research has focused on proving the root cause of the doldrums. The accepted explanation for the doldrums could not be completely correct, Windmiller said, unless the regions of uplifting air were averaged over time.

“There’s this fascinating break in reasoning because this upward circulation of air doesn’t work for short time scales and large areas of still wind,” said Windmiller. “To some degree, because we’ve historically forgotten about the doldrums, this flaw in the logic never really came up.”

Windmiller analyzed Intertropical Convergence Zone meteorological data for the Atlantic Ocean between 2001 and 2021 and buoy data ranging from 1998 to 2018 to define the edges of the Intertropical Convergence Zone and investigate low wind speed events in the region. Low wind speed events are characterized by winds blowing slower than three meters per second, or five knots, for at least six hours. Windmiller examined the data on multi-day, hourly and minute-by-minute timescales, and considered how the low wind speed events evolved over time.
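The event definition is simple enough to express in a few lines of code. The sketch below (Python) flags runs of hourly wind speeds below 3 meters per second that last at least six hours; the synthetic data and function name are placeholders, not the study’s actual buoy dataset or analysis code.

```python
# A minimal sketch of the event definition described above: flag stretches
# where hourly wind speed stays below 3 m/s for at least 6 hours. The data
# and names are placeholders, not the study's actual dataset or code.
import numpy as np

def low_wind_events(speed_m_s, threshold=3.0, min_hours=6):
    """Return (start, end) index pairs of runs below `threshold` lasting >= min_hours.

    `speed_m_s` is assumed to be an hourly time series (one sample per hour)."""
    below = np.asarray(speed_m_s) < threshold
    events = []
    start = None
    for i, flag in enumerate(below):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_hours:
                events.append((start, i))
            start = None
    if start is not None and len(below) - start >= min_hours:
        events.append((start, len(below)))
    return events

# Example with synthetic hourly data: a 10-hour calm spell in windier conditions.
speeds = np.r_[np.full(24, 6.0), np.full(10, 1.5), np.full(24, 7.0)]
print(low_wind_events(speeds))   # [(24, 34)]
```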

She found that low wind speed events coincided with clear weather conditions, lowered air temperatures and a lack of precipitation: conditions that point to sinking air masses diverging at the surface rather than rising air masses. Windmiller also found that low wind speed events mainly happen in the inner regions of the Intertropical Convergence Zone, and that they only occur on average in about 5% of the region at any given time (but can occur as often as 21% of the time in the eastern Atlantic during the Northern Hemisphere’s summer). Low wind speed locations also varied based on the season and region of the Atlantic Ocean.

“Most of the air inside the Intertropical Convergence Zone is actually going down rather than up,” said Windmiller. “It’s not just on average that we have low wind speeds in this region, but that we have these moments in time when the wind has just gone away over very large areas.”

Her idea is supported not just by scientific evidence, but by the next verse in Coleridge’s poem, which famously describes a ship’s stranding in a windless, rainless region within the doldrums:

Water, water, every where, And all the boards did shrink; Water, water, every where, Nor any drop to drink.

Upending an old explanation

For years, Windmiller has queried other atmospheric scientists about the doldrums: What really causes the wind to occasionally disappear around the equator?

“They would start to explain this upward circulation of air, but as they were explaining it, they often realized it didn’t actually make sense,” said Windmiller. “I was always surprised. It’s such a basic phenomenon, so why wouldn’t we have a theory for it?”

Some questions do remain. Windmiller is not certain what causes the Intertropical Convergence Zone’s large regions of sinking air. While most of the air in the tropics is slowly sinking, that effect alone may not be strong enough to cause the doldrums. Other possible causes include large convective systems that leave downdrafts in their wakes, or humidity gradients that cause local air to cool and sink, she said.

And while modern mariners are unlikely to be stranded in the doldrums thanks to diesel engines, understanding the doldrums’ true cause could still have present-day impacts. New, high-resolution climate models struggle to simulate regions of low wind speeds, so better understanding the doldrums could improve model predictions of precipitation and wind patterns.

“We can no longer explain these low wind speed events in the way we’ve done before,” said Windmiller. “I hope that this is something that people will see and read, and realize that the explanation is really upside down from what we’ve had.”


