
TOP SCIENCE

Breaking ground: Could geometry offer a new explanation for why earthquakes happen?



Findings published in Nature by a team of Brown-led researchers challenge traditional beliefs about the cause of earthquakes, suggesting that it depends not on friction but on how faults are aligned.

By taking a close look at the geometrical makeup of rocks where earthquakes originate, researchers at Brown University are adding a new wrinkle to a long-held belief about what causes seismic quakes in the first place.

The work, described in the journal Nature, reveals that the way fault networks are aligned plays a critical role in determining where an earthquake will happen and its strength. The findings challenge the more traditional notion that it is primarily the type of friction happening at these faults that governs whether earthquakes happen or not, and they could improve current understandings of how earthquakes work.

“Our paper paints this very different sort of picture about why earthquakes happen,” said Brown geophysicist Victor Tsai, one of the paper’s lead authors. “And this has very important implications for where to expect earthquakes versus where to not expect earthquakes, as well as for predicting where the most damaging earthquakes will be.”

Fault lines are the visible boundaries on the planet’s surface where the rigid plates that make up the Earth’s lithosphere brush against each other. Tsai says that for decades, geophysicists have explained earthquakes as happening when stress at faults builds up to the point where the faults rapidly slip or break past each other, releasing pent-up pressure in an action known as stick-slip behavior.

Researchers theorized that the rapid slip and intense ground motions that follow are a result of unstable friction at the faults. When friction is stable, by contrast, the plates are thought to slide against each other slowly without an earthquake. This steady and smooth movement is also known as creep.

“People have been trying to measure these frictional properties, like whether the fault zone has unstable friction or stable friction and then, based on laboratory measurements of that, they try to predict whether you are going to have an earthquake there or not,” Tsai said. “Our findings suggest that it might be more relevant to look at the geometry of the faults in these fault networks, because it may be the complex geometry of the structures around those boundaries that creates this unstable versus stable behavior.”

The geometry to consider includes complexities in the underlying rock structures such as bends, gaps and stepovers. The study is based on mathematical modeling and studying fault zones in California using data from the U.S. Geological Survey’s Quaternary Fault Database and from the California Geological Survey.

The research team, which also includes Brown graduate student Jaeseok Lee and Brown geophysicist Greg Hirth, offers a more detailed example to illustrate how earthquakes happen. They say to picture the faults that brush up against each other as having serrated teeth like the edge of a saw. When there are fewer teeth, or teeth that are not as sharp, the rocks slide past each other more smoothly, allowing for creep. But when the rock structures in these faults are more complex and jagged, the structures catch on one another and get stuck. When that happens, they build up pressure and eventually, as they pull and push harder and harder, they break, jerking away from each other and leading to earthquakes.

The new study builds on previous work examining why some earthquakes generate more ground motion than others in different parts of the world, sometimes even those of similar magnitude. That work showed that blocks colliding inside a fault zone during an earthquake contribute significantly to the generation of high-frequency vibrations, and it sparked the notion that geometrical complexity beneath the surface might also play a role in where and why earthquakes happen.

Analyzing data from faults in California — which include the well-known San Andreas fault — the researchers found that fault zones that have complex geometry underneath, meaning the structures there weren’t as aligned, turned out to have stronger ground motions than less geometrically complex fault zones. This also means some of these zones would have stronger earthquakes, others would have weaker ones, and some would have no earthquakes.

The researchers determined this based on the average misalignment of the faults they analyzed. This misalignment ratio measures whether the faults in a region are closely aligned, all running in the same direction, or oriented in many different directions. The analysis revealed that fault zones where the faults are more misaligned produce stick-slip episodes in the form of earthquakes, while fault zones where the faults are more closely aligned facilitate smooth fault creep with no earthquakes.
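The article does not reproduce the paper’s actual formula for this ratio, but the idea of scoring how scattered a set of fault orientations is can be sketched with standard circular statistics on fault strike angles. Everything below (the function name, the angle-doubling trick, and the 0-to-1 scaling) is an illustrative assumption, not the authors’ published metric.

```python
import math

def misalignment_ratio(strikes_deg):
    """Illustrative proxy for fault-network misalignment.

    Fault strikes are axial (a strike of 10 degrees is the same
    orientation as 190 degrees), so angles are doubled before
    averaging, a standard trick in circular statistics. Returns a
    value in [0, 1]: near 0 means all faults are parallel, near 1
    means orientations are maximally scattered.
    """
    # Doubling maps axial data onto the full circle.
    xs = [math.cos(2 * math.radians(s)) for s in strikes_deg]
    ys = [math.sin(2 * math.radians(s)) for s in strikes_deg]
    # Mean resultant length r is 1 for perfectly aligned strikes.
    r = math.hypot(sum(xs) / len(xs), sum(ys) / len(ys))
    return 1.0 - r

# A tightly aligned network (creep-prone, in the paper's picture):
print(misalignment_ratio([40, 42, 39, 41]))   # close to 0
# A scattered network (stick-slip-prone):
print(misalignment_ratio([0, 45, 90, 135]))   # close to 1
```

In this toy picture, a region like a mature creeping fault segment would score near zero, while a geometrically complex zone with bends and stepovers would score higher.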

“Understanding how faults behave as a system is essential to grasp why and how earthquakes happen,” said Lee, the graduate student who led the work. “Our research indicates that the complexity of fault network geometry is the key factor and establishes meaningful connections between sets of independent observations and integrates them into a novel framework.”

The researchers say more work needs to be done to fully validate the model, but this initial work suggests the idea is promising, especially because the alignment or misalignment of faults is easier to measure than fault frictional properties. If valid, the work could one day be woven into earthquake prediction models.

That remains far off for now as the researchers begin to outline how to build upon the study.

“The most obvious thing that comes next is trying to go beyond California and see how this model holds up,” Tsai said. “This is potentially a new way of understanding how earthquakes happen.”





New drug shows promise in clearing HIV from brain



An experimental drug originally developed to treat cancer may help clear HIV from infected cells in the brain, according to a new Tulane University study.

For the first time, researchers at Tulane National Primate Research Center found that a cancer drug significantly reduced levels of SIV, the nonhuman primate equivalent of HIV, in the brain by targeting and depleting certain immune cells that harbor the virus.

Published in the journal Brain, this discovery marks a significant step toward eliminating HIV from hard-to-reach reservoirs where the virus evades otherwise effective treatment.

“This research is an important step in tackling brain-related issues caused by HIV, which still affect people even when they are on effective HIV medication,” said lead study author Woong-Ki Kim, PhD, associate director for research at Tulane National Primate Research Center. “By specifically targeting the infected cells in the brain, we may be able to clear the virus from these hidden areas, which has been a major challenge in HIV treatment.”

Antiretroviral therapy (ART) is an essential component of successful HIV treatment, maintaining the virus at undetectable levels in the blood and transforming HIV from a terminal illness into a manageable condition. However, ART does not completely eradicate HIV, necessitating lifelong treatment. The virus persists in “viral reservoirs” in the brain, liver, and lymph nodes, where it remains out of reach of ART.

The brain has been a particularly challenging area for treatment due to the blood-brain barrier — a protective membrane that shields it from harmful substances but also blocks treatments, allowing the virus to persist. In addition, cells in the brain known as macrophages are extremely long-lived, making them difficult to eradicate once they become infected.

Infection of macrophages is thought to contribute to neurocognitive dysfunction, experienced by nearly half of those living with HIV. Eradicating the virus from the brain is critical for comprehensive HIV treatment and could significantly improve the quality of life for those with HIV-related neurocognitive problems.

Researchers focused on macrophages, a type of white blood cell that harbors HIV in the brain. By using a small molecule inhibitor to block a receptor that increases in HIV-infected macrophages, the team successfully reduced the viral load in the brain. This approach essentially cleared the virus from brain tissue, providing a potential new treatment avenue for HIV.

The small molecule inhibitor used, BLZ945, has previously been studied for therapeutic use in amyotrophic lateral sclerosis (ALS) and brain cancer, but never before in the context of clearing HIV from the brain.

The study, which took place at the Tulane National Primate Research Center, utilized three groups to model human HIV infection and treatment: an untreated control group, and two groups treated with either a low or high dose of the small molecule inhibitor for 30 days. The high-dose treatment led to a notable reduction in cells expressing HIV receptor sites, as well as a 95-99% decrease in viral DNA loads in the brain.

In addition to reducing viral loads, the treatment did not significantly impact microglia, the brain’s resident immune cells, which are essential for maintaining a healthy neuroimmune environment. It also did not show signs of liver toxicity at the doses tested.

The next step for the research team is to test this therapy in conjunction with ART to assess its efficacy in a combined treatment approach. This could pave the way for more comprehensive strategies to eradicate HIV from the body entirely.

This research was funded by the National Institutes of Health, including grants from the National Institute of Mental Health and the National Institute of Neurological Disorders and Stroke, and was supported with resources from the Tulane National Primate Research Center base grant of the National Institutes of Health, P51 OD011104.





Chemical analyses find hidden elements from Renaissance astronomer Tycho Brahe’s alchemy laboratory



In the Middle Ages, alchemists were notoriously secretive and didn’t share their knowledge with others. The Danish astronomer Tycho Brahe was no exception. Consequently, we don’t know precisely what he did in the alchemical laboratory located beneath his combined residence and observatory, Uraniborg, on the now-Swedish island of Ven.

Only a few of his alchemical recipes have survived, and today, there are very few remnants of his laboratory. Uraniborg was demolished after his death in 1601, and the building materials were scattered for reuse.

However, during an excavation in 1988-1990, some pottery and glass shards were found in Uraniborg’s old garden. These shards were believed to originate from the basement’s alchemical laboratory. Five of these shards — four glass and one ceramic — have now undergone chemical analyses to determine which elements the original glass and ceramic containers came into contact with.

The chemical analyses were conducted by Professor Emeritus and expert in archaeometry, Kaare Lund Rasmussen from the Department of Physics, Chemistry, and Pharmacy, University of Southern Denmark. Senior researcher and museum curator Poul Grinder-Hansen from the National Museum of Denmark placed the analyses in their historical context.

Enriched levels of trace elements were found on four of them, while one glass shard showed no specific enrichments. The study has been published in the journal Heritage Science.

“Most intriguing are the elements found in higher concentrations than expected — indicating enrichment and providing insight into the substances used in Tycho Brahe’s alchemical laboratory,” said Kaare Lund Rasmussen.

The enriched elements are nickel, copper, zinc, tin, antimony, tungsten, gold, mercury, and lead, and they have been found on either the inside or outside of the shards.

Most of them are not surprising for an alchemist’s laboratory. Gold and mercury were — at least among the upper echelons of society — commonly known and used against a wide range of diseases.

“But tungsten is very mysterious. Tungsten had not even been described at that time, so what should we infer from its presence on a shard from Tycho Brahe’s alchemy workshop?” said Kaare Lund Rasmussen.

Tungsten was first described and produced in pure form more than 180 years later by the Swedish chemist Carl Wilhelm Scheele. Tungsten occurs naturally in certain minerals, and perhaps the element found its way to Tycho Brahe’s laboratory through one of these minerals. In the laboratory, the mineral might have undergone some processing that separated the tungsten, without Tycho Brahe ever realizing it.

However, there is also another possibility that Professor Kaare Lund Rasmussen emphasizes has no evidence whatsoever — but which could be plausible.

Already in the first half of the 1500s, the German mineralogist Georgius Agricola described something strange in tin ore from Saxony, which caused problems when he tried to smelt tin. Agricola called this strange substance in the tin ore “Wolfram” (German for Wolf’s froth, later renamed to tungsten in English).

“Maybe Tycho Brahe had heard about this and thus knew of tungsten’s existence. But this is not something we know or can say based on the analyses I have done. It is merely a possible theoretical explanation for why we find tungsten in the samples,” said Kaare Lund Rasmussen.

Tycho Brahe belonged to the branch of alchemists who, inspired by the German physician Paracelsus, tried to develop medicine for various diseases of the time: plague, syphilis, leprosy, fever, stomach aches, etc. But he distanced himself from the branch that tried to create gold from less valuable minerals and metals.

In line with the other medical alchemists of the time, he kept his recipes close to his chest and shared them only with a few selected individuals, such as his patron, Emperor Rudolph II, who allegedly received Tycho Brahe’s prescriptions for plague medicine.

We know that Tycho Brahe’s plague medicine was complicated to produce. It contained theriac, which was one of the standard remedies for almost everything at the time and could have up to 60 ingredients, including snake flesh and opium. It also contained copper or iron vitriol (sulphates), various oils, and herbs.

After various filtrations and distillations, the first of Brahe’s three recipes against plague was obtained. This could be made even more potent by adding tinctures of, for example, coral, sapphires, hyacinths, or potable gold.

“It may seem strange that Tycho Brahe was involved in both astronomy and alchemy, but when one understands his worldview, it makes sense. He believed that there were obvious connections between the heavenly bodies, earthly substances, and the body’s organs. Thus, the Sun, gold, and the heart were connected, and the same applied to the Moon, silver, and the brain; Jupiter, tin, and the liver; Venus, copper, and the kidneys; Saturn, lead, and the spleen; Mars, iron, and the gallbladder; and Mercury, mercury, and the lungs. Minerals and gemstones could also be linked to this system, so emeralds, for example, belonged to Mercury,” explained Poul Grinder-Hansen.

Kaare Lund Rasmussen has previously analyzed hair and bones from Tycho Brahe and found, among other elements, gold. This could indicate that Tycho Brahe himself had taken medicine that contained potable gold.





Nitrogen emissions have a net cooling effect, but researchers warn against treating them as a climate solution



An international team of researchers has found that nitrogen emissions from fertilisers and fossil fuels have a net cooling effect on the climate. But they warn increasing atmospheric nitrogen has further damaging effects on the environment, calling for an urgent reduction in greenhouse gas emissions to halt global warming.

Published today in Nature, the paper found that reactive nitrogen released into the environment through human activities cools the climate, exerting a radiative forcing of minus 0.34 watts per square metre. While global warming would have advanced further without the input of human-generated nitrogen, that amount does not offset the level of greenhouse gases heating the atmosphere.

The paper was led by the Max Planck Institute in Germany and included authors from the University of Sydney. It comes one day after new data from the European Union’s Copernicus Climate Change Service indicated that Sunday, 21 July was the hottest day recorded in recent history.

The net cooling effect occurs in four ways:

  • Short-lived nitrogen oxides produced by the combustion of fossil fuels pollute the atmosphere by forming fine suspended particles that shield the surface from sunlight, in turn cooling the climate;

  • ammonia (a nitrogen and hydrogen-based compound) released into the atmosphere from the application of manure and artificial fertilisers has a similar effect;

  • nitrogen applied to crops allows plants to grow more abundantly, absorbing more CO2 from the atmosphere, enabling a cooling effect;

  • nitrogen oxides also play a role in the breakdown of atmospheric methane, a potent greenhouse gas.

The researchers warned that increasing atmospheric nitrogen was not a solution for combatting climate change.

“Nitrogen fertilisers pollute water and nitrogen oxides from fossil fuels pollute the air. Therefore, increasing rates of nitrogen in the atmosphere to combat climate change is not an acceptable compromise, nor is it a solution,” said Professor Federico Maggi from the University of Sydney’s School of Civil Engineering.

Sönke Zaehle from the Max Planck Institute said: “This may sound like good news, but you have to bear in mind that nitrogen emissions have many harmful effects, for example on health, biodiversity and the ozone layer. The current findings, therefore, are no reason to gloss over the harmful effects, let alone see additional nitrogen input as a means of combatting global warming.”

Elemental nitrogen, which makes up around 78 percent of the air, is climate-neutral, but other reactive nitrogen compounds can have direct or indirect effects on the global climate — sometimes warming and at other times cooling. Nitrous oxide (N2O) is an almost 300 times more potent greenhouse gas than CO2. Other forms of nitrogen stimulate the formation of ozone in the troposphere, which is a potent greenhouse gas and enhances global warming.

Professor Maggi said the research was important as it helped the team gain an understanding of the net effect of the distribution of nitrogen emissions from agriculture.

“This work is an extraordinary example of how complex interactions at planetary scales cannot be captured with simplistic assessment tools. It shows the importance of developing mathematical models that can show the emergence of nonlinear — or unproportional — effects across soil, land, and atmosphere,” he said.

“Even if it appears counter-intuitive, reactive nitrogen introduced in the environment, mostly as agricultural fertilisers, can reduce total warming. However, this is minor compared with the reduction in greenhouse gas emissions required to keep the planet within safe and just operational boundaries.

“New generation computational tools are helping drive new learnings in climate change science, but understanding is not enough — we must act with great urgency to reduce greenhouse gas emissions.”

Gaining a holistic understanding of the impacts of nitrogen

The scientists determined the overall impact of nitrogen from human sources by first analysing the quantities of the various nitrogen compounds that end up in soil, water and air.

They then fed this data into models that depict the global nitrogen cycle and the effects on the carbon cycle, for example the stimulation of plant growth and ultimately the CO2 and methane content of the atmosphere. From the results of these simulations, they used another atmospheric chemistry model to calculate the effect of man-made nitrogen emissions on radiative forcing, that is, the change in the Earth’s energy balance per square metre of its surface.
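The arithmetic behind a single “net” forcing number can be illustrated with a toy ledger over the warming and cooling channels the article describes. Only the net value of minus 0.34 W/m² comes from the paper; every per-component figure below is a made-up placeholder chosen so the sum matches that total, not a result from the study.

```python
# Toy forcing ledger in W/m^2. Negative values cool, positive warm.
# All per-component numbers are hypothetical placeholders; only the
# net of -0.34 W/m^2 is reported by the paper.
components = {
    "NOx-derived aerosols (shield sunlight)":    -0.12,
    "ammonia-derived aerosols":                  -0.10,
    "CO2 uptake from fertilised plant growth":   -0.20,
    "faster methane breakdown via NOx":          -0.10,
    "N2O, a far more potent gas than CO2":       +0.10,
    "tropospheric ozone formation":              +0.08,
}

net = sum(components.values())
print(f"net nitrogen forcing: {net:+.2f} W/m^2")
# A negative net means human nitrogen emissions cool the climate
# overall, even though individual channels push in both directions.
```

The point of the sketch is that a small net cooling can hide large opposing terms, which is why the authors caution that the net number alone says nothing about nitrogen’s other environmental harms.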


