How fast is the universe expanding? Galaxies provide one answer: New measure of Hubble constant highlights discrepancy between estimates of our cosmic fate

Determining how rapidly the universe is expanding is key to understanding our cosmic fate, but with more precise data has come a conundrum: Estimates based on measurements within our local universe don’t agree with extrapolations from the era shortly after the Big Bang 13.8 billion years ago.

A new estimate of the local expansion rate — the Hubble constant, or H0 (H-naught) — reinforces that discrepancy. Using a relatively new and potentially more precise technique for measuring cosmic distances, which employs the average stellar brightness within giant elliptical galaxies as a rung on the distance ladder, astronomers calculate a rate — 73.3 kilometers per second per megaparsec, give or take 2.5 km/sec/Mpc — that lies in the middle of three other good estimates, including the gold standard estimate from Type Ia supernovae. This means that for every megaparsec — 3.3 million light years, or about 30 billion billion kilometers — from Earth, the universe is expanding an extra 73.3 ±2.5 kilometers per second. The average from the three other techniques is 73.5 ±1.4 km/sec/Mpc.
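As a back-of-the-envelope check of what those numbers mean, the Hubble law v = H0 × d turns a distance into a recession velocity. The short sketch below is a hypothetical illustration (not code from the study) that plugs the article's two competing H0 values into that relation:

```python
# Hypothetical illustration of the Hubble law v = H0 * d (not code from the study).
# The H0 values are the ones quoted in the article.

H0_LOCAL = 73.3    # km/s per Mpc, SBF-based local estimate (+/- 2.5)
H0_EARLY = 67.4    # km/s per Mpc, early-universe estimate (+/- 0.5)

def recession_velocity(distance_mpc, h0):
    """Recession velocity in km/s of a galaxy at the given distance in Mpc."""
    return h0 * distance_mpc

for d in (15, 50, 99):  # roughly the distance range of the 63-galaxy sample
    v_local = recession_velocity(d, H0_LOCAL)
    v_early = recession_velocity(d, H0_EARLY)
    print(f"{d:3d} Mpc: {v_local:7.1f} km/s (local H0) vs {v_early:7.1f} km/s (early-universe H0)")
```

At 99 Mpc the two estimates already differ by several hundred kilometers per second, which is why precise distances matter so much.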

Perplexingly, estimates of the local expansion rate based on measured fluctuations in the cosmic microwave background and, independently, fluctuations in the density of normal matter in the early universe (baryon acoustic oscillations), give a very different answer: 67.4 ±0.5 km/sec/Mpc.

Astronomers are understandably concerned about this mismatch, because the expansion rate is a critical parameter in understanding the physics and evolution of the universe and is key to understanding dark energy — which accelerates the rate of expansion of the universe and thus causes the Hubble constant to change more rapidly than expected with increasing distance from Earth. Dark energy comprises about two-thirds of the mass and energy in the universe, but is still a mystery.

For the new estimate, astronomers measured fluctuations in the surface brightness of 63 giant elliptical galaxies to determine the distance and plotted distance against velocity for each to obtain H0. The surface brightness fluctuation (SBF) technique is independent of other techniques and has the potential to provide more precise distance estimates than other methods within about 100 Mpc of Earth, or 330 million light years. The 63 galaxies in the sample are at distances ranging from 15 to 99 Mpc, looking back in time a mere fraction of the age of the universe.
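In outline, the final step is a straight-line fit of velocity against distance, whose slope is H0. The sketch below is a minimal, invented illustration of that fit — the toy distances and velocities are generated randomly to mimic the sample's 15-99 Mpc range, and are not the measurements from the paper:

```python
# Minimal sketch of the final step: fit H0 as the slope of velocity vs. distance.
# The "data" here are randomly generated to mimic the 15-99 Mpc range of the
# 63-galaxy sample; they are NOT the measurements from the paper.
import numpy as np

rng = np.random.default_rng(42)
true_h0 = 73.3                                    # km/s/Mpc, used only to fake data
distances = rng.uniform(15, 99, size=63)          # Mpc
velocities = true_h0 * distances + rng.normal(0, 300, size=63)  # km/s, with scatter

# Least-squares slope of v = H0 * d, a straight line forced through the origin.
h0_fit = np.sum(distances * velocities) / np.sum(distances ** 2)
print(f"Fitted H0 = {h0_fit:.1f} km/s/Mpc")
```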


“For measuring distances to galaxies out to 100 megaparsecs, this is a fantastic method,” said cosmologist Chung-Pei Ma, the Judy Chandler Webb Professor in the Physical Sciences at the University of California, Berkeley, and professor of astronomy and physics. “This is the first paper that assembles a large, homogeneous set of data, on 63 galaxies, for the goal of studying H-naught using the SBF method.”

Ma leads the MASSIVE survey of local galaxies, which provided data for 43 of the galaxies — two-thirds of those employed in the new analysis.

The data on these 63 galaxies was assembled and analyzed by John Blakeslee, an astronomer with the National Science Foundation’s NOIRLab. He is first author of a paper now accepted for publication in The Astrophysical Journal that he co-authored with colleague Joseph Jensen of Utah Valley University in Orem. Blakeslee, who heads the science staff that support NSF’s optical and infrared observatories, is a pioneer in using SBF to measure distances to galaxies, and Jensen was one of the first to apply the method at infrared wavelengths. The two worked closely with Ma on the analysis.

“The whole story of astronomy is, in a sense, the effort to understand the absolute scale of the universe, which then tells us about the physics,” Blakeslee said, harkening back to James Cook’s voyage to Tahiti in 1769 to measure a transit of Venus so that scientists could calculate the true size of the solar system. “The SBF method is more broadly applicable to the general population of evolved galaxies in the local universe, and certainly if we get enough galaxies with the James Webb Space Telescope, this method has the potential to give the best local measurement of the Hubble constant.”

The James Webb Space Telescope, 100 times more powerful than the Hubble Space Telescope, is scheduled for launch in October.


Giant elliptical galaxies

The Hubble constant has been a bone of contention for decades, ever since Edwin Hubble first measured the local expansion rate and came up with an answer seven times too big, implying that the universe was actually younger than its oldest stars. The problem, then and now, lies in pinning down the location of objects in space that give few clues about how far away they are.

Astronomers over the years have laddered up to greater distances, starting with calculating the distance to objects close enough that they seem to move slightly, because of parallax, as the Earth orbits the sun. Variable stars called Cepheids get you farther, because their brightness is linked to their period of variability, and Type Ia supernovae get you even farther, because they are extremely powerful explosions that, at their peak, shine as bright as a whole galaxy. For both Cepheids and Type Ia supernovae, it’s possible to figure out the absolute brightness from the way they change over time, and then the distance can be calculated from their apparent brightness as seen from Earth.
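The common thread on every rung of the ladder is the standard-candle calculation: once an object's absolute brightness (absolute magnitude M) is known, comparing it with the apparent brightness (apparent magnitude m) gives the distance through the distance modulus, d = 10^((m − M + 5)/5) parsecs. A minimal sketch with purely illustrative numbers:

```python
# Background formula (standard astronomy, not specific to this study): the
# distance modulus relates apparent magnitude m, absolute magnitude M and
# distance d in parsecs via m - M = 5*log10(d) - 5.

def distance_mpc(apparent_m, absolute_M):
    """Distance in megaparsecs implied by apparent and absolute magnitudes."""
    d_parsec = 10 ** ((apparent_m - absolute_M + 5) / 5)
    return d_parsec / 1e6

# Illustrative numbers only: a Type Ia supernova near peak has M of roughly -19.3;
# observed at m = 16.7, it would sit at about 160 Mpc.
print(f"{distance_mpc(16.7, -19.3):.0f} Mpc")
```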

The best current estimate of H0 comes from distances determined by Type Ia supernova explosions in distant galaxies, though newer methods — time delays caused by gravitational lensing of distant quasars and the brightness of water masers orbiting black holes — all give around the same number.

The technique using surface brightness fluctuations is one of the newest and relies on the fact that giant elliptical galaxies are old and have a consistent population of old stars — mostly red giant stars — that can be modeled to give an average infrared brightness across their surface. The researchers obtained high-resolution infrared images of each galaxy with the Wide Field Camera 3 on the Hubble Space Telescope and determined how much each pixel in the image differed from the “average” — the smoother the fluctuations over the entire image, the farther the galaxy, once corrections are made for blemishes like bright star-forming regions, which the authors exclude from the analysis.
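The statistical idea is that the more stars fall into each pixel — as happens for a more distant galaxy — the smaller the relative pixel-to-pixel fluctuations, so smoother images mean greater distance. The toy sketch below illustrates only that scaling, using simulated Poisson star counts; the real SBF analysis involves PSF modeling, Fourier power spectra and careful masking, none of which is reproduced here:

```python
# Toy illustration of the SBF scaling only: with more stars per pixel (i.e. a more
# distant galaxy), the relative pixel-to-pixel fluctuations are smaller.
import numpy as np

def relative_fluctuation(image):
    """Pixel-to-pixel standard deviation divided by the mean surface brightness."""
    return float(np.std(image) / np.mean(image))

rng = np.random.default_rng(1)
mean_flux = 1000.0  # arbitrary surface-brightness units per pixel
for stars_per_pixel in (100, 400, 1600):  # crude stand-in for increasing distance
    counts = rng.poisson(stars_per_pixel, size=(128, 128))
    image = counts * (mean_flux / stars_per_pixel)  # same mean brightness, different graininess
    print(f"{stars_per_pixel:5d} stars/pixel -> relative fluctuation {relative_fluctuation(image):.3f}")
```

The fractional fluctuation falls roughly as one over the square root of the number of stars per pixel, which is what lets the smoothness of the image encode the distance.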

Neither Blakeslee nor Ma was surprised that the expansion rate came out close to that of the other local measurements. But they are equally confounded by the glaring conflict with estimates from the early universe — a conflict that many astronomers say means that our current cosmological theories are wrong, or at least incomplete.

The extrapolations from the early universe are based on the simplest cosmological theory — called lambda cold dark matter, or ΛCDM — which employs just a few parameters to describe the evolution of the universe. Does the new estimate drive a stake into the heart of ΛCDM?

“I think it pushes that stake in a bit more,” Blakeslee said. “But it (ΛCDM) is still alive. Some people think, regarding all these local measurements, (that) the observers are wrong. But it is getting harder and harder to make that claim — it would require there to be systematic errors in the same direction for several different methods: supernovae, SBF, gravitational lensing, water masers. So, as we get more independent measurements, that stake goes a little deeper.”

Ma wonders whether the uncertainties astronomers ascribe to their measurements, which reflect both systematic errors and statistical errors, are too optimistic, and that perhaps the two ranges of estimates can still be reconciled.

“The jury is out,” she said. “I think it really is in the error bars. But assuming everyone’s error bars are not underestimated, the tension is getting uncomfortable.”

In fact, one of the giants of the field, astronomer Wendy Freedman, recently published a study pegging the Hubble constant at 69.8 ±1.9 km/sec/Mpc, roiling the waters even further. The latest result from Adam Riess — an astronomer who shared the 2011 Nobel Prize in Physics for the discovery of the accelerating expansion of the universe, the signature of dark energy — reports 73.2 ±1.3 km/sec/Mpc. Riess was a Miller Postdoctoral Fellow at UC Berkeley when he performed this research, and he shared the prize with UC Berkeley and Berkeley Lab physicist Saul Perlmutter.

MASSIVE galaxies

The new value of H0 is a byproduct of two other surveys of nearby galaxies — in particular, Ma’s MASSIVE survey, which uses space and ground-based telescopes to exhaustively study the 100 most massive galaxies within about 100 Mpc of Earth. A major goal is to weigh the supermassive black holes at the centers of each one.

To do that, precise distances are needed, and the SBF method is the best to date, she said. The MASSIVE survey team used this method last year to determine the distance to a giant elliptical galaxy, NGC 1453, in the southern sky constellation of Eridanus. Combining that distance, 166 million light years, with extensive spectroscopic data from the Gemini and McDonald telescopes — which allowed Ma’s graduate students Chris Liepold and Matthew Quenneville to measure the velocities of the stars near the center of the galaxy — they concluded that NGC 1453 has a central black hole with a mass nearly 3 billion times that of the sun.

To determine H0, Blakeslee calculated SBF distances to 43 of the galaxies in the MASSIVE survey, based on 45 to 90 minutes of HST observing time for each galaxy. The other 20 came from another survey that employed HST to image large galaxies, specifically ones in which Type Ia supernovae have been detected.

Most of the 63 galaxies are between 8 and 12 billion years old, which means that they contain a large population of old red stars, which are key to the SBF method and can also be used to improve the precision of distance calculations. In the paper, Blakeslee employed both Cepheid variable stars and a technique that uses the brightest red giant stars in a galaxy — referred to as the tip of the red giant branch, or TRGB technique — to ladder up to galaxies at large distances. They produced consistent results. The TRGB technique takes account of the fact that the brightest red giants in galaxies have about the same absolute brightness.

“The goal is to make this SBF method completely independent of the Cepheid-calibrated Type Ia supernova method by using the James Webb Space Telescope to get a red giant branch calibration for SBFs,” he said.

“The James Webb telescope has the potential to really decrease the error bars for SBF,” Ma added. But for now, the two discordant measures of the Hubble constant will have to learn to live with one another.

“I was not setting out to measure H0; it was a great product of our survey,” she said. “But I am a cosmologist and am watching this with great interest.”

Co-authors of the paper with Blakeslee, Ma and Jensen are Jenny Greene of Princeton University, who is a leader of the MASSIVE team, and Peter Milne of the University of Arizona in Tucson, who leads the team studying Type Ia supernovae. The work was supported by the National Aeronautics and Space Administration (HST-GO-14219, HST-GO-14654, HST GO-15265) and the National Science Foundation (AST-1815417, AST-1817100).


Simple food swaps could cut greenhouse gas emissions from household groceries by a quarter

Switching food and drink purchases to very similar but more environmentally friendly alternatives could reduce the greenhouse gas emissions from household groceries by more than a quarter (26%), according to a new Australian study from The George Institute for Global Health and Imperial College London published today in Nature Food.

Making bigger changes — like swapping a frozen meat lasagne for the vegetarian option — could push the reduction to as much as 71%.

To make this happen will require on-pack labelling of greenhouse gas emissions for every packaged food product so that consumers can make informed choices.

This is the most detailed analysis ever conducted on the environmental impacts of a country’s food purchasing behaviour, involving comprehensive data on greenhouse gas emissions and sales for tens of thousands of supermarket products, typical of the Western diet of many countries globally.

Lead author and epidemiologist Dr Allison Gaines, who conducted the analysis for The George Institute and Imperial College London, said, “Dietary habits need to change significantly if we are to meet global emissions targets, particularly in high-income countries like Australia, the UK, and US.

“But while consumers are increasingly aware of the environmental impact of the food system and willing to make more sustainable food choices, they lack reliable information to identify the more environmentally friendly options.”

Researchers calculated the projected emissions of annual grocery purchases from 7,000 Australian households using information on ingredients, weights and production life cycles in The George Institute’s FoodSwitch database and global environmental impact datasets. More than 22,000 products were assigned to major, minor and sub-categories of foods (e.g. ‘bread and bakery’, ‘bread’ and ‘white bread’, respectively) to quantify emissions saved by switching both within and between groups.

Making switches within the same sub-categories of foods could lead to emission reductions of 26% in Australia, equivalent to taking more than 1.9 million cars off the road. Switches within minor categories of foods could lead to even bigger emission reductions of 71%.
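The underlying bookkeeping is simple: for each purchased item, compare its emissions with the lowest-emission alternative in the same sub-category and sum the difference across the basket. The sketch below is a hypothetical toy example — the products, emission factors and purchase counts are invented, and this is not the FoodSwitch methodology:

```python
# Hypothetical toy example of the like-for-like swap accounting; the products,
# emission factors and purchase counts are invented.

basket = [
    # (sub_category, product, kg CO2e per item, items bought per year)
    ("white bread", "brand A white loaf",  1.2, 52),
    ("milk",        "whole dairy milk 2L", 3.6, 52),
    ("lasagne",     "frozen beef lasagne", 8.0, 12),
]

# Hypothetical lowest-emission alternative available in the same sub-category.
lowest_in_subcategory = {"white bread": 0.9, "milk": 2.8, "lasagne": 6.5}

current = sum(kg * n for _, _, kg, n in basket)
switched = sum(lowest_in_subcategory[sub] * n for sub, _, _, n in basket)
saving = 100 * (current - switched) / current
print(f"Current:  {current:6.1f} kg CO2e per year")
print(f"Switched: {switched:6.1f} kg CO2e per year ({saving:.0f}% lower)")
```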

“The results of our study show the potential to significantly reduce our environmental impact by switching like-for-like products. This is also something consumers in the UK could, and would probably like, to do if we put emissions information onto product labels,” said Dr Gaines.

Dr Gaines added that the switches would not compromise food healthiness overall: “We showed that you can switch to lower emissions products while still enjoying nutritious foods. In fact, we found it would lead to a slight reduction in the proportion of ultra-processed foods purchased, which is a positive outcome because they’re generally less healthy,” she said.

The purchase analysis also showed that meat products contributed almost half (49%) of all greenhouse gas emissions, but made up only 11% of total purchases. Conversely, fruit, vegetables, nuts and legumes represented one quarter (25%) of all purchases, but were responsible for just 5% of emissions.

It is estimated that around one-third of global greenhouse gas emissions are attributable to the food and agriculture sector, and the combined health and environmental costs of the global food system are estimated at 10-14 trillion USD (8-11 trillion GBP) per year. More than 12 million deaths per year could be prevented if the system transitioned to deliver healthy, low-emission diets.

Prof Bruce Neal, Executive Director at The George Institute Australia and Professor of Clinical Epidemiology at Imperial College London, said that as a global community, we are taking too long to improve the sustainability of the food system, endangering the prospect of a net-zero future.

“There is currently no standardised framework for regulating the climate or planetary health parameters of our food supply, and voluntary measures have not been widely adopted by most countries. This research shows how innovative ways of approaching the problem could enable consumers to make a real impact,” he said.

“With this in mind, we have developed a free app called ecoSwitch, currently available in Australia, which is based on this research. Shoppers can use their device to scan a product barcode and check its ‘Planetary Health Rating’, a measure of its emissions shown as a score from half a star (high emissions) to five stars (low emissions).”
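The article gives only the endpoints of that scale, so the mapping below is an invented illustration of how emission intensities might be binned into half-star steps; it is not the actual ecoSwitch scoring algorithm:

```python
# Invented illustration of binning an emission intensity into a half-star score.
# The thresholds below are hypothetical, NOT the ecoSwitch rules.

def planetary_health_rating(kg_co2e_per_kg):
    """Map an emission intensity (kg CO2e per kg of product) to 0.5-5.0 stars."""
    low, high = 0.5, 10.0            # hypothetical best/worst intensities
    if kg_co2e_per_kg <= low:
        return 5.0
    if kg_co2e_per_kg >= high:
        return 0.5
    stars = 5.0 - 4.5 * (kg_co2e_per_kg - low) / (high - low)
    return round(stars * 2) / 2      # snap to half-star steps

for intensity in (0.4, 2.0, 6.0, 12.0):
    print(f"{intensity:5.1f} kg CO2e/kg -> {planetary_health_rating(intensity):.1f} stars")
```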

The George Institute plans to extend the ecoSwitch algorithm to integrate other environmental indicators such as land and water use, and biodiversity, and to introduce the tool to other countries.

“While ecoSwitch is a much-needed first step in providing environmental transparency for grocery shoppers, the vision is for mandatory display of a single, standardised sustainability rating system on all supermarket products,” concluded Prof Neal.




Florida fossil porcupine solves a prickly dilemma 10 million years in the making


There’s a longstanding debate simmering among biologists who study porcupines. There are 16 porcupine species in Central and South America, but only one in the United States and Canada. DNA evidence suggests North America’s sole porcupine belongs to a group that originated 10 million years ago, but fossils seem to tell a different story. Some paleontologists think they may have evolved just 2.5 million years ago, at the beginning of the ice ages.

A new study published in the journal Current Biology claims to have reconciled the dispute, thanks to an exceptionally rare, nearly complete porcupine skeleton discovered in Florida. The authors reached their conclusion by studying key differences in bone structure between North and South American porcupines, but getting there wasn’t easy. It took an entire class of graduate and undergraduate students and several years of careful preparation and study.

“Even for a seasoned curator with all the necessary expertise, it takes an incredible amount of time to fully study and process an entire skeleton,” said lead author Natasha Vitek. While studying as a doctoral student at the Florida Museum of Natural History, Vitek teamed up with vertebrate paleontology curator Jonathan Bloch to create a college course in which students got hands-on research experience by studying porcupine fossils.

Ancient radiation gave rise to world’s largest rodents

Porcupines are a type of rodent, and their ancestors likely originated in Africa more than 30 million years ago. Their descendants have since wandered into Asia and parts of Europe by land, but their journey to South America is a particularly defining event in the history of mammals. They crossed the Atlantic Ocean — likely by rafting — when Africa and South America were much closer together than they are today. They were the first rodents to ever set foot on the continent, where they evolved into well-known groups like guinea pigs, chinchillas, capybaras and porcupines.

Some took on giant proportions. There were lumbering, rat-like animals up to five feet long, equipped with a tiny brain that weighed less than a plum. Extinct relatives of the capybara grew to the size of cows.

Porcupines remained relatively small and evolved adaptations for life in the treetops of South America’s lush rainforests. Today, they travel through the canopy with the aid of long fingers capped with blunt, sickle-shaped claws perfectly angled for gripping branches. Many also have long, prehensile tails capable of bearing their weight, which they use while climbing and reaching for fruit.

Despite their excellent track record of getting around, South America was a dead end for many millions of years. A vast seaway with swift currents separated North and South America, and most animals were unable to cross — with a few notable exceptions.

Beginning about 5 million years ago, the Isthmus of Panama rose above sea level, cutting off the Pacific from the Atlantic. This land bridge became the ancient equivalent of a congested highway a few million years later, with traffic flowing in both directions.

Prehistoric elephants, saber-toothed cats, jaguars, llamas, peccaries, deer, skunks and bears streamed from North America to South. The reverse trek was made by four different kinds of ground sloths, oversized armadillos, terror birds, capybaras and even a marsupial.

The two groups met with radically different fates. Those mammals migrating south did fairly well; many became successfully established in their new tropical environments and survived to the present. But nearly all lineages that ventured north into colder environments have gone extinct. Today, there are only three survivors: the nine-banded armadillo, the Virginia opossum and the North American porcupine.

New fossils catch evolution in the act

Animals that traveled north had to contend with new environments that bore little resemblance to the ones they left behind. Warm, tropical forests gave way to open grasslands, deserts and cold deciduous forests. For porcupines, this meant coping with brutal winters, fewer resources and coming down from the trees to walk on land. They still haven’t quite gotten the hang of the latter; North American porcupines have a maximum ground speed of about 2 mph.

South American porcupines are equipped with a menacing coat of hollow, overlapping quills, which offer a substantial amount of protection but do little to regulate body temperature. North American porcupines replaced these with a mix of insulating fur and long, needle-like quills that can be raised when they feel threatened. They also had to modify their diet, which changed the shape of their jaw.

“In winter, when their favorite foods are not around, they will bite into tree bark to get at the softer tissue underneath. It’s not great food, but it’s better than nothing,” Vitek said. “We think this type of feeding selected for a particular jaw structure that makes them better at grinding.”

They also lost their prehensile tails. Although North American porcupines still like climbing, it’s not their forte. Museum specimens often show evidence of healed bone fractures, likely caused by falling from trees.

Many of these traits can be observed in fossils. The problem is there aren’t many fossils to go around. According to Vitek, most are either individual teeth or jaw fragments, and researchers often lump them in with South American porcupines. Those that are considered to belong to the North American group lack the critical features that would provide paleontologists with clues to how they evolved.

So when Florida Museum paleontologist Art Poyer found an exquisitely preserved porcupine skeleton in a Florida limestone quarry, the researchers were well aware of its significance.

“When they first brought it in, I was amazed,” said Bloch, senior author of the study. “It is so rare to get fossil skeletons like this with not only a skull and jaws, but many associated bones from the rest of the body. It allows for a much more complete picture of how this extinct mammal would have interacted with its environment. Right away we noticed that it was different from modern North American porcupines in having a specialized tail for grasping branches.”

By comparing the fossil skeleton with bones from modern porcupines, Bloch and Vitek were confident they could determine its identity. But the amount of work this would require was more than one person could do on their own in a short amount of time. So they co-created a paleontology college course, in which the only assignment for the entire semester was studying porcupine bones.

“It’s the kind of thing that could only be taught at a place like the Florida Museum, where you have both collections and enough students to study them,” Vitek said. “We focused on details of the jaw, limbs, feet and tails. It required a very detailed series of comparisons that you might not even notice on the first pass.”

The results were surprising. The fossil lacked the reinforced bark-gnawing jaws and possessed a prehensile tail, making it appear more closely related to South American porcupines. But, Vitek said, other traits bore a stronger similarity to North American porcupines, including the shape of the middle ear bone as well as the shapes of the lower front and back teeth.

With all the data combined, analyses consistently provided the same answer. The fossils belonged to an extinct species of North American porcupine, meaning this group has a long history that likely began before the Isthmus of Panama had formed. But questions remain as to how many species once existed in this group or why they went extinct.

“One thing that isn’t resolved by our study is whether these extinct species are direct ancestors of the North American porcupine that is alive today,” Vitek said. “It’s also possible porcupines got into temperate regions twice, once along the Gulf Coast and once out west. We’re not there yet.”

Jennifer Hoeflich, Isaac Magallanes, Sean Moran, Rachel Narducci, Victor Perez, Jeanette Pirlo, Mitchell Riegler, Molly Selba, María Vallejo-Pareja, Michael Ziegler, Michael Granatosky and Richard Hulbert of the Florida Museum of Natural History are also authors on the paper.




Charge your laptop in a minute or your EV in 10? Supercapacitors can help


Imagine if your dead laptop or phone could charge in a minute or if an electric car could be fully powered in 10 minutes.

While not possible yet, new research by a team of CU Boulder scientists could potentially lead to such advances.

Published today in the Proceedings of the National Academy of Sciences, researchers in Ankur Gupta’s lab discovered how tiny charged particles, called ions, move within a complex network of minuscule pores. The breakthrough could lead to the development of more efficient energy storage devices, such as supercapacitors, said Gupta, an assistant professor of chemical and biological engineering.

“Given the critical role of energy in the future of the planet, I felt inspired to apply my chemical engineering knowledge to advancing energy storage devices,” Gupta said. “It felt like the topic was somewhat underexplored and as such, the perfect opportunity.”

Gupta explained that several chemical engineering techniques are used to study flow in porous materials such as oil reservoirs and water filtration, but they have not been fully utilized in some energy storage systems.

The discovery is significant not only for storing energy in vehicles and electronic devices but also for power grids, where fluctuating energy demand requires efficient storage to avoid waste during periods of low demand and to ensure rapid supply during high demand.

Supercapacitors, energy storage devices that rely on ion accumulation in their pores, have rapid charging times and longer life spans compared to batteries.

“The primary appeal of supercapacitors lies in their speed,” Gupta said. “So how can we make their charging and release of energy faster? By the more efficient movement of ions.”

Their findings modify Kirchhoff’s law, which has governed current flow in electrical circuits since 1845 and is a staple in high school students’ science classes. Unlike electrons, ions move due to both electric fields and diffusion, and the researchers determined that their movements at pore intersections are different from what was described in Kirchhoff’s law.

Prior to this study, the literature described ion movement only through a single straight pore. With this research, ion movement through a complex network of thousands of interconnected pores can now be simulated and predicted in a few minutes.
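As a rough picture of what "ion movement in a network of pores" involves, the toy sketch below pushes ions along the edges of a tiny pore graph with both a diffusive term and a field-driven drift term (a crude Nernst-Planck-style flux). It is an invented illustration under those assumptions, not the model published in the paper:

```python
# Invented toy model, not the one from the paper: ions on a small pore network
# move along each edge with a diffusive flux (concentration difference) plus a
# field-driven drift flux (potential difference).
import numpy as np

edges = [(0, 1), (1, 2), (1, 3)]                 # a tiny branched pore network
c = np.array([1.0, 0.2, 0.2, 0.2])               # ion concentration at each node
phi = np.array([0.0, -0.1, -0.2, -0.2])          # electric potential at each node
D, mu, dt = 1.0, 1.0, 0.05                       # diffusivity, mobility, time step

for _ in range(500):
    dc = np.zeros_like(c)
    for i, j in edges:
        c_edge = 0.5 * (c[i] + c[j])             # concentration on the edge
        # Flux from node i to node j: diffusion down the concentration gradient
        # plus drift of positive ions down the potential gradient.
        flux = D * (c[i] - c[j]) + mu * c_edge * (phi[i] - phi[j])
        dc[i] -= dt * flux
        dc[j] += dt * flux
    c += dc

print("Concentrations after relaxation:", np.round(c, 3))
```

The point of the drift term is exactly the distinction the researchers highlight: unlike electrons, ions respond to both electric fields and diffusion, so what happens where pores intersect differs from the simple current-splitting picture of Kirchhoff's law.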

“That’s the leap of the work,” Gupta said. “We found the missing link.”


