Friday, November 29, 2019

Unique sled dogs helped the Inuit thrive in the North American Arctic


UC Davis anthropologists and geneticists traced dogs' DNA back 2,000 years
University of California - Davis
IMAGE: A team of Greenland sled dogs, descendants of the dogs of the Inuit of the North American Arctic.
Credit: Tatiana Feuerborn
Inuit sled dogs have changed little since the Inuit brought them across the Bering Strait from Siberia to the North American Arctic, according to researchers who examined DNA from dogs spanning that period. The legacy of these Inuit dogs survives today in Arctic sled dogs, making them one of the last remaining descendant populations of indigenous, pre-European dog lineages in the Americas.
The latest research is the result of nearly a decade's work by University of California, Davis, researchers in anthropology and veterinary genetics, who analyzed the DNA of hundreds of dogs' ancient skeletal remains to determine that the Inuit dog had significantly different DNA than other Arctic dogs, including malamutes and huskies.
The article, "Specialized sledge dogs accompanied the Inuit dispersal across the North American Arctic," was published Wednesday in the Proceedings of the Royal Society B: Biological Sciences. From UC Davis, authors include Christyann Darwent, professor of anthropology; Ben Sacks, adjunct professor and director of the Mammalian Ecology and Conservation Unit, Veterinary Genetics Laboratory, School of Veterinary Medicine; and Sarah Brown, a postdoctoral researcher. Lead author Carly Ameen is an archaeologist from the University of Exeter; Tatiana Feuerborn is with the Globe Institute in Denmark and Centre for Palaeogenetics in Sweden; and Allowen Evin is at the CNRS, Université de Montpellier, Institut des Sciences de l'Evolution in Montpellier, France. The list of authors includes many others from a large number of collaborating institutions.
Qimmiit (dogs in Inuktitut) were viewed by the Inuit as particularly well-suited to long-distance hauling of people and their goods across the Arctic and consuming local resources, such as sea mammals, for food.
The unique group of dogs helped the Inuit conquer the tough terrain of the North American Arctic 2,000 years ago, researchers said. Inuit dogs are the direct ancestors of modern Arctic sled dogs, and although their appearance has continued to change over time, they continue to play an important role in Arctic communities.
Experts examined DNA from 921 dogs and wolves that lived during the last 4,500 years. Analysis of the DNA, together with the locations and time periods in which the remains were recovered archaeologically, shows that dogs from Inuit sites first occupied around 2,000 years ago were genetically different from the dogs already in the region.
According to Sacks, "the genetic profiles of ancient dogs of the American Arctic dating to 2,000 years ago were nearly identical to those of older dogs from Siberia, but contrasted starkly with those of more ancient dogs in the Americas, providing an unusually clear and definitive picture of the canine replacement event that coincided with the expansion of Thule peoples across the American Arctic two millennia ago."
Preserving an important history
Research confirms that native peoples maintained their own dogs. By analyzing the shape of skeletal elements from 391 dogs, the study also shows that the Inuit had larger dogs with a proportionally narrower cranium than earlier dogs belonging to pre-Inuit groups.
The National Science Foundation-funded portion of the research at UC Davis was inspired by Inuit activist and author Sheila Watt-Cloutier, who told Darwent about Inuit sled-dog culling undertaken by Canadian police in the 1950s and asked if there was a way to use scientific methods to tell the history and importance of sled dogs in the Arctic. Preservation of these distinctive Inuit dogs is likely a reflection of the highly specialized role that dogs played in both long-range transportation and daily subsistence practices in Inuit society.

Barbequed clams on the menu for ancient Puerto Ricans



Analysis of fossilized shells reveals cooking habits of Caribbean civilizations over 2500 years ago
Cardiff University
IMAGE: Photographs of all shells analyzed in this study.
Credit: Cardiff University
Scientists have reconstructed the cooking techniques of the early inhabitants of Puerto Rico by analysing the remains of clams.
Led by Philip Staudigel, who conducted the analysis as a graduate student at the University of Miami Rosenstiel School and is now a postdoctoral researcher at Cardiff University, the team has used new chemical analysis techniques to identify the exact cooking temperatures at which clams were cooked over 2500 years ago.
With cooking temperatures getting up to around 200°C according to the new analysis, the team believe the early Puerto Ricans were partial to a barbeque rather than boiling their food as a soup.
The study, which also involved academics from the University of Miami and Valencia College, has been published today in the journal Science Advances.
Whilst the results throw new light on the cultural practices of the first communities to arrive on the island of Puerto Rico, they also provide at least circumstantial evidence that ceramic pottery technology was not widespread during this period of history, since pottery would likely have been the only way in which the clams could have been boiled.
Lead author of the study Dr Philip Staudigel, currently at Cardiff University's School of Earth and Ocean Sciences, said: "Much of peoples' identity draws upon where they came from; one of the most profound expressions of this is in cooking. We learn to cook from our parents, who learned from their parents.
"In many parts of the world, written records extend back thousands of years, which often includes recipes. This is not the case in the Caribbean, as there were no written texts, except for petroglyphs. By learning more about how ancient Puerto Rican natives cooked their meals, we can relate to these long-gone peoples through their food."
In their study, the team analysed over 20kg of fossilised clam shells, collected from an archaeological site in Cabo Rojo, Puerto Rico, at the University of Miami's Rosenstiel School of Marine and Atmospheric Science Stable Isotope Lab.
The pre-Arawak population of Puerto Rico were the first inhabitants of the island, arriving sometime before 3000 BC from Central and/or South America. They subsisted primarily on fishing, hunting, and gathering near the mangrove swamps and coastal areas where they had settled.
The fossilised shells, dating back to around 700 BC, were cleaned and turned into a powder, which was then analysed to determine its mineralogy, as well as the abundance of specific chemical bonds in the sample.
When certain minerals are heated, the bonds between atoms in the mineral can rearrange themselves, and this rearrangement can be measured in the lab. The amount of rearrangement is proportional to the temperature to which the mineral was heated.
This technique, known as clumped isotope geochemistry, is often used to determine the temperature at which an organism formed, but in this instance it was used to reconstruct the temperature at which the clams were cooked.
The abundance of bonds in the powdered fossils was then compared to clams which were cooked at known temperatures, as well as uncooked modern clams collected from a nearby beach.
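To make the calibration idea concrete, here is a minimal sketch in Python with entirely hypothetical numbers (not the study's data): fit a simple linear relation between a clumped-isotope-style signal and known cooking temperatures from reference clams, then invert it for an archaeological shell.
```python
# Minimal sketch of the calibration idea, NOT the study's actual method.
# All numbers are hypothetical. Fit a linear relation between cooking
# temperature and a "bond rearrangement" signal measured on reference
# clams, then invert it for an unknown shell.
import numpy as np

ref_temp = np.array([25.0, 100.0, 150.0, 200.0, 250.0])  # known °C
ref_signal = np.array([0.70, 0.62, 0.56, 0.50, 0.44])    # measured signal

slope, intercept = np.polyfit(ref_temp, ref_signal, 1)

def estimate_temperature(signal: float) -> float:
    """Invert the calibration to estimate heating temperature (°C)."""
    return (signal - intercept) / slope

# Hypothetical archaeological shell:
print(f"Estimated cooking temperature: {estimate_temperature(0.53):.0f} °C")
```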
Results showed that the majority of clams were heated to temperatures greater than 100°C - the boiling point of water - but no greater than 200°C. The results also revealed a disparity between the cooking temperatures of different clams, which the researchers believe could be associated with a grilling technique in which the clams are heated from below, meaning the ones at the bottom were heated more than the ones at the top.
"The clams from the archaeological site appeared to be most similar to clams which had been barbequed," continued Dr Staudigel.
"Ancient Puerto Ricans didn't use cookbooks, at least none that lasted to the present day. The only way we have of knowing how our ancestors cooked is to study what they left behind. Here, we demonstrated that a relatively new technique can be used to learn what temperature they cooked at, which is one important detail of the cooking process."

Tuesday, November 26, 2019

Human migration out of Africa may have followed monsoons in the Middle East



Last year, scientists announced that a human jawbone and prehistoric tools found in 2002 in Misliya Cave, on the western edge of Israel, were between 177,000 and 194,000 years old.
The finding suggested that modern humans, who originated in Africa, began migrating out of the continent at least 40,000 years earlier than scientists previously thought.
But the story of how and when modern humans originated and spread throughout the world is still in draft form. That's because science hasn't settled how many times modern humans left Africa, or just how many routes they may have taken.
A new study published this week [Nov. 25, 2019] in the Proceedings of the National Academy of Sciences by American and Israeli geoscientists and climatologists provides evidence that summer monsoons from Asia and Africa may have reached into the Middle East for periods of time going back at least 125,000 years, providing suitable corridors for human migration.
The likely timing of these northward monsoon expansions corresponds with cyclical changes in Earth's orbit that would have brought the Northern Hemisphere closer to the sun and led to increased summer precipitation. With increased summer precipitation there may have been increased vegetation, supporting animal and human migration into the region.
"It could be important context for experts studying how, why, and when early modern humans were migrating out of Africa," says lead author Ian Orland, a University of Wisconsin-Madison geoscientist now at the Wisconsin Geological and Natural History Survey, in the Division of Extension. "The Eastern Mediterranean was a critical bottleneck for that route out of Africa and if our suggestion is right, at 125,000 years ago and potentially at other periods, there may have been more consistent rainfall on a year-round basis that might enhance the ability of humans to migrate."
For as long as humans have kept records, winters have been wet and summers have been hot and dry in the Levant, a region that includes Israel, Syria, Lebanon, Jordan and Palestine. Before modern times, those hot, dry summers would have presented a significant barrier to people trying to move across the landscape.
Scientists, though, have found it difficult to determine what kinds of precipitation patterns might have existed in the prehistoric Levant. Some studies examining a variety of evidence, including pollen records, ancient lake beds, and Dead Sea sediments, along with some climate modeling studies, indicate summers in the region may have, on occasion, been wet.
To try to better understand this seasonality, Orland and colleagues looked at cave formations called speleothems in Israel's Soreq Cave. Speleothems, such as stalactites and stalagmites, form when water drips into a cave and deposits a hard mineral called calcite. The water contains chemical fingerprints called isotopes that keep a record, like an archive, of the timing and environmental conditions under which speleothems have grown.
Among these isotopes are different forms of oxygen atoms -- a light form called oxygen-16 and a heavy form called oxygen-18. Today, the water contributing to speleothem growth throughout much of the year contains both heavy and light oxygen, with the light oxygen predominantly delivered by rainstorms during the winter wet season.
Orland and his colleagues hypothesized they might be able to discern from speleothems whether two rainy seasons had contributed to their growth at times in the past because they might show a similar signature of light oxygen in both winter and summer growth.
But to make this comparison, the scientists had to make isotope measurements across single growth bands, which are narrower than a human hair. Using a sensitive instrument in the UW-Madison Department of Geoscience called an ion microprobe, the team measured the relative amounts of light and heavy oxygen at seasonal increments across the growth bands of two 125,000-year-old speleothems from Soreq Cave.
This was the first time that seasonal changes were directly measured in a speleothem this old.
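As a rough illustration of the band-by-band measurement described above, the following sketch (hypothetical ratios and cutoff, not the team's pipeline) converts 18O/16O ratios from successive growth bands into delta-18O values and flags bands light enough to hint at rain-fed growth.
```python
# Rough illustration only: converting ion-microprobe 18O/16O ratios from
# successive speleothem growth bands into delta-18O values and flagging
# isotopically light bands. Ratios, the reference standard choice, and
# the -1 permil cutoff are all hypothetical.
VSMOW = 0.0020052  # 18O/16O of Vienna Standard Mean Ocean Water

def delta18O(ratio: float) -> float:
    """delta-18O in permil relative to the VSMOW standard."""
    return (ratio / VSMOW - 1.0) * 1000.0

band_ratios = [0.0020030, 0.0019985, 0.0020045, 0.0019978]
for i, r in enumerate(band_ratios, start=1):
    d = delta18O(r)
    tag = "light (possible rain-fed season)" if d < -1.0 else "heavier"
    print(f"band {i}: delta-18O = {d:+.2f} permil -> {tag}")
```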
At the same time that Orland was in pursuit of geologic answers, his UW-Madison colleague in the Nelson Institute for Environmental Studies Center for Climatic Research, Feng He, was independently using climate models to examine how vegetation on the planet has changed with seasonal fluctuations over the last 800,000 years. Colleagues since graduate school, He and Orland teamed up to combine their respective approaches after learning their studies were complementary.
A previous study in 2014 from UW-Madison climatologist and Professor Emeritus John Kutzbach showed that the Middle East may have been warmer and wetter than usual during two periods of time corresponding roughly to 125,000 years ago and 105,000 years ago. Meanwhile, at a point in between, 115,000 years ago, conditions there were more similar to today.
The wetter time periods corresponded to peak summer insolation in the Northern Hemisphere, when Earth passes closer to the sun due to subtle changes in its orbit. The drier time period corresponded to one of its farthest orbits from the sun. Monsoon seasons tend to be stronger during peak insolation.
This provided He an opportunity to study summer rainfall in the Middle East under high and low insolation, and to examine its isotopic signatures.
The climate model "fueled the summer monsoon hypothesis" because it suggested that "under these conditions, the monsoons could have reached the Middle East and would have a low O18 signature," He, a study co-author, says. "It's a very intriguing period in terms of climate and human evolution."
His model showed that northward expansion of the African and Asian summer monsoons was possible during this time period, would have brought significant rainfall to the Levant in the summer months, would have nearly doubled annual precipitation in the region, and would have left an oxygen isotope signature similar to winter rains.
At the same time, Orland's speleothem isotope analysis also suggested summers were rainier during peak insolation at 125,000 and 105,000 years ago.
For similar reasons, the Middle East may have also been warm and humid around 176,000 years ago, the researchers say -- about when the jawbone made its way to Misliya Cave. Before the jawbone find, the oldest modern human fossils known outside of Africa were from Israel's Skhul Cave, dating back between 80,000 and 120,000 years ago.
Overall, the study suggests that during a period of time when humans and their ancestors were exploring beyond the African continent, conditions may have been favorable for them to traverse the Levant.
"Human migration out of Africa occurred in pulses, which is definitely consistent with our idea that every time the Earth is closer to the sun, the summer monsoon is stronger and that's the climatic window that opened and provided opportunities for human migration out of Africa," says He.

Wednesday, November 20, 2019

Only eat oysters in months with an 'r'? Rule of thumb is at least 4,000 years old



GAINESVILLE, Fla. --- Foodie tradition dictates only eating wild oysters in months with the letter "r" - from September to April - to avoid watery shellfish, or worse, a nasty bout of food poisoning. Now, a new study suggests people have been following this practice for at least 4,000 years.
An analysis of a large shell ring off Georgia's coast revealed that the ancient inhabitants of St. Catherines Island limited their oyster harvest to the non-summer months.
How can scientists know when islanders were collecting oysters? By measuring parasitic snails.
Snails known as impressed odostomes, Boonea impressa, are common parasites of oysters, latching onto a shell and inserting a stylus to slurp the soft insides. Because the snail has a predictable 12-month life cycle, its length at death offers a reliable estimate of when the oyster host died, allowing Florida Museum of Natural History researchers Nicole Cannarozzi and Michal Kowalewski to use it as a tiny seasonal clock for when people collected and ate oysters in the past.
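The "seasonal clock" logic can be illustrated with a toy calculation. Assuming, purely for illustration, a fixed spawn month and roughly linear growth over the snail's 12-month life, shell length at death maps onto an estimated harvest month; all parameters below are hypothetical.
```python
# Toy version of the "seasonal clock", not the study's method. Assume a
# fixed spawn month and roughly linear growth over the snail's 12-month
# life, so shell length at death maps to months since spawning. The
# spawn month, maximum length, and sample lengths are all hypothetical.
SPAWN_MONTH = 5        # assume a May spawn (hypothetical)
MAX_LENGTH_MM = 6.0    # assumed length reached at 12 months
MONTHS = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

def harvest_month(length_mm: float) -> str:
    """Estimate the month the host oyster was collected."""
    months_old = round(12 * length_mm / MAX_LENGTH_MM)
    return MONTHS[(SPAWN_MONTH - 1 + months_old) % 12]

for length in (1.5, 3.0, 5.5):
    print(f"{length} mm snail -> harvested around {harvest_month(length)}")
```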
Stowaways on discarded oyster shells, the snails offer new insights into an old question about the shell rings that dot the coasts of Florida, Georgia, South Carolina and Mississippi.
"People have been debating the purpose of these shell rings for a very long time," said Cannarozzi, the study's lead author and Florida Museum environmental archaeology collection manager. "Were they everyday food waste heaps? Temporary communal feasting sites? Or perhaps a combination? Understanding the seasonality of the rings sheds new light on their function."
Cannarozzi and Kowalewski, Thompson Chair of Invertebrate Paleontology, analyzed oysters and snails from a 230-foot-wide, 4,300-year-old shell ring on St. Catherines Island and compared them with live oysters and snails. They found that island inhabitants were primarily harvesting oysters during late fall, winter and spring, which also suggested the presence of people on the island tapered off during the summer.
The seasonality of the shell ring may be one of the earliest records of sustainable harvesting, Cannarozzi said. Oysters in the Southeast spawn from May to October, and avoiding oyster collection in the summer may help replenish their numbers.
"It's important to look at how oysters have lived in their environment over time, especially because they are on the decline worldwide," she said. "This type of data can give us good information about their ecology, how other organisms interact with them, the health of oyster populations and, on a grander scale, the health of coastal ecosystems."
Cannarozzi said using impressed odostomes to gauge what time of year oysters were harvested offers an independent way to assess ancient patterns of oyster gathering. This approach can complement other archaeological methods, including stable isotope analysis and examining shell growth rings.
Kowalewski said the method could be applied to other marine invertebrate studies if the "timepiece" organism's life cycle meets several key requirements.
"If you have species with a lifespan of one year or less, consistent growth patterns and predictable spawning behavior, you could potentially use them as clocks as well," he said. "We might be able to use this type of strategy to reconstruct population dynamics or the natural history of various species, especially those that are extinct."
Cannarozzi and Kowalewski emphasized the importance of interdisciplinary collaboration in addressing longstanding research questions in new ways. Their project combined paleontology, the study of fossils and other biological remains, with archaeology, which emphasizes human history. Cannarozzi's specialization - environmental archaeology - also explores the close connections between humans and their natural resources.
"People have affected the distributions, life cycles and numbers of organisms over time," Cannarozzi said. "Understanding how people in the past interacted with and influenced their environment can inform our conservation efforts today."

Tuesday, November 19, 2019

Scientists use modern technology to understand how ochre paint was created in pictographs



Ochre, one of Earth's oldest naturally occurring materials, was often used as a vivid red paint
University of Missouri-Columbia
IMAGE: This is one of the pieces of rock art found at Babine Lake. It is representative of the rock art that was analyzed in the study.
Credit: University of Missouri
Ochre, one of Earth's oldest naturally occurring materials, was often used as a vivid red paint in ancient rock art known as pictographs across the world. Despite its broad use throughout human history and a modern focus on how the artistic symbolism is interpreted, little research exists on the paint itself and how it was produced.
Now, scientists led by Brandi MacDonald at the University of Missouri are using archaeological science to understand how ochre paint was created by hunter-gatherers in North America to produce rock art located at Babine Lake in British Columbia. The study was published in Scientific Reports, a journal of Nature.
"Ochre is one of the only types of material that people have continually used for over 200,000 years, if not longer," said MacDonald, who specializes in ancient pigments. "Therefore, we have a deep history in the archeological record of humans selecting and engaging with this material, but few people study how it's actually made."
This is the first study of the rock art at Babine Lake. It shows that the individuals who prepared the ochre paints harvested aquatic, iron-rich bacteria from the lake in the form of an orange-brown sediment.
In the study, the scientists used modern technology, including the ability to heat a single grain of ochre and watch the effects of temperature change under an electron microscope at MU's Electron Microscopy Core facility. They determined that individuals at Babine Lake deliberately heated this bacterial sediment to a temperature range of approximately 750°C to 850°C to initiate the color transformation.
"It's common to think about the production of red paint as people collecting red rocks and crushing them up," MacDonald said. "Here, with the help of multiple scientific methods, we were able to reconstruct the approximate temperature at which the people at Babine Lake were deliberately heating this biogenic paint over open-hearth fires. So, this wasn't a transformation done by chance with nature. Today, engineers are spending a lot of money trying to determine how to produce highly thermo-stable paints for ceramic manufacturing or aerospace engineering without much known success, yet we've found that hunter-gatherers had already discovered a successful way to do this long ago."

Friday, November 15, 2019

Early DNA lineages from Finland shed light on the diverse origins of the contemporary population



IMAGE: Medieval burial site of Kalmistomäki in Kylälahti, Hiitola, Russia.
Credit: Stanislav Belskiy
A new genetic study carried out at the University of Helsinki and the University of Turku demonstrates that, at the end of the Iron Age, Finland was inhabited by separate and differing populations, all of them influencing the gene pool of modern Finns. The study is so far the most extensive investigation of the ancient DNA of people inhabiting the region of Finland.
In the study, genes were investigated from archaeological bone samples of more than one hundred individuals who lived between the 4th and 19th centuries AD. Most of the samples originated in the Iron Age and the Middle Ages. Mitochondrial DNA (mtDNA), which is passed down by mothers to all of their offspring, was extracted from the individuals, thus uncovering the population history of women.
Based on the findings, the people who inhabited Finland in the Iron Age (approximately 300-1300 AD) and the Middle Ages (approximately 1200-1500 AD) shared mitochondrial lineages with today's Finns. However, significant differences were seen in the genomes of individuals buried in different burial sites in the Iron Age in particular. mtDNA lineages typical of Stone Age hunter-gatherers were common among those buried in Luistari, Eura (southwest Finland), and Kirkkailanmäki, Hollola (southern Finland). In Kylälahti, Hiitola (Republic of Karelia, Russia) and Tuukkala, Mikkeli (eastern Finland), the most common findings were lineages characteristic of ancient European farmer populations. The fifth Iron Age burial site included in the study is located in Levänluhta, western Finland. Many of the individuals buried there represented mtDNA lineages associated with the modern Sami.
"All of the above originally independent lineages remain common in Finland to this day. This indicates that the studied Iron Age populations have had an impact on the gene pool of contemporary Finns," says doctoral student Sanni Oversti from the Faculty of Biological and Environmental Sciences, University of Helsinki, Finland.
The researchers posit that the differences found in the Iron Age populations of western and eastern Finland are the opposite of those found in today's Finns: the lineages associated with ancient farmers were more common in the east, while the lineages inherited from hunter-gatherers were more prevalent in the west. A potential explanation is that farmer populations arrived in Finland not only from the west and south but also from the east.

Thursday, November 14, 2019

Alpine rock axeheads became social and economic exchange fetishes in the Neolithic

Universitat Autònoma de Barcelona
IMAGE: Alpine rock axehead found at Harras, Thuringia, from the Michelsberg Culture (c. 4300-2800 BCE).
Credit: Juraj Lipták, State Office for Heritage Management and Archaeology Saxony-Anhalt.
Axeheads made out of Alpine rocks had strong social and economic symbolic meaning in the Neolithic, given their production and use value. Their resistance to friction and breakage, which permitted intense polishing and re-elaboration of the rocks, gave these artefacts an elevated exchange value, key to the formation of long-distance exchange networks among communities of Western Europe -- communities that had already begun to set a product's exchange value according to the time and effort invested in producing it.
This is what a study led by a research group at the Universitat Autònoma de Barcelona (UAB) indicates in regards to the mechanical and physical parameters characterising the production, circulation and use of a series of rock types used in the manufacturing of sharp-edged polished artefacts in Europe during the Neolithic (5600-2200 BCE).
The objective of the study was to answer a long debated topic: the criteria by which Alpine rocks formed part of an unprecedented pan-European phenomenon made up of long-distance exchange networks, while others were only used locally. Was the choice based on economic, functional or perhaps subjective criteria? Stone axeheads were crucial to the survival and economic reproduction of societies in the Neolithic. Some of the rocks used travelled over 1000 kilometres from their Alpine regions to northern Europe, Andalusia in southern Spain and the Balkans.
This is the first time a study in the specialised literature has included comparative data obtained by testing the rocks' resistance to friction and breakage. These mechanical parameters have led to the definition of production and use values, which were then correlated with the distances and volumes of the rocks exchanged in order to obtain their exchange value. The results help explain the basic principles underlying the supply and distribution system of stone materials during the Neolithic in Western Europe, as well as its related economic logic.
"The reasons favouring the integration of specific rock types into these long-distance networks depended on a complex pattern of technological and functional criteria. This pattern was not solely based on economic aspects, their use value, but rather on the mechanical capacity to resist successive transformation processes, i.e. their production value, and remain unaltered throughout time", explains Selina Delgado-Raack, researcher at the Department of Prehistory, UAB, and first author of the article.
Supply System and Economic Logic
The study points to the diverging economic conception between the manufacturing of tools using other rocks and Alpine rock axeheads. Neolithic communities selected the most suitable raw materials available from all the resources in their region and knew each of their mechanical and physical characteristics. These tools normally travelled in a radius of 200 kilometres from where they originated and rarely went farther than 400-500 kilometres. Only Alpine rocks travelled further than those regional and economic limits.
"The circulation of these rocks at larger distances did not respond to a functional and cost-efficient logic, in which each agent takes into account the costs of manufacturing and transport when selecting the different rock types, all of them viable in being converted into fully functioning tools", indicates Roberto Risch, also researcher at the Department of Prehistory, UAB, and coordinator of the research. "It rather obeys the emergence of a very different economic reasoning, based on the ability to transform one material through ever greater amounts of work, something which many centuries later Adam Smith used to define the British economy of the 18th century. In the case of Alpine axeheads, their exceptional exchange value was due to the increase in manufacturing costs, a result of the intense polishing of these stones as they passed from one community to another".
A Primitive Form of Currency?
For the research team, the fact that the Alpine axeheads are categorised as the most commonly crafted and modified artefact in different periods and regions during the Neolithic rules out their role as symbols of power or ceremonial elements. "The economic pattern points towards more of a fetish object used in social and economic interactions among European communities of highly different socio-political productions and orientations", Selina Delgado-Raack states.
The exceptional exchange value reached by some rock types, such as the omphacitites and jadeitites, leads the team to think that they may have been used as a primitive form of currency, although they admit that there is a need for more studies before this topic can be clarified.

Wednesday, November 13, 2019

Ancient Egyptians gathered birds from the wild for sacrifice and mummification



In ancient Egypt, Sacred Ibises were collected from their natural habitats to be ritually sacrificed, according to a study released November 13, 2019 in the open-access journal PLOS ONE by Sally Wasef of Griffith University, Australia and colleagues.
Egyptian catacombs are famously filled with the mummified bodies of Sacred Ibises. Between around 664 BC and 250 AD, it was common practice for the birds to be sacrificed -- or, much more rarely, worshipped -- in ritual service to the god Thoth, and subsequently mummified. In ancient sites across Egypt, these mummified birds are stacked floor to ceiling along kilometers of catacombs, totaling many millions of birds. But how the Egyptians got access to so many birds has been a mystery; some ancient texts indicate that long-term farming and domestication may have been employed.
In this study, Wasef and colleagues collected DNA from 40 mummified Sacred Ibis specimens from six Egyptian catacombs dating to around 2,500 years ago, and from 26 modern specimens from across Africa. Fourteen of the mummies and all of the modern specimens yielded complete mitochondrial genome sequences. These data allowed the researchers to compare genetic diversity between wild populations and the sacrificed collections.
If the birds were being domesticated and farmed, the expected result would be low genetic diversity due to interbreeding of restricted populations, but in contrast, this study found that the genetic diversity of mummified Ibises within and between catacombs was similar to that of modern wild populations. This suggests that the birds were not the result of centralized farming, but instead short-term taming. The authors suggest the birds were likely tended in their natural habitats or perhaps farmed only in the times of year they were needed for sacrifice.
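The underlying comparison can be sketched with a toy diversity calculation: nucleotide diversity (average pairwise differences per site) computed for made-up mummified and wild sequence sets. Comparable values in the two sets would, as in the study, argue against centralized farming.
```python
# Toy sketch of the diversity comparison with made-up sequences: if the
# birds came from a small farmed breeding stock, pairwise differences
# among mummies should be much lower than among wild birds.
from itertools import combinations

def nucleotide_diversity(seqs: list[str]) -> float:
    """Mean pairwise differences per site across aligned sequences."""
    pairs = list(combinations(seqs, 2))
    diffs = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs)
    return diffs / (len(pairs) * len(seqs[0]))

mummified = ["ACGTACGT", "ACGTTCGA", "GCGAACGT"]  # hypothetical alignments
wild      = ["ACGTACGT", "ACCTACGA", "TCGAACGT"]

print(f"mummified pi = {nucleotide_diversity(mummified):.3f}")
print(f"wild pi      = {nucleotide_diversity(wild):.3f}")
# Comparable values, as found in the study, argue against farming.
```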
The authors add: "We report the first complete ancient genomes of the Egyptian Sacred Ibis mummies, showing that priests sustained short-term taming of the wild Sacred Ibis in local lakes or wetlands contrary to centralised industrial scale farming of sacrificial birds."

World's oldest glue used from prehistoric times till the days of the Gauls


Birch bark tar, the oldest glue in the world, was in use for at least 50,000 years, from the Palaeolithic Period up until the time of the Gauls. Made by heating birch bark, it served as an adhesive for hafting tools and decorating objects. Scientists mistakenly thought it had been abandoned in western Europe at the end of the Iron Age (800-25 BC) and replaced by conifer resins, around which a full-fledged industry developed during the Roman period. But by studying artefacts that date back to the first six centuries AD through the lens of chemistry, archaeology, and textual analysis, researchers from the CNRS, Université Nice Sophia Antipolis / Université Côte d'Azur, and Inrap have discovered birch tar was being used right up to late antiquity, if not longer. The artefacts in question--found in a region where birch is scarce, thus raising the question of how it was procured--are testimony to the strength of tradition among the Gauls. The scientists' findings are published in Antiquity (13 November 2019).

Megadrought likely triggered the fall of the Assyrian Empire

Climate change influenced rise and fall of Northern Iraq's Neo-Assyrian Empire


Changes in climate may have contributed to both the rise and collapse of the Neo-Assyrian Empire in northern Iraq, which was considered the most powerful empire of its time, according to a new study. The results suggest that multi-decade megadroughts aligned with the timing of the empire's collapse in 609 BCE, triggering declines in the region's agricultural productivity that led to political and economic demise within 60 years. Previous explanations for the empire's collapse have focused on political instability and wars; the role of climate change was largely "ignored," the authors say, in part because of a lack of high-resolution paleoclimate records from the region. Ashish Sinha et al. gathered oxygen and carbon isotopic data from two stalagmites found in Kuna Ba Cave in northern Iraq, which provide a precisely dated record of precipitation over the last 4,000 years. These records indicate that the interval between 850 and 740 BCE (when the empire was at its zenith) was one of the wettest periods in 4,000 years, with precipitation levels during the cool season 15 to 30% higher than during the modern 1980-2007 period. However, the record also suggests that cool season precipitation during a seventh century BCE megadrought may have fallen below the level required for productive farming. Since the empire was highly dependent on agriculture, Sinha and colleagues conclude the megadrought would have likely exacerbated political unrest and may have encouraged invading armies that ultimately led to Assyrian collapse. The authors also note that their data suggest that the recent multi-year droughts, if they were to continue over a century, would constitute the worst episodes of regional drought in the last four millennia.

New research suggests it was climate-related drought that built the foundation for the collapse of the Assyrian Empire (whose heartland was based in today's northern Iraq)--one of the most powerful civilizations in the ancient world. The Science Advances paper, led by Ashish Sinha at California State University, Dominguez Hills and coauthored by CIRES affiliate Adam Schneider, details how megadroughts in the 7th century BC triggered a decline in Assyria's way of life that contributed to its ultimate collapse.
Keep reading for a Q&A with Schneider, who was a CIRES researcher from 2015 to 2017:
Q: What role did the Assyrian Empire play in global history?
A: There are people in the archaeological community who say Neo-Assyria was the first superpower in the history of the world. The Neo-Assyrian empire (912-609 BC) was the third and final phase of Assyrian civilization. It was by far the largest empire in the region up to that time, controlling much of the territory from the Persian Gulf to Cyprus. The Assyrians were basically like the Empire in Star Wars: the all-devouring machine.
They also had incredible skill as hydro-engineers. The Assyrians were largely responsible for the way the Tigris River Basin drainage now works: they completely remade the natural water flows of that landscape using aqueducts and other hydraulic infrastructure. Amazingly, some of these features are still functioning today.
Q: How did a culture this powerful collapse?
A: In the final decades of the Neo-Assyrian empire, the civilization was riddled with political instability, civil wars, and invasion by outside armies. Our study shows that climate-related factors were underlying all of this.
The Assyrian empire was built during a time of heavy precipitation and successful harvests. But now we can tell, from climate records, that the civilization experienced a series of megadroughts that likely triggered the collapse of the empire--weakening agriculture and amplifying conflict. The impact of drought in this region was dependent on where the Assyrians were located in northern Iraq. The Tigris River is so deeply cut into the surrounding soil that you can't do large-scale irrigation there. That's why rainfall was so crucial to their lives. The Assyrians were much more vulnerable to the impacts of prolonged and severe drought than people downriver.
Q: How are these findings different than previous research?
A: Our team analyzed drip water that got fossilized in two stalagmites in Kuna Ba Cave in northern Iraq. Because the oxygen and carbon isotope composition in different layers of the cave formations can be used to infer changes in precipitation at a high temporal resolution, we get a much better proxy than anything else we had previously. And because the isotope record went all the way up to 2007 CE, we were able to correlate the stable carbon and oxygen isotope ratios with modern instrumental climate information from the region. This has enabled us to compare the modern isotope data with ancient layers. We now know that the Assyrian droughts started decades earlier than we had previously thought, and also that the period prior to the onset of drought was one of the wettest in the entire roughly 3800-year sequence. It changes some of the hypotheses we have made.
For example: King Sennacherib, who ruled from 705 to 681 BC, was well-known for building massive canals and other structures. In our earlier work on the question of drought in ancient Assyria, I and my colleague Dr. Selim Adalı had initially viewed him as a short-sighted ruler who had pursued short-term political goals at the expense of long-term drought resilience, and set in motion a catastrophic chain of events as a result. But with this new data, we now think that Sennacherib probably was already experiencing drought when he was king, and in fact he may well have been trying to do something about the environmental calamity during that time. So my colleagues and I have joked about issuing an apology letter to Sennacherib for the misunderstanding!
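The calibration step described above can be sketched roughly as follows (made-up numbers, not the Kuna Ba data): fit the isotope-precipitation relation over the instrumental overlap period, then use it to express an ancient layer as a rainfall estimate.
```python
# Rough sketch of calibrating an isotope record to instrumental data;
# every number here is made up (this is not the Kuna Ba record).
import numpy as np

# Overlap with the instrumental era: isotope values vs observed rainfall.
d18o_modern = np.array([-5.2, -5.0, -5.6, -4.8, -5.4])
rain_modern = np.array([310., 295., 360., 270., 335.])  # mm, cool season

slope, intercept = np.polyfit(d18o_modern, rain_modern, 1)

# Hindcast: express an ancient layer's isotope value as rainfall.
d18o_ancient = -5.8
print(f"estimated rainfall: {slope * d18o_ancient + intercept:.0f} mm")
```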
Q: How did you find yourself in this area of research: the crossroads between climate and history?
A: Archeology has been my passion since I was a small child. The climate angle I wanted no part of, because that was the family business--my father was a climatologist, and I didn't want to compete. But in the summer of 2010 he suddenly passed away. At that point I was without a clear dissertation project and started to rethink the idea of looking at climatic impacts on ancient people. So it started as a tribute to my father. I ended up going to a research center in Turkey and I got hooked. In fact, I very quickly earned the nickname "Climate Guy" as historians would come ask me if their research had any climatic basis.
Q: Have there been other times in history, in other places, where climate events impacted political structure like in Assyria?
A: The French Revolution is one example. In the two years prior to the French Revolution, poor weather led to a series of bad harvests, which alongside other factors helped to cause the price of bread to skyrocket, especially in Paris. Another example is the U.S. Dust Bowl in the 1930s. We saw a mass migration resulting from both climatic and economic factors during the Great Depression, causing huge changes. It drove development and agriculture in southern California. The question is not, "Did climate have an impact?"--it's: "How, why, and how important was climate alongside the other factors?"
Q: What about modern day?
A: If you look at the record, the Assyrian Megadrought and what I have called the Late Assyrian Dry Phase is one of the two most extreme periods of dry conditions in the entire 3800-year sequence for northern Iraq. The other is the present day. Our working assumption here is that the latter is being driven at least in part by anthropogenic climate change.
Obviously, today Iraq is a very different place than it was in 700 B.C. But it's not hard to look at that country's problems with internal political stability and sectarian strife to think about the additional issue of drought leading to further trouble in that region.

Thursday, November 7, 2019

The medieval Catholic church's influence on psychology of Western, industrialized societies


The Western Catholic Church's influence on marriage and family structures during the Middle Ages shaped the cultural evolution of the beliefs and behaviors now common among Western Europeans and their cultural descendants, researchers report. The greater individualism, lower conformity and increased stranger trust behaviors commonly observed among these populations, long exposed to the church, are at least in part due to the Medieval Western Church's policies, the authors say. Their study highlights how cultural changes more than 500 years ago can evolve and seed significant and long-lasting psychological variation, both within and across nations. "Illuminating the ways in which cultures vary - and why they have evolved in different ways given certain socioenvironmental forces - can help us to empathize with those who are different," writes Michele Gelfand in a related Perspective.

Substantial variation exists in the psychological beliefs and behaviors of populations across the globe. In particular, the proclivities of individuals in western, industrialized countries are unique. Previous research has shown that these societies, more recently characterized as Western, Educated, Industrialized, Rich and Democratic - or "WEIRD" - tend to be more individualistic, analytically oriented and trustful of others whilst demonstrating less conformity, obedience and solidarity.

Whether the driver of these traits is formal political institutions, for example, or something else, has been unclear. Jonathan Schulz and colleagues hypothesized that the Western Catholic Church's marriage and family program dissolved strong, cohesive kin networks, which in turn shaped psychology. To test this, they combined anthropological, historical and psychological data. For example, analyzing records kept by the Vatican, which informed the rate of cousin marriages, helped them to evaluate kinship intensity. To capture human psychology, they drew on a very broad set of data, including survey data, behavioral data, and ecologically relevant observational data such as voluntary blood donation.

Schulz et al.'s analysis points to the expanding Church's religious decrees on marriage systematically replacing extended kin-based family networks with smaller, more independent nuclear households with weak family ties. To rule out alternative hypotheses that could explain their results, they controlled for variables including geographic factors, income, wealth, and education.
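As a purely illustrative sketch of this kind of analysis (random stand-in data, not the study's variables), one can regress a psychological measure on a kinship-intensity index while controlling for other factors:
```python
# Illustrative only: regress a psychology measure on kinship intensity
# with controls, using randomly generated stand-in data (not the
# study's data or variable definitions).
import numpy as np

rng = np.random.default_rng(0)
n = 500
kinship = rng.normal(size=n)      # e.g., a cousin-marriage-based index
income = rng.normal(size=n)
education = rng.normal(size=n)
# Hypothetical outcome: individualism falls as kinship intensity rises.
individualism = -0.5 * kinship + 0.2 * income + rng.normal(size=n)

X = np.column_stack([np.ones(n), kinship, income, education])
coef, *_ = np.linalg.lstsq(X, individualism, rcond=None)
print(f"kinship coefficient, net of controls: {coef[1]:+.2f}")
```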

Ancient Roman DNA reveals genetic crossroads of Europe and the Mediterranean


All roads may lead to Rome, and in ancient times, a great many European genetic lineages did too, according to a new study. Its results, perhaps the most detailed analysis of changing genetic variation patterns in the region to date, reveal a dynamic population history from the Mesolithic (~10,000 BCE) into modern times, and spanning the rise and fall of the Roman Empire. At its height, the ancient Roman Empire sprawled across three continents, encompassing the entirety of the Mediterranean and the lives of tens of millions across Europe, the Near East and North Africa. The size of the city at its center, Rome - the first to reach more than one million residents in the ancient world - would remain unrivaled in Europe until the dawn of the industrial revolution nearly 1,500 years later. Even long before the rise of Imperial Rome, the region was an important cultural crossroads between Europe and the Mediterranean. However, while Rome and central Italy's antiquity is well-documented in a rich archaeological and historical record, little is known about the region's genetic history.

Margaret Antonio and colleagues present a new genetic record, built from genome data of 127 ancient individuals from 29 archaeological sites in and around Rome, spanning nearly 12,000 years of Roman prehistory and history. Antonio et al. revealed two major prehistoric ancestry shifts - one occurring as Neolithic farmers replaced Mesolithic hunter-gatherers roughly 7,000 years ago, and another during the Bronze Age, likely coinciding with increased trade and interaction with populations from across the Mediterranean. The results suggest that by Rome's founding, the genetics of ancient central Italy were much the same as those seen in modern populations. However, throughout the historic period (the past 3,000 years), genetic ancestry was greatly diverse, with genetic contributions from individuals from across the Near East, Europe and North Africa, and changes largely reflected major Roman historical events, the authors say.

Stanford scientists link Neanderthal extinction to human diseases


Stanford University -- School of Humanities and Sciences
IMAGE: This is an illustration of modern humans overcoming disease burden before Neanderthals.
Credit: Vivian Chen Wong
Growing up in Israel, Gili Greenbaum would give tours of local caves once inhabited by Neanderthals and wonder along with others why our distant cousins abruptly disappeared about 40,000 years ago. Now a scientist at Stanford, Greenbaum thinks he has an answer.
In a new study published in the journal Nature Communications, Greenbaum and his colleagues propose that complex disease transmission patterns can explain not only how modern humans were able to wipe out Neanderthals in Europe and Asia in just a few thousand years but also, perhaps more puzzling, why the end didn't come sooner.
"Our research suggests that diseases may have played a more important role in the extinction of the Neanderthals than previously thought. They may even be the main reason why modern humans are now the only human group left on the planet," said Greenbaum, who is the first author of the study and a postdoctoral researcher in Stanford's Department of Biology.
The slow kill
Archeological evidence suggests that the initial encounter between Eurasian Neanderthals and an upstart new human species that recently strayed out of Africa -- our ancestors -- occurred more than 130,000 years ago in the Eastern Mediterranean in a region known as the Levant.
Yet tens of thousands of years would pass before Neanderthals began disappearing and modern humans expanded beyond the Levant. Why did it take so long?
Employing mathematical models of disease transmission and gene flow, Greenbaum and an international team of collaborators demonstrated how the unique diseases harbored by Neanderthals and modern humans could have created an invisible disease barrier that discouraged forays into enemy territory. Within this narrow contact zone, which was centered in the Levant where first contact took place, Neanderthals and modern humans coexisted in an uneasy equilibrium that lasted tens of millennia.
Ironically, what may have broken the stalemate and ultimately allowed our ancestors to supplant Neanderthals was the coming together of our two species through interbreeding. The hybrid humans born of these unions may have carried immune-related genes from both species, which would have slowly spread through modern human and Neanderthal populations.
As these protective genes spread, the disease burden or consequences of infection within the two groups gradually lifted. Eventually, a tipping point was reached when modern humans acquired enough immunity that they could venture beyond the Levant and deeper into Neanderthal territory with few health consequences.
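A heavily simplified sketch of this dynamic (not the authors' actual model; all rates are hypothetical) shows how gene flow can erode an asymmetric disease barrier until a threshold is crossed:
```python
# Heavily simplified sketch of the qualitative dynamic, NOT the paper's
# model. Interbreeding spreads immune alleles each generation; once a
# group's burden from the other group's diseases falls below a
# threshold, expansion is assumed viable. All rates are hypothetical.
FLOW = 0.05        # per-generation introgression of immune alleles
THRESHOLD = 0.2    # burden below which expansion becomes viable

human_immunity = neand_immunity = 0.0

for generation in range(1, 201):
    human_immunity += FLOW * (1 - human_immunity)
    neand_immunity += FLOW * (1 - neand_immunity)
    human_burden = 1.0 - human_immunity          # burden from Neanderthal diseases
    neand_burden = 1.5 * (1.0 - neand_immunity)  # heavier tropical disease package
    if human_burden < THRESHOLD:
        print(f"humans expand after ~{generation} generations; "
              f"Neanderthal burden is still {neand_burden:.2f}")
        break
```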
At this point, other advantages that modern humans may have had over Neanderthals -- such as deadlier weapons or more sophisticated social structures -- could have taken on greater importance. "Once a certain threshold is crossed, disease burden no longer plays a role, and other factors can kick in," Greenbaum said.
Why us?
To understand why modern humans replaced Neanderthals and not the other way around, the researchers modeled what would happen if the suite of tropical diseases our ancestors harbored were deadlier or more numerous than those carried by Neanderthals.
"The hypothesis is that the disease burden of the tropics was larger than the disease burden in temperate regions. An asymmetry of disease burden in the contact zone might have favored modern humans, who arrived there from the tropics," said study co-author Noah Rosenberg, the Stanford Professor of Population Genetics and Society in the School of Humanities and Sciences.
According to the models, even small differences in disease burden between the two groups at the outset would grow over time, eventually giving our ancestors the edge. "It could be that by the time modern humans were almost entirely released from the added burden of Neanderthal diseases, Neanderthals were still very much vulnerable to modern human diseases," Greenbaum said. "Moreover, as modern humans expanded deeper into Eurasia, they would have encountered Neanderthal populations that did not receive any protective immune genes via hybridization."
The researchers note that the scenario they are proposing is similar to what happened when Europeans arrived in the Americas in the 15th and 16th centuries and decimated indigenous populations with their more potent diseases.
If this new theory about the Neanderthals' demise is correct, then supporting evidence might be found in the archeological record. "We predict, for example, that Neanderthal and modern human population densities in the Levant during the time period when they coexisted will be lower relative to what they were before and relative to other regions," Greenbaum said.

Wednesday, November 6, 2019

The genetic imprint of the Palaeolithic has been detected in North African populations


IMAGE: The origin and history of the population of North Africa are different from the rest of the continent and are more similar to the demographic history of regions outside Africa.
Credit: Michael Gaida, Pixabay
An international team of scientists has for the first time performed an analysis of the complete genome of the population of North Africa. They have identified a small genetic imprint of the inhabitants of the region in Palaeolithic times, thus ruling out the theory that recent migrations from other regions completely erased the genetic traces of ancient North Africans. The study was led by David Comas, principal investigator at UPF and at the Institute of Evolutionary Biology (IBE: CSIC-UPF) and it has been published in the journal Current Biology.
The field of genomics has evolved greatly in recent years. DNA sequencing is increasingly affordable and there are major projects studying genomes at population level. However, some human populations like those of North Africa have been systematically ignored. This is the first genomic study to contextualize this region of the world.
The origin and history of the population of North Africa are different from those of the rest of the continent and are more similar to the demographic history of regions outside Africa: the Middle East, Europe or Asia. Palaeontological remains prove the existence of humans in the region more than 300,000 years ago. However, previous genetic studies had shown that current populations of North Africa originated as a result of a Back to Africa process, that is, recent migrations from the Middle East that populated northern Africa.
Hence, the debate that arises is one of continuity versus replacement. On the one hand, the continuity hypothesis posits that current North African populations descend from Palaeolithic groups, i.e., that such ancient humans are the ancestors of present human populations. Meanwhile, other hypotheses argue that the populations that existed in Palaeolithic times were replaced, and that the humans that currently inhabit North Africa are the result of recent migrations that arrived there as of the Neolithic.
In this study, the researchers compared genetic data from current North African individuals with data recently published on the DNA of fossil remains found at different sites in Morocco. "We see that the current populations of North Africa are the result of this replacement but we detect small traces of this continuity from Palaeolithic times, i.e., total replacement did not take place in the populations of North Africa", reveals David Comas, full professor of Biological Anthropology at the Department of Experimental and Health Sciences (DCEXS) at UPF. "We do not know whether the first settlers 300,000 years ago are their ancestors, but we do detect imprints of this continuity at least since Palaeolithic times, since 15,000 years ago or more", he adds.
"We have seen that the genetic imprint of Palaeolithic populations of North Africa is unique to the current North African populations and is decreasingly distributed from west to east in the region, inversely proportionally to the Neolithic component coming from the Middle East, which had a greater effect on the eastern region, which is geographically closer", says Gerard Serra-Vidal, first author of the article.
"Therefore, our results confirm that migrations from other regions such as Europe, the Middle East and sub-Saharan Africa to this area did not completely erase the genetic traces of the ancient North Africans", explains David Comas, head of the Human Genome Diversity research group of the IBE.
These results of the populations of North Africa are in contrast with what is known about the European continent, in whose current populations a strong Palaeolithic component is found, i.e., more continuity and less replacement than in North Africa.
Many genomic data are still missing, both from current populations and from fossil remains, before the population history of the human species can be established. "This is of particular concern in populations such as those of North Africa, about which we have very little information compared to other populations in the world. In order to have a complete picture of human genome diversity, we still have to do a considerable amount of research," David Comas concludes.

How human population growth came from our ability to cooperate



Humans may owe their place as Earth's dominating species to their ability to share and cooperate with each other, according to a new study published in the Journal of Anthropological Research.
In "How There Got to Be So Many of Us: The Evolutionary Story of Population Growth and a Life History of Cooperation," Karen L. Kramer explores the deep past to discover the biological and social underpinnings that allowed humans to excel as reproducers and survivors. She argues that the human tendency to bear many children, engage in food sharing, division of labor, and cooperative childcare duties, sets us apart from our closest evolutionary counterparts, the apes.
In terms of population numbers, few species can compare to the success of humans. Though much attention on population size focuses on the past 200 years, humans were incredibly successful even before the industrial revolution, populating all of the world's environments with more than a billion people. Kramer uses her research on Maya agriculturalists of Mexico's Yucatan Peninsula and the Savanna Pumé hunter-gatherers of Venezuela to illustrate how cooperative childrearing increases the number of children that mothers can successfully raise and--in environments where beneficial--even speeds up maturation and childbearing. Kramer argues that intergenerational cooperation, meaning that adults help support children, but children also share food and many other resources with their parents and siblings, is at the center of humans' demographic success. "Together our diet and life history, coupled with an ability to cooperate, made us really good at getting food on the table, reproducing, and surviving," Kramer writes.
During her time with the Maya, Kramer constructed a demographic model that weighed how much household members consume, as the family grows and matures across a mother's reproductive career, against how much the mother, father, and children contribute. She found that Maya children contributed a substantial amount of work to the family's survival: those aged 7-14 spent on average 2 to 5 hours working each day, and children aged 15-18 worked as much as their parents, about 6.5 hours a day. Labor type varied, with younger children doing much of the childcare while older children and fathers filled in much of the day-to-day work of growing and processing food and running the household. "If mothers and juveniles did not cooperate, mothers could support far fewer children over their reproductive careers," Kramer writes. "It is the strength of intergenerational cooperation that allows parents to raise more children than they would otherwise be able to on their efforts alone."
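To make the bookkeeping behind such a model concrete, here is a minimal sketch of a household production/consumption ledger. It borrows only the work-hour figures quoted above; the consumption weights and the family composition are invented for illustration and are not Kramer's actual data or model.

```python
# Toy household ledger: daily hours worked vs. daily consumption.
# Only the work-hour brackets come from the figures quoted above;
# everything else is an illustrative assumption.

def daily_work_hours(age):
    """Hours of productive work per day, by age."""
    if age < 7:
        return 0.0
    if age <= 14:
        return 3.5   # midpoint of the reported 2-5 hours for ages 7-14
    return 6.5       # ages 15+ reportedly work as much as their parents

def daily_consumption(age):
    """Consumption in adult-equivalent units (hypothetical weights)."""
    return 0.5 if age < 15 else 1.0

# A hypothetical family: two parents and four children.
family_ages = [38, 36, 17, 12, 9, 4]

produced = sum(daily_work_hours(a) for a in family_ages)
consumed = sum(daily_consumption(a) for a in family_ages)

print(f"work hours per day:           {produced:.1f}")
print(f"adult-equivalent consumers:   {consumed:.1f}")
print(f"work hours per consumer unit: {produced / consumed:.1f}")
```

With these toy numbers the children supply about half of the household's work hours; zeroing out their contributions roughly halves the hours available per consumer, which is the intuition behind Kramer's point that mothers alone could support far fewer children.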
Kramer's second research population was the Savanna Pumé, hunter-gatherers of west-central Venezuela. The Savanna Pumé live in a high-mortality environment, facing challenges such as seasonal undernutrition, high immunological stress, chronic intestinal parasite loads, endemic malaria, and no access to healthcare or immunization. Despite all this--or perhaps, in part, because of it--Savanna Pumé girls mature quickly and begin childbearing in their mid-teens. This pattern conforms with theoretical predictions that fast maturation optimizes fitness in a high-mortality environment. Early childbearing is also, however, associated with a higher probability of mothers losing their firstborn.
Kramer found that intergenerational cooperation mitigated these risks. "In this challenging environment, young Pumé females are buffered against seasonal fluctuations because food is shared with them," she writes. "If young Pumé mothers relied solely on their own efforts, they would have to delay childbearing until they matured as foragers and caretakers."
Humans' ability to reproduce more successfully than other great apes can be traced to differences in evolutionary strategy: humans bear more children, at a faster rate. They also provision food for juveniles, whereas other great apes stop helping children find food as soon as they have been weaned. Humans are able to shoulder the greater childcare burden through cooperation.
"Combined, these fertility parameters mean that if a natural fertility mother survives her reproductive career, she can have almost twice as many offspring as nonhuman great ape mothers," she writes. "Humans are an astonishingly successful ape."

Tuesday, November 5, 2019

Study reveals that humans migrated from Europe to the Levant 40,000 years ago


IMAGE: A view of Manot cave and a close-up of the area where some of the teeth were found.
Credit: Prof. Israel Hershkovitz/American Friends of Tel Aviv University.
Who exactly were the Aurignacians, who lived in the Levant 40,000 years ago? Researchers from Tel Aviv University, the Israel Antiquities Authority, and Ben-Gurion University now report that these culturally sophisticated yet mysterious humans migrated from Europe to the Levant some 40,000 years ago, shedding light on a significant era in the region's history.
The Aurignacian culture first appeared in Europe some 43,000 years ago and is known for having produced bone tools, artifacts, jewelry, musical instruments, and cave paintings. For years, researchers believed that modern man's entry into Europe led to the rapid decline of the Neanderthals, either through violent confrontation or by wresting control of food sources. But recent genetic studies have shown that Neanderthals did not vanish. Instead, they assimilated into modern human immigrant populations. The new study adds further evidence to substantiate this theory.
Through cutting-edge dental research on six human teeth discovered at Manot Cave in the Western Galilee, Dr. Rachel Sarig of TAU's School of Dental Medicine and the Dan David Center for Human Evolution and Biohistory Research, Sackler Faculty of Medicine, in collaboration with Dr. Omry Barzilai of the Israel Antiquities Authority and colleagues in Austria and the United States, has demonstrated that Aurignacians arrived in modern-day Israel from Europe some 40,000 years ago -- and that these Aurignacians comprised Neanderthals and Homo sapiens alike.
A report on the new findings was published in the Journal of Human Evolution on October 11.
"Unlike bones, teeth are preserved well because they're made of enamel, the substance in the human body most resistant to the effects of time," Dr. Sarig explains. "The structure, shape, and topography or surface bumps of the teeth provided important genetic information. We were able to use the external and internal shape of the teeth found in the cave to associate them with typical hominin groups: Neanderthal and Homo sapiens."
The researchers performed in-depth lab tests using micro-CT scans and 3D analyses on four of the teeth. The results surprised the researchers: Two teeth showed a typical morphology for Homo sapiens; one tooth showed features characteristic of Neanderthals; the last tooth showed a combination of Neanderthal and Homo sapiens features.
This combination of Neanderthal and modern human features has, to date, been found only in European populations from the early Paleolithic period, suggesting their common origin.
"Following the migration of European populations into this region, a new culture existed in the Levant for a short time, approximately 2,000-3,000 years. It then disappeared for no apparent reason," adds Dr. Sarig. "Now we know something about their makeup."
"Until now, we hadn't found any human remains with valid dating from this period in Israel," adds Prof. Israel Hershkovitz, head of the Dan David Center, "so the group remains a mystery. This groundbreaking study contributes to the story of the population responsible for some of the world's most important cultural contributions."

Monday, November 4, 2019

Minoan treasures found on Libyan Sea island: Experts




ATHENS (AFP).- Archaeologists in Greece have located a "major treasure" of Minoan origin in a Bronze Age settlement on a small island in the Libyan Sea, the culture ministry said Friday. A team excavating on the tiny island of Chrysi, south of Crete, for over a decade has unearthed a 3,800-year-old Bronze Age compound containing gold jewels, glass beads and the remains of bronze talents, the common unit of value of ancient Greece. Some of the beads are of Egyptian origin, the culture ministry said in a statement. The archaeologists also found ancient fish tanks and large amounts of porphyra -- the prized purple dye of the ancient world, derived from sea snails and later the colour exclusively reserved for Roman emperors.


Friday, November 1, 2019

The last Neanderthal necklace


IMAGE: A phalanx of a Spanish imperial eagle with cut marks, from Cova Foradada.
Credit: Antonio Rodríguez-Hidalgo
Eagle talons are regarded as the first elements Neanderthals used to make jewellery, a practice that spread around Southern Europe between about 120,000 and 40,000 years ago. Now, for the first time, researchers have found evidence of the ornamental use of eagle talons in the Iberian Peninsula. An article featured on the cover of the journal Science Advances describes the findings, made at the Cova Foradada site in Calafell. The study was led by Antonio Rodríguez-Hidalgo, researcher at the Institute of Evolution in Africa (IDEA) and member of the research team of a project of the Prehistoric Studies and Research Seminar (SERP) of the UB.
The interest in these findings lies in the fact that this is the most recent piece of its kind known from the Neanderthal period and the first found in the Iberian Peninsula, widening the temporal and geographical limits previously estimated for this kind of Neanderthal ornament. This would be "the last necklace made by the Neanderthals", according to Antonio Rodríguez-Hidalgo.
"Neanderthals used eagle talons as symbolic elements, probably as necklace pendants, from the beginnings of the mid Palaeolithic", notes Antonio Rodríguez-Hidalgo. In particular, what researchers found in Cova Foradada are bone remains from Spanish Imperial Eagle (Aquila Adalberti), from more than 39,000 years ago, with some marks that show these were used to take the talons so as to make pendants. The found remains correspond to the left leg of a big eagle. By the looks of the marks, and analogy regarding remains from different prehistorical sites and ethnographic documentation, researchers determined that the animal was not manipulated for consumption but for symbolic reasons. Eagle talons are the oldest ornamental elements known in Europe, even older than seashells Homo sapiens sapiens perforated in northern Africa.
The findings belong to the Châtelperronian culture, typical of the last Neanderthals that lived in Europe, and coincide with the period when this species came into contact with Homo sapiens sapiens arriving from Africa and expanding out of the Middle East. Indeed, Juan Ignacio Morales, a researcher in the Juan de la Cierva program affiliated with SERP and a co-author of the article, suggests that this use of eagle talons as ornaments could have been a cultural transmission from the Neanderthals to modern humans, who adopted the practice after reaching Europe.
Cova Foradada is the southernmost Châtelperronian site known in Europe. The discovery redraws the map of the territory where the transition from the Middle to the Upper Palaeolithic took place 40,000 years ago, and where interaction between Neanderthals and Homo sapiens sapiens probably occurred. Studies at Cova Foradada started in 1997. At present, the excavation is supervised by Juan Ignacio Morales and Artur Cebrià. The archaeological study of the site is included in a SERP project funded by the Department of Culture of the Catalan Government and another funded by the Ministry of Science, Innovation and Universities, headed by UB professor and SERP director Josep M. Fullola.
The first author of the article in Science Advances is Antonio Rodríguez-Hidalgo, from the Institute of Evolution in Africa (IDEA). Other participants, apart from SERP members, include researchers from Rovira i Virgili University, the Catalan Institute of Human Paleoecology and Social Evolution (IPHES), the Natural History Museum of Paris, the University of Salamanca, the University of Calgary (Canada) and the French National Centre for Scientific Research (CNRS).