Friday, January 29, 2021

Archaeologist argues the Chumash Indians were using highly worked shell beads as currency 2,000 years ago


UNIVERSITY OF CALIFORNIA - SANTA BARBARA

Research News

As one of the most experienced archaeologists studying California's Native Americans, Lynn Gamble knew the Chumash Indians had been using shell beads as money for at least 800 years.

But an exhaustive review of the shell bead record led the UC Santa Barbara professor emerita of anthropology to an astonishing conclusion: the hunter-gatherers centered on the south-central coast around Santa Barbara were using highly worked shells as currency as long as 2,000 years ago.

"If the Chumash were using beads as money 2,000 years ago," Gamble said, "this changes our thinking of hunter-gatherers and sociopolitical and economic complexity. This may be the first example of the use of money anywhere in the Americas at this time."

Although Gamble has been studying California's indigenous people since the late 1970s, the inspiration for her research on shell bead money came from far afield: the University of Tübingen in Germany. At a symposium there some years ago, most of the presenters discussed coins and other non-shell forms of money. Some, she said, were surprised by the assumptions of California archaeologists about what constituted money.

Intrigued, she reviewed the definitions and identifications of money in California and questioned some of the long-held beliefs. Her research led to "The origin and use of shell bead money in California" in the Journal of Anthropological Archaeology.

Gamble argues that archaeologists should use four criteria in assessing whether beads were used for currency versus adornment: Shell beads used as currency should be more labor-intensive than those for decorative purposes; highly standardized beads are likely currency; bigger, eye-catching beads were more likely used as decoration; and currency beads are widely distributed.

"I then compared the shell beads that had been accepted as a money bead for over 40 years by California archaeologists to another type that was widely distributed," she said. "For example, tens of thousands were found with just one individual up in the San Francisco Bay Area. This bead type, known as a saucer bead, was produced south of Point Conception and probably on the northern [Santa Barbara] Channel Islands, according to multiple sources of data, at least most, if not all of them.

"These earlier beads were just as standardized, if not more so, than those that came 1,000 years later," Gamble continued. "They also were traded throughout California and beyond. Through sleuthing, measurements and comparison of standardizations among the different bead types, it became clear that these were probably money beads and occurred much earlier than we previously thought."

As Gamble notes, shell beads have been used for over 10,000 years in California, and there is extensive evidence for the production of some of these beads, especially those common in the last 3,000 to 4,000 years, on the northern Channel Islands. The evidence includes shell bead-making tools, such as drills, and massive amounts of shell bits -- detritus -- that littered the surface of archaeological sites on the islands.

In addition, specialists have noted that the isotopic signature of the shell beads found in the San Francisco Bay Area indicates that the shells are from south of Point Conception.

"We know that right around early European contact," Gamble said, "the California Indians were trading for many types of goods, including perishable foods. The use of shell beads no doubt greatly facilitated this wide network of exchange."

Gamble's research not only resets the origins of money in the Americas, it calls into question what constitutes "sophisticated" societies in prehistory. Because the Chumash were non-agriculturists -- hunter-gatherers -- it was long held that they wouldn't need money, even though early Spanish colonizers marveled at extensive Chumash trading networks and commerce.

Recent research on money in Europe during the Bronze Age suggests it was used there some 3,500 years ago. For Gamble, that and the Chumash example are significant because they challenge a persistent perspective among economists and some archaeologists that so-called "primitive" societies could not have had "commercial" economies.

"Both the terms 'complex' and 'primitive' are highly charged, but it is difficult to address this subject without avoiding those terms," she said. "In the case of both the Chumash and the Bronze Age example, standardization is a key in terms of identifying money. My article on the origin of money in California is not only pushing the date for the use of money back 1,000 years in California, and possibly the Americas, it provides evidence that money was used by non-state level societies, commonly identified as 'civilizations.' "


Thursday, January 28, 2021

A glimpse into the wardrobe of King David and King Solomon, 3000 years ago


Joint research by the Israel Antiquities Authority, Tel Aviv University and Bar-Ilan University

TEL-AVIV UNIVERSITY

Research News

IMAGE: WOOL FIBERS DYED WITH ROYAL PURPLE, ~1000 BCE, TIMNA VALLEY, ISRAEL.

CREDIT: DAFNA GAZIT, COURTESY OF THE ISRAEL ANTIQUITIES AUTHORITY

"


King Solomon made for himself the carriage; he made it of wood from Lebanon. Its posts he made of silver, its base of gold. Its seat was upholstered with purple, its interior inlaid with love." (Song of Songs 3:9-10)

For the first time, rare evidence has been found of fabric dyed with royal purple dating from the time of King David and King Solomon.

While examining the colored textiles from Timna Valley - an ancient copper production district in southern Israel - in a study that has lasted several years, the researchers were surprised to find remnants of woven fabric, a tassel and fibers of wool dyed with royal purple. Direct radiocarbon dating confirms that the finds date from approximately 1000 BCE, corresponding to the biblical monarchies of David and Solomon in Jerusalem. The dye, which is produced from species of mollusk found in the Mediterranean, over 300 km from Timna, is often mentioned in the Bible and appears in various Jewish and Christian contexts. This is the first time that purple-dyed Iron Age textiles have been found in Israel, or indeed throughout the Southern Levant. 

The research was carried out by Dr. Naama Sukenik from the Israel Antiquities Authority and Prof. Erez Ben-Yosef, from the Jacob M. Alkow Department of Archaeology and Ancient Near Eastern Cultures at Tel Aviv University, in collaboration with Prof. Zohar Amar, Dr. David Iluz and Dr. Alexander Varvak from Bar-Ilan University and Dr. Orit Shamir from the Israel Antiquities Authority. The unexpected finds are being published today in the prestigious PLOS ONE journal.

"This is a very exciting and important discovery," explains Dr. Naama Sukenik, curator of organic finds at the Israel Antiquities Authority. "This is the first piece of textile ever found from the time of David and Solomon that is dyed with the prestigious purple dye. In antiquity, purple attire was associated with the nobility, with priests, and of course with royalty. The gorgeous shade of the purple, the fact that it does not fade, and the difficulty in producing the dye, which is found in minute quantities in the body of mollusks, all made it the most highly valued of the dyes, which often cost more than gold. Until the current discovery, we had only encountered mollusk-shell waste and potsherds with patches of dye, which provided evidence of the purple industry in the Iron Age. Now, for the first time, we have direct evidence of the dyed fabrics themselves, preserved for some 3000 years".

Prof. Erez Ben-Yosef from Tel Aviv University's Archaeology Department says, "Our archaeological expedition has been excavating continuously at Timna since 2013. As a result of the region's extremely dry climate, we are also able to recover organic materials such as textiles, cords and leather from the Iron Age, from the time of David and Solomon, providing us with a unique glimpse into life in biblical times. If we excavated for another hundred years in Jerusalem, we would not discover textiles from 3,000 years ago. The state of preservation at Timna is exceptional, paralleled only by that at much later sites such as Masada and the Judean Desert Caves. In recent years, we have been excavating a new site inside Timna known as 'Slaves' Hill'. The name may be misleading, since far from being slaves, the laborers were highly skilled metalworkers. Timna was a production center for copper, the Iron Age equivalent of modern-day oil. Copper smelting required advanced metallurgical understanding that was a guarded secret, and those who held this knowledge were the 'hi-tech' experts of the time. Slaves' Hill is the largest copper-smelting site in the valley, and it is filled with piles of industrial waste such as slag from the smelting furnaces. One of these heaps yielded three scraps of colored cloth. The color immediately attracted our attention, but we found it hard to believe that we had found true purple from such an ancient period."

According to the researchers, true purple [argaman] was produced from three species of mollusk indigenous to the Mediterranean Sea: the Banded Dye-Murex (Hexaplex trunculus), the Spiny Dye-Murex (Bolinus brandaris) and the Red-Mouthed Rock-Shell (Stramonita haemastoma). The dye was produced from a gland located within the body of the mollusk by means of a complex chemical process that lasted several days. Today, most scholars agree that the two precious dyes, purple [argaman] and light blue, or azure [tekhelet], were produced from the purple dye mollusk under different conditions of light exposure: with exposure to light, azure is obtained; without it, a purple hue. These colors are often mentioned together in the ancient sources, and both have symbolic and religious significance to this day. The Temple priests, David and Solomon, and Jesus of Nazareth are all described as having worn clothing colored with purple.

The analytical tests conducted at Bar-Ilan University's laboratories, together with dyes reconstructed by Prof. Zohar Amar and Dr. Naama Sukenik, made it possible to identify the species used to dye the Timna textiles and the hues desired. In order to reconstruct the mollusk dyeing process, Prof. Amar traveled to Italy, where he cracked thousands of mollusks (which the Italians eat) and produced raw material from their dye glands that was used in hundreds of attempts to reconstruct ancient dyeing. "The practical work took us back thousands of years," says Prof. Amar, "and it has allowed us to better understand obscure historical sources associated with the precious colors of azure and purple."

The dye was identified with an advanced analytical instrument (HPLC) that indicated the presence of unique dye molecules, originating only in certain species of mollusk. According to Dr. Naama Sukenik, "Most of the colored textiles found at Timna, and in archaeological research in general, were dyed using various plant-based dyes that were readily available and easier to dye with. The use of animal-based dyes is regarded as much more prestigious, and served as an important indicator for the wearer's high economic and social status. The remnants of the purple-dyed cloth that we found are not only the most ancient in Israel, but in the Southern Levant in general. We also believe that we have succeeded in identifying the double-dyeing method in one of the fragments, in which two species of mollusk were used in a sophisticated way, to enrich the dye. This technology is described by the Roman historian Pliny the Elder, from the first century CE, and the dye it produced was considered the most prestigious."
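The "unique dye molecules" are brominated indigoids: bromine-bearing variants of indigo that in nature come only from these mollusks, so a chromatogram showing 6,6'-dibromoindigotin is a strong signal of true shellfish purple, while the balance of brominated and unbrominated components hints at the species used. A toy decision rule along those lines (peak areas invented; real identifications rest on full comparison with reference dyes such as those reconstructed by Amar and Sukenik):

```python
def classify_dye(peaks):
    """peaks: dict of compound -> relative peak area from an HPLC run."""
    dbi = peaks.get("dibromoindigotin", 0.0)
    mbi = peaks.get("monobromoindigotin", 0.0)
    ind = peaks.get("indigotin", 0.0)
    if dbi == 0.0 and mbi == 0.0:
        return "no brominated indigoids: plant indigo or woad, not mollusk purple"
    if ind > dbi:
        return "mollusk purple, bluish: consistent with Hexaplex trunculus"
    return "mollusk purple, reddish: consistent with, e.g., Bolinus brandaris"

# Invented peak areas for illustration.
sample = {"dibromoindigotin": 0.62, "monobromoindigotin": 0.25, "indigotin": 0.13}
print(classify_dye(sample))
```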

Prof. Ben-Yosef identifies the copper-production center at Timna as part of the biblical Kingdom of Edom, which bordered the kingdom of Israel to the south. According to him, the dramatic finds should revolutionize our concepts of nomadic societies in the Iron Age. "The new finds reinforce our assumption that there was an elite at Timna, attesting to a stratified society. In addition, since the mollusks are indigenous to the Mediterranean, this society obviously maintained trade relations with other peoples who lived on the coastal plain. However, we do not have evidence of any permanent settlements in the Edomite territory. The Edomite Kingdom was a kingdom of nomads in the early Iron Age. When we think of nomads, it is difficult for us to free ourselves from comparisons with contemporary Bedouins, and we therefore find it hard to imagine kings without magnificent stone palaces and walled cities. Yet in certain circumstances, nomads can also create a complex socio-political structure, one that the biblical writers could identify as a kingdom. Of course, this whole debate has repercussions for our understanding of Jerusalem in the same period. We know that the Tribes of Israel were originally nomadic and that the process of settlement was gradual and prolonged. Archaeologists are looking for King David's palace. However, David may not have expressed his wealth in splendid buildings, but with objects more suited to a nomadic heritage such as textiles and artifacts." 

According to Ben-Yosef, "It is wrong to assume that if no grand buildings and fortresses have been found, then biblical descriptions of the United Monarchy in Jerusalem must be literary fiction. Our new research at Timna has shown us that even without such buildings, there were kings in our region who ruled over complex societies, formed alliances and trade relations, and waged war on each other. The wealth of a nomadic society was not measured in palaces and monuments made of stone, but in things that were no less valued in the ancient world - such as the copper produced at Timna and the purple dye that was traded with its copper smelters."


Wednesday, January 27, 2021

Ancient indigenous New Mexican community knew how to sustainably coexist with wildfire


Wildfires are the enemy when they threaten homes in California and elsewhere. But a new study led by SMU suggests that people living in fire-prone places can learn to manage fire as an ally to prevent dangerous blazes, just as people did nearly 1,000 years ago.

"We shouldn't be asking how to avoid fire and smoke," said SMU anthropologist and lead author Christopher Roos. "We should ask ourselves what kind of fire and smoke do we want to coexist with."

An interdisciplinary team of scientists published a study in the journal Proceedings of the National Academy of Sciences documenting centuries of fire management by Native American farmers. The team included scientists from SMU, the University of Arizona, Harvard University, Simon Fraser University, the US Geological Survey, Baylor University, the University of Illinois, and the University of South Florida.

Jemez people learned how to live with and manage fire long ago

Ancestors of the Native American community in the Jemez Mountains of northern New Mexico lived continuously in fire-prone forests for more than five centuries. Similar to today's communities in the western U.S. forests, Pueblos of the Jemez people had relatively high population densities, and the forested landscape they managed was an area larger than the city of Chicago.

Starting in the 1100s, the Jemez people limited fire spread and improved forest resilience to climate variability through purposeful burning of small patches of forest around their communities, researchers found.

"The area around each village would have been a fire-free zone," Roos said. "There were no living trees within two football fields of each village, and the hundreds or thousands of trampling feet mean that fine fuels, such as grasses, herbs, and shrubs, to carry surface fires would have been rare too. The agricultural areas would have seen targeted applications of fire to clean fields after harvest, to recycle plant nutrients as fertilizer, and to clear new fields."

Roos calls those controlled burns "the right kind of fire and smoke." The Jemez practice of burning wood for heat, light, and cooking in their homes also removed much of the fuel that could burn in wildfires, he said.

Roos said the ancient Jemez model could work today. Many communities in the western United States, including those of Native Americans, still rely on wood-burning to generate heat during the winter, he said. Regularly setting small, low-intensity fires in a patchwork around where people live to clear out flammable material would also follow the Jemez model, he said.

"Some sort of public-private tribal partnership might do a lot of good, empowering tribal communities to oversee the removal of the small trees that have overstocked the forests and made them vulnerable to dangerous fires, while also providing wood fuel for people who need it," Roos said.

Since 2018, wildfires have destroyed more than 50,000 structures in California alone. Global warming is only expected to make wildfires more numerous and more severe.

Almost every major study of fire activity over the last 10,000 years indicates that climate drives fire activity, particularly larger fires. Yet many examples from traditional societies suggest the role of climate can be blunted or buffered by a patchwork of small, purposeful burns before the peak natural fire season. In the Jemez Mountains, the climate influence was weakened and large fires were rare when Jemez farmers used fire preemptively in many small patches, effectively clearing out the materials that fuel today's megafires.

In contrast, today's forests are overstocked with young trees, increasing the chances that a fire will generate huge flames and waves of flaming embers that can catch homes on fire.

The scientists used a variety of methods to document how Jemez people handled smoke and fire centuries ago, including interviewing tribal elders at Jemez Pueblo. The team also compared tree-ring fire records with paleoclimate records, which indicated that fire activity was disconnected from climate during the period when the Jemez population was at its peak. In addition, charcoal and pollen records show that Jemez people began using fire to establish an agricultural landscape and to promote habitats for large animals, such as mule deer and elk.
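The logic of that comparison can be pictured as a simple statistical exercise: correlate an annual fire-occurrence series from fire-scarred tree rings with a paleoclimate index, inside and outside the period of peak occupation. The sketch below uses simulated placeholder series, not the study's data, with a hypothetical window for the occupation peak:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1100, 1700)
drought = rng.normal(size=years.size)  # stand-in paleoclimate index
# Baseline: fire years track the climate signal.
fire = (drought + rng.normal(size=years.size)) > 1.2
peak = (years >= 1300) & (years < 1500)  # hypothetical peak-occupation window
# During the peak, assume patch burning decoupled fire from climate:
fire[peak] = rng.random(peak.sum()) < fire.mean()

for label, mask in [("peak occupation", peak), ("before/after", ~peak)]:
    r = np.corrcoef(drought[mask], fire[mask].astype(float))[0, 1]
    print(f"{label}: fire-climate correlation r = {r:.2f}")
```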

Roos noted that tolerance of fire and smoke hazards probably went hand-in-hand with recognition of the benefits of fire and smoke.

"Paul Tosa, former governor of Jemez Pueblo, said 'Fire brings richness to the land,'" Roos noted. "We could do very well to learn from the wisdom of Jemez peoples and change our relationship to fire and smoke at the wildland-urban interface."

Early milk drinking in Africa

 

IMAGE: CATTLE GRAZING IN ENTESEKARA IN KENYA NEAR THE TANZANIAN BORDER

CREDIT: A. JANZEN



Tracking milk drinking in the ancient past is not straightforward. For decades, archaeologists have tried to reconstruct the practice by various indirect methods. They have looked at ancient rock art to identify scenes of animals being milked and at animal bones to reconstruct kill-off patterns that might reflect the use of animals for dairying. More recently, they even used scientific methods to detect traces of dairy fats on ancient pots. But none of these methods can say if a specific individual consumed milk.

Now archaeological scientists are increasingly using proteomics to study ancient dairying. By extracting tiny bits of preserved proteins from ancient materials, researchers can detect proteins specific to milk, and even specific to the milk of particular species.
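In practice the species call works like a set-intersection problem: each recovered peptide is matched against reference milk proteins (often the whey protein beta-lactoglobulin, which survives well in calculus), and any peptide whose sequence differs between taxa narrows the assignment. A minimal sketch, with invented peptide strings standing in for the real diagnostic sequences:

```python
# Invented placeholder peptides -- not real beta-lactoglobulin sequences.
REFERENCE = {
    "AAAGGGK": {"Bos (cattle)", "Ovis (sheep)", "Capra (goat)"},  # shared
    "DDDEEEK": {"Capra (goat)"},                                  # goat-diagnostic
    "DDDQEEK": {"Ovis (sheep)"},                                  # sheep-diagnostic
}

def assign_taxa(recovered_peptides):
    """Intersect the candidate taxa of every matched peptide; a single
    species-diagnostic peptide narrows the call to one taxon."""
    candidates = None
    for pep in recovered_peptides:
        taxa = REFERENCE.get(pep)
        if taxa:
            candidates = taxa if candidates is None else candidates & taxa
    return candidates or "no dairy peptides identified"

print(assign_taxa(["AAAGGGK", "DDDEEEK"]))  # -> {'Capra (goat)'}
```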

Where are these proteins preserved? One critical reservoir is dental calculus - dental plaque that has mineralized and hardened over time. Without toothbrushes, many ancient people couldn't remove plaque from their teeth, and so developed a lot of calculus. This may have led to tooth decay and pain for our ancestors but it also produced a goldmine of information about ancient diets, with plaque often trapping food proteins and preserving them for thousands of years.

Now, an international team led by researchers at the Max Planck Institute for the Science of Human History in Jena, Germany and the National Museums of Kenya (NMK) in Nairobi, Kenya has analyzed some of the most challenging ancient dental calculus to date. Their new study, published in Nature Communications, examines calculus from human remains in Africa, where high temperatures and humidity were thought to interfere with protein preservation.

The team analyzed dental calculus from 41 adult individuals from 13 ancient pastoralist sites excavated in Sudan and Kenya and, remarkably, retrieved milk proteins from 8 of the individuals.

The positive results were greeted with enthusiasm by the team. As lead author Madeleine Bleasdale observes, "some of the proteins were so well preserved, it was possible to determine what species of animal the milk had come from. And some of the dairy proteins were many thousands of years old, pointing to a long history of milk drinking in the continent."

The earliest milk proteins reported in the study were identified at Kadruka 21, a cemetery site in Sudan dating to roughly 6,000 years ago. In the calculus of another individual from the adjacent cemetery of Kadruka 1, dated to roughly 4,000 years ago, researchers were able to identify species-specific proteins and found that the source of the dairy had been goat's milk.

"This the earliest direct evidence to date for the consumption of goat's milk in Africa," says Bleasdale. "It's likely goats and sheep were important sources of milk for early herding communities in more arid environments."

The team also discovered milk proteins in dental calculus from an individual from Lukenya Hill, an early herder site in southern Kenya dated to between 3,600 and 3,200 years ago.

"It seems that animal milk consumption was potentially a key part of what enabled the success and long-term resilience of African pastoralists," observes coauthor Steven Goldstein.

As research on ancient dairying intensifies around the world, Africa remains an exciting place to examine the origins of milk drinking. The unique evolution of lactase persistence in Africa, combined with the fact that animal milk consumption remains critical to many communities across the continent, makes it vital for understanding how genes and culture can evolve together.

Normally, lactase - an enzyme critical for enabling the body to fully digest milk - disappears after childhood, making it much more difficult for adults to drink milk without discomfort. But in some people, lactase production persists into adulthood - in other words these individuals have 'lactase persistence.'

In Europeans, there is one main mutation linked to lactase persistence, but across different populations in Africa there are as many as four. How did this come to be? The question has fascinated researchers for decades, yet how dairying and human biology co-evolved has remained largely mysterious.

By combining their findings about which ancient individuals drank milk with genetic data obtained from some of the ancient African individuals, the researchers were also able to determine whether early milk drinkers on the continent were lactase persistent. The answer was no. People were consuming dairy products without the genetic adaptation that supports milk drinking into adulthood.
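Concretely, that check amounts to looking up each ancient genome at the handful of regulatory variants known to confer lactase persistence, such as the European -13910*T allele and African alleles including -14010*C, -13915*G and -13907*G. A small illustrative sketch with fabricated genotype calls:

```python
# Persistence-conferring alleles at positions upstream of the LCT gene,
# as commonly reported in the lactase-persistence literature.
PERSISTENCE_ALLELES = {
    "-13910": "T",  # European
    "-14010": "C",  # East African
    "-13915": "G",  # Arabian / northeast African
    "-13907": "G",  # northeast African
}

def lactase_persistent(genotypes):
    """genotypes: dict of variant position -> pair of called alleles."""
    return any(
        PERSISTENCE_ALLELES[pos] in genotypes.get(pos, ())
        for pos in PERSISTENCE_ALLELES
    )

# Fabricated genotype calls for one ancient individual.
ancient_individual = {"-13910": ("C", "C"), "-14010": ("G", "G")}
print(lactase_persistent(ancient_individual))  # False: milk drinker without LP
```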

This suggests that drinking milk actually created the conditions that favoured the emergence and spread of lactase persistence in African populations. As senior author and Max Planck Director Nicole Boivin notes, "This is a wonderful example of how human culture has - over thousands of years - reshaped human biology."

But how did people in Africa drink milk without the enzyme needed to digest it? The answer may lie in fermentation. Dairy products like yogurt have a lower lactose content than fresh milk, and so early herders may have processed milk into dairy products that were easier to digest.

Critical to the success of the research was the Max Planck scientists' close partnership with African colleagues, including those at the National Corporation of Antiquities and Museums (NCAM), Sudan, and long-term collaborators at the National Museums of Kenya (NMK). "It's great to get a glimpse of Africa's important place in the history of dairying," observes coauthor Emmanuel Ndiema of the NMK. "And it was wonderful to tap the rich potential of archaeological material excavated decades ago, before these new methods were even invented. It demonstrates the ongoing value and importance of museum collections around the world, including in Africa."

History of the Champagne vineyards

Although the reputation of Champagne is well established, the history of Champagne wines and vineyards is poorly documented. However, a research team led by scientists from the CNRS and the Université de Montpellier at the Institut des sciences de l'évolution de Montpellier* has just lifted the veil on this history by analysing archaeological grape seeds from excavations carried out in Troyes and Reims. Dated to between the 1st and 15th centuries AD, the seeds shed light for the first time on the evolution of Champagne wine growing prior to the invention of the famous sparkling wine.

According to the researchers, "wild"** vines were cultivated throughout the period studied. Domestic varieties, coming from the south of Gaul, appeared as early as the 1st century and became the major grape varieties of the 2nd and 3rd centuries. This archaeological series was uninterrupted until around 1000 AD, when the wild vine and the southern varieties made a strong comeback. This period corresponds both to intense economic and societal changes and to global warming spanning a few hundred years. Northern grape varieties, more adapted to the cold, appeared more than 300 years later at the beginning of a colder climatic period***, supplanting the southern grape varieties. 

Published in Scientific Reports on January 27, 2021, these results pave the way for further global analysis that will allow a better understanding of the history of viticulture by combining biological, archaeological and historical data.

Social inequality was "recorded on the bones" of Cambridge's medieval residents

 Social inequality was "recorded on the bones" of Cambridge's medieval residents, according to a new study of hundreds of human remains excavated from three very different burial sites within the historic city centre.

University of Cambridge researchers examined the remains of 314 individuals dating from the 10th to the 14th century and collected evidence of "skeletal trauma" -- a barometer for levels of hardship endured in life.

Bones were recovered from across the social spectrum: a parish graveyard for ordinary working people, a charitable "hospital" where the infirm and destitute were interred, and an Augustinian friary that buried wealthy donors alongside clergy.

Researchers carefully catalogued the nature of every break and fracture to build a picture of the physical distress visited upon the city's inhabitants by accident, occupational injury or violence during their daily lives.

Using x-ray analysis, the team found that 44% of working people had bone fractures, compared to 32% of those in the friary and 27% of those buried by the hospital. Fractures were more common in male remains (40%) than female (26%) across all burials.
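To check whether a spread of 44% versus 32% and 27% is meaningful at these sample sizes (84 parish, 155 hospital and 75 friary individuals, per the notes at the end of this article), a quick chi-square sketch with counts reconstructed, approximately, from the reported percentages:

```python
from scipy.stats import chi2_contingency

# Approximate fracture counts: parish (44% of 84), hospital (27% of 155),
# friary (32% of 75); exact counts may differ slightly from the paper's.
fractured = [round(0.44 * 84), round(0.27 * 155), round(0.32 * 75)]
totals = [84, 155, 75]
unfractured = [n - f for n, f in zip(totals, fractured)]

chi2, p, dof, _ = chi2_contingency([fractured, unfractured])
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```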

The team also uncovered noteworthy cases, such as a friar whose injuries resemble those of a modern hit-and-run victim, and bones that hint at lives blighted by violence. The findings are published in the American Journal of Physical Anthropology.

"By comparing the skeletal trauma of remains buried in various locations within a town like Cambridge, we can gauge the hazards of daily life experienced by different spheres of medieval society," said Dr Jenna Dittmar, study lead author from the After the Plague project at the University's Department of Archaeology.

"We can see that ordinary working folk had a higher risk of injury compared to the friars and their benefactors or the more sheltered hospital inmates," she said.

"These were people who spent their days working long hours doing heavy manual labour. In town, people worked in trades and crafts such as stonemasonry and blacksmithing, or as general labourers. Outside town, many spent dawn to dusk doing bone-crushing work in the fields or tending livestock."

The University was embryonic at this time -- the first stirrings of academia occurring around 1209 -- and Cambridge was primarily a provincial town of artisans, merchants and farmhands, with a population of 2,500 to 4,000 by the mid-13th century.

While the working poor may have borne the brunt of physical labour compared to better-off people and those in religious institutions, medieval life was tough in general. In fact, the most extreme injury was found on a friar, identified as such by his burial place and belt buckle.

"The friar had complete fractures halfway up both his femurs," said Dittmar. The femur [thigh bone] is the largest bone in the body. "Whatever caused both bones to break in this way must have been traumatic, and was possibly the cause of death."

Dittmar points out that today's clinicians would be familiar with such injuries from those hit by automobiles -- it's the right height. "Our best guess is a cart accident. Perhaps a horse got spooked and he was struck by the wagon."

Injury was also inflicted by others. Another friar had lived with defensive fractures on his arm and signs of blunt force trauma to his skull. Such violence-related skeletal injuries were found in about 4% of the population, including women and people from all social groups.

One older woman buried in the parish grounds appeared to bear the marks of lifelong domestic abuse. "She had a lot of fractures, all of them healed well before her death. Several of her ribs had been broken as well as multiple vertebrae, her jaw and her foot," said Dittmar.

"It would be very uncommon for all these injuries to occur as the result of a fall, for example. Today, the vast majority of broken jaws seen in women are caused by intimate partner violence."

Of the three sites, the Hospital of St John the Evangelist contained the fewest fractures. Established at the end of the 12th century, it housed select needy Cambridge residents, providing food and spiritual care. Many had skeletal evidence of chronic illnesses such as tuberculosis, and would have been unable to work.

While most remains were "inmates," the site also included "corrodians": retired locals who paid for the privilege of living at the hospital, much like a modern old-age care home.

The Hospital was dissolved to create St John's College in 1511, and excavated by the Cambridge Archaeological Unit (CAU), part of the University, in 2010 during a renovation of the College's Divinity School building.

CAU excavated the Augustinian Friary in 2016 as part of building works on the University's New Museums Site. According to records, the friary acquired rights to bury members of the Augustinian order in 1290, and non-members in 1302 -- allowing rich benefactors to take a plot in the friary grounds.

The friary functioned until 1538, when King Henry VIII stripped the nation's monasteries of their income and assets to fortify the Crown's coffers.

The parish of All Saints by the Castle, north of the River Cam, was likely founded in the 10th century and in use until 1365, when it merged with a neighbouring parish after local populations fell in the wake of the Black Death bubonic plague pandemic.

While the church itself has never been found, the graveyard -- next to what is still called Castle Hill -- was first excavated in the 1970s. Remains were housed within the University's Duckworth Collection, allowing researchers to revisit these finds for the latest study.

"Those buried in All Saints were among the poorest in town, and clearly more exposed to incidental injury," said Dittmar. "At the time, the graveyard was in the hinterland where urban met rural. Men may have worked in the fields with heavy ploughs pulled by horses or oxen, or lugged stone blocks and wooden beams in the town.

"Many of the women in All Saints probably undertook hard physical labours such as tending livestock and helping with harvest alongside their domestic duties.

"We can see this inequality recorded on the bones of medieval Cambridge residents. However, severe trauma was prevalent across the social spectrum. Life was toughest at the bottom -- but life was tough all over."

NOTES:

  • Skeletons had to be over 25% complete for inclusion in the study. Participation in adult work often began in earnest at age twelve, so those estimated to have been younger were excluded.
  • Researchers analysed the bones from 84 individuals taken from the All Saints by the Castle parish grounds, 155 individuals from the Hospital of St John the Evangelist, and 75 individuals from the Augustinian Friary.

Monday, January 25, 2021

First people to enter the Americas likely did so with their dogs

 


DURHAM UNIVERSITY

Research News

IMAGE: EARLY SETTLERS IN THE AMERICAS WERE ACCOMPANIED BY THEIR DOGS

CREDIT: ETTORE MAZZA

The first people to settle in the Americas likely brought their own canine companions with them, according to new research which sheds more light on the origin of dogs.

An international team of researchers led by archaeologist Dr Angela Perri, of Durham University, UK, looked at the archaeological and genetic records of ancient people and dogs.

They found that the first people to cross into the Americas before 15,000 years ago, who were of northeast Asian descent, were accompanied by their dogs.

The researchers say this discovery suggests that dog domestication likely took place in Siberia before 23,000 years ago. People and their dogs then eventually travelled both west into the rest of Eurasia, and east into the Americas.

The findings are published in the journal Proceedings of the National Academy of Sciences of the United States of America (PNAS).

The Americas were one of the last regions in the world to be settled by people. By that time, dogs had been domesticated from their wolf ancestors and were likely playing a variety of roles within human societies.

Research lead author Dr Angela Perri, in the Department of Archaeology at Durham University, said: "When and where have long been questions in dog domestication research, but here we also explored the how and why, which have often been overlooked.

"Dog domestication occurring in Siberia answers many of the questions we've always had about the origins of the human-dog relationship.

"By putting together the puzzle pieces of archaeology, genetics and time we see a much clearer picture where dogs are being domesticated in Siberia, then disperse from there into the Americas and around the world."

Geneticist and co-author Laurent Frantz (Ludwig Maximilian University of Munich) said: "The only thing we knew for sure is that dog domestication did not take place in the Americas.

"From the genetic signatures of ancient dogs, we now know that they must have been present somewhere in Siberia before people migrated to the Americas."

Co-author Professor Greger Larson, Oxford University, said: "Researchers have previously suggested that dogs were domesticated across Eurasia from Europe to China, and many places in between.

"The combined evidence from ancient humans and dogs is helping to refine our understanding of the deep history of dogs, and now points toward Siberia and Northeast Asia as a likely region where dog domestication was initiated."

During the Last Glacial Maximum (~23,000-19,000 years ago), Beringia (the land and maritime area between Canada and Russia) and most of Siberia were extremely cold, dry, and largely unglaciated.

The harsh climatic conditions leading up to and during this period may have served to bring human and wolf populations into close proximity, given their attraction to the same prey.

This increasing interaction, through the mutual scavenging of kills and through wolves drawn to human campsites, may have begun a relationship between the species that eventually led to dog domestication and to a vital role for dogs in the peopling of the Americas.

As co-author and archaeologist David Meltzer of Southern Methodist University (Dallas, TX) notes, "We have long known that the first Americans must have possessed well-honed hunting skills, the geological know-how to find stone and other necessary materials, and been ready for new challenges.

"The dogs that accompanied them as they entered this completely new world may have been as much a part of their cultural repertoire as the stone tools they carried."

Since their domestication from wolves, dogs have played a wide variety of roles in human societies, many of which are tied to the history of cultures worldwide.

Future archaeological and genetic research will reveal how the emerging mutual relationship between people and dogs led to their successful dispersal across the globe.

Women influenced coevolution of dogs and humans

  

Man's best friend might actually belong to a woman.

In a cross-cultural analysis, Washington State University researchers found that several factors may have played a role in building the mutually beneficial relationship between humans and dogs, including temperature, hunting and, surprisingly, gender.

"We found that dogs' relationships with women might have had a greater impact on the dog-human bond than relationships with men," said Jaime Chambers, a WSU anthropology Ph.D. student and first author on the paper published in the Journal of Ethnobiology. "Humans were more likely to regard dogs as a type of person if the dogs had a special relationship with women. They were more likely to be included in family life, treated as subjects of affection and generally, people had greater regard for them."

While dogs are the oldest, most widespread domesticated animal, very few anthropological studies have directly focused on the human relationship with canines. Yet when the WSU researchers searched the extensive collection of ethnographic documents in the Human Relations Area Files database, they found thousands of mentions of dogs.

Ultimately, they located data from more than 844 ethnographers writing on 144 traditional, subsistence-level societies from all over the globe. Looking at these cultures can provide insight into how the dog-human relationship developed, Chambers said.

"Our modern society is like a blip in the timeline of human history," she said. "The truth is that human-dog relationships have not looked like they do in Western industrialized societies for most of human history, and looking at traditional societies can offer a wider vision."

The researchers noted specific instances that showed dogs' utility, or usefulness, to humans, and humans' utility to dogs, as well as the "personhood" of dogs: cases in which canines were treated like people, such as being given names, allowed to sleep in the same beds or mourned when they died.

A pattern emerged that showed when women were more involved with dogs, the humans' utility to dogs went up, as did the dogs' personhood.
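The underlying analysis can be imagined as scoring each society on a few coded variables and testing for associations between them. A toy version with invented scores (the actual study coded thousands of ethnographic passages from the Human Relations Area Files):

```python
import statistics

# Each society: (women's involvement with dogs, dogs' utility to humans,
# humans' utility to dogs, dog personhood), scored 0-3. Values are invented.
societies = [
    (2, 3, 3, 3),
    (0, 2, 1, 0),
    (1, 1, 1, 1),
    (2, 2, 3, 2),
    (0, 3, 0, 1),
]

women = [s[0] for s in societies]
personhood = [s[3] for s in societies]
# statistics.correlation (Pearson's r) requires Python 3.10+.
r = statistics.correlation(women, personhood)
print(f"women's involvement vs. dog personhood: r = {r:.2f}")
```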

Another prevalent trend involved the environment: the warmer the overall climate, the less useful dogs tended to be to humans.

"Relative to humans, dogs are really not particularly energy efficient," said Robert Quinlan, WSU anthropology professor and corresponding author on the paper. "Their body temperature is higher than humans, and just a bit of exercise can make them overheat on a hot day. We saw this trend that they had less utility to humans in warmer environments."

Quinlan noted there were some exceptions to this with a few dog-loving cultures in the tropics, but it was a fairly consistent trend.

Hunting also seemed to strengthen the dog-human connection. In cultures that hunted with dogs, dogs were more valued by their human partners: they scored higher in the measures of dogs' utility to humans and in personhood. Those values declined, however, as food production increased, whether through growing crops or keeping livestock. This finding seems to go against the commonly held perception of herding dogs working in concert with humans, but Quinlan noted that in many cultures herding dogs often work alone, whereas hunting requires more intense cooperation.

This study adds evidence to the evolutionary theory that dogs and humans chose each other, rather than the older theory that humans intentionally sought out wolf pups to raise on their own. Either way, there have been clear benefits for the dogs, Chambers said.

"Dogs are everywhere humans are," she said. "If we think that dogs are successful as a species if there are lots of them, then they have been able to thrive. They have hitched themselves to us and followed us all over the world. It's been a very successful relationship."

Climate change in antiquity: mass emigration due to water scarcity

 

The absence of monsoon rains at the source of the Nile was the cause of migrations and the demise of entire settlements in the late Roman province of Egypt. Sabine Huebner, professor of ancient history at the University of Basel, has now compared this demographic development with environmental data for the first time, revealing a climate change and its consequences.

The oasis-like Faiyum region, roughly 130 km south-west of Cairo, was the breadbasket of the Roman Empire. Yet at the end of the third century CE, numerous formerly thriving settlements there declined and were ultimately abandoned by their inhabitants. Previous excavations and contemporary papyri have shown that problems with field irrigation were the cause. Attempts by local farmers to adapt to the dryness and desertification of the farmland - for example, by changing their agricultural practices - are also documented.

Volcanic eruption and monsoon rains

Basel professor of ancient history Sabine R. Huebner has now shown in the US journal Studies in Late Antiquity that changing environmental conditions were behind this development. Existing climate data indicates that the monsoon rains at the headwaters of the Nile in the Ethiopian Highlands suddenly and permanently weakened. The result was lower high-water levels of the river in summer. Evidence supporting this has been found in geological sediment from the Nile Delta, Faiyum and the Ethiopian Highlands, which provides long-term climate data on the monsoons and the water level of the Nile.

A powerful tropical volcanic eruption around 266 CE, which in the following year brought a below-average flood of the Nile, presumably also played a role. Major eruptions are known from sulfuric acid deposits in ice cores from Greenland and Antarctica, and can be dated to within three years. Particles hurled up into the stratosphere lead to a cooling of the climate, disrupting the local monsoon system.

New insights into climate, environment, and society

In the third century CE, the entire Roman Empire was hit by crises that are relatively well documented in the province of Egypt by more than 26,000 preserved papyri (documents written on sheets of papyrus). In the Faiyum region, these include records of inhabitants who switched to growing vines instead of grain or to sheep farming due to the scarcity of water. Others accused their neighbors of water theft or turned to the Roman authorities for tax relief. These and other adaptive strategies of the population delayed the death of their villages for several decades.

"Like today, the consequences of climate change were not the same everywhere," says Huebner. Although regions at the edge of the desert faced the harshness of the drought, others actually benefited from the influx of people moving from the abandoned villages. "New knowledge about the interaction of climate, environmental changes and social developments is very topical." The climate change of late antiquity was not, however - unlike today - caused mainly by humans, but was based on natural fluctuations.