A new study that reconstructs the history of sea level at the Bering Strait shows that the Bering Land Bridge connecting Asia to North America did not emerge until around 35,700 years ago, less than 10,000 years before the height of the last ice age (known as the Last Glacial Maximum).
“It means that more than 50 percent of the global ice volume at the Last Glacial Maximum grew after 46,000 years ago,” said Tamara Pico, assistant professor of Earth and planetary sciences at UC Santa Cruz and a corresponding author of the paper. “This is important for understanding the feedbacks between climate and ice sheets, because it implies that there was a substantial delay in the development of ice sheets after global temperatures dropped.”
Global sea levels drop during ice ages as more and more of Earth’s water gets locked up in massive ice sheets, but the timing of these processes has been hard to pin down. During the Last Glacial Maximum, which lasted from about 26,500 to 19,000 years ago, ice sheets covered large areas of North America. Dramatically lower sea levels uncovered a vast land area known as Beringia that extended from Siberia to Alaska and supported herds of horses, mammoths, and other Pleistocene fauna. As the ice sheets melted, the Bering Strait became flooded again around 13,000 to 11,000 years ago.
The new findings are interesting in relation to human migration because they shorten the time between the opening of the land bridge and the arrival of humans in the Americas. The timing of human migration into North America remains unresolved, but some studies suggest people may have lived in Beringia throughout the height of the ice age.
“People may have started going across as soon as the land bridge formed,” Pico said.
The new study used an analysis of nitrogen isotopes in seafloor sediments to determine when the Bering Strait was flooded during the past 46,000 years, allowing Pacific Ocean water to flow into the Arctic Ocean. First author Jesse Farmer at Princeton University led the isotope analysis, measuring nitrogen isotope ratios in the remains of marine plankton preserved in sediment cores collected from the seafloor at three locations in the western Arctic Ocean. Because of differences in the nitrogen composition of Pacific and Arctic waters, Farmer was able to identify a nitrogen isotope signature indicating when Pacific water flowed into the Arctic.
Pico, whose expertise is in sea level modeling, then compared Farmer’s results with sea level models based on different scenarios for the growth of the ice sheets.
“The exciting thing to me is that this provides a completely independent constraint on global sea level during this time period,” Pico said. “Some of the ice sheet histories that have been proposed differ by quite a lot, and we were able to look at what the predicted sea level would be at the Bering Strait and see which ones are consistent with the nitrogen data.”
The results support recent studies indicating that global sea levels were much higher prior to the Last Glacial Maximum than previous estimates had suggested, she said. Average global sea level during the Last Glacial Maximum was about 130 meters (425 feet) lower than today. The actual sea level at a particular site such as the Bering Strait, however, depends on factors such as the deformation of the Earth’s crust by the weight of the ice sheets.
“It’s like punching down on bread dough—the crust sinks under the ice and rises up around the edges,” Pico said. “Also, the ice sheets are so massive they have gravitational effects on the water. I model those processes to see how sea level would vary around the world and, in this case, to look at the Bering Strait.”
The findings imply a complicated relationship between climate and global ice volume and suggest new avenues for investigating the mechanisms underlying glacial cycles.
In addition to Pico and Farmer, the coauthors include Ona Underwood and Daniel Sigman at Princeton University; Rebecca Cleveland-Stout at the University of Washington; Julie Granger at the University of Connecticut; Thomas Cronin at the U.S. Geological Survey; and François Fripiat, Alfredo Martinez-Garcia, and Gerald Haug at the Max Planck Institute for Chemistry in Germany. This work was supported by the National Science Foundation.
Analysis of more than 1,200 vessels from hunter-gatherer sites has shown that pottery-making techniques spread vast distances over a short period of time through the passing on of social traditions.
The team, which includes researchers from the University of York and the British Museum, analysed the remains of 1,226 pottery vessels from 156 hunter-gatherer sites across nine countries in Northern and Eastern Europe. They combined radiocarbon dating with data on the production and decoration of ceramic vessels, and with analysis of the remains of food found inside the pots.
Their findings, published in the journal Nature Human Behaviour, suggest that pottery-making spread rapidly westwards from 5,900 BCE onwards and took only 300–400 years to advance over 3,000 km, equivalent to 250 km in a single generation.
Professor Oliver Craig, from the University of York’s Department of Archaeology, said: “Our analysis of the ways pots were designed and decorated as well as new radiocarbon dates suggests that knowledge of pottery spread through a process of cultural transmission.
“By this we mean that the activity spread by the exchange of ideas between groups of hunter-gatherers living nearby, rather than through migration of people or an expanding population as we see for other key changes in human history such as the introduction of agriculture.”
“That methods of pottery-making spread so far and so fast through the passing on of ideas is quite surprising. Specific knowledge may have been shared through marriages or at centres of aggregation, specific points in the landscape where groups of hunter-gatherers came together perhaps at certain times of the year.”
By studying traces of organic materials left in the pots, the team demonstrated that the pottery was used for cooking, so the ideas of pottery-making may have been spread through shared culinary traditions.
Carl Heron, from the British Museum, said: “We found evidence that the vessels were used for cooking a wide range of animals, fish and plants, and this variety suggests that the drivers for making the pottery were not in response to a particular need, such as detoxifying plants or processing fish, as has previously been suggested.
“We also found patterns suggesting that pottery use was transmitted along with knowledge of their manufacture and decoration. These can be seen as culinary traditions that were rapidly transmitted with the artefacts themselves.”
The world’s earliest pottery containers come from East Asia and may have spread rapidly westwards through Siberia, before being taken up by hunter-gatherer societies across Northern Europe, long before the arrival of farming.
This research is funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme.
Whatever disease killed Edward the Black Prince—heir apparent to the English throne in the mid 1300s, and heralded as the greatest English soldier ever to have lived—is unlikely to have been chronic dysentery, as is commonly believed, writes a military expert in the journal BMJ Military Health.
But whether it was malaria; brucellosis, caused by eating unpasteurised dairy products and raw meat; inflammatory bowel disease; or complications arising from a single bout of dysentery—all possible causes—the disease changed the course of English history, says Dr James Robert Anderson of 21 Engineer Regiment.
And what happened to the Black Prince, who fought in wars almost continuously and was exposed to violence from the age of 16, has been repeated throughout millennia, with disease, rather than battle injury, taking the heaviest toll on life during warfare, he says.
Edward of Woodstock, the Black Prince, was never seriously injured despite the number of military campaigns he led. But he had a chronic illness that waxed and waned for almost 9 years, to which he finally succumbed in 1376 at the age of 45.
His early death changed the course of English history, because the crown passed directly to his 10 year-old son after the death of King Edward III. Young King Richard II was later deposed and murdered, sparking over a century of instability, including the Wars of the Roses and the rise of the Tudors, notes the author.
The Black Prince’s illness is thought to have started after his victory at the Battle of Nájera in Spain in 1367, writes the author. A chronicle suggested that up to 80% of his army may have died from “dysentery and other diseases.”
And most later accounts of the Black Prince’s death suggest that he died from chronic dysentery, possibly the amoebic form, which was common in medieval Europe.
Amoebic dysentery can cause long term complications, including internal scarring (amoeboma), intestinal inflammation and ulceration (colitis), and extreme inflammation and distension of the bowel (life threatening toxic megacolon), points out the author.
But if he really did have amoebic dysentery, with its symptoms of chronic diarrhoea, would he really have been well enough to board, or even been welcomed aboard, a ship with a cargo of soldiers heading for battle in France in 1372, asks the author?
Complications from surviving a single bout of dysentery are a possibility, particularly as historical records indicate that paratyphoid, which is similar to typhoid but caused by a different bacterium and is a recently identified cause of dysentery, was in circulation in 1367.
Complications from this could have included long term health issues, such as anaemia, kidney damage, liver abscess and/or reactive arthritis, suggests the author.
Dehydration due to lack of water during the hot Spanish campaign is another possibility. This could have caused kidney stones, which would fit with a fluctuating illness lasting several years, he says.
Another candidate is inflammatory bowel disease, which might have accounted for relapsing-remitting symptoms and gradual deterioration, suggests the author.
Brucellosis was also common in medieval Europe, and its sources (dairy products and raw meat) were often kept aside for the nobility on military campaigns, says the author. It can produce chronic symptoms of fatigue, recurrent fever, and joint and heart inflammation.
Another common disease in medieval Europe was malaria, the symptoms of which include fever, headache, myalgia (muscle aches and pains), gut problems, fatigue, chronic anaemia and susceptibility to acute infections, such as pneumonia or gastroenteritis, leading to multiorgan failure and death, he adds.
“This would fit the fluctuating nature of his illness and the decline towards the end of his life. Any anaemia would not have been helped by the purging and venesection [blood letting] treatments of the time,” he suggests.
“There are several diverse infections or inflammatory conditions that may have led to [the Black Prince’s] demise…However, chronic dysentery is probably unlikely,” he writes.
And he concludes: “Even in modern conflicts and war zones, disease has caused enormous morbidity and loss of life, something that has remained consistent for centuries. Efforts to protect and treat deployed forces are as important now as in the 1370s.”
Human bipedalism – walking upright on two legs – may have evolved in trees, and not on the ground as previously thought, according to a new study involving UCL researchers.
In the study, published today in the journal Science Advances, researchers from UCL, the University of Kent, and Duke University, USA, explored the behaviours of wild chimpanzees - our closest living relative - living in the Issa Valley of western Tanzania, within the region of the East African Rift Valley. Known as ‘savanna-mosaic’ - a mix of dry open land with few trees and patches of dense forest - the chimpanzees’ habitat is very similar to that of our earliest human ancestors and was chosen to enable the scientists to explore whether the openness of this type of landscape could have encouraged bipedalism in hominins.
The study is the first of its kind to explore if savanna-mosaic habitats would account for increased time spent on the ground by the Issa chimpanzees, and compares their behaviour to other studies on their solely forest-dwelling cousins in other parts of Africa.
Overall, the study found that the Issa chimpanzees spent as much time in the trees as other chimpanzees living in dense forests, despite their more open habitat, and were not more terrestrial (land-based) as expected.
Furthermore, although the researchers expected the Issa chimpanzees to walk upright more in open savanna vegetation, where they cannot easily travel via the tree canopy, more than 85% of occurrences of bipedalism took place in the trees.
The authors say that their findings contradict widely accepted theories that suggest that it was an open, dry savanna environment that encouraged our prehistoric human relatives to walk upright – and instead suggests that they may have evolved to walk on two feet to move around the trees.
Study co-author Dr Alex Piel (UCL Anthropology) said: “We naturally assumed that because Issa has fewer trees than typical tropical forests, where most chimpanzees live, we would see individuals more often on the ground than in the trees. Moreover, because so many of the traditional drivers of bipedalism (such as carrying objects or seeing over tall grass, for example) are associated with being on the ground, we thought we’d naturally see more bipedalism here as well. However, this is not what we found.
“Our study suggests that the retreat of forests in the late Miocene-Pliocene era around five million years ago and the more open savanna habitats were in fact not a catalyst for the evolution of bipedalism. Instead, trees probably remained essential to its evolution – with the search for food-producing trees a likely driver of this trait.”
To establish their findings, the researchers recorded more than 13,700 instantaneous observations of positional behaviour from 13 chimpanzee adults (six females and seven males), including almost 2,850 observations of individual locomotor events (e.g., climbing, walking, hanging, etc.), over the course of the 15-month study. They then used the relationship between tree/land-based behaviour and vegetation (forest vs woodland) to investigate patterns of association. Similarly, they noted each instance of bipedalism and whether it was associated with being on the ground or in the trees.
The authors note that walking on two feet is a defining feature of humans when compared to other great apes, who “knuckle walk”. Yet, despite their study, the researchers say why humans, alone among the apes, first began to walk on two feet remains a mystery.
Study co-author Dr Fiona Stewart (UCL Anthropology) said: “To date, the numerous hypotheses for the evolution of bipedalism share the idea that hominins (human ancestors) came down from the trees and walked upright on the ground, especially in more arid, open habitats that lacked tree cover. Our data do not support that at all.
“Unfortunately, the traditional idea of fewer trees equals more terrestriality (land dwelling) just isn’t borne out with the Issa data. What we need to focus on now is how and why these chimpanzees spend so much time in the trees - and that is what we’ll focus on next on our way to piecing together this complex evolutionary puzzle.”
For over a century, one of the earliest human fossils ever discovered in Spain has long been considered a Neandertal. However, new analysis from an international research team, including scientists at Binghamton University, State University of New York, dismantles this century-long interpretation, demonstrating that this fossil is not a Neandertal; rather, it may actually represent the earliest presence of Homo sapiens ever documented in Europe.
In 1887, a fossil mandible was discovered during quarrying activities in the town of Banyoles, Spain, and has been studied by different researchers over the past century. The Banyoles fossil likely dates to between approximately 45,000-65,000 years ago, at a time when Europe was occupied by Neandertals, and most researchers have generally linked it to this species.
“The mandible has been studied throughout the past century and was long considered to be a Neandertal based on its age and location, and the fact that it lacks one of the diagnostic features of Homo sapiens: a chin,” said Binghamton University graduate student Brian Keeling.
The new study relied on virtual techniques, including CT scanning of the original fossil. This was used to virtually reconstruct missing parts of the fossil, and then to generate a 3D model to be analyzed on the computer.
The authors studied the expressions of distinct features on the mandible from Banyoles that are different between our own species, Homo sapiens, and the Neandertals, our closest evolutionary cousins.
The authors applied a methodology known as “three-dimensional geometric morphometrics” that analyzes the geometric properties of the bone’s shape. This makes it possible to directly compare the overall shape of Banyoles to Neandertals and H. sapiens.
“Our results found something quite surprising — Banyoles shared no distinct Neandertal traits and did not overlap with Neandertals in its overall shape,” said Keeling.
While Banyoles seemed to fit better with Homo sapiens in both the expression of its individual features and its overall shape, many of these features are also shared with earlier human species, complicating an immediate assignment to Homo sapiens. In addition, Banyoles lacks a chin, one of the most characteristic features of Homo sapiens mandibles.
“We were confronted with results that were telling us Banyoles is not a Neandertal, but the fact that it does not have a chin made us think twice about assigning it to Homo sapiens,” said Rolf Quam, professor of anthropology at Binghamton University, State University of New York. “The presence of a chin has long been considered a hallmark of our own species.”
Given this, reaching a scientific consensus on what species Banyoles represents is a challenge. The authors also compared Banyoles with an early Homo sapiens mandible from a site called Peştera cu Oase in Romania. Unlike Banyoles, this mandible shows a full chin along with some Neandertal features, and an ancient DNA analysis has revealed this individual had a Neandertal ancestor four to six generations ago. Since the Banyoles mandible shared no distinct features with Neandertals, the researchers ruled out the possibility of mixture between Neandertals and H. sapiens to explain its anatomy.
The authors point out that some of the earliest Homo sapiens fossils from Africa, predating Banyoles by more than 100,000 years, do show less pronounced chins than in living populations.
Thus, these scientists developed two possibilities for what the Banyoles mandible may represent: a member of a previously unknown population of Homo sapiens that coexisted with the Neandertals; or a hybrid between a member of this Homo sapiens group and a non-Neandertal unidentified human species. However, at the time of Banyoles, the only fossils recovered from Europe are Neandertals, making this latter hypothesis less likely.
“If Banyoles is really a member of our species, this prehistoric human would represent the earliest H. sapiens ever documented in Europe,” said Keeling.
Whichever species this mandible belongs to, Banyoles is clearly not a Neandertal at a time when Neandertals were believed to be the sole occupants of Europe.
The authors conclude that “the present situation makes Banyoles a prime candidate for ancient DNA or proteomic analyses, which may shed additional light on its taxonomic affinities.”
The authors plan to make the CT scan and the 3D model of Banyoles available for other researchers to freely access and include in future comparative studies, promoting open access to fossil specimens and reproducibility of scientific studies.
Nearly 10,000 years ago, humans settling in the Fertile Crescent, the areas of the Middle East surrounding the Tigris and Euphrates rivers, made the first transition from hunting and gathering to farming. They developed close bonds with the rodent-eating cats that conveniently served as ancient pest control in society’s first civilizations.
A new study at the University of Missouri found this lifestyle transition for humans was the catalyst that sparked the world’s first domestication of cats, and as humans began to travel the world, they brought their new feline friends along with them.
Leslie A. Lyons, a feline geneticist and Gilbreath-McLorn endowed professor of comparative medicine in the MU College of Veterinary Medicine, collected and analyzed DNA from cats in and around the Fertile Crescent area, as well as throughout Europe, Asia and Africa, comparing nearly 200 different genetic markers.
“One of the main DNA markers we studied were microsatellites, which mutate very quickly and give us clues about recent cat populations and breed developments over the past few hundred years,” Lyons said. “Another key DNA marker we examined were single nucleotide polymorphisms, which are single-base changes all throughout the genome that give us clues about their ancient history several thousands of years ago. By studying and comparing both markers, we can start to piece together the evolutionary story of cats.”
Lyons added that while horses and cattle have seen various domestication events caused by humans in different parts of the world at various times, her analysis of feline genetics in the study strongly supports the theory that cats were likely first domesticated only in the Fertile Crescent before migrating with humans all over the world. As feline genes have been passed down to kittens through the generations, the genetic makeup of cats in western Europe, for example, has become far different from that of cats in southeast Asia, a process known as ‘isolation by distance.’
“We can actually refer to cats as semi-domesticated, because if we turned them loose into the wild, they would likely still hunt vermin and be able to survive and mate on their own due to their natural behaviors,” Lyons said. “Unlike dogs and other domesticated animals, we haven’t really changed the behaviors of cats that much during the domestication process, so cats once again prove to be a special animal.”
Lyons, who has researched feline genetics for more than 30 years, said studies like this also support her broader research goal of using cats as a biomedical model to study genetic diseases that impact both cats and people, such as polycystic kidney disease, blindness and dwarfism.
“Comparative genetics and precision medicine play key roles in the ‘One Health’ concept, which means anything we can do to study the causes of genetic diseases in cats or how to treat their ailments can be useful for one day treating humans with the same diseases,” Lyons said. “I am building genetic tools, genetic resources that ultimately help improve cat health. When building these tools, it is important to get a representative sample and understand the genetic diversity of cats worldwide so that our genetic toolbox can be useful to help cats all over the globe, not just in one specific region.”
Throughout her career, Lyons has worked with cat breeders and research collaborators to develop comprehensive feline DNA databases that the scientific community can benefit from, including cat genome sequencing from felines all around the world. In a 2021 study, Lyons and colleagues found that the cat’s genomic structure is more similar to humans than nearly any other non-primate mammal.
“Our efforts have helped stop the migration and passing-down of inherited genetic diseases around the world, and one example is polycystic kidney disease, as 38% of Persian cats had this disease when we first launched our genetic test for it back in 2004,” Lyons said. “Now that percentage has gone down significantly thanks to our efforts, and our overall goal is to eradicate genetic diseases from cats down the road.”
Currently, the only viable treatment for polycystic kidney disease has unhealthy side effects, including liver failure. Lyons is currently working with researchers at the University of California at Santa Barbara to develop a diet-based treatment trial for those suffering from the disease.
“If those trials are successful, we might be able to have humans try it as a more natural, healthier alternative to taking a drug that may cause liver failure or other health issues,” Lyons said. “Our efforts will continue to help, and it feels good to be a part of it.”
“Genetics of randomly bred cats support the cradle of cat domestication being in the Near East” was recently published in Heredity.
New evidence, helping to form a 15th century reconstruction of part of Westminster Abbey, demonstrates how a section of the building was once the focus for the royal family’s devotion to the cult of a disembowelled saint and likely contained gruesome images of his martyrdom.
Peer-reviewed findings, published in the Journal of the British Archaeological Association, reveal a story of how England’s ‘White Queen’, Elizabeth Woodville, once worshipped at the Chapel of St Erasmus, which may even have featured a single tooth among its relics.
Today, only an intricate frame remains from the lost chapel of St Erasmus. It was demolished in 1502 and little has been known about its role historically.
However, an extensive analysis of all available evidence to date, including a newly discovered, centuries-old royal grant, by the Abbey’s archivist Matthew Payne and John Goodall, a member of the Westminster Abbey Fabric Advisory Commission, reveals the chapel’s wider importance.
Evidence from the study has also helped to create a visual 15th century reconstruction of the east end of the Abbey church and its furnishings – crafted by illustrator Stephen Conlin.
Commenting on the prominence of the chapel, Payne says: “The White Queen wished to worship there and, it appears, also to be buried there, as the grant declares prayers should be sung ‘around the tomb of our consort’ (Elizabeth Woodville).

“The construction, purpose and fate of the St Erasmus chapel therefore deserve more recognition.”
Goodall adds: “Very little attention has been paid to this short-lived chapel.
“It receives only passing mention in abbey histories, despite the survival of elements of the reredos.
“The quality of workmanship on this survival suggests that investigation of the original chapel is long overdue.”
The interment in the chapel of eight-year-old Anne Mowbray, child bride of Elizabeth’s son Richard, Duke of York, also confirms its role as a royal burial site, their study finds.
In the end, Elizabeth’s last resting place was next to her beloved husband in Windsor in St George’s Chapel which Edward IV had begun in 1475.
Later monarchs have also been buried in St George’s, including Elizabeth II after her funeral this year at the Abbey.
St Erasmus was regarded as a protector of children, as well as being the patron saint of sailors and of those suffering abdominal pain.
The authors suggest his link with children may have prompted the building of the St Erasmus chapel. It followed the wedding a year earlier in 1478 of Anne Mowbray to Richard when both were still infants.
Dedication of the chapel to St Erasmus ‘reflects a new and rapidly growing devotion’ to his cult, say the authors. They speculate the building may also have held relics of the Italian bishop, namely his tooth, which Westminster Abbey is known to have owned.
Although the precise location is unknown, the chapel was almost certainly built on space formerly allotted to a garden and near stalls where William Caxton sold his wares, according to the authors.
Commissioned by Elizabeth, Edward IV’s commoner wife and Henry VIII’s grandmother, St Erasmus’ chapel was demolished in 1502.
Visitors to Westminster Abbey can still view what remains, by looking above the entrance to the chapel of Our Lady of the Pew in the north ambulatory.
And what does remain is an intricately carved frame, sculpted from alabaster. This frame would have surrounded a reredos, which is the imagery that forms the backdrop to the altar.
Missing however, is the image. The study speculates that this was probably of the Saint being disembowelled – tied down alive to a table while his intestines were wound out on a windlass, a rotating cylinder often used on ships.
The screen would have originally been positioned behind the altar of the St Erasmus chapel and contained a panel.
The study presents further evidence that the reredos was created by an outsider to the Abbey’s design tradition. Architect Robert Stowell, the Abbey’s master mason, probably designed the chapel itself and may have helped salvage the chapel’s most ornate pieces when it was knocked down after less than 25 years.
This was on Henry VII’s orders to make way for his own and his wife’s chantry and burial place. The Lady Chapel which replaced it features a statue of St Erasmus which the authors say may be a nod to Elizabeth Woodville’s now long-forgotten chapel.
Ancient owl-shaped slate engraved plaques, dating from around 5,000 years ago in the Iberian Peninsula, may have been created by children as toys, suggests a paper published in Scientific Reports. These findings may provide insights into how children used artefacts in ancient European societies.
Around 4,000 engraved slate plaques resembling owls – with two engraved circles for eyes and a body outlined below – and dating from the Copper Age between 5,500 and 4,750 years ago have been found in tombs and pits across the Iberian Peninsula. It has been speculated that these owl plaques may have had ritualistic significance and represented deities or the dead.
Now, Juan Negro and colleagues re-examined this interpretation and suggest instead that these owl plaques may have been crafted by young people based on regional owl species, and may have been used as dolls, toys, or amulets. The authors assessed 100 plaques and rated them (on a scale of one to six) based on how many of six owl traits they displayed, including two eyes, feathery tufts, patterned feathers, a flat facial disk, a beak, and wings. The authors compared these plaques to 100 modern images of owls drawn by children aged 4 to 13 years old, and observed many similarities between the depictions of owls. The drawings more closely resembled owls as children aged and became more skilful.
The authors observe the presence of two small holes at the top of many plaques. These holes appear impractical for passing a cord through in order to hang the plaque, and they lack the wear marks that would be expected if this was their use. Instead, the authors speculate that feathers could have been inserted through the holes to resemble the tufts on the heads of some regional owl species, such as the long-eared owl (Asio otus).
The authors propose that, rather than being carved by skilled artisans for use in rituals, many of the owl plaques were created by children, and more closely resembled owls as the children’s carving skills increased. They may represent a glimpse into childhood behaviours in Copper Age societies.
Excavating ancient DNA from teeth, an international group of scientists peered into the lives of a once thriving medieval Ashkenazi Jewish community in Erfurt, Germany. The findings, shared today in the journal Cell, show that the Erfurt Jewish community was more genetically diverse than modern day Ashkenazi Jews.
About half of Jews today are identified as Ashkenazi, meaning that they originate from Jews living in Central or Eastern Europe. The term was initially used to define a distinct cultural group of Jews who settled in the 10th century in Germany's Rhineland. Despite much speculation, many gaps exist in our understanding of their origins and demographic upheavals during the second millennium.
“Today, if you compare Ashkenazi Jews from the United States and Israel, they’re very similar genetically, almost like the same population regardless of where they live,” shared geneticist and co-author Professor Shai Carmi of the Hebrew University of Jerusalem (HU). But unlike today’s genetic uniformity, it turns out that the community was more diverse 600 years ago.
Digging into the ancient DNA of 33 Ashkenazi Jews from medieval Erfurt, the team discovered that the community can be categorized into what appear to be two groups. One relates more to individuals from Middle Eastern populations and the other to European populations, possibly including migrants to Erfurt from the East. The findings suggest that there were at least two genetically distinct groups in medieval Erfurt. However, that genetic variability no longer exists in modern Ashkenazi Jews.
The Erfurt medieval Jewish community existed between the 11th and 15th centuries, with a short gap following a 1349 massacre. At times, it was a thriving community and one of the largest in Germany. Following the expulsion of all Jews in 1454, the city built a granary on top of the Jewish cemetery. In 2013, when the granary stood empty, the city permitted its conversion into a parking lot. This required additional construction and an archaeological rescue excavation.
“Our goal was to fill the gaps in our understanding of Ashkenazi Jewish early history through ancient DNA data,” explained Carmi. While ancient DNA data is a powerful tool to infer historical demographics, ancient Jewish DNA data is hard to come by, as Jewish law prohibits the disturbance of the dead in most circumstances. With the approval of the local Jewish community in Germany, the research team collected detached teeth from remains found in a 14th-century Jewish cemetery in Erfurt that underwent a rescue excavation.
The researchers also discovered that the founder event, which makes all Ashkenazi Jews today descendants of a small population, happened before the 14th century. For example, by combing through mitochondrial DNA, the genetic material we inherit from our mothers, they discovered that a third of the sampled Erfurt individuals share one specific sequence. The findings indicate that the early Ashkenazi Jewish population was so small that a third of Erfurt individuals descended from a single woman through their maternal lines.
At least eight of the Erfurt individuals also carried disease-causing genetic mutations common in modern-day Ashkenazi Jews but rare in other populations—a hallmark of the Ashkenazi Jewish founder event.
“Jews in Europe were a religious minority that was socially segregated, and they experienced periodic persecution,” described co-author David Reich of Harvard University. Although antisemitic violence virtually wiped out Erfurt’s Jewish community in 1349, Jews returned five years later, and the community again grew into one of the largest in Germany. “Our work gives us direct insight into the structure of this community.”
The team believes the current study helps to establish an ethical basis for studies of ancient Jewish DNA. Many questions remain unanswered, such as how medieval Ashkenazi Jewish communities became genetically differentiated, how early Ashkenazi Jews related to Sephardi Jews, and how modern Jews relate to ones from ancient Judea.
While this is the largest ancient Jewish DNA study so far, it is limited to one cemetery and one period of time. Nevertheless, it was able to detect previously unknown genetic subgroups in medieval Ashkenazi Jews. The researchers hope that their study will pave the way for future analyses of samples from other sites, including those from antiquity, to continue unraveling the complexities of Jewish history.
“This work also provides a template for how a co-analysis of modern and ancient DNA data can shed light on the past,” concluded Reich. “Studies like this hold great promise not only for understanding Jewish history, but also that of any population.”
The research team, of over 30 scientists, included HU’s Shamam Waldman, a doctoral student in Carmi's group, who performed most of the data analysis.
The LSU Campus Mounds sit on high ground overlooking the Mississippi River floodplain and have been a gathering place and destination for people for thousands of years. They are some of the oldest mounds in Louisiana and North America. Recent papers have offered alternate interpretations of their age. Knowing the approximate age of the mounds provides significant insight into the people who built the mounds. Archaeologists have built “culture histories” describing prehistoric ways of life and the way lifestyles have changed through time. In other words, knowing the age of the mounds provides context into the way of life of the people who built and used the LSU Campus Mounds.
Archaeological data from Louisiana and the southeastern U.S. indicate that between 5,000 and 7,000 years ago—a time archaeologists call the Middle Archaic—bands of about 20-50 people lived off the land and moved seasonally to take advantage of the availability of different food resources. This lifestyle greatly differed from those who were here 11,000 years ago.
“We know a lot about what people were doing in the North American Middle Archaic period and their lifestyle,” said Heather McKillop, the Thomas & Lillian Landrum Alumni Professor in the LSU Department of Geography & Anthropology and co-author on the paper published in SAA, the Magazine of the Society for American Archaeology. “It’s very exciting that we have these earthen mounds preserved here at LSU. As we study them, we need to tie the mounds to the people who built them.”
As populations grew within the large bands of families, central meeting spots were built where multiple bands congregated to exchange information, trade and potentially find mates. And when people get together, what do they tend to do?
“They eat, they dance, they perform rituals that tie them to the past and help them see their way into the future. Building something on the landscape that symbolizes this seems to be an essentially human thing. Human beings do this all over the place, and in the southeastern United States, the landmark tends to be earthen or shell mounds,” said Rebecca Saunders, the William G. Haag Professor of Archaeology at LSU and co-author.
So the LSU Campus Mounds fit within a way of life in which many bands of people built earthen mounds as they hunted, gathered and moved around the landscape in Louisiana and the surrounding region, visiting their own mound sites as well as those of other bands.
“We know of at least 13 other sites in Louisiana with earthen mounds built between 5,000 to 7,000 years ago, which indicates that different communities were exploring this idea of building mounds during this period of time,” said Louisiana State Archaeologist Chip McGimsey, who is the lead author of the paper and director of the Louisiana Division of Archaeology.
Hundreds of mounds have been destroyed by landowners, farmers and corporations. These include sites such as the Monte Sano mounds just upriver from the LSU Campus Mounds, which were built about 7,500 years ago but were destroyed in the 1960s.
“The LSU Campus Mounds are probably the best protected mounds because they are on the campus of LSU and LSU has made a very strong commitment to preserving them. Most of the other mounds are on private land and landowners can do what they’d like with them,” McGimsey said.
Differing interpretations
Earlier this year, LSU geologists published a paper, “The LSU campus mounds, with construction beginning at ∼11,000 BP, are the oldest known extant man-made structures in the Americas.” The LSU archaeologists’ new paper published this month is a response to it.
In this new paper, the archaeologists write: “[The geologists’] interpretation of the age and construction sequence for both mounds represents a significant departure from current archaeological understandings of the origins of mound building in North America. If confirmed, this site would change how archaeologists think about the early history of North America.”
“We’re not questioning the dates but we’re questioning the interpretation and the lack of inclusion of other datasets. This disagreement in no way detracts from the significance and importance of the LSU Campus Mounds. While we would argue they are not the oldest person-made earthworks in North America, they are still some of the very oldest and part of what is a remarkable history of mound-building in North America that has its origins here in Louisiana. The LSU Campus Mounds are part of that tradition,” McGimsey said.
The archaeologists raise the point that 11,000 years ago, people roamed in small groups over large expanses of land throughout North America hunting Ice Age animals such as mammoths, mastodons and ancient bison. There is no other evidence that people were building mounds at this time.
The archaeologists also question the interpretation that the first LSU Campus Mound was built to about half of its current height and then abandoned for about 1,000 years, before the final stage was added. Soil scientists and archaeologists have studied multiple sediment cores taken from the top to the bottom of both mounds. They conclude that if there had been a hiatus of about 1,000 years, there would be color, textural and chemical changes on the exposed surface. None of these changes were observed in the cores. Instead, it appears the mounds were built as a continuous process.
“This process has been shown to not necessarily take very long. If you have a group of people who were coming here seasonally and were building these mounds, it would not take thousands of years to build,” McKillop said.
While the geologists believe the microscopic fragments from burned cane and rush plants called phytoliths found in the mounds may have been remnants from intense, potentially ceremonial fires, the archaeologists point out that phytoliths are commonly found in soil in the area. High densities of phytoliths can occur naturally because wild cane must burn occasionally like forest undergrowth for the health of the ecosystem. Thus, the archaeologists argue the phytoliths could have already been in the soil that was used to build the mounds. The archaeologists also point out that if routine, intense ceremonial fires occurred at the LSU Campus Mounds, as the geologists suggest, there should be obvious changes in soil texture and color where the fires burned.
In addition to raising these points, the archaeologists encourage other researchers to pursue further study of the site and outline specific questions that can provide further clarity about the LSU Campus Mounds.
METHOD OF RESEARCH
Meta-analysis
ARTICLE TITLE
The Age and Construction of the LSU Campus Mounds: Consideration of Ellwood and Colleagues
More than 2,000 years before the Titanic sank in the North Atlantic Ocean, another famous ship wrecked in the Mediterranean Sea off the eastern shores of Uluburun — in present-day Turkey — carrying tons of rare metal. Since its discovery in 1982, scientists have been studying the contents of the Uluburun shipwreck to gain a better understanding of the people and political organizations that dominated the time period known as the Late Bronze Age.
Now, a team of scientists, including Michael Frachetti, professor of archaeology in Arts & Sciences at Washington University in St. Louis, has uncovered a surprising finding: small communities of highland pastoralists living in present-day Uzbekistan in Central Asia produced and supplied roughly one-third of the tin found aboard the ship — tin that was en route to markets around the Mediterranean to be made into coveted bronze metal.
The research, published on November 30 in Science Advances, was made possible by advances in geochemical analyses that enabled researchers to determine with high-level certainty that some of the tin originated from a prehistoric mine in Uzbekistan, more than 2,000 miles from Haifa, where the ill-fated ship loaded its cargo.
But how could that be? During this period, the mining regions of Central Asia were occupied by small communities of highlander pastoralists — far from a major industrial center or empire. And the terrain between the two locations — which passes through Iran and Mesopotamia — was rugged, which would have made it extremely difficult to transport tons of heavy metal.
Frachetti and other archaeologists and historians were enlisted to help put the puzzle pieces together. Their findings unveiled a shockingly complex supply chain that involved multiple steps to get the tin from the small mining community to the Mediterranean marketplace.
“It appears these local miners had access to vast international networks and — through overland trade and other forms of connectivity — were able to pass this all-important commodity all the way to the Mediterranean,” Frachetti said.
“It’s quite amazing to learn that a culturally diverse, multiregional and multivector system of trade underpinned Eurasian tin exchange during the Late Bronze Age.”
Adding to the mystique is the fact that the mining industry appears to have been run by small-scale local communities or free laborers who negotiated this marketplace outside of the control of kings, emperors or other political organizations, Frachetti said.
“To put it into perspective, this would be the trade equivalent of the entire United States sourcing its energy needs from small backyard oil rigs in central Kansas,” he said.
About the research
The idea of using tin isotopes to determine where metal in archaeological artifacts originates dates to the mid-1990s, according to Wayne Powell, professor of earth and environmental sciences at Brooklyn College and a lead author on the study. However, the technologies and methods for analysis were not precise enough to provide clear answers. Only in the last few years have scientists begun using tin isotopes to directly correlate mining sites to assemblages of metal artifacts, he said.
“Over the past couple of decades, scientists have collected information about the isotopic composition of tin ore deposits around the world, their ranges and overlaps, and the natural mechanisms by which isotopic compositions were imparted to cassiterite when it formed,” Powell said. “We remain in the early stages of such study. I expect that in future years, this ore deposit database will become quite robust, like that of Pb isotopes today, and the method will be used routinely.”
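The matching Powell describes amounts to comparing an artifact's measured tin-isotope value against the known composition ranges of ore deposits. The sketch below is purely illustrative: the deposit names come from the article, but every numeric value is invented for the example, not taken from the study or any real ore database.

```python
# Hypothetical tin-isotope ranges (per mil) for two deposits named in
# the article. All numbers are invented for illustration only.
DEPOSIT_RANGES = {
    "Mušiston (Uzbekistan)": (0.30, 0.55),
    "Kestel (Anatolia)":     (0.00, 0.20),
}

def candidate_sources(delta_sample: float) -> list:
    """Return every deposit whose isotope range contains the sample value."""
    return [name for name, (lo, hi) in DEPOSIT_RANGES.items()
            if lo <= delta_sample <= hi]

print(candidate_sources(0.42))  # -> ['Mušiston (Uzbekistan)']
print(candidate_sources(0.25))  # -> [] (an ambiguous value needs more evidence)
```

In practice, overlapping deposit ranges are exactly why the method only recently became decisive: narrower measurement uncertainty and a fuller ore database shrink the ambiguous middle ground.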
Aslihan K. Yener, a research affiliate at the Institute for the Study of the Ancient World at New York University and a professor emerita of archaeology at the University of Chicago, was one of the early researchers who conducted lead isotope analyses. In the 1990s, Yener was part of a research team that conducted the first lead isotope analysis of the Uluburun tin. That analysis suggested that the Uluburun tin may have come from two sources — the Kestel Mine in Turkey’s Taurus Mountains and some unspecified location in Central Asia.
“But this was shrugged off since the analysis was measuring trace lead and not targeting the origin of the tin,” said Yener, who is a co-author of the present study.
Yener also was the first to discover tin in Turkey in the 1980s. At the time, she said the entire scholarly community was surprised that it existed there, right under their noses, where the earliest tin bronzes occurred.
Some 30 years later, researchers finally have a more definitive answer thanks to the advanced tin isotope analysis techniques: One-third of the tin aboard the Uluburun shipwreck was sourced from the Mušiston mine in Uzbekistan. The remaining two-thirds of the tin derived from the Kestel mine in ancient Anatolia, which is in present-day Turkey.
Findings offer glimpse into life 2,000-plus years ago
By 1500 B.C., bronze was the “high technology” of Eurasia, used for everything from weaponry to luxury items, tools and utensils. Bronze is primarily made from copper and tin. While copper is fairly common and can be found throughout Eurasia, tin is much rarer and only found in specific kinds of geological deposits, Frachetti said.
“Finding tin was a big problem for prehistoric states. And thus, the big question was how these major Bronze Age empires were fueling their vast demand for bronze given the lengths and pains to acquire tin as such a rare commodity. Researchers have tried to explain this for decades,” Frachetti said.
The Uluburun ship yielded the world’s largest Bronze Age collection of raw metals ever found — enough copper and tin to produce 11 metric tons of bronze of the highest quality. Had it not been lost at sea, that metal would have been enough to outfit a force of almost 5,000 Bronze Age soldiers with swords, “not to mention a lot of wine jugs,” Frachetti said.
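A quick back-of-envelope check of the figures quoted above: 11 metric tons of bronze shared among roughly 5,000 soldiers (assuming, purely for illustration, that all of it went into weaponry).

```python
# Arithmetic check: bronze cargo per hypothetical soldier.
bronze_kg = 11 * 1000        # 11 metric tons expressed in kilograms
soldiers = 5000              # Frachetti's "almost 5,000" figure

per_soldier_kg = bronze_kg / soldiers
print(per_soldier_kg)        # -> 2.2
```

About 2.2 kg of bronze each, which is plausibly in the range of a Bronze Age sword plus fittings, consistent with the article's framing.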
“The current findings illustrate a sophisticated international trade operation that included regional operatives and socially diverse participants who produced and traded essential hard-earth commodities throughout the late Bronze Age political economy from Central Asia to the Mediterranean,” Frachetti said.
Unlike the mines in Uzbekistan, which were set within a network of small-scale villages and mobile pastoralists, the mines in ancient Anatolia during the Late Bronze Age were under the control of the Hittites, an imperial power that posed a great threat to Ramses the Great of Egypt, Yener explained.
The findings also show that life 2,000-plus years ago was not that different from what it is today.
“With the disruptions due to COVID-19 and the war in Ukraine, we have become aware of how we are reliant on complex supply chains to maintain our economy, military and standard of living,” Powell said. “This is true in prehistory as well. Kingdoms rose and fell, climatic conditions shifted and new peoples migrated across Eurasia, potentially disrupting or redistributing access to tin, which was essential for both weapons and agricultural tools.
“Using tin isotopes, we can look across each of these archaeologically evident disruptions in society and see whether connections were severed, maintained or redefined. We already have DNA analysis to show relational connections. Pottery, funerary practices, etc., illustrate the transmission and connectivity of ideas. Now with tin isotopes, we can document the connectivity of long-distance trade networks and their sustainability.”
More clues to explore
The current research findings settle decades-old debates about the origins of the metal on the Uluburun shipwreck and Eurasian tin exchange during the Late Bronze Age. But there are still more clues to explore.
After they were mined, the metals were processed for shipping and ultimately melted into standardized shapes — known as ingots — for transporting. The distinct shapes of the ingots served as calling cards for traders to know from where they originated, Frachetti said.
Many of the ingots aboard the Uluburun ship were in the “oxhide” shape, which was previously believed to have originated in Cyprus. However, the current findings suggest the oxhide shape could have originated farther east. Frachetti said he and other researchers plan to continue studying the unique shapes of the ingots and how they were used in trade.
In addition to Frachetti, Powell and Yener, the following researchers contributed to the present study: Cemal Pulak at Texas A&M University, H. Arthur Bankoff at Brooklyn College, Gojko Barjamovic at Harvard University, Michael Johnson at Stell Environmental Enterprises, Ryan Mathur at Juniata College, Vincent C. Pigott at the University of Pennsylvania Museum and Michael Price at the Santa Fe Institute.
The study was funded in part by a Professional Staff Congress-City University of New York Research Award, in addition to a research grant from the Institute for Aegean Prehistory.