Wednesday, March 25, 2026

English history’s biggest march is a myth – King Harold sailed to the Battle of Hastings

 

New research from the University of East Anglia (UEA) reveals that King Harold’s legendary 200‑mile march to the Battle of Hastings in 1066 never happened.

Instead, the journey was made largely by sea.

The findings overturn one of the most iconic stories in English history – altering how the Norman Conquest is understood in classrooms, museums, and public memory.

The news comes as the Bayeux Tapestry prepares to travel from France to the UK for display at the British Museum later this year.

For more than two centuries, historians have repeated a misinterpretation of the Anglo‑Saxon Chronicle – one of the earliest and most complete written records of English history.

The Chronicle seems to imply that Harold dismissed his fleet in early September 1066, leaving him no choice but to rush his troops south from Stamford Bridge in Yorkshire on foot.

It records that the ships “came home” – a phrase Victorian historians mistakenly interpreted as meaning Harold had disbanded the navy. It was this narrative that shaped later accounts of the Norman Conquest.

Prof Tom Licence, Professor of Medieval History and Literature at UEA, has now shown that the ships returned to London, their home base, and remained operational throughout the year.

He said: “I noticed multiple contemporary writers referring to Harold's fleet, while modern historians were dismissing those references or trying to explain them away.

“I checked the evidence for him having sent the fleet home and found that it was just a misunderstanding. I went looking in the sources for evidence of a forced march and found there wasn't any.”

Prof Licence is keen to cast King Harold’s response to William’s invasion in a new light.

He said: “Harold’s campaign was not a desperate dash across England, it was a sophisticated land‑sea operation. The idea of a heroic march is a Victorian invention that has shaped our understanding, or misunderstanding, of 1066 for far too long.”

Contemporary sources describe Harold sending hundreds of ships to block Duke William after the Norman landing. These references previously caused confusion because historians assumed Harold had no fleet left.

Prof Licence said: “Harold's ‘missing’ fleet was used to defend the south coast, then to support his campaign against Harald Hardrada, and finally to rush back south after the Battle of Stamford Bridge ready to face Duke William of Normandy.”

Prof Michael Lewis, Head of the Portable Antiquities Scheme at the British Museum, and Curator: Bayeux Tapestry Exhibition, said: “With the Bayeux Tapestry coming to the British Museum later this year, Prof Tom Licence's research shows there is much still to be learned about the events of 1066.

“It is clearly a fascinating discovery that following the Battle of Stamford Bridge, Harold took an easier, more logical, trip south by ship to meet Duke William in battle, rather than a long trek overland, as has long been supposed.

“Hopefully this new research inspires people to also come and see the Tapestry whilst it is in London.”

Why the new research matters

The findings challenge one of the best‑known narratives in English history, with consequences for how the Norman Conquest is taught, exhibited, and remembered.

Prof Licence said: “Harold was not a reactive, exhausted commander, he was a strategist using England’s naval assets to wage a coordinated defence.

“This reframes the events of 1066 and highlights a previously overlooked aspect of Anglo‑Saxon maritime capability.”

Prof Licence re‑examined the Anglo‑Saxon Chronicle, which survives today in nine manuscript versions, alongside other 11th‑century sources, correcting the error popularised by Edward Augustus Freeman in the 19th century.

By restoring the fleet to its central role, the research reconstructs Harold’s real strategic choices – from his northern campaign against Harald Hardrada to his planned naval interception of William before Hastings.

Roy Porter, English Heritage Senior Curator of Properties, who oversees Battle Abbey and the Hastings battlefield, said: “Professor Licence’s research shows the immense value of testing received wisdoms, and his conclusions are certain to sustain debate about the circumstances of England’s most famous battle.

“What we know about Harold’s previous military campaigns fits with the idea that he used naval forces to transport soldiers, and threaten William, and there are references in accounts of the Norman invasion which also lend weight to that possibility.

“It’s exciting to consider that Harold’s response may have been far more sophisticated than previously understood, and William’s awareness of this may have informed when he chose to fight.”

Key findings:

Harold never disbanded his fleet

The research demonstrates that Harold’s ships were not dismissed in early September 1066, as long believed. The Anglo‑Saxon Chronicle states that Harold himself returned to London “off ship,” that is, from the south coast, when he heard of Harald Hardrada’s arrival.

The famous 200‑mile march is a Victorian invention

No contemporary source describes a forced march. The term was introduced by Victorian historians and became received wisdom. A sea voyage from the Humber to London was faster, safer, and far more consistent with the Chronicle’s account.

Comparative evidence shows the march is unrealistic: even well‑equipped American Civil War forces covered only around 100 miles in five days, and that under exceptional conditions.

Prof Licence said: “Harold’s weary, unmounted men covering nearly 200 miles in ten days and then continuing straight to the Hastings peninsula is implausible given medieval roads and the aftermath of battle.

“Only a mad general would have sent all his men on foot in this way if ship transports were available.”

Past criticism of Harold marching south with ‘reckless and impulsive haste’, as one historian puts it, is therefore unfounded. His men had time to rest.

Harold used the fleet against Harald Hardrada

The Chronicle uses the Old English term lið, normally translated as “fleet”, to describe the force Harold gathered at Tadcaster before marching on Stamford Bridge.

This indicates the English king deployed both naval and land forces against Harald Hardrada – a detail that has caused much confusion because historians wrongly believed the fleet was already scattered.

Harold attempted a naval pincer movement against Duke William

Early accounts describe Harold sending hundreds of ships south after William’s landing. Far from marching alone, Harold was coordinating a land‑sea pincer designed to trap the Normans in the Hastings peninsula.

The fleet likely arrived too late, costing Harold his archers and cutting‑edge troops.

Evidence suggests a naval battle in early October 1066

The study also revives evidence for a forgotten naval clash. Both Domesday Book and the Annales Altahenses hint at an English sea engagement during the campaign.

These references were previously hard to explain, but now, reconsidered alongside this research, become plausible and historically significant. The English fleet arrived too late to save the day but may have clashed with William’s ships guarding his base at Hastings.

Making the findings public

Prof Tom Licence will present his findings at the University of Oxford on 24 March at The Maritime and Political World of 1066 conference.

In his talk, he will explain how a series of misunderstandings gave rise to the famous “forced march” story, reveal new evidence for Harold’s active fleet, and discuss how these findings reshape the story of 1066 taught in classrooms, displayed in museums, depicted in the recent BBC drama King and Conqueror, and told in the Bayeux Tapestry.

Tuesday, March 24, 2026

New study challenges the age of a key human occupation site in South America

 

Following the first independent investigation in fifty years of Monte Verde – a landmark archaeological site in Chile – researchers report it may be much younger than previously believed. According to the study, Monte Verde dates to between ~8,000 and 4,000 years ago, not 14,500 years, as previously thought. The findings reshape the story of the continent’s first settlers (though they don’t rule out pre-Clovis human presence in South America, as supported by other sites); they also highlight the need for independent verification of old archaeological sites.

Monte Verde is one of the most important archaeological sites for understanding when humans first reached South America, the last continent colonized by humans. Excavations of the site’s Monte Verde II component uncovered stone tools and well-preserved organic materials, such as wooden artifacts, cordage, and fossil remains of extinct Pleistocene fauna. Earlier dating suggested that the site was occupied roughly 14,500 years ago, making it nearly 1,500 years older than the Clovis culture – the once-dominant benchmark for earliest human settlement in the Americas.

Although widely accepted as key evidence of a pre-Clovis human presence in southern South America, the findings from Monte Verde have long been debated. Critics have questioned whether the artifacts, sediment layers, and radiocarbon dates are truly associated, raising possibilities such as redeposited ancient material or dating inaccuracies that could exaggerate the site’s age.

 

Now, Todd Surovell and colleagues show that the antiquity of Monte Verde may have been overestimated. Surovell et al. reexamined the age and geological context of Monte Verde II by describing, sampling, and dating nine sediment exposures along the banks of the nearby Chinchihuapi Creek. The analysis shows that the abandoned floodplain on which the site resides is far more complex than previously understood.

According to the findings, the area contains layers of sand and gravel deposited by glacial meltwater between about 26,000 and 15,500 years ago, followed by deposits of ancient wood, marsh sediments, and a volcanic ash layer identified as the regionally widespread Lepúe Tephra, which is well-dated to roughly 11,000 years ago. The authors argue that because the floodplain deposit containing the archaeological site sits above this ash layer, it must be younger than 11,000 years. Further radiocarbon dating of wood and peat from the floodplain sediments produced ages between ~8,200 and 4,100 years, indicating that the deposit formed during the Middle Holocene.

The authors suggest that the earlier dates reported for the site were likely influenced by Late Pleistocene-age materials from older sediments that were redeposited into the site via erosion. The findings suggest that Monte Verde II is Middle Holocene in age or younger, challenging earlier interpretations that placed the site much earlier in the late Ice Age.

 

In a Perspective, Jason Rech discusses the study as well as the implications of the findings. “Although Monte Verde grounded chronologies for early colonization of the Americas for decades, the landscape is different now, with more sites that appear to be older than the Clovis culture,” Rech writes. “Yet as Surovell et al. conclude, their findings highlight the need for independent verification of old archeological sites.”

How Southern Andean communities adopted farming and endured crises

 


Illustration representing population movements within the Southern Andes as a resilience strategy to face crises. Credit: Mauricio Álvarez - studio FIEL®

A new interdisciplinary study published in Nature reconstructs over 2,000 years of population history in Argentina’s Uspallata Valley (UV), a southern frontier of the ancient spread of Andean farming, with broader lessons on how agriculture shaped societies and how communities endured crises. By combining ancient human and pathogen genomics with isotopic analyses, archaeology, and paleoclimate records, and working in close collaboration with Huarpe Indigenous communities, the research reveals how local hunter-gatherers adopted agriculture, how later intensive maize farmers experienced prolonged stress, and how kinship-based mobility may have helped communities persist through instability.

A central question in studies of farming’s spread is whether agriculture expanded mainly as farmers moved into new regions, or as local hunter-gatherers adopted crops and techniques through cultural transmission. Archaeology by itself often cannot distinguish the two with confidence, since both can leave comparable traces in material culture. The Uspallata Valley, at the southern margin of Andean farming spread, offers a key setting to disentangle these scenarios, because agriculture arrived in this region much later than in major domestication centres across South America. 

This work, led by the Microbial Paleogenomics Unit (MPU) at the Institut Pasteur, generated genome-wide ancient DNA data from 46 individuals spanning the earlier hunter-gatherer period to later farming populations. The results reveal strong genetic continuity between hunter-gatherers (~2,200 years ago) living in the region before farming was adopted and those living more than a millennium later as maize farming, along with other crops, expanded.

The study also helps unravel long-term population history in the southern Andes. “Beyond the local story of Uspallata, we are also filling a gap in South American human genetic diversity by documenting a genetic component that was previously only suggested by analysing present-day populations, and that now proves to have a very deep divergence and current persistence in the region,” explains Pierre Luisi, co-first author of the study and a researcher at CONICET, Argentina, who started this work as a postdoc in the MPU at the Institut Pasteur, France. “The persistence of this ancestral genetic component in populations today has important implications, since it argues against narratives claiming the extinction of indigenous descendants in the region since the establishment and growth of the Argentine state-nation.”

To reconstruct how people lived, the team combined genetics with chemical signals preserved in bones and teeth, called stable isotopes. Carbon and nitrogen isotopes reveal an average of the foods eaten over a lifetime, while strontium isotopes reflect the area where a person lived, and can thus indicate whether individuals moved during life. These analyses show that maize consumption fluctuated in UV over time, consistent with flexible farming rather than a progressive transition into strong farming dependence. But between ~800 and 600 years ago, the record shows a different story at one major cemetery site called Potrero Las Colonias: most individuals show an exceptionally high reliance on maize – among the highest documented for the southern Andes – and non-local strontium signatures, indicating that they were migrants. Who were these non-local farmers, and where were they coming from?

Isotopic and genetic data indicated that these migrations occurred within a constrained geographic range rather than across distant, previously unrelated areas. Migrants were genetically close to local groups and belonged to the same metapopulation. Yet genomic analyses show that this migrant group experienced strong and sustained demographic decline, suggesting a shrinking population under persistent stress over many generations. 

Multiple lines of evidence indicate that these farmers faced a multidimensional crisis. At a larger temporal scale, paleoclimate records point to prolonged climatic instability, coinciding with the demographic decline. At shorter temporal scales (individuals’ lives), skeletons show markers consistent with nutritional stress during childhood and infection. Indeed, ancient DNA revealed the presence of tuberculosis at the site, with the detected strain falling within a lineage known from pre-contact South America. Its presence far south of previously documented contexts in Peru and Colombia raises new questions about routes of spread and the ecological conditions that sustained this infectious disease. “Detecting tuberculosis this far south in a pre-contact context is striking,” says Nicolás Rascovan, head of the Microbial Paleogenomics Unit at the Institut Pasteur. “It expands the geographic frame for understanding how tuberculosis circulated in the past and highlights the value of integrating pathogen genomics into broader reconstructions of human history.”

Genomic kinship analyses add another key layer: many of the migrants were closely related but not buried at the same time – consistent with sustained, concerted, and transgenerational movement into UV over decades. The large kinship network was structured mainly through maternal links, and a single mitochondrial lineage dominated among the migrants, suggesting an important role for women in maintaining family continuity and organizing mobility. Importantly, there is no evidence of violence, and locals and migrants were occasionally buried within shared mortuary contexts, pointing to peaceful coexistence between groups in the region.

Together, these findings suggest that kinship-based migration and strong family bonds functioned as resilience strategies during a period of concurrent pressures – environmental instability, food insecurity, and disease. “No farming community abandons fields and homes lightly,” says the archaeologist and co-first author Ramiro Barberena, a researcher at CONICET. “Our results are most consistent with people moving under force majeure, relying on family networks to navigate crisis.” Barberena adds: “Understanding how these transitions unfolded and what they meant for demography, economy, and health helps us better grasp the pathways that shaped today’s societies – and to think about risks and challenges of climate change and demographic pressures.”

The study also highlights the ethical and integrative value of the research, which was done in close interaction with indigenous communities. Huarpe community members were actively involved throughout the project, contributed to interpretation and narrative framing, and three of them co-authored the article (Claudia Herrera, Graciela Coz and Matías Candito). Regular meetings with the research team allowed discussion of permissions, uncertainties, and how results would be communicated. A Spanish translation with non-specialist explanations accompanies the study to facilitate local access.

“Archaeology and paleogenomics are not neutral when they involve the ancestors of living people,” says Rascovan. “Working with communities changes how we do science: it shapes the questions we ask, how we interpret evidence, and how we communicate what we can – and cannot – conclude.”

More broadly, the study highlights that one of the most transformative processes in human history–the adoption of agriculture–did not unfold in a single, universal way, but followed diverse paths shaped by local environments and social networks. By combining genetics, isotopes, archaeology, climate records and pathogen evidence, this work shows how past communities coped with overlapping pressures of environmental instability, food stress and disease. Understanding how people navigated crisis in the past–including the role of family ties and cooperation networks–offers a deep-time and wider perspective that can inform how we think about resilience in the face of today’s climate and health challenges. 
 


Neanderthals used birch tar for its anti-bacterial properties

 Neanderthals probably used birch tar for multiple functions, including treating their wounds, according to a study published March 18, 2026 in the open-access journal PLOS One by a team of researchers led by Tjaark Siemssen of the University of Cologne, Germany, and the University of Oxford, U.K.

Birch tar is commonly found at Neanderthal archaeological sites, and in some cases this tar is known to have been used as an adhesive to assemble tools. Recently, some researchers have raised the question of whether Neanderthals had multiple uses for this substance. For instance, Indigenous communities in northern Europe and Canada use birch tar to treat wounds, and there is growing evidence that Neanderthals also employed a variety of medical practices.

To investigate the medicinal potential of birch tar, Siemssen and colleagues extracted tar from modern birch tree bark, specifically targeting species known from Neanderthal sites. They used multiple extraction methods, including distillation of tar in a clay pit and condensation of tar against a stone surface, both of which would have been methods available to Neanderthals. When exposed to different strains of bacteria, all of the tar samples were found to be effective at hindering the growth of Staphylococcus bacteria known to cause wound infections.

These experiments not only support the efficacy of Indigenous medicinal practices, but also reinforce the possibility that Neanderthals used birch tar to treat wounds. The authors note that there are other potential uses of birch tar, such as insect repellent, as well as other plants to which Neanderthals had access. Further exploration of the multiple potential uses of these natural ingredients will enable a more thorough understanding of Neanderthal culture.

The authors add: “We found that the birch tar produced by Neanderthals and early humans had antibacterial properties. This has important implications for how Neanderthals may have mitigated disease burden during the last Ice Ages, and adds to a growing set of evidence on healthcare in these early human communities.”


“By bringing together research on indigenous pharmacology and experimental archaeology, we begin to understand the medicinal practices of our distant human ancestors and their closest cousins. Additionally, this study of 'palaeopharmacology' can contribute to the rediscovery of antibiotic remedies whilst we face an ever more pressing antimicrobial resistance crisis.”


“The messiness of birch tar production deserves a special mention. Every step of the production is a sensory experience in itself, and getting the tar off our hands after spending hours at the fire has been a challenge every time.”

 

Monday, March 23, 2026

Democracy has deep global roots—not just Greece and Rome

 


Peer-Reviewed Publication

Field Museum

Teotihuacan: wide open plaza and avenues in the ancient Mexican city, a society in which people had more voice. Credit: Photo by Linda Nicholas, Field Museum.

A new study on ancient societies from around the world is rewriting what we thought we knew about democracy. A team of researchers analyzed archaeological and historical evidence from 31 ancient societies across Europe, Asia, and the Americas and found that shared, inclusive governance was far more common than was once believed.

“People often assume that democratic practices started in Greece and Rome,” said Gary Feinman, the study’s lead author and the MacArthur Curator of Mesoamerican and Central American Anthropology at the Field Museum’s Negaunee Integrative Research Center. “But our research shows that many societies around the world developed ways to limit the power of rulers and give ordinary people a voice.”

In an autocracy, just one person or a small group holds all the power; examples of autocracy can include absolute monarchies and dictatorships. In a democracy, decision-making power is shared among the people. Elections often go hand-in-hand with democracy, but not always—many autocrats have been freely elected.

“Elections aren’t exactly the greatest metric for what counts as a democracy, so with this study, we tried to draw on historical examples of human political organization,” says Feinman. “We defined two key dimensions of governance. One of them is the degree to which power is concentrated in just one individual or just one institution. The other is the degree of inclusiveness—how much the bulk of the citizens have access to power and can participate in some aspects of governance.”

Feinman and his colleagues examined 40 cases from 31 different political units across Europe, North America, and Asia, spanning thousands of years. These societies all had different methods of record-keeping, and not all of them left behind written records. So, the team had to find different ways to infer what the governments in these historical contexts were like.

“I think the use of space is very telling,” says Feinman. “When you find urban areas with broad, open spaces, or when you see public buildings that have wide spaces where people can get together and exchange information, those societies tend to be more democratic.”

On the other hand, some architectural and city-planning remnants indicate a society where fewer people concentrated power. “If you see pyramids with a tiny space at the top, or urban plans where all the roads run toward the ruler’s residence, or societies where there’s very little space where people could get together for exchanging information, those are all proxies for more autocratic cases,” says Feinman.

The team examined the 40 cases that had been documented by generations of archaeologists and historians, and systematically analyzed different aspects of the places' architecture, art, and urban planning. For instance, artwork depicting rulers as larger than life and monumental gravesites associated with rulers both point towards greater autocracy, whereas open plazas and rare portrayals of rulers are indicators of less concentrated power.

The study uses buildings, inscriptions, city layouts, administrative systems, and signs of wealth inequality to measure how societies balanced political power and what factors contributed to the axes of variation in governance that they recorded. The team created an “autocracy index” to place each society along a spectrum—from highly autocratic to strongly collective.

“Among archaeologists, there’s entrenched thought that Athens and Republican Rome were the only two democracies in the ancient world, and that in Asia and the Americas, governance was tyrannical or autocratic,” Feinman says. “In our analysis, we saw societies in other parts of the world that were equally democratic to Athens and Rome.”

“These findings show that both democracy and autocracy were widespread in the ancient world,” observes New York University Professor David Stasavage.

Coauthor Linda Nicholas, Adjunct Curator of Anthropology at the Field Museum, notes that “societies also developed ways for people to share power and facilitate inclusiveness, revealing that democracy has deep and widespread historical roots. I think a lot of people would find that surprising.”

The researchers found that population size and the number of political levels did not account for whether a society would be autocratic, which challenges the established idea that demographic and political scale naturally leads to strong rulers. Instead, notes Feinman, “the strongest factor shaping how much power rulers held was how they financed their authority.” Societies that depended heavily on revenue that was controlled or monopolized by leaders—such as mines, long-distance trade routes, slave labor, or war plunder—tended to become more autocratic. In contrast, societies funded mainly through broad internal taxes or community labor were more likely to distribute power and maintain systems of shared governance.

The study also shows that societies with more inclusive political systems generally had lower levels of economic inequality. “These findings challenge the idea that autocracy and great inequality are natural or inevitable outcomes of complexity or growth,” said Feinman. “History shows that people across the world have created inclusive political systems—even under difficult conditions.”

That bigger picture is especially relevant because today, we are experiencing a concentration of wealth and power among a very small number of individuals. A better understanding of the hallmarks of autocracy and democracy can help us identify threats and pump the brakes on burgeoning totalitarian regimes. “When you do archaeology, you’re looking for patterns that contain potential lessons for the world today,” says Feinman. “Our findings in this study give us a perspective and guidance that we didn’t have before, and they're extremely relevant to our lives.”

Contributors to the study include Gary M. Feinman, David Stasavage, David M. Carballo, Sarah B. Barber, Adam Green, Jacob Holland-Lulewicz, Dan Lawrence, Jessica Munson, Linda M. Nicholas, Francesca Fulminante, Sarah Klassen, Keith W. Kintigh, and John Douglass.

 


15,000 years ago, children shaped clay, long before pottery or farming


A butterfly clay bead from the Final Natufian period in Eynan-Mallaha (Upper Jordan Valley), colored red with ochre and marked with the fingerprints of the child (≈10 years old) who modeled it 12,000 years ago. Four other beads discovered in other villages were also modeled by children. The study presents the largest collection of Paleolithic fingerprints known today. Credit: Laurent Davin


Long before pottery, before agriculture, when the first villages took shape, people in the Levant were already molding clay with their hands, carefully, deliberately, and sometimes playfully. Some of those hands belonged to children.

Link to pictures:  https://drive.google.com/drive/folders/17O5vHUq6flnwqxNFzYejs0PqDB6JmBUz?usp=sharing

An international team of archaeologists led by Laurent Davin, a postdoctoral researcher at the Institute of Archaeology at the Hebrew University of Jerusalem under the supervision of Prof. Leore Grosman, has uncovered the earliest known clay ornaments in Southwest Asia, revealing a forgotten chapter in the story of how humans began to express identity, belonging, and meaning through material culture. The findings, published this week in Science Advances, push back the symbolic use of clay in the region by thousands of years.

The ornaments, 142 beads and pendants, were made some 15,000 years ago by Natufian hunter-gatherers living in what is now Israel. These communities were the first in the world to settle permanently in one place, millennia before the rise of agriculture. Until now, clay in this period was thought to play little or no ornamental role. In fact, only five clay beads from this era were previously known worldwide.

“This discovery completely changes how we understand the relationship between clay, symbolism, and the emergence of settled life,” said Laurent Davin.

A Hidden Tradition Emerges

The ornaments were found at four Natufian sites: el-Wad, Nahal Oren, Hayonim, and Eynan-Mallaha, spanning more than three millennia of occupation. Small enough to fit in the palm of a hand, the beads were carefully shaped from unbaked clay into cylinders, discs, and ellipses. Many were coated in red ochre, using a technique known as engobe, a thin layer of liquid clay smoothed onto the surface.

This is the earliest known use of this coloring technique anywhere in the world.

The sheer number and diversity of the beads reveal something unexpected: this was not an isolated experiment, but a sustained tradition. Clay, it turns out, had already become a medium for visual communication, long before it was used for bowls or jars. 

Inspired by Plants and Daily Life

Nineteen distinct bead types were identified, many echoing the shapes of plants that were central to Natufian life: wild barley, einkorn wheat, lentils, and peas. These were the same plants the Natufians harvested, processed, and consumed intensively, plants that would later form the backbone of agriculture.

Traces of plant fibers preserved on some beads show how they were strung and worn, offering rare insight into organic materials that usually disappear from the archaeological record.

Together, the ornaments suggest that nature, especially the plant world, was not just a source of food, but a source of meaning.

Made by Children and Adults

Perhaps the most striking discovery lies not in the shapes of the beads, but in their surfaces.

Preserved fingerprints, 50 in total, allowed researchers to identify who made them. The prints belong to individuals of different ages: children, adolescents, and adults. It is the first time archaeologists have been able to directly identify the makers of Paleolithic ornaments, and the largest such fingerprint assemblage ever documented from this period.

Some objects appear to have been designed specifically for children, including a tiny clay ring just 10 millimeters wide.

The findings suggest that making ornaments was a shared, everyday activity, one that played a role in learning, imitation, and the transmission of social values from one generation to the next.

A Quiet Symbolic Revolution

For decades, archaeologists believed that symbolic uses of clay in Southwest Asia emerged only with farming and the Neolithic way of life. This study, together with the recent discovery of a clay figurine at Nahal Ein Gev II, overturns that assumption.

Instead, it shows that a “symbolic revolution” began earlier, during the first stages of sedentarization, when communities were still hunting and gathering but beginning to live in permanent settlements. Clay ornaments became a way to express identity, affiliation, and social relationships, visually and publicly.

“These objects show that profound social and cognitive changes were already underway,” said Prof. Leore Grosman. “The roots of the Neolithic lie deeper than we once thought.”

By documenting one of the world’s oldest traditions of clay adornment, the study reframes the Natufians not just as forerunners of agriculture, but as innovators of symbolic culture, people who used clay to say something about who they were, and who they were becoming.

 

DOI: 10.1126/sciadv.aea2158