Feed aggregator

Pioneering Spanish experience in climate shelters practice

Nature Climate Change - Fri, 03/20/2026 - 12:00am

Nature Climate Change, Published online: 20 March 2026; doi:10.1038/s41558-026-02587-z

As cities heat up, climate shelters are increasingly vital for protecting people from extreme heat. Beyond temporary emergency stopgaps, Spain’s pioneering experience shows how climate, health and governance align to turn these spaces into enduring infrastructures of care and resilience.

Preserving Keres

MIT Latest News - Thu, 03/19/2026 - 3:35pm

Growing up in the village of Kewa — located between Santa Fe and Albuquerque in New Mexico — William Pacheco, a member of the Santo Domingo Pueblo, learned the value of his language, its history, and the traditions it carries.

“We speak Keres, a language isolate found in seven villages and communities in central New Mexico,” he says. “It’s an endangered language with fewer than 10,000 speakers.” The Pueblos’ conception of ‘language,’ according to Pacheco, evokes the idea that speaking “comes from deep within.”

Pacheco is a graduate student in the MIT Indigenous Languages Initiative, a special master’s program in linguistics for members of communities whose languages are threatened. The two-year program provides its graduates with the linguistic knowledge to help them keep their communities’ languages alive. The initiative also offers expanded opportunities for students and faculty to work on Indigenous and endangered languages, collaborating with both the native-speaker linguists in the master’s program and outside groups, an approach that appealed to him.

“There’s some complexity to our language that defies traditional instruction,” says Pacheco, who will complete his studies this spring. “I want to develop the linguistic tools I need to improve my understanding of its construction and how best to teach and preserve it.” Pacheco is keenly aware of cultural differences in how language transmission occurs. Language, he believes, evolves over time and is best learned experientially; the Western model of language learning prioritizes immediacy and test-taking.

A variety of factors complicate efforts to preserve and potentially teach Keres. Each of the villages where it’s spoken has its own distinct dialect, and these dialects are mutually intelligible to varying degrees from village to village. Additionally, the last three decades have seen a significant increase in English usage by young Pueblos, which further endangers Keres’ existence.

Furthermore, Keres isn’t a written language. For centuries, the Pueblo have relied on daily use within their homes and communities to maintain its vitality. “The community doesn’t want it written,” Pacheco says. 

Contact with the wider world has previously imperiled Indigenous ideas, an outcome Pacheco wants to avoid. “We believe [Keres] is a form of intellectual property, a tradition and artifact that is best served by empowering our people to preserve it,” he says.

From the Southwest to MIT

While he’s now passionate about linguistics, languages weren’t Pacheco’s first choice when considering an educational path. “I always admired [MIT alumnus and Nobel laureate] Richard Feynman,” he recalls. “I wanted to study physics.”

After earning an undergraduate degree from the University of New Mexico, Pacheco, who’d been working as a K-12 educator, began efforts to preserve Keres, increasing the language’s vitality and preserving its usefulness for, and value to, future generations. He sought permission and certification from the tribe to teach the language at the Santa Fe Indian School, an off-reservation boarding school. He soon discovered that a traditional Western approach to language learning wouldn’t suffice.

“Students weren’t taking the course to be scholars of the language; they wanted to learn it to build community and create opportunities to connect with elders,” Pacheco says. It was students’ advocacy, he notes, that led to the Keres learning initiative. While designing the course, however, he found gaps in his knowledge that led him to consider graduate study. 

“There are fascinating idiosyncrasies in Keres, including, for example, verb morphology — the ways in which verbs and verb sounds change,” he notes. “I wasn’t sure about how to teach them.” He sought to improve his understanding and ability by earning a master’s degree in learning design, innovation, and technology from Harvard University. While completing his studies there, he had another burst of inspiration.

“I thought a background in linguistics would prove useful,” he says. “An advisor told me about the Indigenous Languages Initiative at MIT and recommended I apply.” Pacheco knew of Professor Emeritus Noam Chomsky’s pioneering work in generative linguistics at the Institute and sought to learn more about the field’s potential to help him become a better, more effective educator and linguist. 

Upon arriving at MIT in 2024, Pacheco found himself embraced by faculty and students alike. “[MIT linguists] Adam Albright and Norvin Richards have been wonderfully supportive mentors, offering enthusiasm and expertise,” he says. “I’ve benefited from MIT’s approach to linguistics and its use of scientific inquiry as a tool to explore language.” Engaging with other students working to preserve languages at risk of extinction continues to drive his work.

“MIT continually encourages us to use its resources, to collaborate, and to help one another find solutions to our unique challenges,” he says. “Networking, gathering good ideas, and having access to professors and students from a variety of disciplines is incredibly valuable.” 

MIT’s scholars, Pacheco says, are experienced with Indigenous language learning, education, and pedagogy.

Developing an organized approach to Keres research and instruction

While gratified that his work created opportunities for him to preserve and teach Keres, Pacheco marvels at his path to the Institute and its impact on his life. “It was my language, not my interest in physics, which led me to Harvard and MIT,” he says. “How did I end up at these places?”

An advantage of language and linguistics education at MIT is the rigor with which it explores language acquisition modeling and allows for alternatives to established systems. Pacheco is after new ideas for Keres language learning and education, working to develop an effective course based on generative linguistics that both preserves the Pueblos’ approach to community and offers an educational model students are likely to embrace. He’s already had opportunities to test novel theories and practices as an educator back home. 

“I was teaching students to use Keres as a programming tool,” he says. “We modeled a robot as a member of the community navigating a maze, and students would have to teach it to accept commands in Keres.” 

Pacheco also wants to explore community-centered language issues. He wants to standardize the training of community linguists, creating a cohort of scholars who are deeply invested in Keres’ preservation and instruction and trained to use the tools he designs.

“We want to drive inquiries into Keres and how it’s taught,” he says, “while also centering Indigenous knowledge systems and expanding access to linguistics study for Indigenous scholars.”

Pacheco believes there’s value in exposing scholars and communities to the cultural and ideological exchanges he’s enjoyed between the sciences, humanities, Indigenous ideas, and experiences. “Indigenous scholars exist at MIT,” he says. “We’re here, and the Institute’s support helps preserve languages like Keres as important communal and cultural artifacts.” 

Pacheco is grateful for the opportunities his research at MIT has afforded him. While his education as a linguist and scholar continues, Pacheco’s community, culture, and support for Keres language learning remain top priorities.

“I want to amplify the impact in tribal language policy and Indigenous-centered education,” he says. “Language, its study, and its transmission is both science and art.”

24 states sue over Trump’s climate rollback

ClimateWire News - Thu, 03/19/2026 - 1:19pm
The president is “choosing Big Oil profits over our health,” California's attorney general said in a statement.

Improving cartilage repair through cell therapy

MIT Latest News - Thu, 03/19/2026 - 9:50am

Researchers have developed a new method for monitoring iron flux — the movement and rate at which cells take in, store, use and release iron — in stem cells known as mesenchymal stromal cells (MSCs). The system can provide insights within a minute about a cell’s ability to grow cartilage tissue for cartilage repair. 

The breakthrough offers a promising pathway toward more consistent and efficient manufacturing of high‑quality MSCs for regenerative therapies to treat joint diseases such as osteoarthritis, chronic joint degeneration conditions, and cartilage injuries.

The work was led by researchers from the Critical Analytics for Manufacturing Personalized-Medicine (CAMP) group within the Singapore-MIT Alliance for Research and Technology (SMART), and was supported by the SMART Antimicrobial Resistance (AMR) research group, in collaboration with MIT and the National University of Singapore (NUS).

A paper describing the work, “Cellular iron flux measurement by micromagnetic resonance relaxometry as a critical quality attribute of mesenchymal stromal cells,” was published in February in the journal Stem Cells Translational Medicine.

Regenerative therapies hold significant promise for patients, with the potential to repair damaged tissues rather than simply manage symptoms. However, one of the biggest challenges in bringing these therapies to patients lies in the unpredictable chondrogenic potential of MSCs — a cell’s ability to develop into and form cartilage tissue — during the in vitro manufacturing process.

Even when grown under controlled laboratory conditions, MSCs are prone to losing some of their potential and ability to form cartilage tissue, leading to inconsistent cartilage repair outcomes due to the varying quality of MSC batches. Existing tests that evaluate MSCs’ cartilage‑forming potential are destructive, irreversibly damaging the cells being tested and rendering them unusable for further therapeutic or manufacturing purposes.

In addition, the tests require a prolonged — up to 21-day — period for cells to grow. This slows decision‑making, extends production timelines, and can hinder the timely translation of MSC-based therapies into clinical use and delay treatment for patients. As MSCs can lose chondrogenic potential during this process, early assessment is essential for manufacturers to determine whether a batch should be continued or discontinued. Hence, there is a need for a reliable and rapid method to predict MSCs’ chondrogenic potential during the cell manufacturing process.

The new development is a rapid, non-destructive method to monitor iron flux in MSCs by measuring iron changes in spent media — residual components in the culture medium after cell growth. Using an inexpensive benchtop micromagnetic resonance relaxometry (µMRR) device, the approach enables real‑time monitoring of cellular iron changes without damaging the cells. The device can be easily integrated into existing laboratories and manufacturing workflows, enabling routine, real‑time quality monitoring without significant infrastructure or cost barriers.

Iron homeostasis is a critical process that keeps cellular iron at normal levels for cell function, balancing the supply of iron for essential processes against toxic accumulation. The study found that iron homeostasis is highly correlated with an MSC’s chondrogenic potential: significant iron uptake and accumulation reduce a cell’s ability to form cartilage. The researchers also found that supplementing the cell growth process with ascorbic acid (AA) helps regulate iron homeostasis by limiting iron flux, thereby improving the MSC’s chondrogenic potential.

Using this novel method, spent media are collected as samples and treated with AA. The µMRR device is then used to track and provide real-time insights into small iron concentration changes within the spent media. These iron concentration changes reflect how MSCs take up and release iron and can provide an early indicator of whether a batch is likely to succeed in forming good cartilage.
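The batch decision described above can be sketched as a simple rule on the measured iron time series: estimate the rate of iron change in spent media, and flag batches whose steep iron uptake suggests reduced chondrogenic potential. This is a hypothetical illustration of the decision logic only; the function names, units, and threshold are assumptions, not values from the paper.

```python
def iron_flux(times_h, iron_uM):
    # Least-squares slope of iron concentration in spent media over time.
    # A negative slope means cells are taking iron up from the media.
    n = len(times_h)
    mt = sum(times_h) / n
    mc = sum(iron_uM) / n
    num = sum((t - mt) * (c - mc) for t, c in zip(times_h, iron_uM))
    den = sum((t - mt) ** 2 for t in times_h)
    return num / den

def flag_batch(times_h, iron_uM, uptake_threshold=-0.5):
    # Hypothetical rule: a steep drop in media iron indicates heavy cellular
    # uptake/accumulation, which the study links to reduced chondrogenic
    # potential. The threshold is illustrative, not from the paper.
    return "review" if iron_flux(times_h, iron_uM) < uptake_threshold else "continue"
```

For example, a batch whose media iron falls rapidly over the first hours of culture would be flagged for review early, rather than after a 21-day differentiation assay.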

These findings allow manufacturers not only to monitor MSC quality for cartilage repair in real time, but also to assess when, and to what extent, interventions such as AA supplementation are likely to be beneficial, supporting efficient manufacturing of more effective and consistent MSC‑based therapies.

“One of the key challenges in cartilage regeneration is the inability to reliably predict whether MSCs will retain their chondrogenic potential during manufacturing. Our study addresses this by introducing a rapid, non-destructive method to monitor iron flux dynamics as a novel critical quality attribute (CQA) of MSCs' chondrogenic capacity. This approach enables early identification of suboptimal cell batches during culture, enhancing quality control efficiency, reducing manufacturing costs, and accelerating clinical translation,” says Yanmeng Yang, CAMP postdoc and first author of the paper.

“Our research sheds light on a fundamental biological process that, until now, has been extremely difficult to measure. By monitoring iron flux in real-time without destroying the cells, we can gain actionable insights into a cell batch’s chondrogenic potential, which allows for early decision-making during the manufacturing process. The findings support µMRR‑based iron monitoring as an effective quality control strategy for MSC-based therapy manufacturing, paving the way for more consistent and clinically viable regenerative medicine for cartilage regeneration,” says MIT Professor Jongyoon Han, co-head CAMP PI, AMR PI, and corresponding author of the paper.

This method represents a promising step toward improving manufacturing consistency and functional characterization of MSC-based cellular products. Beyond advancing cell therapy manufacturing, it contributes to the scientific community studying iron biology by providing real-time iron flux measurements that were previously unavailable. The research also advances clinical translation of high-quality cell therapies for cartilage regeneration, bringing these closer to patients with joint degeneration conditions and cartilage injuries.

Building on these findings, the researchers plan to carry out future preclinical and clinical studies to expand this approach beyond quality control in manufacturing, with the aim of establishing µMRR as a validated method for the clinical translation of MSC-based therapies in patients for cartilage repair.

The research, conducted at SMART, was supported by the National Research Foundation Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) program.

Why targeting Kharg Island could backfire on Trump

ClimateWire News - Thu, 03/19/2026 - 7:02am
The president’s attacks on Iran’s oil infrastructure could determine the course of the war — and its domestic political fallout.

Bipartisan ESA reform evolves in Senate

ClimateWire News - Thu, 03/19/2026 - 7:01am
Senators project optimism about changes to the Endangered Species Act, although staffing levels could be a point of contention.

Fervo inks financing deal for first geothermal plant

ClimateWire News - Thu, 03/19/2026 - 7:00am
The company's Cape Station is a bellwether for whether advanced geothermal can deliver carbon-free power around the clock.

Mullin addresses FEMA funding during confirmation hearing

ClimateWire News - Thu, 03/19/2026 - 6:57am
Sen. Markwayne Mullin, nominee for Homeland Security secretary, said he would “absolutely” change a policy on approval for smaller payments.

Virginia lawmakers pass extreme heat bill for workers

ClimateWire News - Thu, 03/19/2026 - 6:57am
The measure gives state agencies until 2028 to draft standards requiring employers to implement safeguards.

Oregon searches for ways to hit climate goals

ClimateWire News - Thu, 03/19/2026 - 6:54am
Electrification, hydrogen and seafood are among the options state officials say could help cut greenhouse gas emissions.

Hochul says she rebuffed Trump on fracking

ClimateWire News - Thu, 03/19/2026 - 6:54am
Gov. Kathy Hochul continues to push to weaken New York’s landmark 2019 climate law as she points to federal opposition to clean energy.

9 EU countries plot to weaken EU carbon pricing system

ClimateWire News - Thu, 03/19/2026 - 6:53am
Austria, Croatia, Czechia, Greece, Hungary, Italy, Poland, Romania and Slovakia met in Brussels to coordinate their mutual concerns with the Emissions Trading System.

UK set to publish green homes plan amid Iran energy shock

ClimateWire News - Thu, 03/19/2026 - 6:53am
The Future Homes Standard will likely be presented as an essential step to reduce U.K. reliance on fossil fuels and to cut energy bills.

EVs avoided the use of 2.3M barrels of oil per day in 2025

ClimateWire News - Thu, 03/19/2026 - 6:52am
BloombergNEF projects that by 2030, avoided worldwide daily consumption could reach 5.25 million barrels.

Oil, gas majors cut green spending for first time since 2017

ClimateWire News - Thu, 03/19/2026 - 6:52am
Not all firms retreated from such spending. Repsol and Saudi Aramco, the largest investors in low-carbon technology in 2025, each committed about $4 billion.

Hacking a Robot Vacuum

Schneier on Security - Thu, 03/19/2026 - 5:47am

Someone tries to remote control his own DJI Romo vacuum, and ends up controlling 7,000 of them from all around the world.

The IoT is horribly insecure, but we already knew that.

Misbehaviour dominates GHG emissions from food loss and waste

Nature Climate Change - Thu, 03/19/2026 - 12:00am

Nature Climate Change, Published online: 19 March 2026; doi:10.1038/s41558-026-02596-y

Food loss and waste (FLW) is a major source of global GHG emissions, yet its drivers and mitigation potential remain understudied. By attributing FLW to techno-economic and misbehavioural drivers, this study shows misbehaviour dominates FLW emissions and offers substantial mitigation potential.

Generative AI improves a wireless vision system that sees through obstructions

MIT Latest News - Thu, 03/19/2026 - 12:00am

MIT researchers have spent more than a decade studying techniques that enable robots to find and manipulate hidden objects by “seeing” through obstacles. Their methods utilize surface-penetrating wireless signals that reflect off concealed items.

Now, the researchers are leveraging generative artificial intelligence models to overcome a longstanding bottleneck that limited the precision of prior approaches. The result is a new method that produces more accurate shape reconstructions, which could improve a robot’s ability to reliably grasp and manipulate objects that are blocked from view.

This new technique builds a partial reconstruction of a hidden object from reflected wireless signals and fills in the missing parts of its shape using a specially trained generative AI model.

The researchers also introduced an expanded system that uses generative AI to accurately reconstruct an entire room, including all the furniture. The system utilizes wireless signals sent from one stationary radar, which reflect off humans moving in the space.  

This overcomes one key challenge of many existing methods, which require a wireless sensor to be mounted on a mobile robot to scan the environment. And unlike some popular camera-based techniques, their method preserves the privacy of people in the environment.

These innovations could enable warehouse robots to verify packed items before shipping, eliminating waste from product returns. They could also allow smart home robots to understand someone’s location in a room, improving the safety and efficiency of human-robot interaction.

“What we’ve done now is develop generative AI models that help us understand wireless reflections. This opens up a lot of interesting new applications, but technically it is also a qualitative leap in capabilities, from being able to fill in gaps we were not able to see before to being able to interpret reflections and reconstruct entire scenes,” says Fadel Adib, associate professor in the Department of Electrical Engineering and Computer Science, director of the Signal Kinetics group in the MIT Media Lab, and senior author of two papers on these techniques. “We are using AI to finally unlock wireless vision.”

Adib is joined on the first paper by lead author and research assistant Laura Dodds; as well as research assistants Maisy Lam, Waleed Akbar, and Yibo Cheng; and on the second paper by lead author and former postdoc Kaichen Zhou; Dodds; and research assistant Sayed Saad Afzal. Both papers will be presented at the IEEE Conference on Computer Vision and Pattern Recognition.

Surmounting specularity

The Adib Group previously demonstrated the use of millimeter wave (mmWave) signals to create accurate reconstructions of 3D objects that are hidden from view, like a lost wallet buried under a pile.

These waves, which are the same type of signals used in Wi-Fi, can pass through common obstructions like drywall, plastic, and cardboard, and reflect off hidden objects.

But mmWaves usually reflect in a specular manner, which means a wave reflects in a single direction after striking a surface. So large portions of the surface will reflect signals away from the mmWave sensor, making those areas effectively invisible.

“When we want to reconstruct an object, we are only able to see the top surface and we can’t see any of the bottom or sides,” Dodds explains.

The researchers previously used principles from physics to interpret reflected signals, but this limits the accuracy of the reconstructed 3D shape.

In the new papers, they overcame that limitation by using a generative AI model to fill in parts that are missing from a partial reconstruction.

“But the challenge then becomes: How do you train these models to fill in these gaps?” Adib says.

Usually, researchers use extremely large datasets to train a generative AI model, which is one reason models like Claude and Llama exhibit such impressive performance. But no mmWave datasets are large enough for training.

Instead, the researchers adapted the images in large computer vision datasets to mimic the properties in mmWave reflections.

“We were simulating the property of specularity and the noise we get from these reflections so we can apply existing datasets to our domain. It would have taken years for us to collect enough new data to do this,” Lam says.

The researchers embed the physics of mmWave reflections directly into these adapted data, creating a synthetic dataset they use to teach a generative AI model to perform plausible shape reconstructions.
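The dataset adaptation can be pictured as a visibility filter: under specular reflection, only surface patches whose normals point back toward the sensor return energy, so other points are dropped from the training shapes. The sketch below is a hypothetical illustration of that filtering idea (the function name and cosine threshold are assumptions, not the team's actual pipeline).

```python
import math

def visible_under_specularity(points, normals, sensor, cos_thresh=0.9):
    # Keep only surface points whose unit normal is nearly aligned with the
    # direction back to the sensor; specular mmWave reflections from other
    # surfaces never return to the sensor, so those points are dropped.
    kept = []
    for p, n in zip(points, normals):
        v = [s - c for s, c in zip(sensor, p)]     # direction: point -> sensor
        norm = math.sqrt(sum(x * x for x in v)) or 1.0
        v = [x / norm for x in v]
        cos = sum(a * b for a, b in zip(n, v))     # alignment of normal and view
        if cos >= cos_thresh:
            kept.append(p)
    return kept
```

Applied to a full object mesh, a filter like this keeps the sensor-facing surface and deletes the sides and bottom, mimicking the partial reconstructions that mmWave sensing actually produces.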

The complete system, called Wave-Former, proposes a set of potential object surfaces based on mmWave reflections, feeds them to the generative AI model to complete the shape, and then refines the surfaces until it achieves a full reconstruction.

Wave-Former was able to generate faithful reconstructions of about 70 everyday objects, such as cans, boxes, utensils, and fruit, boosting accuracy by nearly 20 percent over state-of-the-art baselines. The objects were hidden behind or under cardboard, wood, drywall, plastic, and fabric.

Seeing “ghosts”

The team used this same approach to build an expanded system that fully reconstructs entire indoor scenes by leveraging mmWave reflections off humans moving in a room.

Human motion generates multipath reflections. Some mmWaves reflect off the human, then reflect again off a wall or object, and then arrive back at the sensor, Dodds explains.

These secondary reflections create so-called “ghost signals,” which are reflected copies of the original signal that change location as a human moves. These ghost signals are usually discarded as noise, but they also hold information about the layout of the room.
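The geometry behind these ghosts can be illustrated with a single planar wall: a one-bounce multipath return makes the ghost appear as the mirror image of the person across the wall plane, so jointly tracking person and ghost reveals where the wall is. A toy two-dimensional sketch (hypothetical functions for illustration, not the RISE system itself):

```python
def ghost_across_wall(person, wall_x):
    # A single-bounce reflection off a planar wall at x = wall_x makes the
    # target appear mirrored across the wall plane.
    x, y = person
    return (2 * wall_x - x, y)

def infer_wall_x(person, ghost):
    # Invert the mirroring: the wall lies midway between target and ghost,
    # so watching ghost positions over time localizes the wall.
    return (person[0] + ghost[0]) / 2
```

As the person moves, the ghost moves in lockstep on the far side of the wall; that correlated motion is what lets the system treat "noise" as a measurement of the room layout.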

“By analyzing how these reflections change over time, we can start to get a coarse understanding of the environment around us. But trying to directly interpret these signals is going to be limited in accuracy and resolution,” Dodds says.

They used a similar training method to teach a generative AI model to interpret those coarse scene reconstructions and understand the behavior of multipath mmWave reflections. This model fills in the gaps, refining the initial reconstruction until it completes the scene.

They tested their scene reconstruction system, called RISE, using more than 100 human trajectories captured by a single mmWave radar. On average, RISE generated reconstructions that were about twice as precise as those of existing techniques.

In the future, the researchers want to improve the granularity and detail in their reconstructions. They also want to build large foundation models for wireless signals, like the foundation models GPT, Claude, and Gemini for language and vision, which could open new applications.

This work is supported, in part, by the National Science Foundation (NSF), the MIT Media Lab, and Amazon.

A better method for identifying overconfident large language models

MIT Latest News - Thu, 03/19/2026 - 12:00am

Large language models (LLMs) can generate credible but inaccurate responses, so researchers have developed uncertainty quantification methods to check the reliability of predictions. One popular method involves submitting the same prompt multiple times to see if the model generates the same answer.

But this method measures self-confidence, and even the most impressive LLM might be confidently wrong. Overconfidence can mislead users about the accuracy of a prediction, which might result in devastating consequences in high-stakes settings like health care or finance.   

To address this shortcoming, MIT researchers introduced a new method for measuring a different type of uncertainty that more reliably identifies confident but incorrect LLM responses.

Their method involves comparing a target model’s response to responses from a group of similar LLMs. They found that measuring cross-model disagreement more accurately captures this type of uncertainty than traditional approaches.

They combined their approach with a measure of LLM self-consistency to create a total uncertainty metric, and evaluated it on 10 realistic tasks, such as question-answering and math reasoning. This total uncertainty metric consistently outperformed other measures and was better at identifying unreliable predictions.

“Self-consistency is being used in a lot of different approaches for uncertainty quantification, but if your estimate of uncertainty only relies on a single model’s outcome, it is not necessarily trustable. We went back to the beginning to understand the limitations of current approaches and used those as a starting point to design a complementary method that can empirically improve the results,” says Kimia Hamidieh, an electrical engineering and computer science (EECS) graduate student at MIT and lead author of a paper on this technique.

She is joined on the paper by Veronika Thost, a research scientist at the MIT-IBM Watson AI Lab; Walter Gerych, a former MIT postdoc who is now an assistant professor at Worcester Polytechnic Institute; Mikhail Yurochkin, a staff research scientist at the MIT-IBM Watson AI Lab; and senior author Marzyeh Ghassemi, an associate professor in EECS and a member of the Institute of Medical Engineering Sciences and the Laboratory for Information and Decision Systems.

Understanding overconfidence

Many popular methods for uncertainty quantification involve asking a model for a confidence score or testing the consistency of its responses to the same prompt. These methods estimate aleatoric uncertainty, or how internally confident a model is in its own prediction.

However, LLMs can be confident when they are completely wrong. Research has shown that epistemic uncertainty, or uncertainty about whether one is using the right model, can be a better way to assess true uncertainty when a model is overconfident.

The MIT researchers estimate epistemic uncertainty by measuring disagreement across a similar group of LLMs.    

“If I ask ChatGPT the same question multiple times and it gives me the same answer over and over again, that doesn’t mean the answer is necessarily correct. If I switch to Claude or Gemini and ask them the same question, and I get a different answer, that is going to give me a sense of the epistemic uncertainty,” Hamidieh explains.

Epistemic uncertainty attempts to capture how far a target model diverges from the ideal model for that task. But since it is impossible to build an ideal model, researchers use surrogates or approximations that often rely on faulty assumptions.

To improve uncertainty quantification, the MIT researchers needed a more accurate way to estimate epistemic uncertainty.

An ensemble approach

The method they developed involves measuring the divergence between the target model and a small ensemble of models with similar size and architecture. They found that comparing semantic similarity, or how closely the meanings of the responses match, could provide a better estimate of epistemic uncertainty.

To achieve the most accurate estimate, the researchers needed a set of LLMs that covered diverse responses, weren’t too similar to the target model, and were weighted based on credibility.

“We found that the easiest way to satisfy all these properties is to take models that are trained by different companies. We tried many different approaches that were more complex, but this very simple approach ended up working best,” Hamidieh says.

Once they had developed this method for estimating epistemic uncertainty, they combined it with a standard approach that measures aleatoric uncertainty. This total uncertainty metric (TU) offered the most accurate reflection of whether a model’s confidence level is trustworthy.

“Uncertainty depends on the uncertainty of the given prompt as well as how close our model is to the optimal model. This is why summing up these two uncertainty metrics is going to give us the best estimate,” Hamidieh says.
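The two-part metric can be sketched in a few lines: self-consistency disagreement among repeated samples from the target model approximates aleatoric uncertainty, cross-model disagreement with an ensemble approximates epistemic uncertainty, and their sum is the total. This is a hypothetical simplification; the paper compares responses by semantic similarity, for which a crude token-overlap score stands in here.

```python
def similarity(a: str, b: str) -> float:
    # Crude stand-in for semantic similarity (Jaccard token overlap);
    # the actual method uses learned semantic representations.
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def aleatoric_uncertainty(samples):
    # Self-consistency: disagreement among repeated samples from the target model.
    pairs = [(i, j) for i in range(len(samples)) for j in range(i + 1, len(samples))]
    if not pairs:
        return 0.0
    return 1.0 - sum(similarity(samples[i], samples[j]) for i, j in pairs) / len(pairs)

def epistemic_uncertainty(target_answer, ensemble_answers):
    # Cross-model disagreement: how far the target strays from peer models.
    return 1.0 - sum(similarity(target_answer, a) for a in ensemble_answers) / len(ensemble_answers)

def total_uncertainty(samples, ensemble_answers):
    # TU = aleatoric + epistemic, per the summing described above.
    return aleatoric_uncertainty(samples) + epistemic_uncertainty(samples[0], ensemble_answers)
```

A model that repeats the same wrong answer scores zero on the self-consistency term but is still flagged by the ensemble term, which is exactly the overconfident case the method targets.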

TU could more effectively identify situations where an LLM is hallucinating, since epistemic uncertainty can flag confidently wrong outputs that aleatoric uncertainty might miss. It could also enable researchers to reinforce an LLM’s confidently correct answers during training, which may improve performance.

They tested TU using multiple LLMs on 10 common tasks, such as question-answering, summarization, translation, and math reasoning. Their method more effectively identified unreliable predictions than either measure on its own.

Measuring total uncertainty often required fewer queries than calculating aleatoric uncertainty, which could reduce computational costs and save energy.

Their experiments also revealed that epistemic uncertainty is most effective on tasks with a unique correct answer, like factual question-answering, but may underperform on more open-ended tasks.

In the future, the researchers could adapt their technique to improve its performance on open-ended queries. They may also build on this work by exploring other forms of aleatoric uncertainty.

This work is funded, in part, by the MIT-IBM Watson AI Lab.

New model predicts how mosquitoes will fly

MIT Latest News - Wed, 03/18/2026 - 2:00pm

A mosquito finds its target with the help of certain cues in its environment, such as a person’s silhouette and the carbon dioxide they exhale.

Now researchers at MIT and Georgia Tech have found that these visual and chemical cues help determine the insects’ flight paths. The team has developed the first three-dimensional model of mosquito flight, based on experiments with mosquitoes flying in the presence of different sensory cues.

Their model, reported today in the journal Science Advances, identifies three flight patterns that mosquitoes exhibit in response to sensory stimuli.

When they can only see a potential target, mosquitoes take a “fly-by” approach, quickly diving in toward the target, then flying back out if they do not detect any other host-confirming cues.

When they can’t see a target but can smell a chemical cue such as carbon dioxide, mosquitoes will do “double-takes,” slowing down and flitting back and forth to keep close to the source.

Interestingly, when mosquitoes receive both visual and chemical cues, such as seeing a silhouette and smelling carbon dioxide, they switch to an “orbiting” pattern, flying around a target at a steady speed as they prepare to land, much like a shark circling its prey.

The researchers say the new model can be used to predict how mosquitoes will fly in response to other cues, such as heat, humidity, and certain odors. Such predictions could help to design more effective traps and mosquito control strategies.

“Our work suggests that mosquito traps need specifically calibrated, multisensory lures to keep mosquitoes engaged long enough to be captured,” says study author Jörn Dunkel, MathWorks Professor of Mathematics at MIT. “We hope this establishes a new paradigm for studying pest behavior by using 3D tracking and data-driven modeling to decode their movement and solve major public health challenges.”

The study’s MIT co-authors are Chenyi Fei, a postdoc in MIT’s Department of Mathematics, and Alexander Cohen PhD ’26, a recent MIT chemical engineering PhD student advised by Dunkel and Professor Martin Bazant, along with Christopher Zuo, Soohwan Kim, and David L. Hu ’01, PhD ’06 of Georgia Tech, and Ring Cardé of the University of California at Riverside.

Flight by numbers

Mosquitoes are considered to be the most dangerous animals in the world, given their collective impact on human health. The blood-sucking insects transmit malaria, dengue fever, West Nile virus, and other deadly diseases that together cause over 770,000 deaths each year.

Of the 3,500 known species of mosquitoes, around 100 have evolved to specifically target humans, including Aedes aegypti, a species that uses a variety of cues to seek out human hosts. Scientists have studied how certain cues attract mosquitoes, mainly by setting up experiments in wind tunnels, where they can waft cues such as carbon dioxide and study how mosquitoes respond. Such experiments have mainly recorded data such as where and when the insects land. The researchers say no study has explored how mosquitoes fly as they hunt for a host.

“The big question was: How do mosquitoes find a human target?” says Fei. “There were previous experimental studies on what kind of cues might be important. But nothing has been especially quantitative.”

At MIT, Dunkel’s group develops mathematical models to describe and predict the behavior of complex living systems, such as how worms untangle, how starfish embryos develop and swim, and how microbes evolve their community structure over time.

Dunkel looked to apply similar quantitative techniques to predict flight patterns of mosquitoes after giving a talk at Georgia Tech. David Hu, a former MIT graduate student who is now a professor of mechanical engineering at Georgia Tech, proposed a collaboration; Hu’s lab was carrying out experiments with mosquitoes at a facility at the Centers for Disease Control and Prevention in Atlanta, where they were studying the insects’ behavior in response to sensory cues. Could Dunkel’s group use the collected data to identify significant flight behavior that could ultimately help scientists control mosquito populations?

“One of the original motivations was designing better traps for mosquitoes,” says Cohen. “Figuring out how they fly around a human gives insights on how we can avoid them.”

Taking cues

For their new study, Hu and his colleagues at Georgia Tech carried out experiments with 50 to 100 mosquitoes of the Aedes aegypti species. The insects flew inside a long, white, slightly angled rectangular room while cameras positioned around the room captured a detailed three-dimensional trajectory for each mosquito. In the center of the room, the researchers placed an object representing a particular visual or chemical cue.

In some trials, they placed a black Styrofoam sphere on a stand to represent a simple visual cue. (Mosquitoes would be able to see the black sphere against the room’s white background.) In other trials, they set up a white sphere with a tube running through it to pump out carbon dioxide at rates similar to what humans exhale. These trials represented the presence of a chemical cue, but not a visual one.

The researchers also studied the mosquitoes’ response to both visual and chemical cues, using a black sphere that emitted carbon dioxide. Finally, they observed how mosquitoes behaved around a human volunteer who wore protective clothing that was black on one side and white on the other.

Across 20 experiments, the team generated more than 53 million data points and over 477,220 mosquito flight paths. Hu shared the data with Dunkel, whose group used the measurements to develop a model for mosquito flight behavior.

“We are proposing a very broad range of dynamical equations, and when you start out, the equation to predict a mosquito’s flight path is very complicated, with a lot of terms, including the relative importance of a visual versus a chemical cue,” Dunkel explains. “Then through iteration against data, we reduce the complexity of that equation until we get the simplest model that still agrees with the data.”
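The iterative pruning Dunkel describes resembles sparsity-promoting regression: fit a large library of candidate terms, then repeatedly zero out negligible coefficients and refit. A minimal sketch on synthetic data (the candidate terms, coefficients, and threshold here are illustrative, not from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# 5 candidate terms of a hypothetical dynamical equation, evaluated on data;
# only terms 0 and 2 actually matter in this synthetic example
X = rng.normal(size=(200, 5))
true_w = np.array([1.5, 0.0, -0.8, 0.0, 0.0])
y = X @ true_w + 0.01 * rng.normal(size=200)

# Start from the full, complicated model...
w = np.linalg.lstsq(X, y, rcond=None)[0]

# ...then iterate against the data, dropping terms below a threshold
for _ in range(10):
    small = np.abs(w) < 0.1
    w[small] = 0.0
    big = ~small
    if big.any():
        w[big] = np.linalg.lstsq(X[:, big], y, rcond=None)[0]

# w now retains only the dominant terms: the simplest model
# that still agrees with the data
```

The surviving coefficients identify which terms, such as the relative weight of a visual versus a chemical cue, the data actually support.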

In the end, the group whittled the equations down to a simple model that accurately predicts how a mosquito will fly, given a visual cue, a chemical cue, or both. The flight paths in response to one cue or the other are markedly different. And interestingly, when both cues are present, the resulting path is not “additive.” In other words, a mosquito does not simply combine the paths it would take separately when it can both see and smell a target. Instead, the insects take a distinct path, circling rather than diving at or darting around their target.

“Obviously there are additional cues that humans emit, like odor, heat, and humidity,” Cohen notes. “For the species we study, visual and carbon dioxide cues are the most important. But we can apply this model to study different species and how they respond to other sensory cues.”

The researchers have developed an interactive app that incorporates the new mosquito flight model. Users can experiment with different objects and set parameters such as the number of mosquitoes around the object and the type of sensory cue that is present. The model then visualizes how the mosquitoes would fly in response.

“The original hope was to have a quantitative model that can simulate mosquito behavior around various trap designs,” Cohen says. “Now that we have a model, we can start to design more intelligent traps.”

This work was supported, in part, by the National Science Foundation, Schmidt Sciences, LLC, the NDSEG Fellowship Program, and the MIT MathWorks Professorship Fund. 
