MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.
Updated: 21 hours 34 min ago

MIT engineers develop a magnetic transistor for more energy-efficient electronics

Wed, 09/23/2025 - 10:32am

Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.

MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity. 

The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.

The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.

“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.

Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; and senior authors Frances Ross, TDK Professor in DMSE, and Luqiao Liu, an associate professor in EECS and a member of the Research Laboratory of Electronics; as well as others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.

Overcoming the limits

In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.

But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.
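The voltage floor mentioned here is the thermionic ("Boltzmann") limit on a silicon transistor's subthreshold swing, and a quick back-of-the-envelope calculation makes it concrete. This is a sketch using standard physical constants; the limit is general device physics, not a figure from the article:

```python
import math

# Thermionic limit on subthreshold swing: at room temperature a silicon
# transistor needs at least (kT/q) * ln(10) of gate voltage to change
# its current by a factor of 10, which is why it cannot switch with an
# arbitrarily small voltage.
k_B = 1.380649e-23   # Boltzmann constant, J/K
q = 1.602176634e-19  # elementary charge, C
T = 300.0            # room temperature, K

ss_limit_mV = (k_B * T / q) * math.log(10) * 1e3
print(f"minimum swing ≈ {ss_limit_mV:.1f} mV per decade of current")
# ≈ 59.5 mV/decade at 300 K
```

Getting below this roughly 60 mV-per-decade floor is one motivation for replacing silicon with materials, like magnetic semiconductors, that switch by a different mechanism.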

To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.

So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.

“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.

The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.

Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”

“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.

They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.

To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.

“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.

Leveraging magnetism

This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. The team's new transistor can switch or amplify the electric current by a factor of 10.

They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.

The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.

The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.

A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.

“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.

Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.

This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.

Scientists get a first look at the innermost region of a white dwarf system

23 hours 37 min ago

Some 200 light years from Earth, the core of a dead star is circling a larger star in a macabre cosmic dance. The dead star is a type of white dwarf that exerts a powerful magnetic field as it pulls material from the larger star into a swirling, accreting disk. The spiraling pair is what’s known as an “intermediate polar” — a type of star system that gives off a complex pattern of intense radiation, including X-rays, as gas from the larger star falls onto the other one.

Now, MIT astronomers have used an X-ray telescope in space to identify key features in the system’s innermost region — an extremely energetic environment that has been inaccessible to most telescopes until now. In an open-access study published in the Astrophysical Journal, the team reports using NASA’s Imaging X-ray Polarimetry Explorer (IXPE) to observe the intermediate polar, known as EX Hydrae.

The team found a surprisingly high degree of X-ray polarization, which describes the direction of an X-ray wave’s electric field, as well as an unexpected direction of polarization in the X-rays coming from EX Hydrae. From these measurements, the researchers traced the X-rays back to their source in the system’s innermost region, close to the surface of the white dwarf.

What’s more, they determined that the system’s X-rays were emitted from a column of white-hot material that the white dwarf was pulling in from its companion star. They estimate that this column is about 2,000 miles high — about half the radius of the white dwarf itself and much taller than what physicists had predicted for such a system. They also determined that the X-rays are reflected off the white dwarf’s surface before scattering into space — an effect that physicists suspected but hadn’t confirmed until now.

The team’s results demonstrate that X-ray polarimetry can be an effective way to study extreme stellar environments such as the most energetic regions of an accreting white dwarf.

“We showed that X-ray polarimetry can be used to make detailed measurements of the white dwarf's accretion geometry,” says Sean Gunderson, a postdoc in MIT’s Kavli Institute for Astrophysics and Space Research, who is the study’s lead author. “It opens the window into the possibility of making similar measurements of other types of accreting white dwarfs that also have never had predicted X-ray polarization signals.”

Gunderson’s MIT Kavli co-authors include graduate student Swati Ravi and research scientists Herman Marshall and David Huenemoerder, along with Dustin Swarm of the University of Iowa, Richard Ignace of East Tennessee State University, Yael Nazé of the University of Liège, and Pragati Pradhan of Embry Riddle Aeronautical University.

A high-energy fountain

All forms of light, including X-rays, are influenced by electric and magnetic fields. Light travels in waves that wiggle, or oscillate, at right angles to the direction in which the light is traveling. External electric and magnetic fields can pull these oscillations in random directions. But when light interacts and bounces off a surface, it can become polarized, meaning that its vibrations tighten up in one direction. Polarized light, then, can be a way for scientists to trace the source of the light and discern some details about the source’s geometry.

The IXPE space observatory is NASA’s first mission designed to study polarized X-rays that are emitted by extreme astrophysical objects. The spacecraft, which launched in 2021, orbits the Earth and records these polarized X-rays. Since launch, it has primarily focused on supernovae, black holes, and neutron stars.

The new MIT study is the first to use IXPE to measure polarized X-rays from an intermediate polar — a smaller system than black holes and supernovae that is nevertheless known to be a strong emitter of X-rays.

“We started talking about how much polarization would be useful to get an idea of what’s happening in these types of systems, which most telescopes see as just a dot in their field of view,” Marshall says.

An intermediate polar gets its name from the strength of the central white dwarf’s magnetic field. When this field is strong, the material from the companion star is directly pulled toward the white dwarf’s magnetic poles. When the field is very weak, the stellar material instead swirls around the dwarf in an accretion disk that eventually deposits matter directly onto the dwarf’s surface.

In the case of an intermediate polar, physicists predict that material should fall in a complex sort of in-between pattern, forming an accretion disk that also gets pulled toward the white dwarf’s poles. The magnetic field should lift the disk of incoming material far upward, like a high-energy fountain, before the stellar debris falls toward the white dwarf’s magnetic poles, at speeds of millions of miles per hour, in what astronomers refer to as an “accretion curtain.” Physicists suspect that this falling material should run up against previously lifted material that is still falling toward the poles, creating a sort of traffic jam of gas. This pile-up of matter forms a column of colliding gas that is tens of millions of degrees Fahrenheit and should emit high-energy X-rays.
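The "millions of miles per hour" figure can be sanity-checked with the free-fall speed onto a white dwarf, v = sqrt(2GM/R). In this rough sketch the radius follows the article (a roughly 2,000-mile column is about half the radius, so R ≈ 4,000 miles), while the 0.8-solar-mass value is an assumed typical white dwarf mass, not a number from the study:

```python
import math

# Free-fall speed of material dropped from far away onto a white dwarf:
# v = sqrt(2GM/R) at the surface.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M = 0.8 * 1.989e30     # assumed white dwarf mass, kg (0.8 solar masses)
R = 4000 * 1609.34     # ~4,000-mile radius, in meters (per the article)

v = math.sqrt(2 * G * M / R)   # m/s
mph = v * 2.23694              # convert to miles per hour
print(f"infall speed ≈ {mph / 1e6:.0f} million mph")
```

The result comes out in the tens of millions of miles per hour, consistent with the speeds astronomers quote for accretion curtains.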

An innermost picture

By measuring any polarized X-rays emitted by EX Hydrae, the team aimed to test the picture of intermediate polars that physicists had hypothesized. In January 2025, IXPE took a total of about 600,000 seconds, or about seven days’ worth, of X-ray measurements from the system.

“With every X-ray that comes in from the source, you can measure the polarization direction,” Marshall explains. “You collect a lot of these, and they’re all at different angles and directions which you can average to get a preferred degree and direction of the polarization.”
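The averaging Marshall describes is typically done on Stokes-like components rather than on the angles themselves, because polarization angles wrap around every 180 degrees. Here is a toy sketch with synthetic photon angles; the 8 percent aligned fraction is chosen to echo the signal the team measured, and real analysis would also account for detector effects such as IXPE's modulation factor:

```python
import math
import random

random.seed(1)
true_angle = math.radians(30)   # assumed source polarization direction
n_photons = 100_000

# Most photon angles are random; a small aligned fraction mimics a
# partially polarized signal.
angles = [true_angle if random.random() < 0.08
          else random.uniform(0.0, math.pi)
          for _ in range(n_photons)]

# Average q = cos(2ψ) and u = sin(2ψ) over all photons, then recover
# the preferred degree and direction of polarization.
q = sum(math.cos(2 * a) for a in angles) / n_photons
u = sum(math.sin(2 * a) for a in angles) / n_photons
degree = math.hypot(q, u)
direction = 0.5 * math.degrees(math.atan2(u, q))
print(f"degree ≈ {degree:.1%}, direction ≈ {direction:.0f}°")
```

Averaging the doubled-angle components recovers both the polarization degree (here, close to the injected 8 percent) and the preferred direction (close to the assumed 30 degrees), which is the essence of how a preferred polarization emerges from many individual photons.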

Their measurements revealed an 8 percent polarization degree, much higher than some theoretical models had predicted. From there, the researchers were able to confirm that the X-rays were indeed coming from the system’s column, and that this column is about 2,000 miles high.

“If you were able to stand somewhat close to the white dwarf’s pole, you would see a column of gas stretching 2,000 miles into the sky, and then fanning outward,” Gunderson says.

The team also measured the direction of EX Hydrae’s X-ray polarization, which they determined to be perpendicular to the white dwarf’s column of incoming gas. This was a sign that the X-rays emitted by the column were then bouncing off the white dwarf’s surface before traveling into space, and eventually into IXPE’s telescopes.

“The thing that’s helpful about X-ray polarization is that it’s giving you a picture of the innermost, most energetic portion of this entire system,” Ravi says. “When we look through other telescopes, we don’t see any of this detail.”

The team plans to apply X-ray polarization to study other accreting white dwarf systems, which could help scientists get a grasp on much larger cosmic phenomena.

“There comes a point where so much material is falling onto the white dwarf from a companion star that the white dwarf can’t hold it anymore, the whole thing collapses and produces a type of supernova that’s observable throughout the universe, which can be used to figure out the size of the universe,” Marshall offers. “So understanding these white dwarf systems helps scientists understand the sources of those supernovae, and tells you about the ecology of the galaxy.”

This research was supported, in part, by NASA.

The cost of thinking

Wed, 11/19/2025 - 4:45pm

Large language models (LLMs) like ChatGPT can write an essay or plan a menu almost instantly. But until recently, it was also easy to stump them. The models, which rely on language patterns to respond to users’ queries, often failed at math problems and were not good at complex reasoning. Suddenly, however, they’ve gotten a lot better at these things.

A new generation of LLMs known as reasoning models are being trained to solve complex problems. Like humans, they need some time to think through problems like these — and remarkably, scientists at MIT’s McGovern Institute for Brain Research have found that the kinds of problems that require the most processing from reasoning models are the very same problems that people need to take their time with. In other words, they report today in the journal PNAS, the “cost of thinking” for a reasoning model is similar to the cost of thinking for a human.

The researchers, who were led by Evelina Fedorenko, an associate professor of brain and cognitive sciences and an investigator at the McGovern Institute, conclude that in at least one important way, reasoning models have a human-like approach to thinking. That, they note, is not by design. “People who build these models don’t care if they do it like humans. They just want a system that will robustly perform under all sorts of conditions and produce correct responses,” Fedorenko says. “The fact that there’s some convergence is really quite striking.”

Reasoning models

Like many forms of artificial intelligence, the new reasoning models are artificial neural networks: computational tools that learn how to process information when they are given data and a problem to solve. Artificial neural networks have been very successful at many of the tasks that the brain’s own neural networks do well — and in some cases, neuroscientists have discovered that those that perform best do share certain aspects of information processing in the brain. Still, some scientists argued that artificial intelligence was not ready to take on more sophisticated aspects of human intelligence.

“Up until recently, I was among the people saying, ‘These models are really good at things like perception and language, but it’s still going to be a long ways off until we have neural network models that can do reasoning,’” Fedorenko says. “Then these large reasoning models emerged and they seem to do much better at a lot of these thinking tasks, like solving math problems and writing pieces of computer code.”

Andrea Gregor de Varda, a K. Lisa Yang ICoN Center Fellow and a postdoc in Fedorenko’s lab, explains that reasoning models work out problems step by step. “At some point, people realized that models needed to have more space to perform the actual computations that are needed to solve complex problems,” he says. “The performance started becoming way, way stronger if you let the models break down the problems into parts.”

To encourage models to work through complex problems in steps that lead to correct solutions, engineers can use reinforcement learning. During their training, the models are rewarded for correct answers and penalized for wrong ones. “The models explore the problem space themselves,” de Varda says. “The actions that lead to positive rewards are reinforced, so that they produce correct solutions more often.”
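The reward-driven loop de Varda describes can be illustrated with a toy softmax "bandit": two hypothetical solution strategies, where the one that earns rewards more often gradually becomes the more probable choice. This is only a cartoon of the reinforcement idea, not how reasoning models are actually trained:

```python
import math
import random

random.seed(0)

# Preference scores for two hypothetical strategies; strategy 1 solves
# the problem 80% of the time, strategy 0 only 20%.
prefs = [0.0, 0.0]
success_rate = [0.2, 0.8]
lr = 0.1  # learning rate

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

for _ in range(2000):
    probs = softmax(prefs)
    a = 0 if random.random() < probs[0] else 1      # pick a strategy
    reward = 1.0 if random.random() < success_rate[a] else -1.0
    # Reinforce (or penalize) the chosen action in proportion to reward.
    prefs[a] += lr * reward * (1 - probs[a])

probs = softmax(prefs)
print(f"P(better strategy) ≈ {probs[1]:.2f}")
```

After training, the strategy that more often yields correct answers dominates — the same logic, scaled up enormously, that steers a reasoning model toward step-by-step solutions that earn reward.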

Models trained in this way are much more likely than their predecessors to arrive at the same answers a human would when they are given a reasoning task. Their stepwise problem-solving does mean reasoning models can take a bit longer to find an answer than the LLMs that came before — but since they’re getting right answers where the previous models would have failed, their responses are worth the wait.

The models’ need to take some time to work through complex problems already hints at a parallel to human thinking: if you demand that a person solve a hard problem instantaneously, they’d probably fail, too. De Varda wanted to examine this relationship more systematically. So he gave reasoning models and human volunteers the same set of problems, and tracked not just whether they got the answers right, but also how much time or effort it took them to get there.

Time versus tokens

This meant measuring how long it took people to respond to each question, down to the millisecond. For the models, de Varda used a different metric. It didn’t make sense to measure processing time, since this is more dependent on computer hardware than the effort the model puts into solving a problem. So instead, he tracked tokens, which are part of a model’s internal chain of thought. “They produce tokens that are not meant for the user to see and work on, but just to have some track of the internal computation that they’re doing,” de Varda explains. “It’s as if they were talking to themselves.”

Both humans and reasoning models were asked to solve seven different types of problems, like numeric arithmetic and intuitive reasoning. For each problem class, they were given many problems. The harder a given problem was, the longer it took people to solve it — and the longer it took people to solve a problem, the more tokens a reasoning model generated as it came to its own solution.

Likewise, the classes of problems that humans took longest to solve were the same classes of problems that required the most tokens for the models: arithmetic problems were the least demanding, whereas a group of problems called the “ARC challenge,” where pairs of colored grids represent a transformation that must be inferred and then applied to a new object, were the most costly for both people and models.
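In sketch form, the comparison amounts to correlating per-problem human response times with model token counts. The numbers below are invented for illustration; only the method mirrors the study:

```python
import math
import statistics

# Hypothetical per-problem-class costs: seconds of human thinking time
# and "thinking" tokens for a reasoning model. Harder classes cost
# more of both.
human_seconds = [3.1, 4.5, 8.2, 15.0, 22.4, 41.0, 97.5]
model_tokens = [120, 180, 350, 600, 900, 1600, 3900]

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(human_seconds, model_tokens)
print(f"time-token correlation r = {r:.2f}")
```

A strong positive correlation between the two cost measures is the signature the researchers report: problems that slow people down also make the model "think" longer.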

De Varda and Fedorenko say the striking match in the costs of thinking demonstrates one way in which reasoning models are thinking like humans. That doesn’t mean the models are recreating human intelligence, though. The researchers still want to know whether the models use similar representations of information to the human brain, and how those representations are transformed into solutions to problems. They’re also curious whether the models will be able to handle problems that require world knowledge that is not spelled out in the texts that are used for model training.

The researchers point out that even though reasoning models generate internal monologues as they solve problems, they are not necessarily using language to think. “If you look at the output that these models produce while reasoning, it often contains errors or some nonsensical bits, even if the model ultimately arrives at a correct answer. So the actual internal computations likely take place in an abstract, non-linguistic representation space, similar to how humans don’t use language to think,” de Varda says.

How a building creates and defines a region

Wed, 11/19/2025 - 4:35pm

As an undergraduate majoring in architecture, Dong Nyung Lee ’21 wasn’t sure how to respond when friends asked him what the study of architecture was about.

“I was always confused about how to describe it myself,” he says with a laugh. “I would tell them that it wasn’t just about a building, or a city, or a community. It’s a balance across different scales, and it has to touch everything all at once.”

As a graduate student enrolled in a design studio course last spring — 4.154 (Territory as Interior) — Lee and his classmates had to design a building that would serve a specific community in a specific location. The course, says Lee, gave him clarity as to “what architecture is all about.”

Designed by Roi Salgueiro Barrio, a lecturer in the MIT School of Architecture and Planning’s Department of Architecture, the coursework combines ecological principles, architectural design, urban economics, and social considerations to address real-world problems in marginalized or degraded areas.

“When we build, we always impact economies, mostly by the different types of technologies we use and their dependence on different types of labor and materials,” says Salgueiro Barrio. “The intention here was to think at both levels: the activities that can be accommodated, and how we can actually build something.”

Research first

Students were tasked with repurposing an abandoned fishing industry building on the Barbanza Peninsula in Galicia, Spain, and proposing a new economic activity for the building that would help regenerate the local economy. Working in groups, they researched the region’s material resources and fiscal sectors and designed detailed maps. This approach to constructing a building was new for Vincent Jackow, a master's student in architecture.

“Normally in architecture, we work at the scale of one-to-100 meters,” he says. “But this process allowed me to connect the dots between what the region offered and what could be built to support the economy.”

The aim of revitalizing this area is also a goal of Fundación RIA (FRIA), a nonprofit think tank established by Pritzker Prize-winning architect David Chipperfield. FRIA generates research and territorial planning with the goal of long-term sustainability of the built and natural environment in the Galicia region. During their spring break in March, the students traveled to Galicia, met with Chipperfield, business owners, fishermen, and farmers, and explored a variety of sites. They also consulted with the owner of the building they were to repurpose.

Returning to MIT, the students constructed nine detailed models. Master’s student Aleks Banaś says she took the studio because it required her to explore the variety of scales in an architectural project, from territorial analysis to building detail, all while keeping the socio-economic aspects of design decisions in mind.

“I’m interested in how architecture can support local economies,” says Banaś. “Visiting Galicia was very special because of the communities we interacted with. We were no longer looking at articles and maps of the region; we were learning about day-to-day life. A lot of people shared with us the value of their work, which is not economically feasible.”

Banaś was impressed by the region’s strong maritime history and the generations of craftspeople working on timber boat-making. Inspired by the collective spirit of the region, she designed “House of Sea,” transforming the former cannery into a hub for community gathering and seafront activities. The reimagined building would accommodate a variety of functions including a boat-building workshop for the Ribeira carpenters’ association, a restaurant, and a large, covered section for local events such as the annual barnacle festival.

“I wanted to demonstrate how we can create space for an alternative economy that can host and support these skills and traditions,” says Banaś. 

Jackow’s building — “La Nueva Cordelería,” or “New Rope Making” — was a facility using hemp to produce rope and hempcrete blocks (a construction material). The production of both “is very on-trend in the E.U.” and provides an alternative to petrochemical-based ropes for the region’s marine uses, says Jackow. The building would serve as a cultural hub, incorporating a café, worker housing, and offices. Even its structure would make use of the rope, joining timber with knots so that the interior spaces could be reconfigured.

Lee’s building was designed to engage with the forestry and agricultural industries.

“What intrigued me was that Galicia is heavily dependent on pulp production and wood harvesting,” he says. “I wanted to give value to the post-harvest residue.”

Lee designed a biochar plant using some of the concrete and terra cotta blocks on site. Biochar is made by heating the harvested wood residue through pyrolysis — thermal decomposition in an environment with little oxygen. The resulting biochar would be used by farmers for soil enhancement.

“The work demonstrated an understanding of the local resources and using them to benefit the revitalization of the area,” says Salgueiro Barrio, who was pleased with the results. 

FRIA was so impressed with the work that they held an exhibition at their gallery in Santiago de Compostela in August and September to highlight the importance of connecting academic research with the territory through student projects. Banaś interned with FRIA over the summer working on multiple projects, including the plan and design for the exhibition. The challenge here, she says, was to design an exhibition of academic work for a general audience. The final presentation included maps, drawings, and photographs by the students.

For Lee, the course was more meaningful than any he has taken to date. Moving between the different scales of the project illustrated, for him, “the biggest challenge for a designer and an architect. Architecture is universal, and very specific. Keeping those dualities in focus was the biggest challenge and the most interesting part of this project. It hit at the core of what architecture is.”

Symposium examines the neural circuits that keep us alive and well

Wed, 11/19/2025 - 4:25pm

Taking an audience of hundreds on a tour around the body, seven speakers at The Picower Institute for Learning and Memory’s symposium “Circuits of Survival and Homeostasis” Oct. 21 shared their advanced and novel research about some of the nervous system’s most evolutionarily ancient functions.

Introducing the symposium that she arranged with a picture of a man at a campfire on a frigid day, Sara Prescott, assistant professor in the Picower Institute and MIT’s departments of Biology and Brain and Cognitive Sciences, pointed out that the brain and the body cooperate constantly just to keep us going, and that when the systems they maintain fail, the consequence is disease.

“[This man] is tightly regulating his blood pressure, glucose levels, his energy expenditure, inflammation and breathing rate, and he’s doing this in the face of a fluctuating external environment,” Prescott said. “Behind each of these processes there are networks of neurons that are working quietly in the background to maintain internal stability. And this is, of course, the brain’s oldest job.”

Indeed, although the discoveries they shared about the underlying neuroscience were new, the speakers each described experiences that are as timeless as they are familiar: the beating of the heart, the transition from hunger to satiety, and the healing of cuts on our skin.

Feeling warm and full

Li Ye, a scientist at Scripps Research, picked right up on the example of coping with the cold. Mammals need to maintain a consistent internal body temperature, and so they will increase metabolism in the cold and then, as energy supplies dwindle, seek out more food. His lab’s 2023 study identified the circuit, centered in the Xiphoid nucleus of the brain’s thalamus, that regulates this behavior by sensing prolonged cold exposure and energy consumption. Ye described other feeding mechanisms his lab is studying as well, including searching out the circuitry that regulates how long an animal will feed at a time. For instance, if you’re worried about predators finding you, it’s a bad idea to linger for a leisurely lunch.

Physiologist Zachary Knight of the University of California at San Francisco also studies feeding and drinking behaviors. In particular, his lab asks how the brain knows when it’s time to stop. The conventional wisdom is that all that’s needed is a feeling of fullness coming from the gut, but his research shows there is more to the story. A 2023 study from his lab found a population of neurons in the caudal nucleus of the solitary tract in the brain stem that receive signals about ingestion and taste from the mouth, and that send that “stop eating” signal. They also found a separate neural population in the brain stem that indeed receives fullness signals from the gut, and teaches the brain over time how much food leads to satisfaction. Both neuron types work together to regulate the pace of eating. His lab has continued to study how brain stem circuits regulate feeding using these multiple inputs.

Energy balance depends not only on how many calories come in, but also on how much energy is spent. When food is truly scarce, many animals will engage in a state of radically lowered metabolism called torpor (like hibernation), where body temperature plummets. The brain circuits that exert control over body temperature are another area of active research. In his talk, Harvard University neurologist Clifford Saper described years of research in which his lab found neurons in the median preoptic nucleus that dictate this metabolic state. Recently, his lab demonstrated that the same neurons that regulate torpor also regulate fever during sickness. When the neurons are active, body temperature drops. When they are inhibited, fever ensues. Thus, the same neurons act as a two-way switch for body temperature in response to different threatening conditions.

Sickness, injury, and stress

As the idea of fever suggests, the body also has evolved circuits (that scientists are only now dissecting) to deal with sickness and injury.

Washington University neuroscientist Qin Liu described her research into the circuits governing coughing and sneezing, which, on one hand, can clear the upper airways of pathogens and obstructions but, on the other hand, can spread those pathogens to others in the community. She described her lab’s 2024 study in which her team pinpointed a population of neurons in the nasal passages that mediate sneezing and a different population of sensory neurons in the trachea that produce coughing. Identifying the specific cells and their unique characteristics makes them potentially viable drug targets.

While Liu tackled sickness, Harvard stem cell biologist Ya-Chieh Hsu discussed how neurons can reshape the body’s tissues during stress and injury, specifically the hair and skin. While it is common lore that stress can make your hair gray and fall out, Hsu’s lab has shown the actual physiological mechanisms that make it so. In 2020 her team showed that bursts of noradrenaline from the hyperactivation of nerves in the sympathetic nervous system kills the melanocyte stem cells that give hair its color. She described newer research indicating a similar mechanism may also make hair fall out by killing off cells at the base of hair follicles, releasing cellular debris and triggering auto-immunity. Her lab has also looked at how the nervous system influences skin healing after injury. For instance, while our skin may appear to heal after a cut because it closes up, many skin cell types actually don’t rebound (unless you’re still an embryo). By looking at the difference between embryos and post-birth mice, Hsu’s lab has traced the neural mechanisms that prevent fuller healing, identifying a role for cells called fibroblasts and the nervous system.

Continuing on the theme of stress, Caltech biologist Yuki Oka discussed a broad-scale project in his lab to develop a molecular and cellular atlas of the sympathetic nervous system, which innervates much of the body and famously produces its “fight or flight” responses. In work partly published last year, their journey touched on cells and circuits involved in functions ranging from salivation to secreting bile. Oka and co-authors made the case for the need to study the system more in a review paper earlier this year.

A new model to study human biology

In their search for the best ways to understand the circuits that govern survival and homeostasis, researchers often use rodents because they are genetically tractable, easy to house, and reproduce quickly, but Stanford University biochemist Mark Krasnow has worked to develop a new model with many of those same traits but a closer genetic relationship to humans: the mouse lemur. In his talk, he described that work (which includes extensive field research in Madagascar) and focused on insights the mouse lemurs have helped him make into heart arrhythmias. After studying the genes and health of hundreds of mouse lemurs, his lab identified a family with “sick sinus syndrome,” an arrhythmia also seen in humans. In a preprint study, his lab describes the specific molecular pathways at fault in disrupting the heart’s natural pacemaking.

By sharing some of the latest research into how the brain and body work to stay healthy, the symposium’s speakers highlighted the most current thinking about the nervous system’s most primal purposes.

Quantum modeling for breakthroughs in materials science and sustainable energy

Wed, 11/19/2025 - 4:00pm

Ernest Opoku knew he wanted to become a scientist when he was a little boy. But his school in Dadease, a small town in Ghana, offered no elective science courses — so Opoku created one for himself.

Even though they had neither a dedicated science classroom nor a lab, Opoku convinced his principal to bring in someone to teach him and five friends he had recruited to join him. With just a chalkboard and some imagination, they learned about chemical interactions through the formulas and diagrams they drew together.

“I grew up in a town where it was difficult to find a scientist,” he says.

Today, Opoku has become one himself, recently earning a PhD in quantum chemistry from Auburn University. This year, he joins MIT as a part of the School of Science Dean’s Postdoctoral Fellowship program. Working with the Van Voorhis Group at the Department of Chemistry, Opoku’s goal is to advance computational methods to study how electrons behave — fundamental research that underlies applications ranging from materials science to drug discovery.

“As a boy who wanted to satisfy my own curiosities at a young age, in addition to the fact that my parents had minimal formal education,” Opoku says, “I knew that the only way I would be able to accomplish my goal was to work hard.”

In pursuit of knowledge

When Opoku was 8 years old, he began independently learning English at school. He would come back with homework, but his parents were unable to help him, as neither of them could read or write in English. Frustrated, his mother asked an older student to help tutor her son.

Every day, the boys would meet at 6 o’clock. With no electricity at either of their homes, they practiced new vocabulary and pronunciations together by a kerosene lamp.

As he entered junior high school, Opoku’s fascination with nature grew.

“I realized that chemistry was the central science that really offered the insight that I wanted to really understand Creation from the smallest level,” he says.

He studied diligently and was able to get into one of Ghana’s top high schools — but his parents couldn’t afford the tuition. He therefore enrolled in Dadease Agric Senior High School in his hometown. By growing tomatoes and maize, he saved up enough money to support his education.

In 2012, he got into Kwame Nkrumah University of Science and Technology (KNUST), one of the top-ranked universities in Ghana and the West Africa region. There, he was introduced to computational chemistry. Unlike many other branches of science, the field required only a laptop and the internet to study chemical reactions.

“Anything that comes to mind, anytime I can grab my computer and I’ll start exploring my curiosity. I don’t have to wait to go to the laboratory in order to interrogate nature,” he says.

Opoku worked from early morning to late night. None of it felt like work, though, thanks to his supervisor, the late quantum chemist Richard Tia, who was an associate professor of chemistry at KNUST.

“Every single day was a fun day,” he recalls of his time working with Tia. “I was being asked to do the things that I myself wanted to know, to satisfy my own curiosity, and by doing that I’ll be given a degree.”

In 2020, Opoku’s curiosity brought him even further, this time overseas to Auburn University in Alabama for his PhD. Under the guidance of his advisor, Professor J. V. Ortiz, Opoku contributed to the development of new computational methods to simulate how electrons bind to or detach from molecules, a process known as electron propagation.

What is new about Opoku’s approach is that it does not rely on any adjustable or empirical parameters. Unlike some earlier computational methods that require tuning to match experimental results, his technique uses advanced mathematical formulations to account for electron interactions directly from first principles. This makes the method more accurate — closely matching results from lab experiments — while using less computational power.

By streamlining the calculations and eliminating guesswork, Opoku’s work marks a major step toward faster, more trustworthy quantum simulations across a wide range of molecules, including those never studied before — laying the groundwork for breakthroughs in many areas such as materials science and sustainable energy.

For his postdoctoral research at MIT, Opoku aims to advance electron propagator methods to address larger and more complex molecules and materials by integrating quantum computing, machine learning, and bootstrap embedding — a technique that simplifies quantum chemistry calculations by dividing large molecules into smaller, overlapping fragments. He is collaborating with Troy Van Voorhis, the Haslam and Dewey Professor of Chemistry, whose expertise in these areas can help make Opoku’s advanced simulations more computationally efficient and scalable.

“His approach is different from any of the ways that we've pursued in the group in the past,” Van Voorhis says.

Passing along the opportunity to learn

Opoku thanks previous mentors who helped him overcome the “intellectual overhead required to make contributions to the field,” and believes Van Voorhis will offer the same kind of support.

In 2021, Opoku joined the National Organization for the Professional Advancement of Black Chemists and Chemical Engineers (NOBCChE) to gain mentorship, networking, and career development opportunities within a supportive community. He later led the Auburn University chapter as president, helping coordinate K-12 outreach to inspire the next generation of scientists, engineers, and innovators.

“Opoku’s mentorship goes above and beyond what would be typical at his career stage,” says Van Voorhis. “One reason is his ability to communicate science to people, and not just the concepts of science, but also the process of science."

Back home, Opoku founded the Nesvard Institute of Molecular Sciences to support African students to develop not only skills for graduate school and professional careers, but also a sense of confidence and cultural identity. Through the nonprofit, he has mentored 29 students so far, passing along the opportunity for them to follow their curiosity and help others do the same.

“There are many areas of science and engineering to which Africans have made significant contributions, but these contributions are often not recognized, celebrated, or documented,” Opoku says.

He adds: “We have a duty to change the narrative.” 

New AI agent learns to use CAD to create 3D objects from sketches

Wed, 11/19/2025 - 12:00am

Computer-Aided Design (CAD) is the go-to method for designing most of today’s physical products. Engineers use CAD to turn 2D sketches into 3D models that they can then test and refine before sending a final version to a production line. But the software is notoriously complicated to learn, with thousands of commands to choose from. To be truly proficient in the software takes a huge amount of time and practice.

MIT engineers are looking to ease CAD’s learning curve with an AI model that uses CAD software much like a human would. Given a 2D sketch of an object, the model quickly creates a 3D version by clicking buttons and selecting file options, similar to how an engineer would use the software.

The MIT team has created a new dataset called VideoCAD, which contains more than 41,000 examples of how 3D models are built in CAD software. By learning from these videos, which illustrate how different shapes and objects are constructed step-by-step, the new AI system can now operate CAD software much like a human user.

With VideoCAD, the team is building toward an AI-enabled “CAD co-pilot.” They envision that such a tool could not only create 3D versions of a design, but also work with a human user to suggest next steps, or automatically carry out build sequences that would otherwise be tedious and time-consuming to manually click through.

“There’s an opportunity for AI to increase engineers’ productivity as well as make CAD more accessible to more people,” says Ghadi Nehme, a graduate student in MIT’s Department of Mechanical Engineering.

“This is significant because it lowers the barrier to entry for design, helping people without years of CAD training to create 3D models more easily and tap into their creativity,” adds Faez Ahmed, associate professor of mechanical engineering at MIT.

Ahmed and Nehme, along with graduate student Brandon Man and postdoc Ferdous Alam, will present their work at the Conference on Neural Information Processing Systems (NeurIPS) in December.

Click by click

The team’s new work expands on recent developments in AI-driven user interface (UI) agents — tools that are trained to use software programs to carry out tasks, such as automatically gathering information online and organizing it in an Excel spreadsheet. Ahmed’s group wondered whether such UI agents could be designed to use CAD, which encompasses many more features and functions, and involves far more complicated tasks than the average UI agent can handle.

In their new work, the team aimed to design an AI-driven UI agent that takes the reins of the CAD program to create a 3D version of a 2D sketch, click by click. To do so, the team first looked to an existing dataset of objects that were designed in CAD by humans. Each object in the dataset includes the sequence of high-level design commands, such as “sketch line,” “circle,” and “extrude,” that were used to build the final object.

However, the team realized that these high-level commands alone were not enough to train an AI agent to actually use CAD software. A real agent must also understand the details behind each action. For instance: Which sketch region should it select? When should it zoom in? And what part of a sketch should it extrude? To bridge this gap, the researchers developed a system to translate high-level commands into user-interface interactions.

“For example, let’s say we drew a sketch by drawing a line from point 1 to point 2,” Nehme says. “We translated those high-level actions to user-interface actions, meaning we say, go from this pixel location, click, and then move to a second pixel location, and click, while having the ‘line’ operation selected.”
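The translation step Nehme describes, from a high-level drawing command to the pixel-level clicks that carry it out, can be sketched roughly as follows. This is an illustrative outline only: the function names, action dictionary format, and the sketch-plane-to-pixel mapping are assumptions for the example, not details from the team's actual pipeline.

```python
# Hypothetical sketch: expanding one high-level CAD command into the
# low-level UI actions (tool selection plus clicks) an agent would replay.

def world_to_pixel(point, scale=20, origin=(100, 100)):
    """Map sketch-plane coordinates to screen pixels (assumed viewport)."""
    x, y = point
    return (origin[0] + int(x * scale), origin[1] + int(y * scale))

def translate_command(cmd):
    """Expand one high-level command into a list of UI action events."""
    actions = []
    if cmd["op"] == "line":
        # Select the 'line' tool, then click the start and end points.
        actions.append({"type": "select_tool", "tool": "line"})
        actions.append({"type": "click", "pos": world_to_pixel(cmd["start"])})
        actions.append({"type": "click", "pos": world_to_pixel(cmd["end"])})
    elif cmd["op"] == "extrude":
        # Select 'extrude', then drag by an amount proportional to the depth.
        actions.append({"type": "select_tool", "tool": "extrude"})
        actions.append({"type": "drag", "delta": (0, -int(cmd["depth"] * 20))})
    return actions

ui_actions = translate_command({"op": "line", "start": (1, 2), "end": (5, 7)})
# Each element now specifies exactly where to click with the 'line' tool active.
```

Recording sequences like `ui_actions` alongside screen video is, in spirit, what lets a dataset of high-level design histories become click-level supervision.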

In the end, the team generated over 41,000 videos of human-designed CAD objects, each of which is described in real-time in terms of the specific clicks, mouse-drags, and other keyboard actions that the human originally carried out. They then fed all this data into a model they developed to learn connections between UI actions and CAD object generation.

Once trained on this dataset, which they dub VideoCAD, the new AI model could take a 2D sketch as input and directly control the CAD software, clicking, dragging, and selecting tools to construct the full 3D shape. The objects ranged in complexity from simple brackets to more complicated house designs. The team is training the model on more complex shapes and envisions that both the model and the dataset could one day enable CAD co-pilots for designers in a wide range of fields.

“VideoCAD is a valuable first step toward AI assistants that help onboard new users and automate the repetitive modeling work that follows familiar patterns,” says Mehdi Ataei, who was not involved in the study, and is a senior research scientist at Autodesk Research, which develops new design software tools. “This is an early foundation, and I would be excited to see successors that span multiple CAD systems, richer operations like assemblies and constraints, and more realistic, messy human workflows.”

A new take on carbon capture

Wed, 11/19/2025 - 12:00am

If there was one thing Cameron Halliday SM ’19, MBA ’22, PhD ’22 was exceptional at during the early days of his PhD at MIT, it was producing the same graph over and over again. Unfortunately for Halliday, the graph measured various materials’ ability to absorb CO2 at high temperatures over time — and it always pointed down and to the right. That meant the materials lost their ability to capture the molecules responsible for warming our climate.

At least Halliday wasn’t alone: For many years, researchers have tried and mostly failed to find materials that could reliably absorb CO2 at the super-high temperatures of industrial furnaces, kilns, and boilers. Halliday’s goal was to find something that lasted a little longer.

Then in 2019, he put a type of molten salt called lithium-sodium ortho-borate through his tests. The salts absorbed more than 95 percent of the CO2. And for the first time, the graph showed almost no degradation over 50 cycles.  The same was true after 100 cycles. Then 1,000.

“I honestly don’t know if we ever expected to completely solve the problem,” Halliday says. “We just expected to improve the system. It took another two months to figure out why it worked.”

The researchers discovered the salts behave like a liquid at high temperatures, which avoids the brittle cracking responsible for the degradation of many solid materials.

“I remember walking home over the Mass Ave bridge at 5 a.m. with all the morning runners going by me,” Halliday recalls. “That was the moment when I realized what this meant. Since then, it’s been about proving it works at larger scales. We’ve just been building the next scaled-up version, proving it still works, building a bigger version, proving that out, until we reach the ultimate goal of deploying this everywhere.”

Today, Halliday is the co-founder and CEO of Mantel, a company building systems to capture carbon dioxide at large industrial sites of all types. Although a lot of people think the carbon capture industry is a dead end, Halliday doesn’t give up so easily, and he’s got a growing body of performance data to keep him encouraged.

Mantel’s system can be added on to the machines of power stations and factories making cement, steel, paper and pulp, oil and gas, and more, reducing their carbon emissions by around 95 percent. Instead of being released into the atmosphere, the emitted CO2 is channeled into Mantel’s system, where the company’s salts are sprayed out from something that looks like a shower head. The CO2 diffuses through the molten salts in a reaction that can be reversed through further temperature increases, so the salts boil off pure CO2 that can be transported for use or stored underground.

A key difference from other carbon capture methods that have struggled to be profitable is that Mantel uses the heat from its process to generate steam for customers by combining it with water in another part of its system. Mantel says delivering steam, which is used to drive many common industrial processes, lets its system work with just 3 percent of the net energy that state-of-the-art carbon capture systems require.

“We’re still consuming energy, but we get most of it back as steam, whereas the incumbent technology only consumes steam,” says Halliday, who co-founded Mantel with Sean Robertson PhD ’22 and Danielle Rapson. “That steam is a useful revenue stream, so we can turn carbon capture from a waste management process into a value creation process for our customer’s core business — whether that’s a power station using steam to make electricity, or oil and gas refineries. It completely changes the economics of carbon capture.”

From science to startup

Halliday’s first exposure to MIT came in 2016 when he cold emailed Alan Hatton, MIT’s Ralph Landau Professor of Chemical Engineering Practice, asking if he could come to his lab for the summer and work on research into carbon capture.

“He invited me, but he didn’t put me on that project,” Halliday recalls. “At the end of the summer he said, ‘You should consider coming back and doing a PhD.’”

Halliday enrolled in a joint PhD-MBA program the following year.

“I really wanted to work on something that had an impact,” Halliday says. “The dual PhD-MBA program has some deep technical academic elements to it, but you also work with a company for two months, so you use a lot of what you learn in the real world.”

Halliday worked on three different research projects in Hatton’s lab early on, all of which eventually turned into companies. The one that he stuck with explored ways to make carbon capture more energy efficient by working at the high temperatures common at emissions-heavy industrial sites.

Halliday ran into the same problems as past researchers with materials degrading at such extreme conditions.

“It was the big limiter for the technology,” Halliday recalls.

Then Halliday ran his successful experiment with molten borate salts in 2019. The MBA portion of his program began soon after, and Halliday decided to use that time to commercialize the technology. Part of that occurred in Course 15.366 (Climate and Energy Ventures), where Halliday met his co-founders. As it happens, alumni of the class have started more than 150 companies over the years.

“MIT tries to pull these great ideas out of academia and get them into the world so they can be valued and used,” Halliday says. “For the Climate and Energy Ventures class, outside speakers showed us every stage of company-building. The technology roadmap for our system is shoebox-sized, shipping container, one-bedroom house, and then the size of a building. It was really valuable to see other companies and say, ‘That’s what we could look like in three years, or six years.’”

From startup to scale up

When Mantel was officially founded in 2022, the founders had their shoebox-sized system. After raising early funding, the team built its shipping container-sized system at The Engine, an MIT-affiliated startup incubator. That system has been operational for almost two years.

Last year, Mantel announced a partnership with Kruger Inc. to build the next version of its system at a factory in Quebec, which will be operational next year. The plant will run in a two-year test phase before scaling across Kruger’s other plants if successful.

“The Quebec project is proving the capture efficiency and proving the step-change improvement in energy use of our system,” Halliday says. “It’s a derisking of the technology that will unlock a lot more opportunities.”

Halliday says Mantel is in conversations with close to 100 industrial partners around the world, including the owners of refineries, data centers, cement and steel plants, and oil and gas companies. Because it’s a standalone addition, Halliday says Mantel’s system doesn’t have to change much to be used in different industries.

Mantel doesn’t handle CO2 conversion or sequestration, but Halliday says capture makes up the bulk of the costs in the CO2 value chain. It also generates high-quality CO2 that can be transported in pipelines and used in industries including the food and beverage industry — like the CO2 that makes your soda bubbly.

“This is the solution our customers are dreaming of,” Halliday says. “It means they don’t have to shut down their billion-dollar asset and reimagine their business to address an issue that they all appreciate is existential. There are questions about the timeline, but most industries recognize this is a problem they’ll have to grapple with eventually. This is a pragmatic solution that’s not trying to reshape the world as we dream of it. It’s looking at the problem at hand today and fixing it.”

An improved way to detach cells from culture surfaces

Tue, 11/18/2025 - 4:20pm

Anchorage-dependent cells are cells that require physical attachment to a solid surface, such as a culture dish, to survive, grow, and reproduce. In the biomedical and other industries, the ability to culture these cells is crucial, but current techniques used to separate cells from surfaces can induce stresses and reduce cell viability.

“In the pharmaceutical and biotechnology industries, cells are typically detached from culture surfaces using enzymes — a process fraught with challenges,” says Kripa Varanasi, MIT professor of mechanical engineering. “Enzymatic treatments can damage delicate cell membranes and surface proteins, particularly in primary cells, and often require multiple steps that make the workflow slow and labor-intensive.”

Existing approaches also rely on large volumes of consumables, generating an estimated 300 million liters of cell culture waste each year. Moreover, because these enzymes are often animal-derived, they can introduce compatibility concerns for cells intended for human therapies, limiting scalability and high-throughput applications in modern biomanufacturing.

Varanasi is corresponding author on a new paper in the journal ACS Nano, in which researchers from the MIT Department of Mechanical Engineering and the Cancer Program at the Broad Institute of Harvard and MIT present a novel enzyme-free strategy for detaching cells from culture surfaces. The method works by harnessing alternating electrochemical current on a conductive biocompatible polymer nanocomposite surface.

“By applying low-frequency alternating voltage, our platform disrupts adhesion within minutes while maintaining over 90 percent cell viability — overcoming the limitations of enzymatic and mechanical methods that can damage cells or generate excess waste,” says Varanasi.

Beyond simplifying routine cell culture, the approach could transform large-scale biomanufacturing by enabling automated and contamination-conscious workflows for cell therapies, tissue engineering, and regenerative medicine. The platform also provides a pathway for safely expanding and harvesting sensitive immune cells for applications such as CAR-T therapies.

“Because our electrically tunable interface can dynamically shape the ionic microenvironment around cells, it also offers powerful opportunities to control ion channels, study signaling pathways, and integrate with bioelectronic systems for high-throughput drug screening, regenerative medicine, and personalized therapies,” Varanasi explains.

“Our work shows how electrochemistry can be harnessed not just for scientific discovery, but also for scalable, real-world applications,” says Wang Hee (Wren) Lee, MIT postdoc and co-first author. “By translating electrochemical control into biomanufacturing, we’re laying the foundation for technologies that can accelerate automation, reduce waste, and ultimately enable new industries built on sustainable and precise processing.”

Bert Vandereydt, co-first author and mechanical engineering researcher at MIT, emphasizes the potential for industrial scalability. “Because this method can be applied uniformly across large areas, it’s ideal for high-throughput and large-scale applications like cell therapy manufacturing. We envision it enabling fully automated, closed-loop cell culture systems in the near future.”

Yuen-Yi (Moony) Tseng, principal investigator at the Broad Institute and collaborator on the project, underscores the biomedical significance. “This platform opens new doors for culturing and harvesting delicate primary or cancer cells. It could streamline workflows across research and clinical biomanufacturing, reducing variability and preserving cell functionality for therapeutic use.”

Industrial applications of adherent cells include uses in the biomedical, pharmaceutical, and cosmetic sectors. For this study, the team tested their new method using human cancer cells, including osteosarcoma and ovarian cancer cells. After identifying an optimal frequency, the detachment efficiency for both types of cells increased from 1 percent to 95 percent, with cell viability exceeding 90 percent.

The paper, “Alternating Electrochemical Redox-Cycling on Nanocomposite Biointerface for High-Efficiency Enzyme-Free Cell Detachment,” is available from the American Chemical Society journal ACS Nano. 

MIT Energy Initiative conference spotlights research priorities amidst a changing energy landscape

Tue, 11/18/2025 - 12:10pm

“We’re here to talk about really substantive changes, and we want you to be a participant in that,” said Desirée Plata, the School of Engineering Distinguished Professor of Climate and Energy in MIT’s Department of Civil and Environmental Engineering, at Energizing@MIT: the MIT Energy Initiative’s (MITEI) Annual Research Conference that was held on Sept. 9-10.

Plata’s words resonated with the 150-plus participants from academia, industry, and government meeting in Cambridge for the conference, whose theme was “tackling emerging energy challenges.” Meeting such challenges and ultimately altering the trajectory of global climate outcomes requires partnerships, speakers agreed.

“We have to be humble and open,” said Giacomo Silvestri, chair of Eniverse Ventures at Eni, in a shared keynote address. “We cannot develop innovation just focusing on ourselves and our competencies … so we need to partner with startups, venture funds, universities like MIT and other public and private institutions.” 

Added his Eni colleague, Annalisa Muccioli, head of research and technology, “The energy transition is a race we can win only by combining mature solutions ready to deploy, together with emerging technologies that still require acceleration and risk management.”

Research targets

In a conference that showcased a suite of research priorities MITEI has identified as central to ensuring a low-carbon energy future, participants shared both promising discoveries and strategies for advancing proven technologies in the face of shifting political winds and policy uncertainties.

One panel focused on grid resiliency — a topic that has moved from the periphery to the center of energy discourse as climate-driven disruptions, cyber threats, and the integration of renewables challenge legacy systems. A dramatic case in point: the April 2025 outage in Spain and Portugal that left millions without power for eight to 15 hours. 

“I want to emphasize that this failure was about more than the power system,” said MITEI research scientist Pablo Duenas-Martinez. While he pinpointed technical problems with reactive power and voltage control behind the system collapse, Duenas-Martinez also called out a lack of transmission capacity with Central Europe and out-of-date operating procedures, and recommended better preparation and communication among transmission systems and utility operators.

“You can’t plan for every single eventuality, which means we need to broaden the portfolio of extreme events we prepare for,” noted Jennifer Pearce, vice president at energy company Avangrid. “We are making the system smarter, stronger, and more resilient to better protect from a wide range of threats such as storms, flooding, and extreme heat events.” Pearce noted that Avangrid’s commitment to deliver safe, reliable power to its customers necessitates “meticulous emergency planning procedures.”

The resiliency of the electric grid under greatly increased demand is an important motivation behind MITEI’s September 2025 launch of the Data Center Power Forum, which was also announced during the annual research conference. The forum will include research projects, webinars, and other content focused on energy supply and storage, grid design and management, infrastructure, and public and economic policy related to data centers. The forum’s members include MITEI companies that also participate in MIT’s Center for Environmental and Energy Policy Research (CEEPR).

Storage and transportation: Staggering challenges

Meeting climate goals to decarbonize the world by 2050 requires building around 300 terawatt-hours of storage, according to Asegun Henry, a professor in the MIT Department of Mechanical Engineering. “It’s an unbelievably enormous problem people have to wrap their minds around,” he said. Henry has been developing a high-temperature thermal energy storage system he has nicknamed “sun in a box.” His system uses liquid metal and graphite to hold electricity as heat and then convert it back to electricity, enabling storage anywhere from five to 500 hours.
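The "five to 500 hours" figure reflects simple arithmetic: a storage system's duration is its energy capacity divided by its discharge power. The sketch below illustrates that relationship with made-up numbers; the specific capacity and power values are assumptions for the example, not specifications of Henry's system.

```python
# Illustrative arithmetic only: storage duration = energy capacity / discharge power.

def storage_duration_hours(capacity_mwh: float, discharge_mw: float) -> float:
    """Hours a store can sustain a given discharge rate (ignoring losses)."""
    return capacity_mwh / discharge_mw

# A hypothetical 1,000 MWh thermal store discharged at 10 MW lasts 100 hours,
# well inside the 5-to-500-hour range mentioned above; discharging the same
# store at 200 MW would empty it in 5 hours.
print(storage_duration_hours(1000, 10))   # 100.0
print(storage_duration_hours(1000, 200))  # 5.0
```

The same capacity can therefore serve as short- or long-duration storage depending on the power at which it is drawn down, which is one reason different services favor different technologies.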

“At the end of the day, storage provides a service, and the type of technology that you need is a function of the service that you value the most,” said Nestor Sepulveda, commercial lead for advanced energy investments and partnerships at Google. “I don't think there is one winner-takes-all type of market here.”

Another panel explored sustainable fuels that could help decarbonize hard-to-electrify sectors like aviation, shipping, and long-haul trucking. Randall Field, MITEI’s director of research, noted that sustainably produced drop-in fuels — fuels that are largely compatible with existing engines — “could eliminate potentially trillions of dollars of cost for fleet replacement and for infrastructure build-out, while also helping us to accelerate the rate of decarbonization of the transportation sectors."

Erik G. Birkerts is the chief growth officer of LanzaJet, which produces a drop-in, high-energy-density aviation fuel derived from agricultural residue and other waste carbon sources. “The key to driving broad sustainable aviation fuel adoption is solving both the supply-side challenge through more production and the demand-side hurdle by reducing costs,” he said.

“We think a good policy framework [for sustainable fuels] would be something that is technology-neutral, does not exclude any pathways to produce, is based on life cycle accounting practices, and on market mechanisms,” said Veronica L. Robertson, energy products technology portfolio manager at ExxonMobil.

MITEI plans a major expansion of its research on sustainable fuels, announcing a two-year study, “The future of fuels: Pathways to sustainable transportation,” starting in early 2026. According to Field, the study will analyze and assess biofuels and e-fuels.

Solutions from labs big and small

Global energy leaders offered glimpses of their research projects. A panel on carbon capture in power generation featured three takes on the topic: Devin Shaw, commercial director of decarbonization technologies at Shell, described post-combustion carbon capture in power plants using steam for heat recovery; Jan Marsh, a global program lead at Siemens Energy, discussed deploying novel materials to capture carbon dioxide directly from the air; and Jeffrey Goldmeer, senior director of technology strategy at GE Vernova, explained integrating carbon capture into gas-powered turbine systems.

During a panel on vehicle electrification, Brian Storey, vice president of energy and materials at the Toyota Research Institute, provided an overview of Toyota’s portfolio of projects for decarbonization, including solid-state batteries, flexible manufacturing lines, and grid-forming inverters to support EV charging infrastructure.

A session on MITEI seed fund projects revealed promising early-stage research inside MIT’s own labs. A new process for decarbonizing the production of ethylene was presented by Yogesh Surendranath, Donner Professor of Science in the MIT Department of Chemistry. Materials Science and Engineering assistant professor Aristide Gumyusenge also discussed the development of polymers essential for a new kind of sodium-ion battery.

Shepherding bold, new technologies like these from academic labs into the real world cannot succeed without ample support and deft management. A panel on paths to commercialization featured the work of Iwnetim Abate, Chipman Career Development Professor and assistant professor in the MIT Department of Materials Science and Engineering, who has spun out a company, Addis Energy, based on a novel geothermal process for harvesting clean hydrogen and ammonia from subsurface, iron-rich rocks. Among his funders: ARPA-E and MIT’s own The Engine Ventures.

The panel also highlighted the MIT Proto Ventures Program, an initiative to seize early-stage MIT ideas and unleash them as world-changing startups. “A mere 4.2 percent of all the patents that are actually prosecuted in the world are ever commercialized, which seems like a shocking number,” said Andrew Inglis, an entrepreneur working with Proto Ventures to translate geothermal discoveries into businesses. “Can’t we do this better? Let’s do this better!”

Geopolitical hazards

Throughout the conference, participants often voiced concern about the impacts of competition between the United States and China. Kelly Sims Gallagher, dean of the Fletcher School at Tufts University and an expert on China’s energy landscape, delivered the sobering news in her keynote address: “U.S. competitiveness in low-carbon technologies has eroded in nearly every category,” she said. “The Chinese are winning the clean tech race.”

China enjoys a 51 percent share in global wind turbine manufacture and 75 percent in solar modules. It also controls low-carbon supply chains that much of the world depends on. “China is getting so dominant that nobody can carve out a comparative advantage in anything,” said Gallagher. “China is just so big, and the scale is so huge that the Chinese can truly conquer markets and make it very hard for potential competitors to find a way in.”

And for the United States, the problem is “the seesaw of energy policy,” she said. “It’s incredibly difficult for the private sector to plan and to operate, given the lack of predictability and policy here.”

Nevertheless, Gallagher believes the United States still has a chance of regaining competitiveness by setting up a stable, bipartisan energy policy; rebuilding domestic manufacturing and supply chains; providing consistent fiscal incentives; attracting and retaining global talent; and fostering international collaboration.

The conference shone a light on one such collaboration: a China-U.S. joint venture to manufacture lithium iron phosphate batteries for commercial vehicles in the United States. The venture brings together Eve Energy, a Chinese battery technology and manufacturing company; Daimler, a global commercial vehicle manufacturer; PACCAR Inc., a U.S.-based truck manufacturer; and Accelera, the zero-emissions business of Cummins Inc. “Manufacturing batteries in the U.S. makes the supply chain more robust and reduces geopolitical risks,” said Mike Gerty, of PACCAR.

While she acknowledged the obstacles confronting her colleagues in the room, Plata nevertheless concluded her remarks as a panel moderator with some optimism: “I hope you all leave this conference and look back on it in the future, saying I was in the room when they actually solved some of the challenges standing between now and the future that we all wish to manifest.”

Introducing the MIT-GE Vernova Energy and Climate Alliance

Tue, 11/18/2025 - 11:50am

MIT and GE Vernova launched the MIT-GE Vernova Energy and Climate Alliance on Sept. 15, a collaboration to advance research and education focused on accelerating the global energy transition.

Through the alliance — an industry-academia initiative conceived by MIT Provost Anantha Chandrakasan and GE Vernova CEO Scott Strazik — GE Vernova has committed $50 million over five years in the form of sponsored research projects and philanthropic funding for research, graduate student fellowships, internships, and experiential learning, as well as professional development programs for GE Vernova leaders.

“MIT has a long history of impactful collaborations with industry, and the collaboration between MIT and GE Vernova is a shining example of that legacy,” said Chandrakasan in opening remarks at a launch event. “Together, we are working on energy and climate solutions through interdisciplinary research and diverse perspectives, while providing MIT students the benefit of real-world insights from an industry leader positioned to bring those ideas into the world at scale.”

The energy of change

An independent company since its spinoff from GE in April 2024, GE Vernova is focused on accelerating the global energy transition. The company’s technologies help generate approximately 25 percent of the world’s electricity, drawing on the world’s largest installed base of over 7,000 gas turbines, about 57,000 wind turbines, and leading-edge electrification technology.

GE Vernova’s slogan, “The Energy of Change,” is reflected in decisions such as locating its headquarters in Cambridge, Massachusetts — in close proximity to MIT. In pursuing transformative approaches to the energy transition, the company has identified MIT as a key collaborator.

A key component of the mission to electrify and decarbonize the world is collaboration, according to CEO Scott Strazik. “We want to inspire, and be inspired by, students as we work together on our generation’s greatest challenge, climate change. We have great ambition for what we want the world to become, but we need collaborators. And we need folks that want to iterate with us on what the world should be from here.”

Representing the Healey-Driscoll administration at the launch event were Massachusetts Secretary of Energy and Environmental Affairs Rebecca Tepper and Secretary of the Executive Office of Economic Development Eric Paley. Secretary Tepper highlighted the Mass Leads Act, a $1 billion climate tech and life sciences initiative enacted by Governor Maura Healey last November to strengthen Massachusetts’ leadership in climate tech and AI.

“We’re harnessing every part of the state, from hydropower manufacturing facilities to the blue economy on our South Coast, and right here at the center of our colleges and universities. We want to invent and scale the solutions to climate change in our own backyard,” said Tepper. “That’s been the Massachusetts way for decades.”

Real-world problems, insights, and solutions

The launch celebration featured interactive science displays and student presenters introducing the first round of 13 research projects led by MIT faculty. These projects focus on generating scalable solutions to our most pressing challenges in the areas of electrification, decarbonization, renewables acceleration, and digital solutions.

Collaborating with industry offers the opportunity for researchers and students to address real-world problems informed by practical insights. The diverse, interdisciplinary perspectives from both industry and academia will significantly strengthen the research supported through the GE Vernova Fellowships announced at the launch event.

“I’m excited to talk to the industry experts at GE Vernova about the problems that they work on,” said GE Vernova Fellow Aaron Langham. “I’m looking forward to learning more about how real people and industries use electrical power.”

Fellow Julia Estrin echoed a similar sentiment: “I see this as a chance to connect fundamental research with practical applications — using insights from industry to shape innovative solutions in the lab that can have a meaningful impact at scale.”

GE Vernova’s commitment to research is also providing support and inspiration for fellows. “This level of substantive enthusiasm for new ideas and technology is what comes from a company that not only looks toward the future, but also has the resources and determination to innovate impactfully,” says Owen Mylotte, a GE Vernova Fellow.

The inaugural cohort of eight fellows will continue their research at MIT with tuition support from GE Vernova.

Pipeline of future energy leaders

Highlighting the alliance’s emphasis on cultivating student talent and leadership, GE Vernova CEO Scott Strazik introduced four MIT alumni who are now leaders at GE Vernova: Dhanush Mariappan SM ’03, PhD ’19, senior engineering manager in the GE Vernova Advanced Research Center; Brent Brunell SM ’00, technology director in the Advanced Research Center; Paolo Marone MBA ’21, CFO of wind; and Grace Caza MAP ’22, chief of staff in supply chain and operations.

The four shared their experiences of working with MIT as students and their hopes for the future of this alliance in the realm of “people development,” as Mariappan highlighted. “Energy transition means leaders. And every one of the innovative research and professional education programs that will come out of this alliance is going to produce the leaders of the energy transition industry.”

The alliance is underscoring its commitment to developing future energy leaders by supporting the New Engineering Education Transformation program (NEET) and expanding opportunities for student internships. With 100 new internships for MIT students announced in the days following the launch, GE Vernova is opening broad opportunities for MIT students at all levels to contribute to a sustainable future.

“GE Vernova has been a tremendous collaborator every step of the way, with a clear vision of the technical breakthroughs we need to effect change at scale and a deep respect for MIT’s strengths and culture, as well as a hunger to listen and learn from us as well,” said Betar Gallant, alliance director who is also the Kendall Rohsenow Associate Professor of Mechanical Engineering at MIT. “Students, take this opportunity to learn, connect, and appreciate how much you’re valued, and how bright your futures are in this area of decarbonizing our energy systems. Your ideas and insight are going to help us determine and drive what’s next.”

Daring to create the future we want

The launch event transformed MIT’s Lobby 13 with green lighting and animated conversation around the posters and hardware demos on display, reflecting the sense of optimism for the future and the type of change the alliance — and the Commonwealth of Massachusetts — seeks to advance.

“Because of this collaboration and the commitment to the work that needs doing, many things will be created,” said Secretary Paley. “People in this room will work together on all kinds of projects that will do incredible things for our economy, for our innovation, for our country, and for our climate.”

The alliance builds on MIT’s growing portfolio of initiatives around sustainable energy systems, including the Climate Project at MIT, a presidential initiative focused on developing solutions to some of the toughest barriers to an effective global climate response. “This new alliance is a significant opportunity to move the needle of energy and climate research as we dare to create the future that we want, with the promise of impactful solutions for the world,” said Evelyn Wang, MIT vice president for energy and climate, who attended the launch.

To that end, the alliance is supporting critical cross-institution efforts in energy and climate policy, including funding three master’s students in MIT’s Technology and Policy Program and hosting an annual symposium in February 2026 to advance interdisciplinary research. GE Vernova is also providing philanthropic support to the MIT Human Insight Collaborative. For 2025-26, this support will help address global energy poverty by backing the MIT Abdul Latif Jameel Poverty Action Lab (J-PAL) in its work to expand access to affordable electricity in South Africa.

“Our hope to our fellows, our hope to our students is this: While the stakes are high and the urgency has never been higher, the impact that you are going to have over the decades to come has never been greater,” said Roger Martella, chief corporate and sustainability officer at GE Vernova. “You have so much opportunity to move the world in a better direction. We need you to succeed. And our mission is to serve you and enable your success.”

With the alliance’s launch — and GE Vernova’s new membership in several other MIT consortium programs related to sustainability, automation and robotics, and AI, including the Initiative for New Manufacturing, MIT Energy Initiative, MIT Climate and Sustainability Consortium, and Center for Transportation and Logistics — it’s evident why Betar Gallant says the company is “all-in at MIT.”

The potential for tremendous impact on the energy industry is clear to those involved in the alliance. As GE Vernova Fellow Jack Morris said at the launch, “This is the beginning of something big.”

MIT researchers use CT scans to unravel mysteries of early metal production

Tue, 11/18/2025 - 10:00am

Around 5,000 years ago, people living in what is now Iran began extracting copper from rock by processing ore, an activity known as smelting. This monumental shift gave them a powerful new technology and may have marked the birth of metallurgy. Soon after, people in different parts of the world were using copper and bronzes (alloys of copper and tin, or copper and arsenic) to produce decorative objects, weapons, tools, and more.

Studying how humans produced such objects is challenging because little evidence still exists, and artifacts that have survived are carefully guarded and preserved.

In a paper published in PLOS One, MIT researchers demonstrated a new approach to uncovering details of some of the earliest metallurgical processes. They studied 5,000-year-old slag waste, a byproduct of smelting ore, using techniques including X-ray computed tomography, also known as CT scanning. In their paper, they show how this noninvasive imaging technique, which has primarily been used in the medical field, can reveal fine details about structures within the pieces of ancient slag.

“Even though slag might not give us the complete picture, it tells stories of how past civilizations were able to refine raw materials, from ore to metal,” says postdoc Benjamin Sabatini. “It speaks to their technological ability at that time, and it gives us a lot of information. The goal is to understand, from start to finish, how they accomplished making these shiny metal products.”

In the paper, Sabatini and senior author Antoine Allanore, a professor of metallurgy and the Heather N. Lechtman Professor of Materials Science and Engineering, combined CT scanning with more traditional methods of studying ancient artifacts, including cutting the samples for further analysis. They demonstrated that CT scanning could be used to complement those techniques, revealing pores and droplets of different materials within samples. This information could shed light on the materials used by some of the first metallurgists on Earth, and on their technological sophistication.

“The Early Bronze Age is one of the earliest reported interactions between mankind and metals,” says Allanore, who is also director of MIT’s Center for Materials Research in Archaeology and Ethnology. “Artifacts in that region at that period are extremely important in archaeology, yet the materials themselves are not very well-characterized in terms of our understanding of the underlying materials and chemical processes. The CT scan approach is a transformation of traditional archaeological methods of determining how to make cuts and analyze samples.”

A new tool in archaeology

Slag is produced as a molten liquid when ores, which are commonly mixed with additives like limestone, are heated to produce metal. The slag contains the ore’s other constituent minerals, as well as unreacted metals. Because the slag is less dense than the metal, it rises to the top and can be removed, solidifying like lava as it cools.

“Slag waste is chemically complex to interpret because in our modern metallurgical practices it contains everything not desired in the final product — in particular, arsenic, which is a key element in the original minerals for copper,” says Allanore. “There’s always been a question in archaeometallurgy of whether we can use arsenic and similar elements in these remains to learn something about the metal production process. The challenge here is that these minerals, especially arsenic, are very prone to dissolution and leaching, and therefore their environmental instability creates additional problems in terms of interpreting what this object was when it was being made 6,000 years ago.”

For the study, the researchers used slag from an ancient site known as Tepe Hissar in Iran. The slag has previously been dated to the period between 3100 and 2900 BCE and was loaned by the Penn Museum to Allanore for study in 2022.

“This region is often brought up as one of the earliest places where evidence of copper processing and object production might have happened,” Allanore explains. “It is very well-preserved, and it’s an early example of a site with long-distance trade and highly organized society. That’s why it’s so important in metallurgy.”

The researchers believe this is the first attempt to study ancient slag using CT scanning, partly because medical-grade scanners are expensive and primarily located in hospitals. The researchers overcame these challenges by working with a local startup in Cambridge that makes industrial CT scanners. They also used the CT scanner on MIT’s campus.

“It was really out of curiosity to see if there was a better way to study these objects,” Sabatini says.

In addition to the CT scans, the researchers used more conventional archaeological analytical methods such as X-ray fluorescence, X-ray diffraction, and optical and scanning electron microscopy. The CT scans provided a detailed overall picture of the internal structure of the slag and the location of interesting features like pores and bits of different materials, augmenting the conventional techniques to impart more complete information about the inside of samples.

They used that information to decide where to section their sample, noting that researchers often guess where to section samples, unsure even which side of the sample was originally facing up or down.

“My strategy was to zero in on the high-density metal droplets that looked like they were still intact, since those might be most representative of the original process,” Sabatini says. “Then I could destructively analyze the samples with a single slice. The CT scanning shows you exactly what is most interesting, as well as the general layout of things you need to study.”

Finding stories in slag

In previous studies, some slag samples from the Tepe Hissar site contained copper and thus seemed to fit the narrative that they resulted from the production of copper, while others showed no evidence of copper at all.

The researchers found that CT scanning allowed them to characterize the intact droplets that contained copper. It also allowed them to identify where gases evolved, forming voids that hold information about how the slags were produced.

Other slags at the site had previously been found to contain small metallic arsenide compounds, leading to disagreements about the role of arsenic in early metal production. The MIT researchers found that arsenic existed in different phases across their samples and could move within the slag or even escape the slag entirely, making it complicated to infer metallurgical processes from the study of arsenic alone.

Moving forward, the researchers say CT scanning could be a powerful tool in archaeology to unravel complex ancient materials and processes.

“This should be an important lever for more systematic studies of the copper aspect of smelting, and also for continuing to understand the role of arsenic,” Allanore says. “It allows us to be cognizant of the role of corrosion and the long-term stability of the artifacts to continue to learn more. It will be a key support for people who want to investigate these questions.”

This work was supported, in part, by the MIT Human Insight Collaborative (MITHIC).

Ultrasonic device dramatically speeds harvesting of water from the air

Tue, 11/18/2025 - 5:00am

Feeling thirsty? Why not tap into the air? Even in desert conditions, there exists some level of humidity that, with the right material, can be soaked up and squeezed out to produce clean drinking water. In recent years, scientists have developed a host of promising sponge-like materials for this “atmospheric water harvesting.”

But recovering the water from these materials usually requires heat — and time. Existing designs rely on heat from the sun to evaporate water from the materials and condense it into droplets. But this step can take hours or even days. 

Now, MIT engineers have come up with a way to quickly recover water from an atmospheric water harvesting material. Rather than wait for the sun to evaporate water out, the team uses ultrasonic waves to shake the water out.

The researchers have developed an ultrasonic device that vibrates at high frequency. When a water-harvesting material, known as a “sorbent,” is placed on the device, the device emits ultrasound waves that are tuned to shake water molecules out of the sorbent. The team found that the device recovers water in minutes, versus the tens of minutes or hours required by thermal designs.

Unlike heat-based designs, the device does require a power source. The team envisions that the device could be powered by a small solar cell, which could also act as a sensor to detect when the sorbent is full. It could also be programmed to automatically turn on whenever a material has harvested enough moisture to be extracted. In this way, a system could soak up and shake out water from the air over many cycles in a single day.

“People have been looking for ways to harvest water from the atmosphere, which could be a big source of water particularly for desert regions and places where there is not even saltwater to desalinate,” says Svetlana Boriskina, principal research scientist in MIT’s Department of Mechanical Engineering. “Now we have a way to recover water quickly and efficiently.”

Boriskina and her colleagues report on their new device in a study appearing today in the journal Nature Communications. The study’s first author is Ikra Iftekhar Shuvo, an MIT graduate student in media arts and sciences; co-authors include Carlos Díaz-Marín, Marvin Christen, Michael Lherbette, and Christopher Liem.

Precious hours

Boriskina’s group at MIT develops materials that interact with the environment in novel ways. Recently, her group explored atmospheric water harvesting (AWH), and ways that materials can be designed to efficiently absorb water from the air. The hope is that, if they can work reliably, AWH systems would be of most benefit to communities where traditional sources of drinking water — and even saltwater — are scarce.

Like other groups, Boriskina’s lab had generally assumed that an AWH system in the field would absorb moisture during the night, and then use the heat from the sun during the day to naturally evaporate the water and condense it for collection.

“Any material that’s very good at capturing water doesn’t want to part with that water,” Boriskina explains. “So you need to put a lot of energy and precious hours into pulling water out of the material.”

She realized there could be a faster way to recover water after Ikra Shuvo joined her group. Shuvo had been working with ultrasound for wearable medical device applications. When he and Boriskina considered ideas for new projects, they realized that ultrasound could be a way to speed up the recovery step in atmospheric water harvesting.

“It clicked: We have this big problem we’re trying to solve, and now Ikra seemed to have a tool that can be used to solve this problem,” Boriskina recalls.

Water dance

Ultrasonic waves are acoustic pressure waves that travel at frequencies above 20 kilohertz (20,000 cycles per second), too high-pitched for humans to hear. And, as the team found, ultrasound vibrates at just the right frequency to shake water out of a material.

“With ultrasound, we can precisely break the weak bonds between water molecules and the sites where they’re sitting,” Shuvo says. “It’s like the water is dancing with the waves, and this targeted disturbance creates momentum that releases the water molecules, and we can see them shake out in droplets.”

Shuvo and Boriskina designed a new ultrasonic actuator to recover water from an atmospheric water harvesting material. The heart of the device is a flat ceramic ring that vibrates when voltage is applied. This ring is surrounded by an outer ring that is studded with tiny nozzles. Water droplets that shake out of a material can drop through the nozzle and into collection vessels attached above and below the vibrating ring.

They tested the device on a previously designed atmospheric water harvesting material. Using quarter-sized samples of the material, the team first placed each sample in a humidity chamber, set to various humidity levels. Over time, the samples absorbed moisture and became saturated. The researchers then placed each sample on the ultrasonic actuator and powered it on to vibrate at ultrasonic frequencies. In all cases, the device was able to shake out enough water to dry out each sample in just a few minutes.

The researchers calculate that, compared to using heat from the sun, the ultrasonic design is 45 times more efficient at extracting water from the same material.

“The beauty of this device is that it’s completely complementary and can be an add-on to almost any sorbent material,” says Boriskina, who envisions that a practical household system might consist of a fast-absorbing material and an ultrasonic actuator, each about the size of a window. Once the material is saturated, the actuator would briefly turn on, powered by a solar cell, to shake out the water. The material would then be ready to harvest more water, in multiple cycles throughout a single day.

“It’s all about how much water you can extract per day,” she says. “With ultrasound, we can recover water quickly, and cycle again and again. That can add up to a lot per day.”

This work was supported, in part, by the MIT Abdul Latif Jameel Water and Food Systems Lab and the MIT-Israel Zuckerman STEM Fund.

This work was carried out in part by using MIT.nano and ISN facilities at MIT.

Bigger datasets aren’t always better

Tue, 11/18/2025 - 12:00am

Determining the least expensive path for a new subway line underneath a metropolis like New York City is a colossal planning challenge — involving thousands of potential routes through hundreds of city blocks, each with uncertain construction costs. Conventional wisdom suggests extensive field studies across many locations would be needed to determine the costs associated with digging below certain city blocks.

Because these studies are costly to conduct, a city planner would want to perform as few as possible while still gathering the most useful data for making an optimal decision.

With almost countless possibilities, how would they know where to start?

A new algorithmic method developed by MIT researchers could help. Their mathematical framework provably identifies the smallest dataset that guarantees finding the optimal solution to a problem, often requiring fewer measurements than traditional approaches suggest.

In the case of the subway route, this method considers the structure of the problem (the network of city blocks, construction constraints, and budget limits) and the uncertainty surrounding costs. The algorithm then identifies the minimum set of locations where field studies would guarantee finding the least expensive route. The method also identifies how to use this strategically collected data to find the optimal decision.

This framework applies to a broad class of structured decision-making problems under uncertainty, such as supply chain management or electricity network optimization.

“Data are one of the most important aspects of the AI economy. Models are trained on more and more data, consuming enormous computational resources. But most real-world problems have structure that can be exploited. We’ve shown that with careful selection, you can guarantee optimal solutions with a small dataset, and we provide a method to identify exactly which data you need,” says Asu Ozdaglar, Mathworks Professor and head of the MIT Department of Electrical Engineering and Computer Science (EECS), deputy dean of the MIT Schwarzman College of Computing, and a principal investigator in the Laboratory for Information and Decision Systems (LIDS).

Ozdaglar, co-senior author of a paper on this research, is joined by co-lead authors Omar Bennouna, an EECS graduate student, and his brother Amine Bennouna, a former MIT postdoc who is now an assistant professor at Northwestern University; and co-senior author Saurabh Amin, co-director of the Operations Research Center, a professor in the MIT Department of Civil and Environmental Engineering, and a principal investigator in LIDS. The research will be presented at the Conference on Neural Information Processing Systems.

An optimality guarantee

Much of the recent work in operations research focuses on how to best use data to make decisions, but this assumes these data already exist.

The MIT researchers started by asking a different question — what are the minimum data needed to optimally solve a problem? With this knowledge, one could collect far fewer data to find the best solution, spending less time, money, and energy conducting experiments and training AI models.

The researchers first developed a precise geometric and mathematical characterization of what it means for a dataset to be sufficient. Every possible set of costs (travel times, construction expenses, energy prices) makes some particular decision optimal. These “optimality regions” partition the decision space. A dataset is sufficient if it can determine which region contains the true cost.

This characterization forms the foundation of the practical algorithm they developed, which identifies datasets that guarantee finding the optimal solution.

Their theoretical exploration revealed that a small, carefully selected dataset is often all one needs.

“When we say a dataset is sufficient, we mean that it contains exactly the information needed to solve the problem. You don’t need to estimate all the parameters accurately; you just need data that can discriminate between competing optimal solutions,” says Amine Bennouna.

Building on these mathematical foundations, the researchers developed an algorithm that finds the smallest sufficient dataset.

Capturing the right data

To use this tool, one inputs the structure of the task, such as the objective and constraints, along with the information they know about the problem.

For instance, in supply chain management, the task might be to reduce operational costs across a network of dozens of potential routes. The company may already know that some shipment routes are especially costly, but lack complete information on others.

The researchers’ iterative algorithm works by repeatedly asking, “Is there any scenario that would change the optimal decision in a way my current data can't detect?” If yes, it adds a measurement that captures that difference. If no, the dataset is provably sufficient.

This algorithm pinpoints the subset of locations that need to be explored to guarantee finding the minimum-cost solution.

Then, after collecting those data, the user can feed them to another algorithm the researchers developed, which finds the optimal solution. In this case, that would be the shipment routes to include in a cost-optimal supply chain.
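The loop described above can be sketched in miniature (hypothetical code, not the paper's algorithm): each unknown cost lies in an interval, and we repeatedly check whether two different decisions could each be optimal under costs consistent with current knowledge. If so, we "measure" one more cost and check again; the interval-tightening rule here is a simple stand-in for the paper's principled choice of the next measurement.

```python
from itertools import product

def best_decision(costs, decisions):
    """Each decision is a tuple of route indices; its cost is the sum of its routes."""
    return min(decisions, key=lambda d: sum(costs[i] for i in d))

def sufficient(intervals, decisions):
    """True if every cost scenario in the box of intervals yields the same
    optimal decision (checking box corners suffices for this linear toy)."""
    winners = {best_decision(c, decisions) for c in product(*intervals)}
    return len(winners) == 1

def collect_until_sufficient(intervals, decisions, true_costs):
    """Tighten one cost interval at a time until the dataset is sufficient,
    then return which costs were measured and the resulting optimal decision."""
    intervals = [list(iv) for iv in intervals]
    measured = []
    while not sufficient(intervals, decisions):
        # measure the widest remaining interval (a simple heuristic)
        i = max(range(len(intervals)),
                key=lambda j: intervals[j][1] - intervals[j][0])
        intervals[i] = [true_costs[i], true_costs[i]]
        measured.append(i)
    return measured, best_decision(true_costs, decisions)

# Three candidate routes with uncertain costs: measuring route 0 alone
# already settles the decision -- the other two intervals never matter.
measured, best = collect_until_sufficient(
    [(1.0, 10.0), (4.0, 5.0), (8.0, 9.0)],
    [(0,), (1,), (2,)],
    (2.0, 4.5, 8.5))
print(measured, best)
```

In the example, one measurement out of three unknowns certifies the optimal route, mirroring the article's point that a small, carefully chosen dataset can carry an exact guarantee.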

“The algorithm guarantees that, for whatever scenario could occur within your uncertainty, you’ll identify the best decision,” Omar Bennouna says.

The researchers’ evaluations revealed that, using this method, it is possible to guarantee an optimal decision with a much smaller dataset than would typically be collected.

“We challenge this misconception that small data means approximate solutions. These are exact sufficiency results with mathematical proofs. We’ve identified when you’re guaranteed to get the optimal solution with very little data — not probably, but with certainty,” Amin says.

In the future, the researchers want to extend their framework to other types of problems and more complex situations. They also want to study how noisy observations could affect dataset optimality.

“I was impressed by the work’s originality, clarity, and elegant geometric characterization. Their framework offers a fresh optimization perspective on data efficiency in decision-making,” says Yao Xie, the Coca-Cola Foundation Chair and Professor at Georgia Tech, who was not involved with this work.

Small, inexpensive hydrophone boosts undersea signals

Mon, 11/17/2025 - 5:00pm

Researchers at MIT Lincoln Laboratory have developed a first-of-its-kind hydrophone built around a simple, commercially available microphone. The device, leveraging a common microfabrication process known as microelectromechanical systems (MEMS), is significantly smaller and less expensive than current hydrophones, yet offers equal or greater sensitivity. The hydrophone could have applications for the U.S. Navy, as well as industry and the scientific research community.

"Given the broad interest from the Navy in low-cost hydrophones, we were surprised that this design had not been pursued before," says Daniel Freeman, who leads this work in the Advanced Materials and Microsystems Group. "Hydrophones are critical for undersea sensing in a variety of applications and platforms. Our goal was to demonstrate that we could develop a device at reduced size and cost without sacrificing performance."

Essentially an underwater microphone, a hydrophone is an instrument that converts sound waves into electrical signals, allowing us to "hear" and record sounds in the ocean and other bodies of water. These signals can later be analyzed and interpreted, providing valuable information about the underwater environment.

MEMS devices are incredibly small systems — ranging from a few millimeters down to microns (smaller than a human hair) — with tiny moving parts. They are used in a variety of sensors, including microphones, gyroscopes, and accelerometers. The small size of MEMS sensors has made them crucial in various applications, from smartphones to medical devices. Currently, no commercially available hydrophones utilize MEMS technology, so the team set out to understand whether such a design was possible.

With funding from the Office of the Under Secretary of War for Research and Engineering to develop a novel hydrophone, the team first planned to use microfabrication, an area of expertise at the laboratory, to develop their device. However, that approach proved to be too costly and involved to pursue. This obstacle led the team to pivot and build their hydrophone around a commercially available MEMS microphone. "We had to come up with an inexpensive alternative without giving up performance, and this is what led us to build the design around a microphone, which to our knowledge is a novel approach," Freeman explains.

In collaboration with researchers at Tufts University, as well as industry partners SeaLandAire Technologies and Navmar Applied Sciences Corp., the team made the hydrophone by encapsulating the MEMS microphone in a polymer with low permeability to water while leaving an air cavity around the microphone’s diaphragm (the component of the microphone that vibrates in response to sound waves). One key challenge that they faced was the possibility of losing too much signal to the packaging and the air cavity around the MEMS microphone. After a substantial amount of simulation, design iterations, and testing, the team found that the signal lost from incorporating air into the device was compensated for by the very high sensitivity of the MEMS microphone itself. As a result, the device was able to perform at a sensitivity comparable to high-end hydrophones at depths down to 400 feet and temperatures as low as 40 degrees Fahrenheit. To date, the collaborative effort has involved computational modeling, system electronics design and fabrication, prototype unit manufacturing, and calibrator and pool testing.

In July, eight researchers traveled to Seneca Lake in New York to test a variety of devices. The hydrophones were lowered to increasing depths in the water — 100 feet at first, then incrementally deeper, down to 400 feet. At each depth, acoustic signals of varying frequencies were transmitted for the instrument to record. The transmitted signals were calibrated to a known level so the team could then measure the actual sensitivity of the hydrophones across different frequencies. When sound hits the hydrophone’s diaphragm, it generates an electrical signal that is amplified, digitized, and transmitted to a recording device at the surface for post-test data analysis. The team utilized both commercial underwater cables and Lincoln Laboratory’s fiber-based sensing arrays.
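The arithmetic behind such a calibration test can be sketched as follows (an illustrative example with made-up numbers, not the laboratory's procedure): given a source of known level, the level reaching the hydrophone follows from standard spreading loss, and the hydrophone's sensitivity follows from the voltage it outputs at that level.

```python
import math

def received_level_db(source_level_db, range_m):
    """Received sound pressure level (dB re 1 uPa), assuming simple
    spherical spreading loss of 20*log10(range in meters)."""
    return source_level_db - 20 * math.log10(range_m)

def sensitivity_db(v_rms, received_db):
    """Hydrophone sensitivity in dB re 1 V/uPa: the output voltage level
    minus the acoustic level that produced it."""
    return 20 * math.log10(v_rms) - received_db

# Hypothetical numbers: a 170 dB source measured 100 m away arrives at 130 dB;
# a 10 mV output at that level implies a sensitivity of -170 dB re 1 V/uPa.
rl = received_level_db(170.0, 100.0)
print(round(rl, 1))                        # 130.0
print(round(sensitivity_db(0.01, rl), 1))  # -170.0
```

Repeating this measurement across transmitted frequencies yields the sensitivity curve the team used to compare its prototype against high-end hydrophones.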

"This was our first field test in deep water, and therefore it was an important milestone in demonstrating the ability to operate in a realistic environment, rather than the water chambers that we’d been using," Freeman says. "Our hope was that the performance of our device would match what we've seen in our water tank, where we tested at high hydrostatic pressure across a range of frequencies. In other words, we hoped this test would provide results that confirm our predictions based on lab-based testing."

The test results were excellent, showing that the sensitivity and signal-to-noise ratio were within a few decibels of the quietest ocean state, known as sea state zero. Moreover, this performance was achieved in deep water, at 400 feet, and at very low temperatures, around 40 degrees Fahrenheit.

The prototype hydrophone has applications across a wide variety of commercial and military use cases owing to its small size, efficient power draw, and low cost.

"We're in discussion with the Department of War about transitioning this technology to the U.S. government and industry," says Freeman. "There is still some room for optimizing the design, but we think we've demonstrated that this hydrophone has the key benefits of being robust, high performance, and very low cost."

Q&A: On the ethics of catastrophe

Mon, 11/17/2025 - 4:15pm

At first glance, student Jack Carson might appear too busy to think beyond his next problem set, much less tackle major works of philosophy. The sophomore, who plans to double major in electrical engineering with computing and mathematics, has been both an officer in Impact@MIT and a Social and Ethical Responsibilities of Computing (SERC) Fellow in the MIT Schwarzman College of Computing — and is an active member of Concourse.

But this fall, Carson was awarded first place in the Elie Wiesel Prize in Ethics Essay Contest for his entry, “We Know Only Men: Reading Emmanuel Levinas On The Rez,” a comparative exploration of Jewish and Cherokee ethical thought. The deeply researched essay links Carson’s hometown in Adair County, Oklahoma, to the village of Le Chambon sur Lignon, France, and attempts to answer the question: “What is to be done after catastrophe?” Carson explains in this interview.

Q: The prompt for your entry in the Elie Wiesel Prize in Ethics Essay Contest was: “What challenges awaken your conscience? Is it the conflicts in American society? An international crisis? Maybe a difficult choice you currently face or a hard decision you had to make?” How did you land on the topic you’d write about?

A: It was really an insight that just came to me as I struggled with reading Levinas, who is notoriously challenging. The Talmud is a tradition very far from my own, but, as I read Levinas’ lectures on the Talmud, I realized that his project is one that I can relate to: preserving a culture that has been completely displaced, if not destroyed. The more I read of Levinas’ work, the more I realized that his philosophy of radical alterity — that you must act when confronted with another person whom you can never really comprehend — arose naturally from his efforts to show how to preserve Jewish cultural continuity. In a similar, if less articulated, way, the life I’ve witnessed in Eastern Oklahoma has led people to “act first, think later” — to use a Levinasian term. So it struck me that similar situations of displaced cultures had led to a similar ethical approach. Given that Levinas was writing about Jewish life in Eastern Europe and I was immersed in a heavily Native American culture, the congruence of the two ethical approaches seemed surprising. I thought, perhaps rightly, that it showed something essentially human that could be abstracted away from the very different cultural settings.

Q: Your entry for the contest is a meditation on the ethical similarities between ga-du-gi, the Cherokee concept of communal effort toward the betterment of all; the actions of the Huguenot inhabitants of the French village of Le Chambon sur Lignon (who protected thousands of Jewish refugees during Nazi occupation); and the Jewish philosopher Emmanuel Levinas’ interpretation of the Talmud, which essentially posits that action must come first in an ethical framework, not second. Did you find your own personal philosophy changing as a result of engaging with these ideas — or, perhaps more appropriately — have you noticed your everyday actions changing? 

A: Yes, definitely my personal philosophy has been affected by thinking through Levinas’ demanding approach. Like a lot of people, I sit around thinking through what ethical approach I prefer. Should I be a utilitarian? A virtue theorist? A Kantian? Something else? Levinas had no time for this. He urged acting, not thinking, when confronted with human need. I wrote about the resistance movement of Le Chambon because those brave citizens also just acted without thinking — in a very Levinasian way. That seems a strange thing to valorize, as we are often taught to think before we act, and this is probably good advice! But sometimes you can think your way right out of helping people in need.

Levinas instructed that you should act in the face of the overwhelming need of what he would call the “Other.” That’s a rather intimidating term, but I read it as meaning just “other people.” The Le Chambon villagers, who protected Jews fleeing the Nazis, and the Cherokees lived this, acting in an almost pre-theoretical way in helping people in need that is really quite beautiful. And for Levinas, I’d note that the problematic word is “because.” And I wrote about how “because” is indeed a thin reed that the murderers will always break. 

Put a little differently, “because” suggests that you have to have “reasons” that complete the phrase and make it coherent. This might seem almost a matter of logic. But Levinas says no. Because the genocide starts when the reasons are attacked. For example, you might believe we should help some persecuted group “because” they are really just like you and me. And that’s true, of course. But Levinas knows that the killers always start by dehumanizing their targets, so they convince you that the victims are not really like you at all, but are more like “vermin” or “insects.” So the “because” condition fails, and that’s when the murdering starts. So you should just act and then think, says Levinas, and this immunizes you from that rhetorical poison. It’s a counterintuitive idea, but powerful when you really think about it.

Q: You open with a particularly striking question: What is to be done after catastrophe? Do you feel more sure of your answer, now that you’ve deeply considered these disparate responses to catastrophe — or do you have more questions?

A: I am still not sure what to do after world-historical catastrophes like genocides. I guess I’d say there is nothing to do — other than maintain a kind of radical hope that has no basis in evidence. “Catastrophes” like those I write about — the Holocaust, the Trail of Tears — are more than just acts of physical destruction. They destroy whole ways of being and uproot whole systems of meaning-making. Cultural concepts become void overnight, as their preconditions are destroyed. 

There is a great book by Jonathan Lear called “Radical Hope.” It begins with a discussion of a Plains Indian leader named Plenty Coups. After removal to the reservation in the 19th century, he is quoted as saying, “But when the buffalo went away the hearts of my people fell to the ground, and they could not lift them up again. After this nothing happened.” Lear ponders what that last sentence is all about. What did Plenty Coups mean when he said “after this nothing happened?” Obviously, life’s daily activities still happened: births, deaths, eating, drinking, and such. So what does it mean? It’s perplexing. In the end, Lear concludes that Plenty Coups was making an ontological statement, in which he meant that all of the things that gave life meaning — all of those things that make the word “happen” actually signify something — had been erased. Events occurred, but didn’t “happen” because they fell into a world that to Plenty Coups lacked any sense at all. And Plenty Coups was not wrong about this; for him and his people, the world lost intelligibility. Nonetheless, Plenty Coups continued to lead his people, even amidst great deprivation, even though he never found a new basis for belief. He only had “radical hope” — which gave Lear’s book its name — that some new way of life might arise over time. I guess my answer to “what happens after catastrophe?” is just, well, “nothing happens” in the sense Plenty Coups meant it. And “radical hope” is all you get, if anything.

Q: There’s a memorable scene in your essay in which, during a visit to your community cemetery near Stilwell, your grandfather points out the burial plots that hold both your ancestors, and that will eventually hold him and you. You describe this moment beautifully as a comforting and connective chain linking you to both past and future communities. How does being part of that chain shape your life? 

A: I feel this sense of knowing where you will be buried — alongside all of your ancestors — is a great gift. That sounds a little odd, but it gives a rootedness that is very removed from most people’s experience today. And the cemetery is just a stand-in for a whole cultural structure that gives me a sense of role and responsibility. The lack of these, I think, creates a real sense of alienation, and this alienation is the condition of our age. So I feel lucky to have a strong sense of place and a place that will always be home. Lincoln talked about the “mystic chords of memory.” I feel this very mystical attachment to Oklahoma. The idea that this road or this community is one where every member of your family for generations has lived — or even if they moved away, always considered “home” — is very powerful. It always gives an answer to “Who are you?” That’s a hard question, but I can always say, “We are from Adair County,” and this is a sufficient answer. And back home, people would instantly nod their heads at the adequacy of this response. As I said, it’s a little mystical, but maybe that’s a strength, not a weakness.

Q: People might be surprised to learn that the winner of an essay contest focusing on ethics is actually not an English or philosophy major, but is instead in EECS. What areas and current issues in the field do you find interesting from an ethical perspective?

A: I think the pace of technological change — and society’s struggle to keep up — shows you how important philosophy, literature, history, and the liberal arts really are. Whether it’s algorithmic bias affecting real lives, or questions about what values we encode in AI systems, these aren’t just technical problems, but fundamentally about who we are and what we owe each other. It is true that I’m majoring in 6-5 [electrical engineering with computing] and 18 [mathematics], and of course these disciplines are extraordinarily important. But the humanities are something very important to me, as they do answer fundamental questions about who we are, what we owe to others, why people act this way or that, and how we should think through social issues. I despair when I hear brilliant engineers say they read nothing longer than a blog post. If anything, the humanities should be more important overall at MIT. 

When I was younger, I just happened across a discussion of C.P. Snow’s famous essay on the “Two Cultures.” In it, he talks about his scientist friends who had never read Shakespeare, and his literary friends who couldn’t explain thermodynamics. In a modest way, I’ve always thought that I’d like my education to be one that allowed me to participate in the two cultures. The essay on Levinas is my attempt to pursue this type of education.

Four from MIT named 2026 Rhodes Scholars

Sat, 11/15/2025 - 10:00pm

Vivian Chinoda ’25, Alice Hall, Sofia Lara, and Sophia Wang ’24 have been selected as 2026 Rhodes Scholars and will begin fully funded postgraduate studies at the University of Oxford in the U.K. next fall. Hall, Lara, and Wang are U.S. Rhodes Scholars; Chinoda was awarded the Rhodes Zimbabwe Scholarship.

The scholars were supported by Associate Dean Kim Benard and the Distinguished Fellowships team in Career Advising and Professional Development. They received additional mentorship and guidance from the Presidential Committee on Distinguished Fellowships.

“MIT students never cease to amaze us with their creativity, vision, and dedication,” says Professor Taylor Perron, who co-chairs the committee along with Professor Nancy Kanwisher. “This is especially true of this year’s Rhodes scholars. It’s remarkable how they are simultaneously so talented in their respective fields and so adept at communicating their goals to the world. I look forward to seeing how these outstanding young leaders shape the future. It’s an honor to work with such talented students.”

Vivian Chinoda ’25

Vivian Chinoda, from Harare, Zimbabwe, was named a Rhodes Zimbabwe Scholar on Oct. 10. Chinoda graduated this spring with a BS in business analytics. At Oxford, she hopes to pursue an MSc in social data science and a master’s degree in public policy. Chinoda aims to foster economic development and equitable resource access for Zimbabwean communities by promoting social innovation and evidence-based policy.

At MIT, Chinoda researched the impacts of the EU’s General Data Protection Regulation on stakeholders and key indicators, such as innovation, with the Institute for Data, Systems, and Society. She supported the Digital Humanities Lab and MIT Ukraine in building a platform to connect and fundraise for exiled Ukrainian scientists. With the MIT Office of Sustainability, Chinoda co-led the plan for a campus transition to a fully electric vehicle fleet, advancing the Institute’s Climate Action Plan.

Chinoda’s professional experience includes roles as a data science and research intern at Adaviv (a controlled-environment agriculture startup) and a product manager at Red Hat, developing AI tools for open-source developers.

Beyond academics, Chinoda served as first-year outreach chair and vice president of the African Students’ Association, where she co-founded the Impact Fund, raising over $30,000 to help members launch social impact initiatives in their countries. She was a scholar in the Social and Ethical Responsibilities of Computing (SERC) program, studying big-data ethics across sectors like criminal justice and health care, and a PKG social impact internship participant. Chinoda also enjoys fashion design, which she channeled into reviving the MIT Black Theatre Guild, earning her the 2025 Laya and Jerome B. Wiesner Student Art Award.

Alice Hall

Alice Hall is a senior from Philadelphia studying chemical engineering with a minor in Spanish. At Oxford, she will earn a DPhil in engineering, focusing on scaling sustainable heating and cooling technologies. She is passionate about bridging technology, leadership, and community to address the climate crisis.

Hall’s research journey began in the Lienhard Group, developing computational and techno-economic models of electrodialysis for nutrient reclamation from brackish groundwater. She then worked in the Langer Lab, investigating alveolar-capillary barrier function to enhance lung viability for transplantation. During a summer in Madrid, she collaborated with the European Space Agency to optimize surface treatments for satellite materials.

Hall’s current research in the Olivetti Group, as part of the MIT Climate Project, examines the manufacturing scalability of early-stage clean energy solutions. Hall has gained industry experience through internships with Johnson &amp; Johnson and Procter &amp; Gamble.

Hall represents the student body as president of MIT’s Undergraduate Association. She also serves on the Presidential Advisory Cabinet, the executive boards of the Chemical Engineering Undergraduate Student Advisory Board and MIT’s chapter of the American Institute of Chemical Engineers, the Corporation Joint Advisory Committee, the Compton Lectures Advisory Committee, and the MIT Alumni Association Board of Directors as an invited guest.

She is an active member of the Gordon-MIT Engineering Leadership Program, the Black Students’ Union, and the National Society of Black Engineers. As a member of the varsity basketball team, she earned both NEWMAC and D3hoops.com Region 2 Rookie of the Year honors in 2023.

Sofia Lara

Hailing from Los Angeles, Sofia Lara is a senior majoring in biological engineering with a minor in Spanish. As a Rhodes Scholar at Oxford, she will pursue a DPhil in clinical medicine, leveraging UK Biobank data to develop sex-stratified dosing protocols and safety guidelines for the NHS.

Lara aspires to transform biological complexity from medicine’s blind spots into a therapeutic superpower where variability reveals hidden possibilities and precision medicine becomes truly precise.

At the Broad Institute of MIT and Harvard, Lara investigates the cGAS-STING immune pathway in cancer. Her thesis, a comprehensive genome-wide association study illuminating the role of STING variation in disease pathology, aims to expand understanding of STING-linked immune disorders.

Lara co-founded the MIT-Harvard Future of Biology Conference, convening multidisciplinary researchers to interrogate vulnerabilities in cancer biology. As president of MIT Baker House, she steered community initiatives and executed the legendary Piano Drop, mobilizing hundreds of students in an enduring ritual of collective resilience. Lara captains the MIT Archery Team, serves as music director for MIT Catholic Community, and channels empathy through hand-stitched crocheted octopuses for pediatric patients at the Massachusetts General Hospital.

Sophia Wang ’24

Sophia Wang, from Woodbridge, Connecticut, graduated with a BS in aerospace engineering and a concentration in the design of highly autonomous systems. At Oxford, she will pursue an MSc in mathematical and theoretical physics, followed by an MSc in global governance and diplomacy.

As an undergraduate, Wang conducted research with the MIT Space Telecommunications Astronomy Radiation (STAR) Lab and the MIT Media Lab’s Tangible Media Group and Center for Bits and Atoms. She also interned at the NASA Jet Propulsion Laboratory, working on engineering projects for exoplanet detection missions, the Mars Sample Return mission, and terrestrial proofs-of-concept for self-assembly in space.

Since graduating from MIT, Wang has been engaged in a number of projects. In Bhutan, she contributes to national technology policy centered on mindful development. In Japan, she is a founding researcher at the Henkaku Center, where she is creating an international network of academic institutions. As a venture capitalist, she recently worked with commercial space stations on the effort to replace the International Space Station, which is slated to be decommissioned in 2030. Wang’s creative prototyping tools, such as a modular electromechanical construction kit, are used worldwide through the Fab Foundation, a network of 2,500+ community digital fabrication labs.

An avid cook, Wang and her friends created Mince, a pop-up restaurant that serves fine-dining meals to MIT students. Through MIT Global Teaching Labs, Wang taught STEM courses in Kazakhstan and Germany, and she taught digital fabrication and 3D printing workshops across the U.S. as a teacher and cyclist with MIT Spokes.

Study suggests 40Hz sensory stimulation may benefit some Alzheimer’s patients for years

Fri, 11/14/2025 - 3:35pm

A new research paper documents the outcomes of five volunteers who continued to receive 40Hz light and sound stimulation for around two years after participating in an MIT early-stage clinical study of the potential Alzheimer’s disease (AD) therapy. The results show that for the three participants with late-onset Alzheimer’s disease, several measures of cognition remained significantly higher than comparable Alzheimer’s patients in national databases. Moreover, in the two late-onset volunteers who donated plasma samples, levels of Alzheimer’s biomarker tau proteins were significantly decreased.

The three volunteers who experienced these benefits were all female. The two other participants, both men with early-onset forms of the disease, did not exhibit significant benefits after two years. The dataset, while small, represents the longest-term test so far of the safe, noninvasive treatment method (called GENUS, for gamma entrainment using sensory stimuli), which is also being evaluated in a nationwide clinical trial run by MIT spinoff company Cognito Therapeutics.

“This pilot study assessed the long-term effects of daily 40Hz multimodal GENUS in patients with mild AD,” the authors wrote in an open-access paper in Alzheimer's & Dementia: The Journal of the Alzheimer’s Association. “We found that daily 40Hz audiovisual stimulation over 2 years is safe, feasible, and may slow cognitive decline and biomarker progression, especially in late-onset AD patients.”

Diane Chan, a former research scientist in The Picower Institute for Learning and Memory and a neurologist at Massachusetts General Hospital, is the study’s lead and co-corresponding author. Picower Professor Li-Huei Tsai, director of The Picower Institute and the Aging Brain Initiative at MIT, is the study’s senior and co-corresponding author.

An “open label” extension

In 2020, MIT enrolled 15 volunteers with mild Alzheimer’s disease in an early-stage trial to evaluate whether an hour a day of 40Hz light and sound stimulation, delivered via an LED panel and speaker in their homes, could deliver clinically meaningful benefits. Several studies in mice had shown that the sensory stimulation increases the power and synchrony of 40Hz gamma frequency brain waves, preserves neurons and their network connections, reduces Alzheimer’s proteins such as amyloid and tau, and sustains learning and memory. Several independent groups have also made similar findings over the years.

MIT’s trial, though cut short by the Covid-19 pandemic, found significant benefits after three months. The new study examines outcomes among five volunteers who continued to use their stimulation devices on an “open label” basis for two years. These volunteers came back to MIT for a series of tests 30 months after their initial enrollment. Because four participants started the original trial as controls (meaning they initially did not receive 40Hz stimulation), their open label usage was six to nine months shorter than the 30-month period.

The testing at zero, three, and 30 months of enrollment included measurements of their brain wave response to the stimulation, MRI scans of brain volume, measures of sleep quality, and a series of five standard cognitive and behavioral tests. Two participants gave blood samples. For comparison to untreated controls, the researchers combed through three national databases of Alzheimer’s patients, matching thousands of them on criteria such as age, gender, initial cognitive scores, and retests at similar time points across a 30-month span.

Outcomes and outlook

The three female late-onset Alzheimer’s volunteers showed improvement or slower decline on most of the cognitive tests, including significantly positive differences compared to controls on three of them. These volunteers also showed increased brain-wave responsiveness to the stimulation at 30 months and showed improvement in measures of circadian rhythms. In the two late-onset volunteers who gave blood samples, there were significant declines in phosphorylated tau (47 percent for one and 19.4 percent for the other) on a test recently approved by the U.S. Food and Drug Administration as the first plasma biomarker for diagnosing Alzheimer’s.

“One of the most compelling findings from this study was the significant reduction of plasma pTau217, a biomarker strongly correlated with AD pathology, in the two late-onset patients in whom follow-up blood samples were available,” the authors wrote in the journal. “These results suggest that GENUS could have direct biological impacts on Alzheimer’s pathology, warranting further mechanistic exploration in larger randomized trials.”

Although the initial trial results showed preservation of brain volume at three months among those who received 40Hz stimulation, that was not significant at the 30-month time point. And the two male early-onset volunteers did not show significant improvements on cognitive test scores. Notably, the early onset patients showed significantly reduced brain-wave responsiveness to the stimulation.

Although the sample is small, the authors hypothesize that the difference between the two sets of patients is likely attributable to the difference in disease onset, rather than the difference in gender.

“GENUS may be less effective in early onset Alzheimer’s disease patients, potentially owing to broad pathological differences from late-onset Alzheimer’s disease that could contribute to differential responses,” the authors wrote. “Future research should explore predictors of treatment response, such as genetic and pathological markers.”

Currently, the research team is studying whether GENUS may have a preventative effect when applied before disease onset. The new trial is recruiting participants aged 55-plus with normal memory who have or had a close family member with Alzheimer's disease, including early-onset.

In addition to Chan and Tsai, the paper’s other authors are Gabrielle de Weck, Brennan L. Jackson, Ho-Jun Suk, Noah P. Milman, Erin Kitchener, Vanesa S. Fernandez Avalos, MJ Quay, Kenji Aoki, Erika Ruiz, Andrew Becker, Monica Zheng, Remi Philips, Rosalind Firenze, Ute Geigenmüller, Bruno Hammerschlag, Steven Arnold, Pia Kivisäkk, Michael Brickhouse, Alexandra Touroutoglou, Emery N. Brown, Edward S. Boyden, Bradford C. Dickerson, and Elizabeth B. Klerman.

Funding for the research came from the Freedom Together Foundation, the Robert A. and Renee E. Belfer Family Foundation, the Eleanor Schwartz Charitable Foundation, the Dolby Family, Che King Leo, Amy Wong and Calvin Chin, Kathleen and Miguel Octavio, the Degroof-VM Foundation, the Halis Family Foundation, Chijen Lee, Eduardo Eurnekian, Larry and Debora Hilibrand, Gary Hua and Li Chen, Ko Han Family, Lester Gimpelson, David B Emmes, Joseph P. DiSabato and Nancy E. Sakamoto, Donald A. and Glenda G. Mattes, the Carol and Gene Ludwig Family Foundation, Alex Hu and Anne Gao, Elizabeth K. and Russell L. Siegelman, the Marc Haas Foundation, Dave and Mary Wargo, James D. Cook, and the Nobert H. Hardner Foundation.

John Marshall and Erin Kara receive postdoctoral mentoring award

Fri, 11/14/2025 - 3:20pm

Shining a light on the critical role of mentors in a postdoc’s career, the MIT Postdoctoral Association presented the fourth annual Excellence in Postdoctoral Mentoring Awards to professors John Marshall and Erin Kara.

The awards honor faculty and principal investigators who have distinguished themselves across four areas: the professional development opportunities they provide, the work environment they create, the career support they provide, and their commitment to continued professional relationships with their mentees. 

The awards were presented at the annual Postdoctoral Appreciation event hosted by the Office of the Vice President for Research (VPR) on Sept. 17.

An MIT Postdoctoral Association (PDA) committee, chaired this year by Danielle Coogan, oversees the awards process in coordination with VPR and reviews nominations by current and former postdocs. “[We’re looking for] someone who champions a researcher, a trainee, but also challenges them,” says Bettina Schmerl, PDA president in 2024-25. “Overall, it’s about availability, reasonable expectations, and empathy. Someone who sees the postdoctoral scholar as a person of their own, not just someone who is working for them.” Marshall’s and Kara’s steadfast dedication to their postdocs set them apart, she says.

Speaking at the VPR resource fair during National Postdoc Appreciation Week, Vice President for Research Ian Waitz acknowledged “headwinds” in federal research funding and other policy issues, but urged postdocs to press ahead in conducting the very best research. “Every resource in this room is here to help you succeed in your path,” he said.

Waitz also commented on MIT’s efforts to strengthen postdoctoral mentoring over the last several years, and the influence of these awards in bringing lasting attention to the importance of mentoring. “The dossiers we’re getting now to nominate people [for the awards] may have five, 10, 20 letters of support,” he noted. “What we know about great mentoring is that it carries on between academic generations. If you had a great mentor, then you are more likely to be an amazing mentor once you’ve seen it demonstrated.”

Ann Skoczenski, director of MIT Postdoctoral Services, works closely with Waitz and the Postdoctoral Association to address the goals and concerns of MIT’s postdocs to ensure a successful experience at the Institute. “The PDA and the whole postdoctoral community do critical work at MIT, and it’s a joy to recognize them and the outstanding mentors who guide them,” said Skoczenski.

A foundation in good science

The awards recognize excellent mentors in two categories. Marshall, professor of oceanography in the Department of Earth, Atmospheric and Planetary Sciences, received the “Established Mentor Award.” 

Nominators described Marshall’s enthusiasm for research as infectious, creating an exciting work environment that sets the tone. “John’s mentorship is unique in that he immerses his mentees in the heart of cutting-edge research. His infectious curiosity and passion for scientific excellence make every interaction with him a thrilling and enriching experience,” one postdoc wrote.

At the heart of Marshall’s postdoc relationships is a straightforward focus on doing good science and working alongside postdocs and students as equals. As one nominator wrote, “his approach is centered on empowering his mentees to assume full responsibility for their work, engage collaboratively with colleagues, and make substantial contributions to the field of science.” 

His high expectations are matched by the generous assistance he provides his postdocs when needed. “He balances scientific rigor with empathy, offers his time generously, and treats his mentees as partners in discovery,” a nominator wrote.

Navigating career decisions and gaining the right experience along the way are important aspects of the postdoc experience. “When it was time for me to move to a different step in my career, John offered me the opportunities to expand my skills by teaching, co-supervising PhD students, working independently with other MIT faculty members, and contributing to grant writing,” one postdoc wrote. 

Marshall’s research group has focused on ocean circulation and coupled climate dynamics involving interactions between motions on different scales, using theory, laboratory experiments, observations, and innovative approaches to global ocean modeling.

“I’ve always told my postdocs, if you do good science, everything will sort itself out. Just do good work,” Marshall says. “And I think it’s important that you allow the glory to trickle down.” 

Marshall sees postdoc appointments as a time they can learn to play to their strengths while focusing on important scientific questions. “Having a great postdoc [working] with you and then seeing them going on to great things, it’s such a pleasure to see them succeed,” he says. 

“I’ve had a number of awards. This one means an awful lot to me, because the students and the postdocs matter as much as the science.”

Supporting the whole person

Kara, associate professor of physics, received the “Early Career Mentor Award.”

Many nominators praised Kara’s ability to give advice based on her postdocs’ individual goals. “Her mentoring style is carefully tailored to the particular needs of every individual, to accommodate and promote diverse backgrounds while acknowledging different perspectives, goals, and challenges,” wrote one nominator.

Creating a welcoming and supportive community in her research group, Kara empowers her postdocs by fostering their independence. “Erin’s unique approach to mentorship reminds us of the joy of pursuing our scientific curiosities, enables us to be successful researchers, and prepares us for the next steps in our chosen career path,” said one. Another wrote, “Rather than simply giving answers, she encourages independent thinking by asking the right questions, helping me to arrive at my own solutions and grow as a researcher.”

Kara’s ability to offer holistic, nonjudgmental advice was a throughline in her nominations. “Beyond her scientific mentorship, what truly sets Erin apart is her thoughtful and honest guidance around career development and life beyond work,” one wrote. Another nominator highlighted their positive relationship, writing, “I feel comfortable sharing my concerns and challenges with her, knowing that I will be met with understanding, insightful advice, and unwavering support.” 

Kara’s research group is focused on understanding the physics behind how black holes grow and affect their environments. Kara has advanced a new technique called X-ray reverberation mapping, which allows astronomers to map the gas falling on to black holes and measure the effects of strongly curved spacetime close to the event horizon. 

“I feel like postdocs hold a really special place in our research groups because they come with their own expertise,” says Kara. “I’ve hired them particularly because I want to learn and grow from them as well, and hopefully vice versa.” Kara focuses her mentorship on providing for autonomy, giving postdocs their own mentorship opportunities, and treating them like colleagues.

A postdoc appointment “is this really pivotal time in your career, when you’re figuring out what it is you want to do with the rest of your life,” she says. “So if I can help postdocs navigate that by giving them some support, but also giving them independence to be able to take their next steps, that feels incredibly valuable.”

“I just feel like they make my work/life so rich, and it’s not a hard thing to mentor them because they all are such awesome people and they make our research group really fun.”

MIT Haystack scientists study recent geospace storms and resulting light shows

Fri, 11/14/2025 - 3:15pm

The northern lights, or aurora borealis, one of nature's most spectacular visual shows, can be elusive. Conventional wisdom says that to see them, we need to travel to northern Canada or Alaska. However, in the past two years, New Englanders have been seeing these colorful atmospheric displays on a few occasions — including this week — from the comfort of their backyards, as auroras have been visible in central and southern New England and beyond. These unusual auroral events have been driven by increased space weather activity, a phenomenon studied by a team of MIT Haystack Observatory scientists.

Auroral events are generated when particles in space are energized by complicated processes in the near-Earth environment, following which they interact with gases high up in the atmosphere. Space weather events such as coronal mass ejections, in which large amounts of material are ejected from our sun, along with geomagnetic storms, greatly increase energy input into those space regions near Earth. These inputs then trigger other processes that cause an increase in energetic particles entering our atmosphere. 

The result is variable colorful lights when the newly energized particles crash into atoms and molecules high above Earth's surface. Recent significant geomagnetic storm events have triggered these auroral displays at latitudes lower than normal — including sightings across New England and other locations across North America.

New England has been enjoying more of these spectacular light shows, such as this week's displays and those during the intense geomagnetic solar storms in May and October 2024, because of increased space weather activity.

Research has determined that auroral displays occur when certain atoms and molecules high in the upper atmosphere are excited by incoming charged particles, which are boosted in energy by intense solar activity. The most common auroral colors are pink/red and green, with the color varying according to the altitude at which these reactions occur. Red auroras come from lower-energy particles exciting neutral oxygen, producing emissions at altitudes above 150 miles. Green auroras come from higher-energy particles exciting neutral oxygen, producing emissions at altitudes below 150 miles. Rare purple and blue auroras come from excited molecular nitrogen ions and occur only during the most intense events.
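For readers who like their rules of thumb explicit, the color-versus-altitude relationships above can be captured in a short illustrative sketch (Python, with a hypothetical function name; the 150-mile dividing line and species labels are the ones given here):

```python
def aurora_color(species: str, altitude_miles: float) -> str:
    """Illustrative mapping from emitting species and emission altitude
    to the aurora color described in the article."""
    if species == "nitrogen_ion":
        # Excited molecular nitrogen ions: rare purple/blue displays,
        # seen only during the most intense events
        return "purple/blue"
    if species == "oxygen":
        # Neutral oxygen: red above ~150 miles (lower-energy particles),
        # green below ~150 miles (higher-energy particles)
        return "red" if altitude_miles > 150 else "green"
    raise ValueError(f"unknown species: {species}")

print(aurora_color("oxygen", 200))       # red
print(aurora_color("oxygen", 80))        # green
print(aurora_color("nitrogen_ion", 60))  # purple/blue
```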

Scientists measure the magnitude of the geomagnetic activity that drives auroras in several ways. One method uses sensitive magnetic field-measuring equipment at stations around the planet to compute a geomagnetic storm index known as Kp, reported in three-hour intervals on a scale from 0 (least activity) to 9 (greatest activity). Higher Kp values indicate the possibility, though not a guarantee, of auroral sightings at lower latitudes as the region of auroral displays moves equatorward. Typically, a Kp index of 6 or higher signals that auroras are more likely to be visible outside the usual northern ranges. The geomagnetic storm events of this week reached a Kp value of 9, indicating very strong activity in the sun–Earth system.
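As a minimal sketch of how that viewing rule of thumb could be expressed (the function name is hypothetical; Kp runs from 0 to 9, and the threshold of 6 is the one cited for sightings beyond the usual northern ranges):

```python
def aurora_outlook(kp: int) -> str:
    """Rough aurora-visibility outlook from the planetary Kp index,
    following the rule of thumb described in the article."""
    # Kp is reported on a 0-9 scale in three-hour intervals
    if not 0 <= kp <= 9:
        raise ValueError("Kp index ranges from 0 to 9")
    # Kp >= 6 suggests (but does not guarantee) that auroras may be
    # visible at latitudes below the usual northern viewing ranges
    if kp >= 6:
        return "possible at lower latitudes"
    return "likely confined to high latitudes"

print(aurora_outlook(9))  # possible at lower latitudes
print(aurora_outlook(3))  # likely confined to high latitudes
```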

At MIT Haystack Observatory in Westford, Massachusetts, geospace and atmospheric physics scientists study the atmosphere and its aurora year-round by combining observations from many different instruments. These include ground-based sensors — including large upper-atmosphere radars that bounce signals off particles in the ionosphere — as well as data from space satellites. These tools provide key information, such as density, temperature, and velocity, on conditions and disturbances in the upper atmosphere: basic information that helps researchers at MIT and elsewhere understand the weather in space. 

Haystack geospace research is funded primarily by U.S. federal agencies such as the National Science Foundation (NSF) and NASA. This work is crucial for our increasingly spacefaring civilization, which requires a continually expanding understanding of how space weather affects life on Earth, including vital navigation systems such as GPS, worldwide communication infrastructure, and the safety of our power grids. Research in this area is especially important today, as humans increasingly use low Earth orbit for commercial satellite constellations and other systems, and as civilization pushes further into space.

Studies of the variations in our atmosphere and its charged component, known as the ionosphere, have revealed the strong influence of the sun. Beyond the normal white light that we experience each day, the sun also emits many other wavelengths of light, from infrared to extreme ultraviolet. Of particular interest are the extreme ultraviolet portions of solar output, which have enough energy to ionize atoms in the upper atmosphere. Unlike its white light component, the sun's output at these very short wavelengths has many different short- and long-term variations, but the most well known is the approximately 11-year solar cycle, in which the sun goes from minimum to maximum output. 

Scientists have determined that the most recent peak in activity, known as solar maximum, occurred within the past 12 months. This is good news for auroral watchers, as the most active period for severe geomagnetic storms that drive auroral displays at New England latitudes occurs during the three-year period following solar maximum.

Despite intensive research to date, we still have a great deal more to learn about space weather and its effects on the near-Earth environment. MIT Haystack Observatory continues to advance knowledge in this area. 

Larisa Goncharenko, lead geospace scientist and assistant director at Haystack, states, "In general, understanding space weather well enough to forecast it is considerably more challenging than even normal weather forecasting near the ground, due to the vast distances involved in space weather forces. Another important factor comes from the combined variation of Earth's neutral atmosphere, affected by gravity and pressure, and from the charged particle portion of the atmosphere, created by solar radiation and additionally influenced by the geometry of our planet's magnetic field. The complex interplay between these elements provides rich complexity and a sustained, truly exciting scientific opportunity to improve our understanding of basic physics in this vital part of our home in the solar system, for the benefit of civilization."

For up-to-date space weather forecasts and predictions of possible aurora events, visit SpaceWeather.com or NOAA's Aurora Viewline site.
