MIT Latest News
MIT engineers develop a magnetic transistor for more energy-efficient electronics
Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.
MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity.
The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.
The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.
“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.
Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; and senior authors Frances Ross, TDK Professor in DMSE; and Luqiao Liu, an associate professor in EECS, and a member of the Research Laboratory of Electronics; as well as others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.
Overcoming the limits
In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.
But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.
To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.
So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.
“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.
The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.
Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”
“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.
They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.
To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.
“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.
Leveraging magnetism
This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.
They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.
The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.
The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.
A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.
“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.
Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.
This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.
Learning with audiobooks
Millions of students nationwide use text-supplemented audiobooks, learning tools that are thought to help those who struggle with reading keep up in the classroom. A new study from scientists at MIT’s McGovern Institute for Brain Research finds that many students do benefit from the audiobooks, gaining new vocabulary through the stories they hear. But study participants learned significantly more when audiobooks were paired with explicit one-on-one instruction — and this was especially true for students who were poor readers. The group’s findings were reported on March 17 in the journal Developmental Science.
“It is an exciting moment in this ed-tech space,” says Grover Hermann Professor of Health Sciences and Technology John Gabrieli, noting a rapid expansion of online resources meant to support students and educators. “The admirable goal in all this is: Can we use technology to help kids progress, especially kids who are behind for one reason or another?” His team’s study — one of few randomized, controlled trials to evaluate educational technology — suggests a nuanced approach is needed as these tools are deployed in the classroom. “What you can get out of a software package will be great for some people, but not so great for other people,” Gabrieli says. “Different people need different levels of support.” Gabrieli is also a professor of brain and cognitive sciences and an investigator at the McGovern Institute.
Ola Ozernov-Palchik and Halie Olson, scientists in Gabrieli’s lab, launched the audiobook study in 2020, when most schools in the United States had closed to slow the spread of Covid-19. The pandemic meant the researchers would not be able to ask families to visit an MIT lab to participate in the study — but it also underscored the urgency of understanding which educational technologies are effective, and for whom.
“What we were really concerned about as the pandemic hit is that the types of gaps that we see widen through the summers — the summer slide that affects poor readers and disadvantaged children to a greater extent — would be amplified by the pandemic,” says Ozernov-Palchik. Many educational technologies purport to ameliorate these gaps. But, Ozernov-Palchik says, “fewer than 10 percent of educational technology tools have undergone any type of research. And we know that when we use unproven methods in education, the students who are most vulnerable are the ones who are left further and further behind.”
So the team designed a study that could be done remotely, involving hundreds of third- and fourth-graders around the country. They focused on evaluating the impact of audiobooks on children’s vocabularies, because vocabulary knowledge is so important for educational success. Ozernov-Palchik explains that books are important for exposing children to new words, and when children miss out on that experience because they struggle to read, they can fall further behind in school.
Audiobooks allow students to access similar content in a different way. For their study, the researchers partnered with Learning Ally, an organization that produces audiobooks synchronized with highlighted text on a computer screen, so students can follow along as they listen.
“The idea is, they’re going to learn vocabulary implicitly through accessing those linguistically rich materials,” Ozernov-Palchik says. But that idea was untested. In contrast, she says, “we know that really what works in education, especially for the most vulnerable students, is explicit instruction.”
Before beginning their study, Ozernov-Palchik and Olson trained a team of online tutors to provide that explicit instruction. The tutors — college students with no educational expertise — learned how to apply proven educational methods to support students’ learning and understanding of challenging new words they encountered in their audiobooks.
Students in the study were randomly assigned to an eight-week intervention. Some were asked to listen to Learning Ally audiobooks for about 90 minutes a week. Another group received one-on-one tutoring twice a week, in addition to listening to audiobooks. A third group, in which students participated in mindfulness practice without using audiobooks or receiving tutoring, served as a control.
A diverse group of students participated, spanning different reading abilities and socioeconomic backgrounds. The study’s remote design — with flexibly scheduled testing and tutoring sessions conducted over Zoom — helped make that possible. “I think the pandemic pushed researchers to rethink how we might use these technologies to make our research more accessible and better represent the people that we’re actually trying to learn about,” says Olson, a postdoc who was a graduate student in Gabrieli’s lab.
Testing before and after the intervention showed that overall, students in the audiobooks-only group gained vocabulary. But on their own, the books did not benefit everyone. Children who were poor readers showed no improvement from audiobooks alone, but did make significant gains in vocabulary when the audiobooks were paired with one-on-one instruction. Even good readers learned more vocabulary when they received tutoring, although the differences for this group were less dramatic.
Individualized, one-on-one instruction can be time-consuming, and may not be routinely paired with audiobooks in the classroom. But the researchers say their study shows that effective instruction can be provided remotely, and you don’t need highly trained professionals to do it.
For students from households with lower socioeconomic status, the researchers found no evidence of significant gains, even when audiobooks were paired with explicit instruction — further emphasizing that different students have different needs. “I think this carefully done study is a note of caution about who benefits from what,” Gabrieli says.
The researchers say their study highlights the value and feasibility of objectively evaluating educational technologies — and that effort will continue. At Boston University, where she is a research assistant professor, Ozernov-Palchik has launched a new initiative to evaluate artificial intelligence-based educational tools’ impacts on student learning.
A philosophy of work
What makes work valuable? Michal Masny, the NC Ethics of Technology Postdoctoral Fellow in the MIT Department of Philosophy, investigates the role work plays in our lives and its impact on our well-being.
Masny sees numerous benefits to work, beyond a paycheck. It’s a space for people to develop excellence at something, make a social contribution, gain social recognition, and create and sustain community.
“Consider a future in which we shorten the work week, or one in which we eliminate work altogether,” Masny says. “I don’t believe either of these scenarios would be unambiguously good for everyone.”
“Work is both necessary and positively valuable,” he argues, further suggesting that our lives might be worsened if we were to eliminate work completely. “There can be optimal combinations of work and leisure time.”
Masny is completing his two-year term in the NC Ethics of Technology Fellowship at the end of the spring semester. In addition to advancing his research, Masny has been working to foster dialogue and educate students on issues at the intersection of philosophy and computing. This semester, Masny is teaching an undergraduate course, 24.131 (Ethics of Technology).
Masny advocates for an updated approach to educating complete, socially aware students. “I want to create scientists who think about their projects and potential outcomes as lawyers and philosophers might, and vice versa,” he says. Masny argues for the importance of eliminating the “wisdom gap” between these groups, citing scientist Carl Sagan’s warning about the dangers of becoming “powerful without becoming commensurately wise” as scientific and technological advances continue.
“The traditional division of labor is that scientists and engineers invent new technologies, and then philosophers and lawyers evaluate and regulate them,” he continues. “But the pace at which new technologies are invented and deployed has made this division of labor untenable.”
Established in 2021 with support from the NC Cultural Foundation, the fellowship was created with the goal of advancing critical discourse and research in the ethics of technology and AI at MIT, and of making important research and information available to the global community.
Venture capitalist Songyee Yoon, founder and managing partner of AI-focused investment firm Principal Venture Partners and a supporter of the NC Ethics of Technology Fellowship, believes technology and scientific discovery are among humanity’s most valuable public goods, and artificial intelligence represents the most consequential technology of our time.
“If we want the fabric of our society to be built responsibly, we must train our builders upstream, at the very moment they begin learning to design and scale technology. There is no better place to begin this work than MIT,” she says. “Supporting the Ethics of Technology Fellows Program was born from that conviction, and I am deeply encouraged to see it embraced at MIT.”
“In philosophy, you’re supposed to question everything”
Masny arrived at MIT in fall 2024, following a year as a postdoc at the Kavli Center for Ethics, Science, and the Public at the University of California at Berkeley. Originally from Poland, Masny received his PhD in philosophy from Princeton University after completing studies at Oxford University and the University of Warwick in the United Kingdom.
He works mainly in value theory, ethics of technology, and social and political philosophy. His current research interests include the nature of human and animal well-being, our obligations to future generations, the risk of human extinction, the future of work, and anti-aging technology.
During his tenure in the fellowship, Masny has published several research articles on ethical issues concerning the future of humanity — a topic closely relevant to thinking about the existential risks of AI development and deployment.
“In philosophy, you’re supposed to question everything,” he says.
Masny’s work in the fellowship continues a tradition of collaborative investigation and exploration that MIT encourages and celebrates. In fall 2024, Masny co-taught an introductory undergraduate course, STS.006J/24.06J (Bioethics), with Robin Scheffler, an associate professor in the Program in Science, Technology, and Society.
During the 2024-25 academic year, Masny led a student research group, “Deepfakes: Ethical, Political, and Epistemological Issues,” as a part of the Social and Ethical Responsibilities of Computing (SERC) Scholars Program. The group explored the ethical, political, and epistemological dimensions of concerns over misleading deepfakes, and how they can be mitigated.
Students in Masny’s cohort spent spring 2025 working in small groups on a number of projects and presented their findings in a poster session during the MIT Ethics of Computing Research Symposium at the MIT Schwarzman College of Computing.
In summer 2025, Masny assisted with a summer course in philosophy, 24.133/134 (Experiential Ethics), in which students subject their computer science and engineering projects to ethical scrutiny with the help of trained philosophers.
He’s encouraged by the opportunities to test his ideas and share them with people who can help refine and improve them.
Communities of practice and engagement
When considering the value of his experience at MIT, Masny lauds the philosophy department and the opportunities to collaborate with so many different kinds of scholars. To answer the kinds of questions his research uncovers, he says, you must range further afield. He values the space MIT creates for broad inquiry while also seeking connections between his findings on work, its value, and the human impact of technology on our social lives.
“Typically, undergraduate philosophy courses include two hour-long lectures followed by discussion; a lecture is like an audiobook,” he says. Instead, he believes, they should be more like listening to a podcast or watching a talk show.
“I want the class to be an event in a student’s schedule,” he continues.
Masny is also considering how to integrate valuable philosophical tools into life outside the classroom. Philosophy and research can support other kinds of inquiry. Developing philosophers’ mindsets is a net positive, by his reckoning. Designing better questions, for example, can lead to better, more insightful, more accurate answers. It can also improve students’ abilities to identify challenges.
Masny will begin teaching at the University of Colorado at Boulder in fall 2026, and wants to test new ideas while continuing his research into the value of work.
Kieran Setiya, the Peter de Florez Professor in Philosophy and head of the Department of Linguistics and Philosophy, says the NC Ethics of Technology Postdoctoral Fellowship has allowed MIT to bring in a series of exceptional young philosophers working at the intersection of ethics and AI, studying the systemic effects of new computing technologies and the moral, social, and political challenges they pose.
“This is just the kind of applied interdisciplinary thinking we need to support and sustain at MIT,” he adds.
Slice and dice
What if the Trojan horse had been pulled to pieces, revealing the ruse and fending off the invasion, just as it entered the gates of Troy?
That’s an apt description of a newly characterized bacterial defense system that chops up foreign DNA.
Bacteria and the viruses that infect them, bacteriophages — phages for short — are ceaselessly at odds, with bacteria developing methods to protect themselves against phages that are constantly striving to overcome those safeguards.
New research from the Department of Biology at MIT, recently published in Nature, describes a defense system that is integrated into the protective membrane that encapsulates bacteria. SNIPE, which stands for surface-associated nuclease inhibiting phage entry, contains a nuclease domain that cleaves genetic material, chopping the invading phage genome into harmless fragments before it can appropriate the host’s molecular machinery to make more phages.
Daniel Saxton, a postdoc in the Laub Lab and the paper’s first author, was initially drawn to studying this bacterial defense system in E. coli, in part because it is highly unusual to have a nuclease that localizes to the membrane, as most nucleases are free-floating in the cytoplasm, the gelatinous fluid that fills the space inside cells.
“The other thing that caught my attention is that this is something we call a direct defense system, meaning that when a phage infects a cell, that cell will actually survive the attack,” Saxton says. “It’s hard to fend off a phage directly in a cell and survive — but this defense system can do it.”
Light it up
For Saxton, the project came into focus during a fluorescence-based experiment in which viral genetic material would light up if it successfully penetrated the bacteria.
“SNIPE was obliterating the phage DNA so fast that we couldn’t even see a fluorescent spot,” Saxton recalls. “I don’t think I’ve ever seen such an effective defense system before — you can barrage the bacteria with hundreds of phage per cell, but SNIPE is like god-tier protection.”
When the nuclease domain of SNIPE was mutated so it couldn’t chop up DNA, fluorescent spots appeared as usual, and the bacteria succumbed to the phage infection.
Bacteria maintain tight control over all their defense systems, lest they be turned against their host. Some systems remain dormant until they flare up, for example, to halt all protein translation in the cell, while others can distinguish between bacterial DNA and foreign, invading phage DNA. There were only two previously characterized mechanisms in the latter category before researchers uncovered SNIPE.
“Right now, the phage field is at a really interesting spot where people are discovering phage defense systems at a breakneck pace,” Saxton says.
Problems at the periphery
Saxton says they had to approach the work in a somewhat roundabout way because there are currently no published structures depicting all the steps of phage genome injection. Studying processes at the membrane is challenging: Membranes are dense and chaotic, and phage genome injection is a highly transient process, lasting only a few minutes.
SNIPE seems to discern viral DNA by interacting with proteins the phage uses to tunnel through the bacteria’s protective membrane. This “subcellular localization,” according to Saxton, may also prevent SNIPE from inadvertently chopping up the bacteria’s own genetic material.
The model outlined in the paper is that one region of SNIPE binds to a bacterial membrane protein called ManYZ, while another region likely binds to the tape measure protein from the phage.
The tape measure protein got its name because it determines the length of the phage tail — the part of the phage between the small, leglike protrusions and the bulbous head, which contains the phage’s genetic material. The researchers revealed that the phage’s tape measure protein enters the cytoplasm during injection, a phenomenon that had not been physically demonstrated before.
There may also be other proteins or interactions involved.
“If you shunt the phage genome injection through an alternate pathway that isn’t ManYZ, suddenly SNIPE doesn’t defend against the phage nearly as well,” Saxton says. “It’s unclear exactly how these proteins interact, but we do know that these two proteins are involved in this genome injection process.”
Future directions
Saxton hopes that future work will expand our understanding of what occurs during phage genome injection and uncover the structures of the proteins involved, especially the tunnel complex in the membrane through which phages insert their genome.
Members of the Laub Lab are already collaborating with another lab to determine the structure of SNIPE. In the meantime, Saxton has been working on a new defense system in which molecular mimicry — bacterial proteins imitating phage proteins — may play a role.
Michael T. Laub, the Salvador E. Luria Professor of Biology and a Howard Hughes Medical Institute investigator, notes that one of the breakthrough experiments for demonstrating how SNIPE works came from a brainstorming session at a lab retreat.
“Daniel and I were kind of stuck with how to directly measure the effect of SNIPE during infection, but another postdoc in the lab, Ian Roney, who is a co-author on the paper, came up with a very clever idea that ultimately worked perfectly,” Laub recalls. “It’s a great example of how powerful internal collaborations can be in pushing our science forward.”
A new type of electrically driven artificial muscle fiber
Muscles are remarkably effective systems for generating controlled force, and engineers developing hardware for robots or prosthetics have long struggled to create analogs that can approach their unique combination of strength, rapid response, scalability, and control. But now, researchers at the MIT Media Lab and Politecnico di Bari in Italy have developed artificial muscle fibers that come closer to matching many of these qualities.
Like the fibers that bundle together to form biological muscles, these fibers can be arranged in different configurations to meet the demands of a given task. Unlike conventional robotic actuation systems, they are compliant enough to interface comfortably with the human body and operate silently without motors, external pumps, or other bulky supporting hardware.
The new electrofluidic fiber muscles — electrically driven actuators built in fiber format — are described in a recent paper published in Science Robotics. The work is led by Media Lab PhD candidate Ozgun Kilic Afsar; Vito Cacucciolo, a professor at the Politecnico di Bari; and four co-authors.
The new system brings together two technologies, Afsar explains. One is a fluidically driven artificial muscle known as a thin McKibben actuator, and the other is a miniaturized solid-state pump based on electrohydrodynamics (EHD), which can generate pressure inside a sealed fluid compartment without moving parts or an external fluid supply.
Until now, most fluid-driven soft actuators have relied on external “heavy, bulky, oftentimes noisy hydraulic infrastructure,” Afsar says, “which makes them difficult to integrate into systems where mobility or compact, lightweight design is important.” This has created a fundamental bottleneck in the practical use of fluidic actuators in real-world applications.
The key to breaking through that bottleneck was the use of integrated pumps based on electrohydrodynamic principles. These millimeter-scale, electrically driven pumps generate pressure and flow by injecting charge into a dielectric fluid, creating ions that drag the fluid along with them. Weighing just a few grams each and not much thicker than a toothpick, they can be fabricated continuously and scaled easily. “We integrated these fiber pumps into a closed fluidic circuit with the thin McKibben actuators,” Afsar says, noting that this was not a simple task given the different dynamics of the two components.
A key design strategy was to pair these fibers in what are known as antagonistic configurations. Cacucciolo explains that this is where “one muscle contracts while another elongates,” as when you bend your arm and your biceps contract while your triceps stretch. In their system, a millimeter-scale fiber pump sits between two similarly scaled McKibben actuators, driving fluid into one actuator to contract it while simultaneously relaxing the other.
“This is very much reminiscent of how biological muscles are configured and organized,” Afsar says. “We didn’t choose this configuration simply for the sake of biomimicry, but because we needed a way to store the fluid within the muscle design.” The need for an external reservoir open to the atmosphere has been one of the main factors limiting the practical use of EHD pumps in robotic systems outside the lab. By pairing two McKibben fibers in line, with a fiber pump between them to form a closed circuit, the team eliminated that need entirely.
Another key finding was that the muscle fibers needed to be pre-pressurized, rather than simply filled. “There is a minimum internal system pressure that the system can tolerate,” Afsar says, “below which the pump can degrade or temporarily stop working.” This happens because of cavitation, in which vapor bubbles form when the pressure at the pump inlet drops below the vapor pressure of the liquid, eventually leading to dielectric breakdown.
To prevent cavitation, they applied a “bias” pressure from the outset so that the pressure at the fiber pump inlet never falls below the liquid’s vapor pressure. The magnitude of this bias pressure can be adjusted depending on the application. “To achieve the maximum contraction the muscle can generate, we found there is a specific bias pressure range that is optimal,” she says. “If you want to configure the system for faster response, you might increase that bias pressure, though with some reduction in maximum contraction.”
Cacucciolo adds that most of today’s robotic limbs and hands are built around electric servo motors, whose configuration differs fundamentally from that of natural muscles. Servo motors generate rotational motion on a shaft that must be converted into linear movement, whereas muscle fibers naturally contract and extend linearly, as do these electrofluidic fibers.
“Most robotic arms and humanoid robots are designed around the servo motors that drive them,” he says. “That creates integration constraints, because servo motors are hard to package densely and tend to concentrate mass near the joints they drive. By contrast, artificial muscles in fiber form can be packed tightly inside a robot or exoskeleton and distributed throughout the structure, rather than concentrated near a joint.”
These electrofluidic muscles may be especially useful for wearable applications, such as exoskeletons that help a person lift heavier loads or assistive devices that restore or augment dexterity. But the underlying principles could also apply more broadly. “Our findings extend to fluid-driven robotic systems in general,” Cacucciolo says. “Wherever fluidic actuators are used, or where engineers want to replace external pumps with internal ones, these design principles could apply across a wide range of fluid-driven robotic systems.”
This work “presents a major advancement in fiber-format soft actuation,” which “addresses several long-standing hurdles in the field, particularly regarding portability and power density,” says Herbert Shea, a professor in the Soft Transducers Laboratory at École Polytechnique Fédérale de Lausanne in Switzerland, who was not associated with this research. “The lack of moving parts in the pump makes these muscles silent, a major advantage for prosthetic devices and assistive clothing,” he says.
Shea adds that “this high-quality and rigorous work bridges the gap between fundamental fluid dynamics and practical robotic applications. The authors provide a complete system-level solution — characterizing the individual components, developing a predictive physical model, and validating it through a range of demonstrators.”
In addition to Afsar and Cacucciolo, the team also included Gabriele Pupillo and Gennaro Vitucci at Politecnico di Bari and Wedyan Babatain and Professor Hiroshi Ishii at the MIT Media Lab. The work was supported by the European Research Council and the Media Lab’s multi-sponsored consortium.
Bridging space research and policy
While earning her dual master’s degrees in aeronautics and astronautics and public policy, Carissma McGee SM ’25 learned to navigate between two seemingly distinct worlds, bridging rigorous technical analysis and policy decisions.
As an undergraduate congressional intern and researcher, she saw a persistent gap in space policymaking. Policymakers often lacked technical expertise, while researchers were rarely involved in increasingly complex questions surrounding intellectual property and international collaboration in space.
Her work on intellectual property frameworks for space collaborations directly addresses that gap, combining expertise in gravitational microlensing and space telescope operations with policy analysis to tackle emerging governance challenges.
“I want to bring an expert level in science in the rooms where policy decisions are made,” says McGee, now a doctoral student in aeronautics and astronautics. “That perspective is critical for shaping the future of research and exploration.”
Likewise, she wants to bring her expertise in public policy into the lab.
“I enjoy being able to ask questions about intellectual property, territorial claims, knowledge transfer, or allocation of resources early on in a research project,” adds McGee.
McGee’s fascination with space started during her high school years in Delaware, when she first volunteered at a local observatory and then interned at the NASA Goddard Space Flight Center in Maryland.
Following high school, McGee attended Howard University. She was selected to participate in the Karsh STEM Scholars Program, a full-ride scholarship track for students committed to working continuously toward earning doctoral degrees. Howard, which holds an R1 research classification from the Carnegie Foundation, is in close proximity to the Goddard Space Flight Center, as well as the American Astronomical Society and the D.C. Space Grant Consortium.
In 2020, after her first year at Howard, the Covid-19 pandemic sent McGee back to her hometown in Delaware. As it turned out, that gave her an opportunity to work with her local congresswoman, Lisa Blunt Rochester, then a U.S. representative. In addition to supporting the congresswoman’s constituents, she drafted dozens of letters related to STEM education and energy reform.
Working in government gave McGee an opportunity to use her voice to “advocate for astronomy and astrophysics with the American Astronomical Society, advocate for space sciences, and for science representation.”
As an undergraduate, McGee also conducted research linking computational physics and astronomy, working with both NASA’s Jet Propulsion Laboratory and Yale University’s Department of Astronomy. She also continued research begun in 2021 with the Harvard and Smithsonian Center for Astrophysics’ Black Hole Initiative, contributing to work associated with the Event Horizon Telescope.
When she visited MIT in 2023, McGee was struck by the Institute’s openness to interdisciplinary work and support of her interest in combining aeronautics and astronautics with policy.
Once at MIT, she started working in the Space, Telecommunications, Astronomy, and Radiation Laboratory (STAR Lab) with advisor Kerri Cahoy, professor of aeronautics and astronautics. McGee says she experienced a great deal of freedom to craft her own program.
“I was drawn to the lab’s work on satellite missions and CubeSats, and excited to discover that I could pursue exoplanet astrophysics research within this framework and that submitting a dual thesis or focusing on astrophysics applications was possible,” says McGee. “When I expressed interest in participating in the Technology [and] Policy Program for a dual thesis in a framework for space policy, my advisors encouraged me to explore how we could integrate these diverse interests into a path forward.”
In 2024, McGee was awarded a MathWorks Fellowship to pursue research associated with the Nancy Grace Roman Space Telescope and join a NASA mission.
“It was just amazing to join the exoplanet group at NASA,” she says. “I had a front-row seat to see how real researchers and workers navigate complex problems.”
McGee credits MathWorks with helping fellows to “be at the forefront of knowledge and shaping innovation.”
One of her proudest academic accomplishments is PyLIMASS, a software system she developed with collaborators at Louisiana State University, the Ohio State University, and NASA’s Goddard Space Flight Center. The tool enables more accurate mass and distance estimates in gravitational microlensing events, helping the Roman Space Telescope project meet its precision goals for studying exoplanets.
“To build software that didn’t previously exist — and to know it will be used for the Roman mission — is incredibly exciting,” McGee says.
In May 2025, McGee graduated with dual master’s degrees in aeronautics and astronautics and technology and policy. That same month, she presented her research at the American Astronomical Society meeting in Anchorage, Alaska, and at the Technology Management and Policy Conference in Portugal.
McGee remained at MIT to pursue her doctoral degree. Last fall, as an MIT BAMIT Community Advancement Program and Fund Fellow, she hosted a daylong conference for STEM students focused on how intellectual property frameworks shape technical fields.
McGee’s accomplishments and contributions have been celebrated with a number of honors recently. In 2026, she was named Miss Black Massachusetts United States, was recognized among MIT’s Graduate Students of Excellence, and received the MIT MLK Leadership Award in recognition of her service, integrity, and community impact.
Beyond her academic work, McGee is active across campus. She teaches Pilates with MIT Recreation, participates in the Graduate Women in Aerospace Engineering group, and serves as a graduate resident assistant in an undergraduate dorm on East Campus.
She credits the AeroAstro graduate community with keeping her momentum going.
“Even if we’re tired, there’s this powerful camaraderie among AeroAstro graduate students working together. Seeing my peers pushing through similar research milestones and solving daunting problems motivates you to advance beyond the finish line to further developments in the field.”
New technique makes AI models leaner and faster while they’re still learning
Training a large artificial intelligence model is expensive, not just in dollars, but in time, energy, and computational resources. Traditionally, obtaining a smaller, faster model either requires training a massive one first and then trimming it down, or training a small one from scratch and accepting weaker performance.
Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), the Max Planck Institute for Intelligent Systems, the European Laboratory for Learning and Intelligent Systems, ETH Zurich, and Liquid AI have now developed a new method that sidesteps this trade-off entirely, compressing models during training, rather than after.
The technique, called CompreSSM, targets a family of AI architectures known as state-space models, which power applications ranging from language processing to audio generation and robotics. By borrowing mathematical tools from control theory, the researchers can identify which parts of a model are pulling their weight and which are dead weight, before surgically removing the unnecessary components early in the training process.
"It's essentially a technique to make models grow smaller and faster as they are training," says Makram Chahine, a PhD student in electrical engineering and computer science, CSAIL affiliate, and lead author of the paper. "During learning, they're also getting rid of parts that are not useful to their development."
The key insight is that the relative importance of different components within these models stabilizes surprisingly early during training. Using a mathematical quantity called Hankel singular values, which measure how much each internal state contributes to the model's overall behavior, the team showed they can reliably rank which dimensions matter and which don't after only about 10 percent of the training process. Once those rankings are established, the less-important components can be safely discarded, and the remaining 90 percent of training proceeds at the speed of a much smaller model.
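The article does not reproduce the paper's implementation, but the control-theoretic ingredient it describes, ranking state dimensions by Hankel singular values and keeping only the important ones, is classical balanced truncation. The sketch below is a minimal illustration of that ingredient for a plain linear state-space model; the function names, the 99 percent energy threshold, and the toy system are illustrative assumptions, not code from CompreSSM.

```python
import numpy as np
from scipy.linalg import cholesky, solve_discrete_lyapunov, svd

def hankel_singular_values(A, B, C):
    """Hankel singular values of a stable discrete-time LTI system
    x[k+1] = A x[k] + B u[k],  y[k] = C x[k]."""
    P = solve_discrete_lyapunov(A, B @ B.T)    # controllability Gramian
    Q = solve_discrete_lyapunov(A.T, C.T @ C)  # observability Gramian
    # HSVs are the square roots of the eigenvalues of P @ Q
    return np.sort(np.sqrt(np.abs(np.linalg.eigvals(P @ Q))))[::-1]

def truncate_states(A, B, C, k):
    """Square-root balanced truncation: keep the k states that contribute
    most to input-output behavior (Gramians must be positive definite)."""
    Lp = cholesky(solve_discrete_lyapunov(A, B @ B.T), lower=True)
    Lq = cholesky(solve_discrete_lyapunov(A.T, C.T @ C), lower=True)
    U, s, Vt = svd(Lq.T @ Lp)                  # singular values s are the HSVs
    S = np.diag(s[:k] ** -0.5)
    T = Lp @ Vt[:k].T @ S                      # maps reduced state to full state
    Ti = S @ U[:, :k].T @ Lq.T                 # left inverse of T
    return Ti @ A @ T, Ti @ B, C @ T

# Toy usage: rank the states of a random stable system, then keep enough of
# them to capture 99 percent of the cumulative Hankel energy.
rng = np.random.default_rng(0)
n, m, p = 16, 2, 2
A = rng.standard_normal((n, n))
A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))  # scale to make the system stable
B, C = rng.standard_normal((n, m)), rng.standard_normal((p, n))
hsv = hankel_singular_values(A, B, C)
k = int(np.searchsorted(np.cumsum(hsv) / hsv.sum(), 0.99)) + 1
A_r, B_r, C_r = truncate_states(A, B, C, k)    # smaller model, similar behavior
```

In CompreSSM the analogous ranking is computed partway through training and the remaining steps run on the truncated model; how the ranking is obtained for trained state-space layers is detailed in the paper itself.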
"What's exciting about this work is that it turns compression from an afterthought into part of the learning process itself,” says senior author Daniela Rus, MIT professor and director of CSAIL. “Instead of training a large model and then figuring out how to make it smaller, CompreSSM lets the model discover its own efficient structure as it learns. That's a fundamentally different way to think about building AI systems.”
The results are striking. On image classification benchmarks, compressed models maintained nearly the same accuracy as their full-sized counterparts while training up to 1.5 times faster. A compressed model reduced to roughly a quarter of its original state dimension achieved 85.7 percent accuracy on the CIFAR-10 benchmark, compared to just 81.8 percent for a model trained at that smaller size from scratch. On Mamba, one of the most widely used state-space architectures, the method achieved approximately 4x training speedups, compressing a 128-dimensional model down to around 12 dimensions while maintaining competitive performance.
"You get the performance of the larger model, because you capture most of the complex dynamics during the warm-up phase, then only keep the most-useful states," Chahine says. "The model is still able to perform at a higher level than training a small model from the start."
What makes CompreSSM distinct from existing approaches is its theoretical grounding. Conventional pruning methods train a full model and then strip away parameters after the fact, meaning you still pay the full computational cost of training the big model. Knowledge distillation, another popular technique, requires training a large "teacher" model to completion and then training a second, smaller "student" model on top of it, essentially doubling the training effort. CompreSSM avoids both of these costs by making informed compression decisions mid-stream.
The team benchmarked CompreSSM head-to-head against both alternatives. Compared to Hankel nuclear norm regularization, a recently proposed spectral technique for encouraging compact state-space models, CompreSSM was more than 40 times faster, while also achieving higher accuracy. The regularization approach slowed training by roughly 16 times because it required expensive eigenvalue computations at every single gradient step, and even then, the resulting models underperformed. Against knowledge distillation on CIFAR-10, CompreSSM held a clear advantage for heavily compressed models: At smaller state dimensions, distilled models saw significant accuracy drops, while CompreSSM-compressed models maintained near-full performance. And because distillation requires a forward pass through both the teacher and student at every training step, even its smaller student models trained slower than the full-sized baseline.
The researchers proved mathematically that the importance of individual model states changes smoothly during training, thanks to an application of Weyl's theorem, and showed empirically that the relative rankings of those states remain stable. Together, these findings give practitioners confidence that dimensions identified as negligible early on won't suddenly become critical later.
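For readers curious how Weyl's theorem supplies that smoothness, the relevant textbook bound, stated here as background rather than quoted from the paper, says that perturbing a matrix moves each of its singular values by no more than the size of the perturbation:

```latex
% Weyl's perturbation bound for singular values: if a training step changes
% a system matrix M by \Delta, every singular value moves by at most \|\Delta\|_2.
\left|\,\sigma_i(M + \Delta) - \sigma_i(M)\,\right| \;\le\; \lVert \Delta \rVert_2
\qquad \text{for all } i.
```

Since each gradient update is small, quantities built from singular values, including the Hankel singular values, can only drift gradually from one training step to the next.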
The method also comes with a pragmatic safety net. If a compression step causes an unexpected performance drop, practitioners can revert to a previously saved checkpoint. "It gives people control over how much they're willing to pay in terms of performance, rather than having to define a less-intuitive energy threshold," Chahine explains.
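That safeguard is a generic training-loop pattern rather than anything specific to the paper; a minimal sketch, with all function names hypothetical, might look like this:

```python
import copy

def compress_with_safety_net(model, compress_fn, evaluate_fn, max_drop=0.01):
    """Try one compression step; roll back to the saved checkpoint if
    validation performance falls by more than `max_drop`."""
    checkpoint = copy.deepcopy(model)   # snapshot before compressing
    baseline = evaluate_fn(model)       # e.g. held-out accuracy
    compress_fn(model)                  # e.g. drop low-ranked state dimensions in place
    if evaluate_fn(model) < baseline - max_drop:
        return checkpoint               # revert: keep the uncompressed model
    return model                        # accept the compressed model
```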
There are some practical boundaries to the technique. CompreSSM works best on models that exhibit a strong correlation between the internal state dimension and overall performance, a property that varies across tasks and architectures. The method is particularly effective on multi-input, multi-output (MIMO) models, where the relationship between state size and expressivity is strongest. For per-channel, single-input, single-output architectures, the gains are more modest, since those models are less sensitive to state dimension changes in the first place.
The theory applies most cleanly to linear time-invariant systems, although the team has developed extensions for the increasingly popular input-dependent, time-varying architectures. And because the family of state-space models extends to architectures like linear attention, a growing area of interest as an alternative to traditional transformers, the potential scope of application is broad.
Chahine and his collaborators see the work as a stepping stone. The team has already demonstrated an extension to linear time-varying systems like Mamba, and future directions include pushing CompreSSM further into matrix-valued dynamical systems used in linear attention mechanisms, which would bring the technique closer to the transformer architectures that underpin most of today's largest AI systems.
"This had to be the first step, because this is where the theory is neat and the approach can stay principled," Chahine says. "It's the stepping stone to then extend to other architectures that people are using in industry today."
"The work of Chahine and his colleagues provides an intriguing, theoretically grounded perspective on compression for modern state-space models (SSMs)," says Antonio Orvieto, ELLIS Institute Tübingen principal investigator and MPI for Intelligent Systems independent group leader, who wasn't involved in the research. "The method provides evidence that the state dimension of these models can be effectively reduced during training and that a control-theoretic perspective can successfully guide this procedure. The work opens new avenues for future research, and the proposed algorithm has the potential to become a standard approach when pre-training large SSM-based models."
The work, which was accepted as a conference paper at the International Conference on Learning Representations 2026, will be presented later this month. It was supported, in part, by the Max Planck ETH Center for Learning Systems, the Hector Foundation, Boeing, and the U.S. Office of Naval Research.
The flawed fundamentals of failing banks
Bank runs are dramatic: Picture Depression-era footage of customers lined up, trying to get their deposits back. Or recall Lehman Brothers emptying out in 2008 or Silicon Valley Bank collapsing in 2023.
But what causes these runs in the first place? One viewpoint is that something of a self-fulfilling prophecy is involved. Panic spreads, and suddenly many customers are seeking their money back, until an otherwise solid institution is run into the ground.
That is not exactly Emil Verner’s position, however. Verner, an MIT economist, has been studying bank failures empirically for years and now has a different perspective. Verner and his collaborators have produced extensive evidence suggesting that when banks fail, it is usually because they are in a fundamentally shaky position. A bank run generally finishes off an already flawed business rather than upending a viable one.
“What we essentially find is that banks that fail are almost always very weak, and are in trouble,” says Verner, who is the Jerome and Dorothy Lemelson Professor of Management and Financial Economics at the MIT Sloan School of Management. “Most banks that have been subject to runs have been pretty insolvent. Runs are more the final spasm that brings down weak banks, rather than the causes of indiscriminate failures.”
This conclusion has plenty of policy relevance for the banking sector and follows a lengthy analysis of historical data. In one forthcoming paper, in the Quarterly Journal of Economics, Verner and two colleagues reviewed U.S. bank data from 1863 to 2024, concluding that “the primary cause of bank failures and banking crises is almost always and everywhere a deterioration of bank fundamentals.” In a 2021 paper in the same journal, Verner and two other colleagues studied banking data from 46 countries covering 1870-2016, and found that declining bank fundamentals usually preceded runs. And currently, Verner is working to make more historical U.S. bank data publicly available to scholars.
Seen in this light, sure, bank runs are damaging, but bank failures likely have more to do with bad portfolios, poor risk management, and minimal assets in reserve, rather than sentiment-driven client behavior.
“From the idea that bank crises are really about sudden runs on bank debt, we’re moving to thinking that runs are one symptom of a crisis that runs deeper,” Verner says. “For most people, we’re saying something reasonable, refining our knowledge, and just shifting the emphasis.”
For his research and teaching, Verner received tenure at MIT last year.
Landing in a “great place”
Verner is a native of Denmark who also lived in the U.S. for several years while growing up. Around the time he was finishing school, the U.S. housing market imploded, taking some financial institutions with it.
“Everything came crashing down,” Verner says. “I got obsessed with understanding it.”
As an undergraduate, he studied economics at the University of Copenhagen. After three years, Verner was unconvinced the discipline had fully explained financial crises. He decided to keep studying economics in graduate school, and was accepted into the PhD program at Princeton University.
Along the way, Verner became a historically minded economist, digging into data and cases from past decades to shed light on larger patterns about crises and bank insolvency.
“I’ve always thought history was extremely fascinating in itself,” Verner says. And while history may not repeat, he notes, it is “a really valuable tool. It helps you think through what could happen, what are similar scenarios, and how agents acted when facing similar constraints and incentives in the past.”
For studying financial crises in particular, he adds, history helps in multiple ways. Crises are rare, so historical cases add data. Changes over time, like more financial regulations and more complex investment tools, provide different settings to examine the same cause-and-effect issues. “History is a useful laboratory to study these questions,” Verner says.
After earning his PhD from Princeton, Verner went on the job market and landed his faculty position at MIT Sloan. Many aspects of Institute life — the classroom experience, the collegiality, the campus — have strongly resonated with him.
“MIT is a great place,” Verner says simply. “Great colleagues, great students.”
Focused on fundamentals
Over the last decade, Verner has published papers on numerous topics in addition to banking crises. As an outgrowth of his doctoral work, for instance, he published innovative papers examining the dampening effect that household debt has on economic growth in many countries. He also co-authored the lead paper in an issue of the American Economic Review last year examining the way German hyperinflation after World War I reallocated wealth to large businesses with substantial debt, leading them to grow faster.
Still, the main focus of Verner’s work right now is on banking crises and bank failures — including their causes. In a 2024 paper looking at private lending in 117 countries since 1940, Verner and economist Karsten Müller showed that financial crises are often preceded by credit booms in what scholars call the “non-tradeable” sector of the economy. That includes industries such as retail or construction, which do not produce easily tradeable goods. Firms in the non-tradeable sector tend to rely more heavily on loans secured by real estate; during real estate booms, such firms use high valuations to borrow more, and they become more vulnerable to crashes — which helps explain why bank portfolios, in turn, can crater as well.
In recent years, in the process of studying these topics, Verner has helped expand the domain of known U.S. historical data in the field. Working with economists Sergio Correia and Stephan Luck, Verner has helped apply large language models to historical newspaper collections, unearthing information about 3,421 runs on individual banks from 1863 to 1934; they are making that data freely available to other scholars.
This topic has important policy implications. If runs are a contagion bringing down worthy banks, then one solution is to provide banks with more liquidity to get through the crisis — something that has indeed been tried in the U.S. However, if bank failures are more based in fundamentals about risk and not keeping enough capital on hand, more systemic policy options about best practices might be logical. At a minimum, substantive new research can help alter the contents of those discussions.
“When banks fail, it’s usually because these banks have taken a lot of risk and have big losses,” Verner says. “It’s rarely unjustified. So that means these types of liquidity interventions alone are not enough to stop a crisis.”
The expansive research Verner has helped conduct includes a number of specific indicators that fundamentals are a big factor in failure. For instance, examining how infrequently banks recover all their assets shows how shaky their foundations are.
“The recovery rate on assets is informative about how solvent a bank was,” Verner says. “This is where I think we’ve contributed something new.” Some economists in the past have cited particular examples of struggling banks making depositors whole, but those are exceptions, not the rule. “Sometimes people argue this or that bank was actually solvent because depositors ended up getting all their money back, and that might be true of one bank, but on aggregate it’s not the case,” Verner says.
Overall, Verner intends to keep following the facts, digging up more evidence, and seeing where it leads.
“While there is this notion that liquidity problems can arise pretty much out of nowhere, I think we are changing that emphasis by showing that financial crises happen basically because banks become insolvent,” Verner underscores. “And then the bank run is that final dramatic spasm — which slightly shifts how we teach and talk about it, and perhaps think about the policy response.”
Desirée Plata appointed associate dean of engineering
Desirée Plata, the School of Engineering Distinguished Climate and Energy Professor in the MIT Department of Civil and Environmental Engineering, has been named associate dean of engineering, effective July 1.
In her new role, Plata will focus on fostering early-stage research initiatives across the school’s faculty and on strengthening entrepreneurial and innovation efforts. She will also support the school’s Technical Leadership and Communication (TLC) Programs, including the Gordon Engineering Leadership Program, the Daniel J. Riccio Graduate Engineering Leadership Program, the School of Engineering Communication Lab, and the Undergraduate Practice Opportunities Program.
Plata will join Associate Dean Hamsa Balakrishnan, who continues to lead faculty searches, fellowships, and outreach programs. Together, the two associate deans will serve on key leadership groups including Engineering Council and the Dean’s Advisory Council to shape the school’s strategic priorities.
“Desirée’s leadership, scholarship, and commitment to excellence have already had a meaningful impact on the MIT community, and I look forward to the perspective and energy she will bring to this role,” says Paula T. Hammond, dean of the School of Engineering and Institute Professor in the Department of Chemical Engineering.
Plata’s research centers on the sustainable design of industrial processes and materials through environmental chemistry, with an emphasis on clean energy technologies. She develops ways to make industrial processes more environmentally sustainable, incorporating environmental objectives into the design phase of processes and materials. Her work spans nanomaterials and carbon-based materials for pollution reduction, as well as advanced methods for environmental cleanup and energy conversion. Plata directs MIT’s Parsons Laboratory, which conducts interdisciplinary research on natural systems and human adaptation to environmental change.
Plata is a leader on campus and beyond in climate and sustainability initiatives. She serves as director of the MIT Climate and Sustainability Consortium (MCSC), an industry–academia collaboration launched to accelerate solutions for global climate challenges. She founded and directs the MIT Methane Network, a multi-institution effort to cut global methane emissions within this decade. Plata also co-directs the National Institute of Environmental Health Sciences MIT Superfund Research Program, which focuses on strategies to protect communities concerned about hazardous chemicals, pollutants, and other contaminants in their environment.
Beyond academia, Plata has co-founded two climate and energy startups, Nth Cycle and Moxair. Nth Cycle is redefining metal refining and the domestic battery supply chain. Earlier this month, the company signed a $1.1 billion off-take agreement to help establish a secure and circular technology for battery minerals.
Her company Moxair specializes in advanced approaches for low-level methane monitoring and destruction. In 2026, with support from the U.S. Department of Energy and collaboration with MIT, Moxair will build and demonstrate a first-of-a-kind dilute methane oxidation technology to tackle methane emissions using transition metal catalysts.
As an educator, Plata has helped develop programs that enhance research experience for students and postdocs. She played a pivotal role in the founding of the MIT Postdoctoral Fellowship Program for Engineering Excellence, serving on its faculty steering committee, overseeing admissions, and leading both the academic track and entrepreneurship track. She also helped design the MCSC Climate and Sustainability Scholars Program, a yearlong program open to juniors and seniors across MIT.
Plata earned a BS in chemistry from Union College in 2003 and a PhD in the joint MIT-Woods Hole Oceanographic Institution program in oceanography and applied ocean science in 2009. After completing her doctorate, she held faculty positions at Mount Holyoke College, Duke University, and Yale University. While at Yale, she served as associate director of research at the university’s Center for Green Chemistry and Green Engineering. In 2018, Plata joined MIT’s faculty in the Department of Civil and Environmental Engineering.
Her work as a scholar and educator has earned numerous awards and honors. She received MIT’s Harold E. Edgerton Faculty Achievement Award in 2020, recognizing her excellence in research, teaching, and service. She has also been honored with an NSF CAREER Award and the Odebrecht Award for Sustainable Development. Plata is a fellow of the American Chemical Society and was a Young Investigator Sustainability Fellow at Caltech.
Plata is a two-time National Academy of Engineering Frontiers of Engineering Fellow and a two-time National Academy of Sciences Kavli Frontiers of Science Fellow. Her dedication to mentoring was recognized with MIT’s Junior Bose Award for Excellence in Teaching and the Frank Perkins Graduate Advising Award.
Physicists zero in on the mass of the fundamental W boson particle
When fundamental particles are heavier or lighter than expected, physicists’ understanding of the universe can tip into the unknown. A particle that is just beyond its predicted mass can unravel scientists’ assumptions about the forces that make up all of matter and space. But now, a new precision measurement has reset the balance and confirmed scientists’ theories, at least for one of the universe’s core building blocks.
In a paper appearing today in the journal Nature, an international team including MIT physicists reports a new, ultraprecise measurement of the mass of the W boson.
The W boson is one of two elementary particles that embody the weak force, which is one of the four fundamental forces of nature. The weak force enables certain particles to change identities, such as from protons to neutrons and vice versa. This morphing is what drives radioactive decay, as well as nuclear fusion, which powers the sun.
Now, scientists have determined the mass of the W boson by analyzing more than 1 billion proton-collision events produced by the Large Hadron Collider (LHC) at CERN (the European Organization for Nuclear Research) in Switzerland. The LHC accelerates protons toward each other at close to the speed of light. When they collide, two protons can produce a W boson, among a shower of other particles.
Catching a W boson is nearly impossible, as it decays almost immediately into two types of particles, one of which, a neutrino, is so elusive that it cannot be detected. Scientists are left to measure the other particle, known as a muon, and model how it might add up to the total mass of its parent, the W boson. In the new study, scientists used the Compact Muon Solenoid (CMS) experiment, a particle detector at the LHC that precisely tracks muons and other particles produced in the aftermath of proton collisions.
From billions of proton-proton collisions, the team identified 100 million events that produced a W boson decaying to a muon and a neutrino. For each of these events, they carried out detailed analyses to narrow in on a precise mass measurement. In the end, they determined that the W boson has a mass of 80360.2 ± 9.9 megaelectron volts (MeV). This new mass is in line with predictions of the Standard Model, which is physicists’ best rulebook for describing the fundamental particles and forces of nature.
The precision of the new measurement is on par with a previous measurement made in 2022 by the Collider Detector at Fermilab (CDF). That measurement took physicists by surprise, as the mass it reported was significantly heavier than what the Standard Model predicted, and it therefore raised the possibility of “new physics,” such as particles and forces that have yet to be discovered.
Because the new CMS measurement is just as precise as the CDF result, and because it agrees with the Standard Model and with a number of other experiments, physicists are more likely on solid ground in how they understand the W boson.
“It’s just a huge relief, to be honest,” says Kenneth Long, a lead author of the study, who is a senior postdoc in MIT’s Laboratory for Nuclear Science. “This new measurement is a strong confirmation that we can trust the Standard Model.”
The study is authored by more than 3,000 members of CERN’s CMS Collaboration. The core group who worked on the new measurement includes about 30 scientists from 10 institutions, led by a team at MIT that includes Long; Tianyu Justin Yang PhD ’24; David Walter and Jan Eysermans, who are both MIT postdocs in physics; Guillelmo Gomez-Ceballos, a principal research scientist in the Particle Physics Collaboration; Josh Bendavid, a former research scientist; and Christoph Paus, a professor of physics at MIT and principal investigator with the Particle Physics Collaboration.
Piecing together
The W boson was first discovered in 1983 and is predicted to be the fourth heaviest among all the fundamental particles. Multiple experiments have aimed to narrow in on the particle’s mass, with varying degrees of precision. For the most part, these experiments have produced measurements that agree with the Standard Model’s predictions. The 2022 measurement by Fermilab’s CDF experiment is the one significant outlier. It also happens to be the most precise experiment to date.
“If you take the CDF measurement at face value, you would say there must be physics beyond the Standard Model,” says co-author Christoph Paus. “And of course that was the big mystery.”
Paus and his colleagues sought to either support or refute the CDF’s findings by making an independent measurement, with an experiment that matches CDF’s precision. Their new W boson mass measurement is a product of 10 years’ worth of work, both to analyze actual particle collision events and to simulate all the scenarios that could produce those events.
For their new study, the physicists analyzed proton collision events that were produced at the LHC in 2016. When it is running, the particle collider generates proton collisions at a furious rate of about one every 25 nanoseconds. The team analyzed a portion of the LHC’s 2016 dataset that encompasses billions of proton-proton collisions. Among these, they identified about 100 million events that produced a very short-lived W boson.
“A particle like the W boson exists for a teeny tiny moment — something like 10⁻²⁴ seconds — before decaying to two particles, one of which is a neutrino that can’t be measured directly,” Long explains. “That’s the tricky part: You have to measure the other particle — a muon — really well, and be able to piece things together with only one piece of the puzzle.”
Gathering momentum
When a muon is produced from the decay of a W boson, it carries half of the W boson’s mass, which is converted into momentum that carries the muon away from the original collision. Due to the strong magnetic field inside the CMS detector, the electrically charged muon follows a path whose curvature is a function of its momentum. Scientists’ challenge is to track the muon’s path and every interaction it may have with other particles and its surroundings, in order to estimate its initial momentum.
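The underlying relation between bending and momentum is a textbook one, not specific to this study: for a particle of unit electric charge moving perpendicular to a uniform magnetic field,

$$p_T\,[\mathrm{GeV}/c] \;\approx\; 0.3 \times B\,[\mathrm{T}] \times R\,[\mathrm{m}],$$

where $B$ is the field strength and $R$ is the radius of curvature of the track. In the CMS solenoid’s field of roughly 3.8 tesla, a muon carrying on the order of half the W boson’s mass (roughly 40 GeV) bends along a radius of tens of meters, which is why its path must be tracked with extraordinary precision to pin down its momentum.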
The muon’s momentum is also influenced by the momentum of the W boson before it decays. Disentangling the impact of the W boson’s motion from the effects of its mass presented a major challenge. To infer the W boson mass, the team first carried out simulations of every scenario they could think of that a muon might experience after a proton-proton collision in the chaotic environment of the particle collider. In all, the team produced 4 billion such simulated events described by state-of-the-art theoretical calculations. The simulations encoded diverse hypotheses about how the muon momentum is affected by the physical features of the CMS detector, as well as uncertainties in the predictions that govern W boson production in LHC collisions.
The researchers compared their simulations with data from the 2016 LHC run. For every proton-proton collision event that occurs in the collider, scientists can use the CMS detector at CERN’s LHC to precisely measure the energy and momentum of resulting particles such as muons. The team analyzed CMS measurements of muons that were produced from over 100 million W boson events. They then overlaid this data onto their simulations of the muon momentum, which they converted to a new mass for the W boson.
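To make the logic of that comparison concrete, here is a deliberately simplified sketch in Python. It is not the collaboration’s code, and the spectra and numbers it uses are invented; it only illustrates the general idea of scanning mass hypotheses and keeping the simulated template that best matches the observed distribution.

```python
# Schematic illustration only -- not the CMS analysis code. Predicted muon
# transverse-momentum spectra ("templates") for several hypothesized W masses
# are compared with an observed spectrum, and the hypothesis that matches
# best is taken as the measured mass. All shapes and numbers are invented.
import numpy as np

BINS = np.linspace(20_000, 60_000, 101)        # muon pT bin edges, in MeV
CENTERS = 0.5 * (BINS[:-1] + BINS[1:])

def template(m_w_mev: float) -> np.ndarray:
    """Toy spectrum: a smeared peak near m_W / 2 (detector effects ignored)."""
    peak = m_w_mev / 2.0
    width = 0.05 * peak
    shape = np.exp(-0.5 * ((CENTERS - peak) / width) ** 2)
    return shape / shape.sum()                  # normalized like a histogram

# Pretend "data": built here from a known mass; in reality it comes from
# the roughly 100 million W -> mu nu events recorded by the detector.
observed = template(80_360.0)

# Scan mass hypotheses and keep the template that best matches the data.
hypotheses = np.arange(80_300.0, 80_421.0, 1.0)
scores = [np.sum((template(m) - observed) ** 2) for m in hypotheses]
best = hypotheses[int(np.argmin(scores))]
print(f"Best-fit W mass hypothesis: {best:.0f} MeV")   # recovers 80360 in this toy
```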
That mass — 80360.2 ± 9.9 megaelectron volts — is significantly lighter than the CDF experiment’s measurement. What’s more, the new estimate is within the range of what the Standard Model predicts for the W boson’s mass, bolstering physicists’ confidence in the Standard Model and its descriptions of the major particles and forces of nature.
“With the combination of our really precise result and other experiments that line up with the Standard Model’s predictions, I think that most people would place their bets on the Standard Model,” Long says. “Though I do think people should continue doing this measurement. We are not done.”
“We want to add more data, make our analysis techniques more precise, and basically squeeze the lemon a little harder. There is always some juice left,” Paus adds. “With a better look, then we can say for certain whether we truly understand this one fundamental building block.”
This work was supported, in part, by multiple funding agencies, including the U.S. Department of Energy, and the SubMIT computing facility, sponsored by the MIT Department of Physics.
Sixteen new START.nano companies are developing hard-tech solutions with the support of MIT.nano
MIT.nano has announced that 16 startups became active participants in its START.nano program in 2025, more than doubling the number of new companies from the previous year. Aimed at speeding the transition of hard-tech innovation to market, START.nano supports new ventures through discounted use of MIT.nano shared facilities and guided access to the MIT innovation ecosystem. The newly engaged startups are developing solutions for some of the world’s greatest challenges in health, climate, energy, semiconductors, novel materials, and quantum computing.
“The unique resources of MIT.nano enable not just the foundational research of academia, but the translation of that research into commercial innovations through startups,” says START.nano Program Manager Joyce Wu SM ’00, PhD ’07. “The START.nano accelerator supports early-stage companies from MIT and beyond with the tools and network they need for success.”
Launched in 2021, START.nano aims to increase the survival rate of hard-tech startups by easing their journey from the lab to the real world. In addition to receiving access to MIT.nano’s laboratories, program participants are invited to present at startup exhibits at MIT conferences and at exclusive events, including the newly launched PITCH.nano competition.
“For an early-stage startup working at the frontier of superconductor discovery, the combination of infrastructure and community has been irreplaceable,” says Jason Gibson, CEO and co-founder of Quantum Formatics. “START.nano isn’t just a resource,” adds Cynthia Liao MBA ’24, CEO and co-founder of Vertical Semiconductor. “It’s a strategic advantage that accelerates our roadmap, allowing us to iterate quickly to meet customer needs and strengthen our competitive edge.”
Although an MIT affiliation is not required, five of the 16 companies in the new cohort are led by MIT alumni, and an additional three have MIT affiliation. In total, 49 percent of the startups in START.nano are founded by MIT graduates.
Here are the intended impacts of the 16 new START.nano companies:
Acorn Genetics is developing a "smartphone of sequencing," launching the power of genetic analysis out of slow, centralized labs and into the hands of consumers for fast, portable, and affordable sequencing.
Addis Energy leverages oil, gas, and geothermal drilling technologies to unlock the chemical potential of iron-rich rocks. By injecting engineered fluids, they harness the earth’s natural energy to produce ammonia that is both abundant and cost-effective.
Augmend Health uses virtual reality and AI to deliver clinical data intelligence services for specialty care that turns incomplete documentation into revenue, compliance, and better treatment decisions.
Brightlight Photonics is building high-performance laser infrastructure at chip scale, integrating Titanium:Sapphire gain to deliver broadband, high-power, low-noise optical sources for advanced photonic systems.
Cahira Technologies is creating the new paradigm of brain-computer symbiosis for treating intractable diseases and human augmentation through autonomous, nonsurgical neural implants.
Copernic Catalysts is leveraging computational modeling to develop and commercialize transformational catalysts for low-cost and sustainable production of bulk chemicals and e-fuels.
Daqus Energy is unlocking high-energy lithium-ion batteries using critical metal-free organic cathodes.
Electrified Thermal Solutions is reinventing the firebrick to electrify industrial heat.
Guardion is making analytical instruments, chemical detectors, and radiation detectors more sensitive, portable, and easier to scale with nanomaterial-based ion detectors.
Mantel Capture is designing carbon capture materials to operate at the high temperatures found inside boilers, kilns, and furnaces — enabling highly efficient carbon capture that has not been possible until now.
nOhm Devices is developing highly-efficient cryogenic electronics for quantum computers and sensors.
Quantum Formatics is speeding discovery of the world’s next superconductors using proprietary AI.
Qunett is building the foundational hardware stack for deployable quantum networks to power the next era of global connectivity.
Rheyo is developing new ways to make dental care more effective, efficient, and easy through advanced materials and technology.
Vertical Semiconductor is commercializing high-voltage, high-density, high-efficiency vertical GaN (gallium nitride) to power the next era of compute.
VioNano Innovations is developing specialty material solutions that reduce variability and improve precision in semiconductor manufacturing, allowing chipmakers to build even smaller, faster, and more cost-effective chips.
START.nano now comprises over 32 companies and 11 graduates — ventures that have moved beyond the prototyping stages, and some into commercialization.
Researchers develop molecular editing tool to relocate alcohol groups
A significant challenge for researchers in materials science and drug discovery is that even the most minor change to a molecule’s structure can completely alter its function. Historically, making these adjustments meant researchers had to re-synthesize the target molecule from scratch — a time-consuming and expensive bottleneck akin to tearing down a house just to move a lamp.
In an exciting discovery recently published in Nature, MIT chemists led by Professor Alison Wendlandt have developed a precision technique that allows scientists to seamlessly relocate alcohol functional groups from one spot on a molecule to a neighboring site. This process bypasses the need to rebuild the entire structure and is the result of a multi-year collaboration with Bristol Myers Squibb.
Functional group repositioning
Using a special light-sensitive molecule called decatungstate as a catalyst, the reaction triggers a highly controlled “migration” of the alcohol group. The process is remarkably predictable, ensuring the molecule retains its precise 3D shape and orientation throughout the move.
The ability to implement subtle structural tweaks without the waste of “from-scratch” synthesis eliminates a primary hurdle that has long plagued the field. Furthermore, because the reaction is gentle enough to work on complex, nearly finished structures, it serves as a powerful fine-tuning tool for late-stage drug candidates.
Precision editing to unlock new chemical designs
When combined with existing chemical methods, this tool provides new pathways to create challenging molecular architectures and oxygenation patterns that were previously out of reach.
“This alcohol migration strategy allows for precise, molecular-level tuning of oxygen atom positions,” says Qian Xu, the co-first author of the paper and a postdoc in the Wendlandt Group. “With predictable stereo- and regioselectivity and late-stage operability, it presents an enticing chance to modify natural products and drug molecules through ‘editing.’”
Ultimately, this precision editing tool holds the potential to dramatically improve the efficiency of molecular design campaigns, accelerating the development of new pharmaceuticals, materials, and agrochemicals.
In addition to Wendlandt and Xu, MIT contributors include co-lead author and graduate student Yichen Nie, recent postdoc Ronghua Zhang, and professor of chemistry Jeremiah A. Johnson. Other authors include Jacob-Jan Haaksma of the University of Groningen in The Netherlands; Natalie Holmberg-Douglas, Farid van der Mei, and Chloe Williams of Bristol Myers Squibb; and Paul M. Scola of Actithera.
Study reveals “two-factor authentication” system that controls microRNA destruction
Cells rely on tiny molecules called microRNAs to tune which genes are active and when. Cells must carefully control the lifespan of microRNAs to prevent widespread disruption to gene regulation.
A new study led by researchers at MIT’s Whitehead Institute for Biomedical Research and Germany’s Max Planck Institute of Biochemistry reveals how cells selectively eliminate certain microRNAs through an unexpectedly intricate molecular recognition system. The open-access work, published on March 18 in Nature, shows that the process requires two separate RNA signals, similar to how many digital systems require two forms of identity verification before granting access.
The findings explain how cells use this “two-factor authentication” system to ensure that only intended microRNAs are destroyed, leaving the rest of the gene regulation machinery in operation.
MicroRNAs are short strands of RNA that help control gene expression. Working together with a protein called Argonaute, they bind to specific messenger RNAs — the molecules that carry genetic instructions from DNA to the cell’s protein-making machinery — and trigger their destruction. In this way, microRNAs can reduce the production of specific proteins.
While scientists recognized that microRNAs could be destroyed through a pathway known as target-directed microRNA degradation, or TDMD, the details of how cells recognized which microRNAs to eliminate remained unclear.
“We knew there was a pathway that could target microRNAs for degradation, but the biochemical mechanism behind it wasn’t understood,” says MIT Professor David Bartel, a Whitehead Institute member and co-senior author of the study.
Earlier work from Bartel’s lab and others had identified a key player in this pathway: the ZSWIM8 E3 ubiquitin ligase. E3 ubiquitin ligases are involved in the cell’s recycling system and attach a small molecular tag called ubiquitin to other proteins, marking them for destruction.
The researchers first showed that the ZSWIM8 E3 ligase specifically binds and tags Argonaute, the protein that holds microRNAs and helps regulate genes. The researchers’ next challenge was to understand how this machinery recognized only Argonaute complexes carrying specific microRNAs that should be degraded.
The answer turned out to be surprisingly sophisticated.
Using a combination of biochemistry and cryo-electron microscopy — an imaging technique that reveals molecular structures at near-atomic resolution — the researchers discovered that the degradation system relies on a dual-RNA recognition process. First, Argonaute must carry a specific microRNA. Second, another RNA molecule called a “trigger RNA” must bind to that microRNA in a particular way.
The degradation machinery activates only when both signals are present.
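As an analogy only, with hypothetical names and the biology vastly simplified, the dual requirement can be written as a single predicate: degradation is licensed only when both RNA signals are present.

```python
# Analogy only: a vastly simplified sketch of the dual-RNA ("two-factor")
# requirement described above. Names and types are hypothetical.
from typing import Optional

def tdmd_licensed(loaded_microrna: Optional[str], trigger_rna_paired: bool) -> bool:
    """True only if Argonaute carries a microRNA AND a trigger RNA is paired
    to it in the required configuration -- both factors are needed."""
    return loaded_microrna is not None and trigger_rna_paired

print(tdmd_licensed("miR-example", trigger_rna_paired=True))    # True: marked for degradation
print(tdmd_licensed("miR-example", trigger_rna_paired=False))   # False: left to do its job
```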
This dual requirement ensures exquisite specificity. Each cell contains over a hundred thousand Argonaute–microRNA complexes regulating many genes, and destroying them indiscriminately would disrupt essential biological processes.
“The vast majority of Argonaute molecules in the cell are doing useful work regulating gene expression,” says Bartel, who is a professor of biology at MIT and also a Howard Hughes Medical Institute investigator. “You only want to degrade the ones carrying a particular microRNA and bound to the right trigger RNA. Without that specificity, the cell would lose its microRNAs and the essential regulation that they provide.”
The structural images revealed complex molecular interactions. The ZSWIM8 ligase detects multiple structural changes that occur when the two RNAs bind together within the Argonaute protein.
“When we saw the structure, everything clicked,” says Elena Slobodyanyuk, a graduate student in Bartel’s lab and co-first author of the study. “You could see how the pairing of the trigger RNA with the microRNA reshapes the Argonaute complex in a way that the ligase can recognize.”
Beyond explaining how TDMD works, the findings may impact how scientists think about the regulation of RNA molecules more broadly.
“A lot of E3 ligases recognize their targets through simpler signals,” says Jakob Farnung, co-first author and researcher in the Department of Molecular Machines and Signaling at the Max Planck Institute of Biochemistry. “It was like opening a treasure chest where every detail revealed something new and mesmerizing.”
MicroRNAs typically persist in cells for much longer time periods than most messenger RNAs, but some degrade far more quickly, and the TDMD pathway appears to account for many of these unusually short-lived microRNAs.
The researchers are now investigating whether other RNAs can trigger similar degradation pathways and whether additional microRNAs are regulated through variations of the mechanism shown in this study.
“This opens up a whole new way of thinking about how RNA molecules can control protein degradation,” says Brenda Schulman, study co-senior author and director of the Department of Molecular Machines and Signaling at the Max Planck Institute of Biochemistry. “Here, the recognition was far more elaborate than expected. There’s likely much more left to discover.”
Uncovering the details of this intricate regulatory system required interdisciplinary collaboration, combining expertise in RNA biochemistry, structural biology, and ubiquitin enzymology to solve this long-standing molecular puzzle.
“This was a project that required the strengths of two labs working at the forefront of their fields,” says Schulman, who is also an alum of Whitehead Institute. “It was an incredible team effort.”
How bacteria suppress immune defenses in stubborn wound infections
Chronic wound infections are notoriously difficult to manage because some bacteria can actively interfere with the body’s immune defenses. In wounds, Enterococcus faecalis (E. faecalis) is particularly resilient — it can survive inside tissues, alter the wound environment, and weaken immune signals at the injury site. This disruption creates conditions where other microbes can easily establish themselves, resulting in multi-species infections that are complex and slow to resolve. Such persistent wounds, including diabetic foot ulcers and post-surgical infections, place a heavy burden on patients and health care systems, and sometimes lead to serious complications such as amputations.
Now, researchers have discovered how E. faecalis releases lactic acid to acidify its surroundings and suppresses the immune-cell signal needed to start a proper response to infection. By silencing the body’s defenses, the bacterium can cause persistent and hard-to-treat wound infections. This explains why some wounds struggle to heal, even with treatment, and why infections involving multiple bacteria are especially difficult to eradicate.
The work was led by researchers from the Singapore-MIT Alliance for Research and Technology (SMART) Antimicrobial Resistance (AMR) interdisciplinary research group, alongside collaborators from the Singapore Centre for Environmental Life Sciences Engineering at Nanyang Technological University (NTU Singapore), MIT, and the University of Geneva in Switzerland.
In a paper titled “Enterococcus faecalis-derived lactic acid suppresses macrophage activation to facilitate persistent and polymicrobial wound infections,” recently published in Cell Host & Microbe, the researchers documented how E. faecalis releases large amounts of lactic acid during infection. This acidity suppresses the activation of macrophages — immune cells that normally help to clear infections — and interferes with several important internal processes that help the cell recognize and respond to infection. As a result, the mechanisms that cells rely on to send out “danger” signals are suppressed, leaving the macrophages unable to fully activate.
Researchers found that E. faecalis uses a two‑step mechanism to achieve this. Lactic acid enters the macrophages through a lactate transporter called MCT‑1 and also binds to a lactate-sensing receptor, GPR81, on the cell surface. By engaging both pathways, the bacterium effectively shuts down downstream immune signalling and blocks the macrophage’s inflammatory response, allowing E. faecalis to persist in the wound much longer than it should. Specifically, the lactic acid prevents a key immune alarm signal, known as NF-κB, from switching on inside these cells.
This was proven in a mouse wound model, where strains of E. faecalis that could not make lactic acid were cleared much more quickly, and the wounds also showed stronger immune activity. In wounds infected with both E. faecalis and Escherichia coli, the weakened immune response caused by lactic acid also allowed E. coli to grow better. This explains why wound infections often involve multiple species of bacteria and become harder to treat over time, particularly since E. faecalis is among the most common bacteria found in chronic wounds.
“Chronic wound infections often fail not because antibiotics are powerless, but because the immune system has effectively been ‘switched off’ at the infection site. We found that E. faecalis floods the wound with lactic acid, lowering pH and muting the NF‑κB alarm inside macrophages — the very cells that should be calling for help. By pinpointing how acidity rewires immune signalling, we now have clear targets to reactivate the immune response,” says first author Ronni da Silva, research scientist at SMART AMR, former postdoc in the lab of co-author and MIT professor of biology Jianzhu Chen, and SCELSE-NTU visiting researcher.
“This discovery strengthens our understanding of host-pathogen interactions and offers new directions for developing treatments and wound care that target the bacteria’s immunosuppressive strategies. By revealing how the immune response is shut down, this research may help improve infection management and support better recovery outcomes for patients, especially those with chronic wounds or weakened immunity,” says Kimberly Kline, principal investigator at SMART AMR, SCELSE-NTU visiting academic, professor at the University of Geneva, and corresponding author of the paper.
By identifying lactic‑acid‑driven immune suppression as a root cause of persistent wound infections, this work highlights the potential of treatment approaches that support the immune system, rather than rely on antibiotics alone. This could lead to therapies that help wounds heal more reliably and reduce the risk of complications. Potential directions include reducing acidity in the wound or blocking the signals that lactic acid uses to switch off immune cells.
Building on their study, the researchers plan to explore validation in additional pathogens and human wound samples, followed by assessments in advanced preclinical models ahead of any potential clinical trials.
The research was partially supported by the National Research Foundation Singapore under its Campus for Research Excellence and Technological Enterprise program.
MIT graduate engineering and business programs ranked highly by U.S. News for 2026-27
U.S. News and World Report has again placed MIT’s graduate program in engineering at the top of its annual rankings, released today. The Institute has held the No. 1 spot since 1990, when the magazine first ranked such programs.
The MIT Sloan School of Management also placed highly, occupying the No. 6 spot for the best graduate business programs.
Among individual engineering disciplines, MIT placed first in six areas: aerospace/aeronautical/astronautical engineering, chemical engineering, computer engineering (tied with the University of California at Berkeley), electrical/electronic/communications engineering (tied with Stanford University and Berkeley), materials engineering, and mechanical engineering. It placed second in nuclear engineering.
In the rankings of individual MBA specialties, MIT placed first in four areas: business analytics, entrepreneurship (with Stanford), production/operations, and supply chain/logistics. It placed second in executive MBA programs (with the University of Chicago).
U.S. News bases its rankings of graduate schools of engineering and business on two types of data: reputational surveys of deans and other academic officials, and statistical indicators that measure the quality of a school’s faculty, research, and students. The magazine’s less-frequent rankings of graduate programs in the sciences, social sciences, and humanities are based solely on reputational surveys.
In the sciences, ranked by U.S. News for the first time in four years, MIT’s doctoral programs placed first in four areas: biology (with Scripps Research Institute), chemistry (with Berkeley and Caltech), computer science (with Carnegie Mellon University and Stanford), and physics (with Caltech, Princeton University, and Stanford). The Institute placed second in mathematics (with Harvard University, Stanford, and Berkeley).
Helping data centers deliver higher performance with less hardware
To improve data center efficiency, multiple storage devices are often pooled together over a network so many applications can share them. But even with pooling, significant device capacity remains underutilized due to performance variability across the devices.
MIT researchers have now developed a system that boosts the performance of storage devices by handling three major sources of variability simultaneously. Their approach delivers significant speed improvements over traditional methods that tackle only one source of variability at a time.
The system uses a two-tier architecture, with a central controller that makes big-picture decisions about which tasks each storage device performs, and local controllers for each machine that rapidly reroute data if that device is struggling.
The method, which can adapt in real time to shifting workloads, does not require specialized hardware. When the researchers tested this system on realistic tasks like AI model training and image compression, it nearly doubled the performance delivered by traditional approaches. By intelligently balancing the workloads of multiple storage devices, the system can increase overall data center efficiency.
“There is a tendency to want to throw more resources at a problem to solve it, but that is not sustainable in many ways. We want to be able to maximize the longevity of these very expensive and carbon-intensive resources,” says Gohar Chaudhry, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this technique. “With our adaptive software solution, you can still squeeze a lot of performance out of your existing devices before you need to throw them away and buy new ones.”
Chaudhry is joined on the paper by Ankit Bhardwaj, an assistant professor at Tufts University; Zhenyuan Ruan PhD ’24; and senior author Adam Belay, an associate professor of EECS and a member of the MIT Computer Science and Artificial Intelligence Laboratory. The research will be presented at the USENIX Symposium on Networked Systems Design and Implementation.
Leveraging untapped performance
Solid-state drives (SSDs) are high-performance digital storage devices that allow applications to read and write data. For instance, an SSD can store vast datasets and rapidly send data to a processor for machine-learning model training.
Pooling multiple SSDs together so many applications can share them improves efficiency, since not every application needs to use the entire capacity of an SSD at a given time. But not all SSDs perform equally, and the slowest device can limit the overall performance of the pool.
These inefficiencies arise from variability in SSD hardware and the tasks they perform.
To utilize this untapped SSD performance, the researchers developed Sandook, a software-based system that tackles three major forms of performance-hampering variability simultaneously. “Sandook” is an Urdu word that means “box,” to signify “storage.”
One type of variability is caused by differences in the age, amount of wear, and capacity of SSDs that may have been purchased at different times from multiple vendors.
The second type of variability is due to the mismatch between read and write operations occurring on the same SSD. To write new data to the device, the SSD must erase some existing data. This process can slow down data reads, or retrievals, happening at the same time.
The third source of variability is garbage collection, a process of gathering and removing outdated data to free up space. This process, which slows SSD operations, is triggered at random intervals that a data center operator cannot control.
“I can’t assume all SSDs will behave identically through my entire deployment cycle. Even if I give them all the same workload, some of them will be stragglers, which hurts the net throughput I can achieve,” Chaudhry explains.
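A rough back-of-the-envelope illustration (the numbers are invented, not from the paper) shows why stragglers hurt: if work is split evenly across a pool, the finishing time is set by the slowest device, while a speed-weighted split lets every device finish together.

```python
# Invented numbers for illustration: evenly striping work across a pool means
# the slowest SSD sets the finishing time, wasting headroom on the fast ones.
throughputs_gbps = [3.0] * 9 + [1.0]    # nine healthy SSDs and one straggler
total_work_gb = 3_000

# Even split: every device gets the same amount, so the straggler sets the pace.
even_share = total_work_gb / len(throughputs_gbps)
even_finish = max(even_share / t for t in throughputs_gbps)
print(f"Even split finishes in {even_finish:.0f} s")          # 300 s

# Weighted split: shares proportional to speed, so all devices finish together.
total_speed = sum(throughputs_gbps)
weighted_finish = max((total_work_gb * t / total_speed) / t for t in throughputs_gbps)
print(f"Weighted split finishes in {weighted_finish:.0f} s")  # about 107 s
```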
Plan globally, react locally
To handle all three sources of variability, Sandook utilizes a two-tier structure. A global scheduler optimizes the distribution of tasks across the overall pool, while faster local schedulers on each SSD react to urgent events and shift operations away from congested devices.
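A minimal sketch of that two-tier idea, with hypothetical names and none of Sandook’s real interfaces, might look like the following: a global planner assigns each device a share of work in proportion to its profiled capacity, while a fast per-device check reroutes requests away from a device that is momentarily congested.

```python
# Hypothetical sketch of a two-tier scheduler (not Sandook's actual code).
# A global planner assigns weighted shares; a per-device check reroutes
# requests away from a device that is currently slower than its profile.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    capacity_weight: float      # from the device's performance profile
    current_latency_ms: float   # observed right now
    baseline_latency_ms: float  # typical latency from profiling

def global_plan(devices: list[Device], total_work: float) -> dict[str, float]:
    """Split work in proportion to each device's profiled capacity."""
    total_weight = sum(d.capacity_weight for d in devices)
    return {d.name: total_work * d.capacity_weight / total_weight for d in devices}

def local_route(primary: Device, fallback: Device) -> str:
    """Fast per-device decision: divert if the primary looks congested."""
    congested = primary.current_latency_ms > 2.0 * primary.baseline_latency_ms
    return fallback.name if congested else primary.name

devices = [
    Device("ssd0", capacity_weight=1.0, current_latency_ms=0.2, baseline_latency_ms=0.2),
    Device("ssd1", capacity_weight=0.6, current_latency_ms=1.5, baseline_latency_ms=0.3),
]
print(global_plan(devices, total_work=100.0))              # bigger share to the faster device
print(local_route(primary=devices[1], fallback=devices[0]))  # diverted to ssd0
```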
The system overcomes delays from read-write interference by rotating which SSDs an application can use for reads and writes. This reduces the chance reads and writes happen simultaneously on the same machine.
Sandook also profiles the typical performance of each SSD. It uses this information to detect when garbage collection is likely slowing operations down. Once detected, Sandook reduces the workload on that SSD by diverting some tasks until garbage collection is finished.
“If that SSD is doing garbage collection and can’t handle the same workload anymore, I want to give it a smaller workload and slowly ramp things back up. We want to find the sweet spot where it is still doing some work, and tap into that performance,” Chaudhry says.
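That throttle-and-ramp behavior can be sketched as a simple control loop. The names and thresholds below are invented for illustration and are not Sandook’s actual implementation: observed latency is compared against the device’s profiled baseline, its share of work is cut when garbage collection appears to be underway, and the share is slowly ramped back afterward.

```python
# Hypothetical control-loop sketch (not Sandook's code): throttle a device's
# share when its latency departs from the profiled baseline, then ramp back.
def adjust_share(share: float, latency_ms: float, baseline_ms: float,
                 throttle: float = 0.5, ramp: float = 1.1, cap: float = 1.0) -> float:
    """Return the device's new fraction of the workload."""
    if latency_ms > 2.0 * baseline_ms:      # likely garbage collection underway
        return share * throttle             # back off, but keep it doing some work
    return min(share * ramp, cap)           # slowly ramp back toward full share

share = 1.0
for observed in [0.2, 0.9, 1.1, 0.8, 0.3, 0.2, 0.2]:   # latency samples in ms
    share = adjust_share(share, observed, baseline_ms=0.25)
    print(f"latency={observed:.1f} ms -> share={share:.2f}")
```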
The SSD profiles also allow Sandook’s global controller to assign workloads in a weighted fashion that considers the characteristics and capacity of each device.
Because the global controller sees the overall picture and the local controllers react on the fly, Sandook can simultaneously manage forms of variability that happen over different time scales. For instance, delays from garbage collection occur suddenly, while latency caused by wear and tear builds up over many months.
The researchers tested Sandook on a pool of 10 SSDs and evaluated the system on four tasks: running a database, training a machine-learning model, compressing images, and storing user data. Sandook boosted the throughput of each application between 12 and 94 percent when compared to static methods, and improved the overall utilization of SSD capacity by 23 percent.
The system enabled SSDs to achieve 95 percent of their theoretical maximum performance, without the need for specialized hardware or application-specific updates.
“Our dynamic solution can unlock more performance for all the SSDs and really push them to the limit. Every bit of capacity you can save really counts at this scale,” Chaudhry says.
In the future, the researchers want to incorporate new protocols available on the latest SSDs that give operators more control over data placement. They also want to leverage the predictability in AI workloads to increase the efficiency of SSD operations.
“Flash storage is a powerful technology that underpins modern datacenter applications, but sharing this resource across workloads with widely varying performance demands remains an outstanding challenge. This work moves the needle meaningfully forward with an elegant and practical solution ready for deployment, bringing flash storage closer to its full potential in production clouds,” says Josh Fried, a software engineer at Google and incoming assistant professor at the University of Pennsylvania, who was not involved with this work.
This research was funded, in part, by the National Science Foundation, the U.S. Defense Advanced Research Projects Agency, and the Semiconductor Research Corporation.
Electrons in moiré crystals explore higher-dimensional quantum worlds
The electrons that power our society flow left and right through the circuitry in our electronics, back and forth along the transmission lines that make up our power grid, and up and down to light up every floor of every building. But the electrons in newly discovered “moiré crystals” move in much stranger ways. They can move left and right, back and forth, or up and down in our three-dimensional world, but these electrons also act as if they can teleport in and out of a mysterious fourth dimension of space that is perpendicular to our perceivable reality. Physicists have found that this strange, newly discovered quantum behavior has nothing to do with the electrons themselves and everything to do with the strange material environment in which they live.
The electrons in moiré crystals leap into a fourth dimension through a process called “quantum tunneling.” While a soccer ball sitting at the bottom of a hill will stay put until someone retrieves it, a quantum particle in a valley can jump out all on its own. Quantum tunneling may seem magical to us, but it is quite commonplace in the microscopic quantum world, on the length scales of atoms. Quantum tunneling is also important on larger length scales, particularly in large superconducting circuits that underlie an emerging landscape of quantum technology, as recognized by the 2025 Nobel Prize in Physics.
However, quantum tunneling in moiré crystals is different: physicists have now measured that once an electron tunnels, it acts as if it had passed into a completely different world and come back again, as if it had been transported through a fourth “synthetic” dimension.
In a paper published recently in the journal Nature, a team of MIT researchers realize a long-anticipated scalable technique for producing high-quality moiré materials as moiré crystals, overcoming a materials bottleneck for next-generation electronic applications. In addition, the electrons in these crystals act as if they can teleport through a fourth dimension of space, unlocking a realistic materials approach for realizing numerous theoretical predictions of higher-dimensional superconductivity and higher-dimensional topological properties in the laboratory.
The study’s co-lead authors are Kevin Nuckolls, a Pappalardo postdoc in physics at MIT, and Nisarga Paul PhD ’25, and the study’s corresponding author is Joe Checkelsky, professor of physics at MIT. In addition, the study’s MIT co-authors include Alan Chen, Filippo Gaggioli, Joshua Wakefield, and Liang Fu, along with collaborators at Harvard University, Toho University, and the National High Magnetic Field Laboratory.
Crystal perfection
To make a moiré material, physicists first start with atomically thin two-dimensional (2D) materials, like the thinnest sheets of carbon known as graphene. Moiré materials can be created by combining individual sheets of the same 2D material and twisting them back and forth with respect to one another. Moiré materials can also be created by combining two different 2D materials that are very similar, but not quite the same, which ensures that they can never perfectly match one another even when carefully aligned. Both of these methods create intricate interference patterns where the individual layers of moiré materials are nearly aligned in some areas and visibly misaligned in others. Physicists call these patterns “moiré superlattices,” named after historical French fabrics that show similarly beautiful patterns generated by overlaying two different threading patterns.
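As a rough guide to the length scales involved, a standard small-angle estimate (not a result from this paper) gives the moiré period $\lambda_m$ for two lattices with lattice constant $a$, fractional lattice mismatch $\delta$, and twist angle $\theta$ in radians:

$$\lambda_m \;\approx\; \frac{a}{\sqrt{\delta^{2} + \theta^{2}}},$$

so a twist of about one degree, or a mismatch of about one percent, stretches an atomic-scale pattern into a superlattice roughly fifty to a hundred times larger.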
For more than a decade, moiré materials have completely reshaped how physicists design and control quantum material properties, and the physics labs at MIT have been the hotbed of transformative discoveries in this ever-growing research field. Pablo Jarillo-Herrero, the Cecil and Ida Green Professor of Physics at MIT, and Raymond Ashoori, professor of physics at MIT, were early adopters of new techniques for fabricating moiré materials. Together in 2014, their labs discovered that electrons in moiré materials made from graphene and the 2D material boron nitride live in an intricate quantum fractal known as “Hofstadter’s butterfly.” In 2018, Jarillo-Herrero’s lab discovered that moiré materials made from twisting two sheets of graphene were fertile grounds for unconventional superconductivity that, by some metrics, is one of the strongest superconductors ever discovered. Long Ju, the Lawrence C. and Sarah W. Biedenharn Associate Professor of Physics, and his lab discovered in 2024 that moiré materials made from multilayer graphene and boron nitride cause electrons to split apart into fractional pieces, a quantum phenomenon previously thought to be exclusively confined to extremely high magnetic fields, but now realized without the need for a magnetic field.
Common across all of these experiments, and those performed around the world, were the tireless efforts of students and postdocs in carefully assembling moiré material devices by hand, one at a time. To make a moiré material device, 2D materials like graphene are peeled using Scotch tape from rock-like crystals, such as graphite. Then, sticky polymer films and microscopes enable researchers to pick up different 2D materials one by one with a precise sequence of twist angles. Finally, these stacks of 2D materials are etched into individual devices that allow researchers to investigate their properties in the lab.
In their new study, Joe Checkelsky and his lab have discovered a new technique for generating moiré materials that skips over all of these laborious steps. Their new method takes an entirely different approach, and it’s one that can assemble moiré materials by the tens of thousands. Instead of assembling samples one by one and layer by layer, Checkelsky and his lab have found new chemical synthesis routes that enlist Mother Nature’s help to grow “moiré crystals” with high-quality moiré superlattices built into each of their layers. By analogy, if one were to think of previous generations of moiré materials like two stacked sheets of paper with different line spacings, Checkelsky has figured out how to generate entire libraries of encyclopedias whose odd-numbered pages and even-numbered pages have two different line spacings.
“It feels incredible for our team to have made this materials discovery, particularly at MIT,” says Nuckolls, co-lead author on the work. “Moiré materials have become a central focus of quantum materials research today in large part because of the work of our colleagues just down the hallway.”
In the end, it turns out that nature is by far the best at assembling moiré materials when given the right tools. The MIT team discovered that naturally grown moiré materials are nearly perfect and highly reproducible. This offers a long-anticipated proof-of-concept demonstration of a potentially scalable route to using moiré materials in next-generation electronics. Although there are many more obstacles to be overcome to transform these fundamental science results into usable technology, the team has demonstrated a crucial first step in the right direction.
4D in 4K
After discovering how to grow and manipulate moiré superlattices in moiré crystals, the team began to investigate their properties. Initially, the team found that the metallic properties of these materials were surprisingly complicated, but they soon shifted their perspective to think from a higher-dimensional point of view, an idea inspired by theoretical proposals made roughly half a century ago. To peer into this prospective four-dimensional quantum world, the team performed detailed studies of the electronic and magnetic properties of moiré crystals at very large magnetic fields. The electrons in common metals move in tight circular orbits when placed in a magnetic field. However, something very special happens when they move in moiré crystals with two different interfering lattices. This interference generates a moiré superlattice that is mathematically equivalent to an emergent four-dimensional “superspace” lattice. Guided by this new 4D superspace lattice, the team discovered that these electrons could now move through this fourth dimension when their motion aligns to the direction where the two competing lattices interfere the most.
“Metaphorically, our measurements uncover ‘shadows’ of an emergent 4D landscape upon which the electrons live,” says Nuckolls. “By carefully analyzing these 3D silhouettes from different angles and perspectives, our measurements reconstruct the 4D landscape that guides electrons in moiré crystals.”
Although this extra synthetic dimension is fictitious and the electrons in moiré crystals are actually still stuck in our 3D reality, they simulate a four-dimensional quantum world so closely that the measured properties of moiré crystals appear as if the researchers had actually performed their experiments in 4D. It seems like moiré crystals aren’t particularly bothered by whether the fourth dimension is fictitious and synthetic or if it’s real. It’s all the same to them.
“Mathematically, the equations describing the electron dynamics in these crystals are four-dimensional,” says co-lead author Nisarga Paul. “The electrons propagate in the synthetic dimension just as they do in our world’s three physical dimensions. It’s hard to detect this motion, but one of the striking realizations was that a magnetic field can reveal fingerprints of this synthetic dimension in experimentally measurable electronic properties known as quantum oscillations.”
Going forward, the team will explore how a wide variety of material properties might benefit from extra synthetic dimensions, which now could be within reach of realization.
“It’s fascinating to consider what may be possible next,” Checkelsky says. “There are long-standing theoretical predictions for higher-dimensional conductors and superconductors, for example — materials of this type may offer a new platform to examine these experimentally in the laboratory.”
This research was supported, in part, by the Gordon and Betty Moore Foundation, the U.S. Department of Energy Office of Science, the U.S. Office of Naval Research, the U.S. Army Research Office, U.S. Air Force Office of Scientific Research, MIT Pappalardo Fellowships in Physics, the Swiss National Science Foundation, and the U.S. National Science Foundation.
Urban planning students engage with communities through the Freedom Summer Fellowship
For the past three summers, MIT master’s students and recently graduated planners have collaborated with cities and community organizations to advance climate, infrastructure, and economic development initiatives. They’re known as the Freedom Summer Fellows, participants in an impact-driven program launched in 2023 by the MIT Department of Urban Studies and Planning (DUSP), an expression of the department’s commitment to equal opportunity and experiential learning.
Over the course of eight to 10 weeks, fellows are immersed in the real stakes and challenges of projects that involve navigating a network of interconnected causes, competing agendas, a range of stakeholders, and rapidly changing circumstances. Host organizations define discrete tasks and provide ongoing supervision, while fellows develop actionable tools and materials designed to empower organizations in the long term — from policy research and grant application strategies to navigate funding, to analytical tools and implementation frameworks to ensure informed and streamlined project management.
“You can’t teach planning today without grappling with how policy actually unfolds within communities; under pressure, with limited resources, and with multiple conflicting interests,” says Phillip Thompson, professor of urban planning at MIT and former New York City deputy mayor for strategic policy initiatives under Mayor Bill de Blasio. “The Freedom Summer Fellowship is about capacity building through cooperative learning — a knowledge exchange intended to have lasting positive results for communities, while equipping planners with critical experience as they embark on their careers.”
From classroom to communities
The fellowship emerged from Bills and Billions, a DUSP Independent Activities Period course taught by Thompson and Elisabeth Reynolds, professor of the practice at MIT and former special assistant to President Joe Biden for manufacturing and economic development. The course examines U.S. federal policy and its intersection with local economic development, labor markets, and the infrastructure of industry, energy, and the built environment more broadly.
“We were at an inflection point,” says Reynolds, speaking of her return to MIT in fall 2022 after serving at the National Economic Council. “There was a real sense of urgency about the wave of new legislation and funding around clean energy, infrastructure, and reindustrialization, and much of the investment and work in these areas continues today. It’s a very dynamic time for cities and states, with significant experimentation and innovative strategies — a perfect environment for MIT graduate students and recent grads.”
Securing federal funding is typically dependent on competitive grants requiring technical, financial, and community planning that many local governments and nonprofits are not equipped for. “While much funding to localities has since been cut, the momentum for change is still there,” says Thompson. “The incentives put forward by the Inflation Reduction Act encouraged localities and communities to initiate their own clean energy projects, and there’s a continued recognition that climate change is going to take a movement from the bottom up.”
At a time when the U.S. is experiencing a paradigm shift in policy — characterized by challenges to a free-market economy and global trade, renewed investment in industrial strategy, and the lifting of environmental and other regulations — the fellowship offers a way to support the planning and implementation of equitable development strategies and to redirect resources where they are needed most.
From placements to professional practice
Since 2023, 31 Freedom Summer Fellows have collaborated with 19 host organizations, and contributed to more than $100 million in state, federal, and philanthropic grant applications, including a successful $3 million EPA Climate Pollution Reduction grant for Hawaii. Fellows have helped convene more than 3,500 community members and have produced dozens of planning tools, including implementation maps, technical tools, and dashboards that support equitable project design and production. Collaborations have inspired the focus of graduate theses produced as client reports for hosts, and in several cases fellows have extended their positions to full-time roles.
For Sara Jex MCP ’25, her 2024 Freedom Summer Fellowship became a direct pathway from graduate study to professional practice. She was placed with the Site Readiness Fund for Good Jobs in Cleveland, Ohio, an organization working to transform brownfields and disinvested industrial sites into engines of inclusive economic growth.
“Much of my work that summer involved developing an EPA Community Change Grant application for a proposed industrial district spanning over 350 acres — 200 of which we’re looking to reactivate,” says Jex. “So, it’s a transformative project that will bring in new jobs, but there are also major challenges that come with industrial place-making, especially given the proximity to residential neighborhoods. In Rust Belt cities, there’s a history of industrial disinvestment leading to job loss, population decline, and environmental injustices. We don’t want to repeat the harms of the past — we want to create something better.”
To support equitable development strategies for the industrial corridor, Jex helped to prepare technical tools mapping the effects of development on home values, seeking to identify a balance of growth, affordability, and resident benefit. She also evaluated wealth-building strategies such as land trusts and mixed-income neighborhood trusts, offering recommendations for community ownership of land holdings.
“Our vision for the project is not just about bringing in new businesses and creating new jobs,” says Jex, “it’s also about going beyond job creation to create lasting benefit for communities surrounding the sites.”
Jex continued working with Site Readiness Fund for Good Jobs during her second year at MIT and now holds a full-time role at the organization. “The Freedom Summer Fellowship gave me a platform to start building my planning career,” she reflects. “It was eye-opening to be in a cohort of other students doing similar work across the country. The insights from our weekly meetings have stayed with me since graduating — we were able to share perspectives on the challenges we were facing from multiple different contexts, and that brought a new dimension to the learning process.”
Redefining resilience
For Deena Darby, an MIT master’s student with a background in architecture and public art, her 2025 Freedom Summer Fellowship offered a way to bridge creative practice with structural change. Working with the LA84 Foundation and the Ubuntu Climate Initiative in Los Angeles, Darby focused on neighborhood-based resilience in the context of the 2025 wildfires and the upcoming 2028 Olympics.
“My decision to apply to do a master’s in city planning at MIT was informed by the projects I had been working on in Harlem, the Bronx, Brooklyn, and other cities, including Philadelphia and Detroit. Much of that work involved community engagement work when producing public art at an architectural scale, but I kept feeling that residents deserved more than an art piece at the end of a project.”
During the fellowship, Darby contributed to asset mapping across six neighborhoods, developed case studies on resilience hubs, and helped shape strategies that tied climate adaptation to culture, play, and community ownership. Her immersion in the lived experience of those neighborhoods — visiting sites, meeting organizers, and participating in local coalitions — was crucial to her development of strategic recommendations for decentralized infrastructure, cultural arts cohorts, and neighborhood-based resilience festivals.
“Resilience is often narrowly framed around climate,” Darby reflects. “But what we were really redefining was economic resilience, social resilience, and the ability of communities to tell their own stories.”
Darby’s fellowship experience has led to her thesis project, working with the residents of a historically Black neighborhood in her hometown of Savannah, Georgia, who are experiencing displacement. “Coming from an architecture and planning background, my instinct is to ask, How can we frame these issues in terms of cultural preservation and community-based policy development and implementation?” says Darby. “How can we manage change, with the goal of benefiting present residents as well as honoring those who have lived here in the past?”
For Darby, gaining practical understanding of the inseparability of planning and policy has been key to shaping her approach to navigating the educational opportunities at MIT. “In a higher-education context, you’ll often find policy housed separately from planning. But the moment you’re working in situ, it doesn’t make sense to separate the two. For me, the fellowship was a bridge between two often-siloed disciplines.”
Reassessing expertise
“Impact at MIT is typically associated with technological breakthroughs,” says Reynolds. “But much of MIT’s work can make a huge difference when applied in the near term, on the ground. At DUSP, we’re all about bringing theory and practice together, about the interrelation of communities, infrastructure, policy, and how that maps out in the built environment. We can bring expertise and knowledge into the field tomorrow, into places that can immediately benefit from the collaboration.”
Initial funding for the fellowship at MIT was provided by the MIT Climate Project, in addition to national foundations. Faculty are exploring ways to expand and increase the number of student placements, further embedding relationships between MIT and cities across the United States. There are also discussions about sharing the model with other institutions, including historically Black colleges and international collaborators.
“We’re just starting these conversations with other institutions, but it’s the model of engaged, experiential, cooperative learning that matters,” says Thompson. “It’s clear that the experts aren’t necessarily those who have read a lot of books about planning or design, but those who are embedded within communities, trying to figure out these challenges from the inside.”
The planner might not be the primary expert — but planners are the ones who guide decisions that shape the futures of communities. The Freedom Summer Fellowship is about fostering a culture of urban planning in which those decisions are centered on the lived experience of stakeholders. It is an approach to practice summed up by Jex, reflecting on her experience in Cleveland: “Planners are the people who make decisions about how cities shape access to opportunity.”
Applications for the 2026 Freedom Summer Fellowships are being accepted now through April 7.
Why does wealth inequality matter?
The MIT James M. and Cathleen D. Stone Center on Inequality and Shaping the Future of Work recently hosted a half-day symposium at the Institute on “Why Wealth Inequality Matters.”
Three panel discussions convened experts from economics, philosophy, sociology, and political science to explore the origins, mechanisms, and political consequences of wealth inequality.
Richard Locke, John C Head III Dean of the MIT Sloan School of Management, welcomed attendees to the symposium, emphasizing how the event reflects MIT’s commitments to interdisciplinary collaboration and to addressing “society’s most pressing issues.”
Here are three key takeaways from the afternoon’s panels.
When wealth buys political influence and legal immunity, democracy is threatened
Hélène Landemore of Yale University argued that wealth inequality isn’t inherently problematic, but becomes dangerous when wealth offers disproportionate influence in other spheres, including political power.
Wojciech Kopczuk of Columbia University echoed this, emphasizing that wealth is a complicated and often ambiguous measure of inequality. Wealth reflects institutional contexts — for example, weak safety nets drive precautionary saving. Still, he agreed that wealth is a relevant metric at the very top, where it correlates with political capture and corporate power.
Landemore explained that when the wealthy dominate policy discussions, “some groups are systematically disbelieved or ignored, and the result is policy failure.” For example, French carbon taxes disproportionately burdened working-class people who were more dependent on cars, which led to the yellow vests protests.
Elizabeth Anderson of the University of Michigan extended this point to corporate power, warning that extreme concentration gives powerful firms de facto immunity from the rule of law — the wealthiest companies can hire hundreds of lawyers to swamp the legal system.
To counteract these negative consequences of high inequality, Oren Cass of American Compass argued that strengthening worker power is key. Redistribution, he said, is a way to improve living standards, but “it is not a solution to the kinds of problems that actually plague democratic capitalism.”
The roots of the racial wealth gap are so deep that equal opportunity alone won’t close it
Ellora Derenoncourt of Princeton University explained that in the United States today, the wealth gap between Black and white Americans is 6:1. In other words, for every dollar of wealth held by an average white American, the average Black American holds about $0.17. She noted that this racial wealth gap has largely remained unchanged for the past 50 years.
“Even if we were to equalize differences in wealth accumulating opportunities — equal savings rates, equal capital gains rates going forward — we’re still hundreds of years away from convergence,” she explained, due to the magnitude of the original gap.
Alexandra Killewald of the University of Michigan added that the racial wealth gap is actively rebuilt each generation through unequal schools, unequal pay, and unequal access to homeownership.
“The past matters, but it’s not just about the past,” she explained. Even if a massive reparations plan were implemented, “if we just let things go on as they are, we will start to recreate inequality from Day 1.”
High inequality and authoritarianism reinforce each other
Daron Acemoglu of MIT described how increasing inequality goes hand-in-hand with the weakening of democracy: “Once inequality starts building up, it also naturally erodes democracies’ claim for legitimacy.”
High inequality, he argued, is both a cause and an effect of liberal democracy failing to deliver on its promise of shared prosperity. This failure, in turn, weakens public support for democracy.
Building on this argument, Sheri Berman of Barnard College examined why economically disadvantaged voters in the United States and Europe have increasingly voted for right-wing populist parties, despite holding economically progressive views.
She described how center-left parties have transformed since the late 20th century, converging with the right on economic policy (embracing free trade and market deregulation) while moving left on social and cultural issues. As a result, she argued, working-class and rural voters no longer saw center-left parties as champions of their economic interests, or as reflecting their social and cultural preferences.
David Yang of Harvard University explained that once authoritarianism takes hold, regimes continue to produce inequality. For example, non-democratic regimes are most responsive not to the average citizen, but to whoever poses the greatest threat to regime survival. In China, this tends to be the wealthier urban population capable of organizing large-scale collective action.
Working to advance the nuclear renaissance
Today, there are 94 nuclear reactors operating in the United States, more than in any other country in the world, and these units collectively provide nearly 20 percent of the nation’s electricity. That is a major accomplishment, according to Dean Price, but he believes that our country needs much more out of nuclear energy, especially at a moment when alternatives to fossil fuel-based power plants are desperately being sought. He became a nuclear engineer for this very reason — to make sure that nuclear technology is up to the task of delivering in this time of considerable need.
“Nuclear energy has been a tremendous part of our nation’s energy infrastructure for the past 60 years, and the number of people who maintain that infrastructure is incredibly small,” says Price, an MIT assistant professor in the Department of Nuclear Science and Engineering (NSE), as well as the Atlantic Richfield Career Development Professor in Energy Studies. “By becoming a nuclear engineer, you become one of a select number of people responsible for carbon-free energy generation in the United States.”
That was a mission he was eager to take part in, and the goals he set for himself were far from modest: He wanted to help design and usher in a new class of nuclear reactors, building on the safety, economics, and reliability of the existing nuclear fleet.
Price has never wavered from this objective, and he’s only found encouragement along the way. The nuclear engineering community, he says, “is small, close-knit, and very welcoming. Once you get into it, most people are not inclined to do anything else.”
Illuminating the relationships between physical processes
In his first research project as an undergraduate at the University of Illinois Urbana-Champaign, Price studied the safety of the steel and concrete casks used to store spent reactor fuel rods after they’ve cooled off in tanks of water, typically for several years. His analysis indicated that this storage method was quite safe, although the question of what should ultimately be done with these fuel casks, in terms of long-term disposal, remains open in this country.
After starting graduate studies at the University of Michigan in 2020, Price took up a different line of research that he’s still engaged in today. That area of study, called multiphysics modeling, involves looking at various physical processes going on in the core of a nuclear reactor to see how they interact — an alternative to studying these processes one at a time.
One key process, neutronics, concerns how neutrons buzz around in the reactor core causing nuclear fission, which is what generates the power. A second process, called thermal hydraulics, involves cooling the reactor to extract the heat that fission generates. A multiphysics simulation, analyzing how these two processes interact, could show how the heat carried away as the reactor produces power affects the behavior of neutrons, because the hotter the fuel is, the less likely neutrons are to cause fission.
“If you ever want to change your power level, or do anything with the reactor, the temperature of the fuel is a critical input that you need to know,” says Price. “Multiphysics modeling allows us to correlate the fission neutronics processes with a thermal property, temperature. That, in turn, can help us predict how the reactor will behave under different conditions.”
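The back-and-forth Price describes can be made concrete with a toy calculation: alternate between a simplified “neutronics” step, in which power depends on fuel temperature, and a simplified “thermal hydraulics” step, in which fuel temperature depends on power, until the two agree. The Python sketch below is purely illustrative; the model forms, the feedback coefficient, and every constant are invented for this example and stand in for the full transport and heat-transfer equations a real reactor code would solve.

```python
# Toy illustration of multiphysics coupling (not an actual reactor model).
# A fixed-point iteration alternates between a simplified "neutronics" step
# (power depends on fuel temperature) and a simplified "thermal hydraulics"
# step (fuel temperature depends on power) until the two agree.
# All constants are invented for illustration only.

def fission_power(fuel_temp_k, nominal_mw=50.0, doppler=-2.0e-5, excess_reactivity=0.02):
    """Toy neutronics: hotter fuel lowers reactivity, which lowers power."""
    reactivity = excess_reactivity + doppler * (fuel_temp_k - 900.0)
    return max(nominal_mw * (1.0 + 50.0 * reactivity), 0.0)

def fuel_temperature(power_mw, coolant_k=600.0, thermal_resistance=8.0):
    """Toy thermal hydraulics: fuel temperature rises above the coolant in proportion to power."""
    return coolant_k + thermal_resistance * power_mw

temp_k = 900.0  # initial guess for fuel temperature, in kelvin
for sweep in range(100):
    power_mw = fission_power(temp_k)
    new_temp_k = fuel_temperature(power_mw)
    if abs(new_temp_k - temp_k) < 1e-6:  # power and temperature are now consistent
        break
    temp_k = new_temp_k

print(f"Consistent solution: {power_mw:.1f} MW at a fuel temperature of {temp_k:.0f} K")
```

Real multiphysics codes iterate in essentially this way, but each step is itself a large simulation, which is part of what makes the computations so expensive.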
Multiphysics modeling for light water reactors, the roughly 1,000-megawatt designs operating today, is pretty well established, Price says. But methods for modeling advanced reactors, such as small modular reactors (SMRs, with capacities ranging from around 20 to 300 megawatts) and microreactors (rated at 1 to 20 megawatts), are far less mature. Only a very small number of these reactors are operating today, but Price is focusing his efforts on them because of their potential to produce power more cheaply and more safely, along with their greater flexibility in power and size.
Although multiphysics simulations have supplied the nuclear community with a wealth of information, they can require supercomputers to solve, or find approximate solutions to, coupled and extremely difficult nonlinear equations. In the hopes of greatly reducing the computational burden, Price is actively exploring artificial intelligence approaches that could provide similar answers while bypassing those burdensome equations altogether. That has been a central theme of his research agenda since he joined the MIT faculty in September 2025.
A crucial role for artificial intelligence
Artificial intelligence, and machine-learning methods in particular, excel at finding patterns concealed within data, such as correlations between variables critical to the functioning of a nuclear plant. For example, Price says, “if you tell me the power level of your reactor, it [AI] could tell you what the fuel temperature is and even tell you the 3-dimensional temperature distribution in your core.” And if this can be done without solving any complicated differential equations, computational costs could be greatly reduced.
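A minimal sketch of what such a data-driven shortcut might look like, not the group’s actual models: fit an off-the-shelf regressor to pairs of power level and fuel temperature produced by a slower simulation, then query the cheap fit directly. The `expensive_simulation` function and all numbers below are placeholders invented for illustration.

```python
# Illustrative AI surrogate: learn the power-to-temperature relationship from
# simulation data, then predict without solving any differential equations.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def expensive_simulation(power_mw):
    """Stand-in for a coupled neutronics/thermal-hydraulics calculation."""
    return 600.0 + 8.0 * power_mw + 0.002 * power_mw ** 2

rng = np.random.default_rng(seed=0)
powers_mw = rng.uniform(10.0, 100.0, size=200).reshape(-1, 1)  # training inputs (MW)
fuel_temps_k = np.array([expensive_simulation(p) for p in powers_mw.ravel()])  # targets (K)

surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(powers_mw, fuel_temps_k)

# Once trained, the surrogate answers "what is the fuel temperature at this
# power level?" directly from the learned pattern.
print(surrogate.predict([[75.0]]))  # predicted fuel temperature, in kelvin
```

The same idea can extend to richer outputs, such as the three-dimensional temperature distribution Price mentions, with the training data coming from high-fidelity multiphysics runs.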
Price is investigating several applications where AI may be especially useful, such as helping with the design of novel kinds of reactors. “We could then rely on the safety frameworks developed over the past 50 years to carry out a safety analysis of the proposed design,” he says. “In this way, AI will not be directly interfacing with anything that is safety-critical.” As he sees it, AI’s role would be to augment established procedures, rather than replacing them, helping to fill in existing gaps in knowledge.
When a machine-learning model is given a sufficient amount of data to learn from, it can help us better understand the relationship between key physical processes — again without having to solve nonlinear differential equations.
“By really pinning down those relationships, we can make better design decisions in the early stages,” Price says. “And when that technology is developed and deployed, AI can help us make more intelligent control decisions that will enable us to operate our reactors in a safer and more economical way.”
Giving back to the community that nurtured him
Simply put, one of his chief goals is to bring the benefits of AI to the nuclear industry, and he views the possibilities as vast and largely untapped. Price also believes that he is well-positioned as a professor at MIT to bring us closer to the nuclear future that he envisions. As he sees it, he’s working not only to develop the next generation of reactors, but also to help prepare the next generation of leaders in the field.
Price became acquainted with some prospective members of that “next generation” in a design course he co-taught last fall with Curtis Smith, the KEPCO Professor of the Practice of Nuclear Science and Engineering. For Price, that introduction lasted just a few months, but it was long enough for him to discover that MIT students are exceptionally motivated, hard-working, and capable. Not surprisingly, those happen to be the same qualities he’s hoping to find in the students who join his research team.
Price vividly recalls the support he received when taking his first, tentative steps in this field. Now that he’s moved up the ranks from undergraduate to professor, and acquired a substantial body of knowledge along the way, he wants his students “to experience that same feeling that I had upon entering the field.” Beyond his specific goals for improving the design and operation of nuclear reactors, Price says, “I hope to perpetuate the same fun and healthy environment that made me love nuclear engineering in the first place.”
