MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

A greener way to 3D print stronger stuff

Thu, 09/04/2025 - 4:30pm

3D printing has come a long way since its invention in 1983 by Chuck Hull, who pioneered stereolithography, a technique that solidifies liquid resin into solid objects using ultraviolet lasers. Over the decades, 3D printers have evolved from experimental curiosities into tools capable of producing everything from custom prosthetics to complex food designs, architectural models, and even functioning human organs. 

But as the technology matures, its environmental footprint has become increasingly difficult to set aside. The vast majority of consumer and industrial 3D printing still relies on petroleum-based plastic filament. And while “greener” alternatives made from biodegradable or recycled materials exist, they come with a serious trade-off: they’re often not as strong. These eco-friendly filaments tend to become brittle under stress, making them ill-suited for structural applications or load-bearing parts — exactly where strength matters most.

This trade-off between sustainability and mechanical performance prompted researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Hasso Plattner Institute to ask: Is it possible to build objects that are mostly eco-friendly, but still strong where it counts?

Their answer is SustainaPrint, a new software and hardware toolkit designed to help users strategically combine strong and weak filaments to get the best of both worlds. Instead of printing an entire object with high-performance plastic, the system analyzes a model through finite element analysis simulations, predicts where the object is most likely to experience stress, and then reinforces just those zones with stronger material. The rest of the part can be printed using greener, weaker filament, reducing plastic use while preserving structural integrity.

“Our hope is that SustainaPrint can be used in industrial and distributed manufacturing settings one day, where local material stocks may vary in quality and composition,” says MIT PhD student and CSAIL researcher Maxine Perroni-Scharf, who is a lead author on a paper presenting the project. “In these contexts, the testing toolkit could help ensure the reliability of available filaments, while the software’s reinforcement strategy could reduce overall material consumption without sacrificing function.” 

For their experiments, the team used Polymaker’s PolyTerra PLA as the eco-friendly filament, and standard or Tough PLA from Ultimaker for reinforcement. They used a 20 percent reinforcement threshold to show that even a small amount of strong plastic goes a long way. Using this ratio, SustainaPrint was able to recover up to 70 percent of the strength of an object printed entirely with high-performance plastic.

They printed dozens of objects, from simple mechanical shapes like rings and beams to more functional household items such as headphone stands, wall hooks, and plant pots. Each object was printed three ways: once using only eco-friendly filament, once using only strong PLA, and once with the hybrid SustainaPrint configuration. The printed parts were then mechanically tested by pulling, bending, or otherwise breaking them to measure how much force each configuration could withstand. 

In many cases, the hybrid prints held up nearly as well as the full-strength versions. For example, in one test involving a dome-like shape, the hybrid version outperformed the version printed entirely in Tough PLA. The team believes this may be due to the reinforced version’s ability to distribute stress more evenly, avoiding the brittle failure sometimes caused by excessive stiffness.

“This indicates that in certain geometries and loading conditions, mixing materials strategically may actually outperform a single homogenous material,” says Perroni-Scharf. “It’s a reminder that real-world mechanical behavior is full of complexity, especially in 3D printing, where interlayer adhesion and tool path decisions can affect performance in unexpected ways.”

A lean, green, eco-friendly printing machine

SustainaPrint starts off by letting a user upload their 3D model into a custom interface. Once the user selects fixed regions and the areas where forces will be applied, the software uses finite element analysis to simulate how the object will deform under stress. It then creates a map of the stress distribution inside the structure, highlighting areas under compression or tension, and applies heuristics to segment the object into two categories: regions that need reinforcement, and regions that don’t.
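
To make the segmentation step concrete, here is a minimal sketch, in Python, of how per-element stress values from a finite element solve might be split into reinforcement and eco-friendly zones with a simple percentile heuristic. It illustrates the general idea rather than the authors’ code; the function name, the synthetic stress values, and the rank-based rule are all assumptions.

```python
import numpy as np

def segment_for_reinforcement(element_stress, reinforce_fraction=0.20):
    """Flag the highest-stress mesh elements for printing in strong filament.

    element_stress: 1D array of per-element stress magnitudes from an FEA
        solve (hypothetical input; the split is rank-based, so units don't matter).
    reinforce_fraction: share of elements to print in strong PLA, echoing
        the 20 percent reinforcement threshold used in the experiments.
    """
    cutoff = np.quantile(element_stress, 1.0 - reinforce_fraction)
    return element_stress >= cutoff  # True = reinforce, False = eco-friendly filament

# Toy usage with 10,000 synthetic stress values.
rng = np.random.default_rng(0)
stress = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)
mask = segment_for_reinforcement(stress)
print(f"{mask.mean():.0%} of elements flagged for strong PLA")  # roughly 20%
```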

Recognizing the need for accessible and low-cost testing, the team also developed a DIY testing toolkit to help users assess filament strength before printing. The kit includes a 3D-printable device with modules for measuring both tensile and flexural strength. Users can pair the device with common items like pull-up bars or digital scales to get rough but reliable performance metrics. The team benchmarked their results against manufacturer data and found that their measurements consistently fell within one standard deviation, even for filaments that had undergone multiple recycling cycles.
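
The arithmetic behind such a kit is straightforward. The sketch below converts a measured breaking force into tensile and three-point-bend flexural strength using the standard textbook formulas (stress = force/area, and sigma_f = 3FL / (2bd^2)); the specimen dimensions and forces are hypothetical illustration values, not measurements from the paper.

```python
def tensile_strength_mpa(force_n, width_mm, thickness_mm):
    """Tensile strength = breaking force / cross-sectional area (N/mm^2 = MPa)."""
    return force_n / (width_mm * thickness_mm)

def flexural_strength_mpa(force_n, span_mm, width_mm, thickness_mm):
    """Three-point-bend flexural strength: sigma_f = 3*F*L / (2*b*d^2)."""
    return 3 * force_n * span_mm / (2 * width_mm * thickness_mm ** 2)

# Hypothetical readings from a scale-based pull/bend rig.
print(tensile_strength_mpa(force_n=900, width_mm=10, thickness_mm=2))               # 45.0 MPa
print(flexural_strength_mpa(force_n=120, span_mm=64, width_mm=10, thickness_mm=4))  # 72.0 MPa
```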

Although the current system is designed for dual-extrusion printers, the researchers believe that with some manual filament swapping and calibration, it could be adapted for single-extruder setups, too. In its current form, the system simplifies the modeling process by allowing just one force and one fixed boundary per simulation. While this covers a wide range of common use cases, the team sees future work expanding the software to support more complex and dynamic loading conditions. The team also sees potential in using AI to infer the object’s intended use based on its geometry, which could allow for fully automated stress modeling without manual input of forces or boundaries.

3D for free

The researchers plan to release SustainaPrint open-source, making both the software and the testing toolkit available for public use and modification. They also see a future for the system in education. “In a classroom, SustainaPrint isn’t just a tool, it’s a way to teach students about material science, structural engineering, and sustainable design, all in one project,” says Perroni-Scharf. “It turns these abstract concepts into something tangible.”

As 3D printing becomes more embedded in how we manufacture and prototype everything from consumer goods to emergency equipment, sustainability concerns will only grow. With tools like SustainaPrint, those concerns no longer need to come at the expense of performance. Instead, they can become part of the design process: built into the very geometry of the things we make.

Co-author Patrick Baudisch, who is a professor at the Hasso Plattner Institute, adds that “the project addresses a key question: What is the point of collecting material for the purpose of recycling, when there is no plan to actually ever use that material? Maxine presents the missing link between the theoretical/abstract idea of 3D printing material recycling and what it actually takes to make this idea relevant.”

Perroni-Scharf and Baudisch wrote the paper with CSAIL research assistant Jennifer Xiao; MIT Department of Electrical Engineering and Computer Science master’s student Cole Paulin ’24; master’s student Ray Wang SM ’25 and PhD student Ticha Sethapakdi SM ’19 (both CSAIL members); Hasso Plattner Institute PhD student Muhammad Abdullah; and Associate Professor Stefanie Mueller, lead of the Human-Computer Interaction Engineering Group at CSAIL.

The researchers’ work was supported by a Designing for Sustainability Grant from the Designing for Sustainability MIT-HPI Research Program. Their work will be presented at the ACM Symposium on User Interface Software and Technology in September.

A new generative AI approach to predicting chemical reactions

Wed, 09/03/2025 - 3:55pm

Many attempts have been made to harness the power of new artificial intelligence and large language models (LLMs) to try to predict the outcomes of new chemical reactions. These have had limited success, in part because until now they have not been grounded in an understanding of fundamental physical principles, such as the law of conservation of mass. Now, a team of researchers at MIT has come up with a way of incorporating these physical constraints into a reaction prediction model, and thus greatly improving the accuracy and reliability of its outputs.

The new work was reported Aug. 20 in the journal Nature, in a paper by recent postdoc Joonyoung Joung (now an assistant professor at Kookmin University, South Korea); former software engineer Mun Hong Fong (now at Duke University); chemical engineering graduate student Nicholas Casetti; postdoc Jordan Liles; physics undergraduate student Ne Dassanayake; and senior author Connor Coley, who is the Class of 1957 Career Development Professor in the MIT departments of Chemical Engineering and Electrical Engineering and Computer Science.

“The prediction of reaction outcomes is a very important task,” Joung explains. For example, if you want to make a new drug, “you need to know how to make it. So, this requires us to know what product is likely” to result from a given set of chemical inputs to a reaction. But most previous efforts to carry out such predictions look only at a set of inputs and a set of outputs, without looking at the intermediate steps or enforcing the constraint that no mass can be gained or lost along the way, as it cannot be in actual reactions.

Joung points out that while large language models such as ChatGPT have been very successful in many areas of research, these models do not provide a way to limit their outputs to physically realistic possibilities, such as by requiring them to adhere to conservation of mass. These models use computational “tokens,” which in this case represent individual atoms, but “if you don’t conserve the tokens, the LLM model starts to make new atoms, or deletes atoms in the reaction.” Instead of being grounded in real scientific understanding, “this is kind of like alchemy,” he says. While many attempts at reaction prediction only look at the final products, “we want to track all the chemicals, and how the chemicals are transformed” throughout the reaction process from start to end, he says.

In order to address the problem, the team made use of a method developed back in the 1970s by chemist Ivar Ugi, which uses a bond-electron matrix to represent the electrons in a reaction. They used this system as the basis for their new program, called FlowER (Flow matching for Electron Redistribution), which allows them to explicitly keep track of all the electrons in the reaction to ensure that none are spuriously added or deleted in the process.

The system uses a matrix to represent the electrons in a reaction, and uses nonzero values to represent bonds or lone electron pairs and zeros to represent a lack thereof. “That helps us to conserve both atoms and electrons at the same time,” says Fong. This representation, he says, was one of the key elements to including mass conservation in their prediction system.
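 
As an illustration of the bookkeeping this enables (a toy sketch, not FlowER’s code), the example below writes bond-electron matrices in the spirit of Ugi’s representation for the reaction H2 + Cl2 -> 2 HCl, with lone electrons on the diagonal and bond orders off the diagonal, and checks that the total number of valence electrons is unchanged. The atom ordering and counting convention are assumptions made for the example.

```python
import numpy as np

ATOMS = ["H1", "H2", "Cl1", "Cl2"]  # atom ordering for the matrix rows/columns

def total_valence_electrons(be):
    """Diagonal entries = lone (unshared) electrons on each atom;
    off-diagonal entries = formal bond order between atom pairs.
    Each bond of order n carries 2n shared electrons."""
    lone = np.trace(be)
    shared = 2 * np.triu(be, k=1).sum()
    return int(lone + shared)

# Reactants: H-H and Cl-Cl (each Cl keeps three lone pairs = 6 electrons).
reactants = np.zeros((4, 4), dtype=int)
reactants[0, 1] = reactants[1, 0] = 1      # H1-H2 bond
reactants[2, 3] = reactants[3, 2] = 1      # Cl1-Cl2 bond
reactants[2, 2] = reactants[3, 3] = 6      # lone electrons on each Cl

# Products: two H-Cl molecules.
products = np.zeros((4, 4), dtype=int)
products[0, 2] = products[2, 0] = 1        # H1-Cl1 bond
products[1, 3] = products[3, 1] = 1        # H2-Cl2 bond
products[2, 2] = products[3, 3] = 6

assert total_valence_electrons(reactants) == total_valence_electrons(products) == 16
print("valence electrons conserved:", total_valence_electrons(reactants))
```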

The system they developed is still at an early stage, Coley says. “The system as it stands is a demonstration — a proof of concept that this generative approach of flow matching is very well suited to the task of chemical reaction prediction.” While the team is excited about this promising approach, he says, “we’re aware that it does have specific limitations as far as the breadth of different chemistries that it’s seen.” Although the model was trained using data on more than a million chemical reactions, obtained from a U.S. Patent Office database, those data do not include certain metals and some kinds of catalytic reactions, he says.

“We’re incredibly excited about the fact that we can get such reliable predictions of chemical mechanisms” from the existing system, he says. “It conserves mass, it conserves electrons, but we certainly acknowledge that there’s a lot more expansion and robustness to work on in the coming years as well.”

But even in its present form, which is being made freely available through the online platform GitHub, “we think it will make accurate predictions and be helpful as a tool for assessing reactivity and mapping out reaction pathways,” Coley says. “If we’re looking toward the future of really advancing the state of the art of mechanistic understanding and helping to invent new reactions, we’re not quite there. But we hope this will be a steppingstone toward that.”

“It’s all open source,” says Fong. “The models, the data, all of them are up there,” including a previous dataset developed by Joung that exhaustively lists the mechanistic steps of known reactions. “I think we are one of the pioneering groups making this dataset, and making it available open-source, and making this usable for everyone,” he says.

The FlowER model matches or outperforms existing approaches in finding standard mechanistic pathways, the team says, and makes it possible to generalize to previously unseen reaction types. They say the model could potentially be relevant for predicting reactions for medicinal chemistry, materials discovery, combustion, atmospheric chemistry, and electrochemical systems.

In their comparisons with existing reaction prediction systems, Coley says, “using the architecture choices that we’ve made, we get this massive increase in validity and conservation, and we get a matching or a little bit better accuracy in terms of performance.”

He adds that “what’s unique about our approach is that while we are using these textbook understandings of mechanisms to generate this dataset, we’re anchoring the reactants and products of the overall reaction in experimentally validated data from the patent literature.” They are inferring the underlying mechanisms, he says, rather than just making them up. “We’re imputing them from experimental data, and that’s not something that has been done and shared at this kind of scale before.”

The next step, he says, is “we are quite interested in expanding the model’s understanding of metals and catalytic cycles. We’ve just scratched the surface in this first paper,” and most of the reactions included so far don’t include metals or catalysts, “so that’s a direction we’re quite interested in.”

In the long term, he says, “a lot of the excitement is in using this kind of system to help discover new complex reactions and help elucidate new mechanisms. I think that the long-term potential impact is big, but this is of course just a first step.”

The work was supported by the Machine Learning for Pharmaceutical Discovery and Synthesis consortium and the National Science Foundation.

3 Questions: The pros and cons of synthetic data in AI

Wed, 09/03/2025 - 12:00am

Synthetic data are artificially generated by algorithms to mimic the statistical properties of actual data, without containing any information from real-world sources. While concrete numbers are hard to pin down, some estimates suggest that more than 60 percent of data used for AI applications in 2024 was synthetic, and this figure is expected to grow across industries.

Because synthetic data don’t contain real-world information, they hold the promise of safeguarding privacy while reducing the cost and increasing the speed at which new AI models are developed. But using synthetic data requires careful evaluation, planning, and checks and balances to prevent loss of performance when AI models are deployed.       

To unpack some pros and cons of using synthetic data, MIT News spoke with Kalyan Veeramachaneni, a principal research scientist in the Laboratory for Information and Decision Systems and co-founder of DataCebo, whose open-core platform, the Synthetic Data Vault, helps users generate and test synthetic data.

Q: How are synthetic data created?

A: Synthetic data are algorithmically generated but do not come from a real situation. Their value lies in their statistical similarity to real data. If we’re talking about language, for instance, synthetic data look very much as if a human had written those sentences. While researchers have created synthetic data for a long time, what has changed in the past few years is our ability to build generative models out of data and use them to create realistic synthetic data. We can take a little bit of real data and build a generative model from that, which we can use to create as much synthetic data as we want. Plus, the model creates synthetic data in a way that captures all the underlying rules and infinite patterns that exist in the real data.

There are essentially four different data modalities: language, video or images, audio, and tabular data. All four of them have slightly different ways of building the generative models to create synthetic data. An LLM, for instance, is nothing but a generative model from which you are sampling synthetic data when you ask it a question.      

A lot of language and image data are publicly available on the internet. But tabular data, which is the data collected when we interact with physical and social systems, is often locked up behind enterprise firewalls. Much of it is sensitive or private, such as customer transactions stored by a bank. For this type of data, platforms like the Synthetic Data Vault provide software that can be used to build generative models. Those models then create synthetic data that preserve customer privacy and can be shared more widely.      
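
As a toy illustration of this workflow (not the Synthetic Data Vault’s actual implementation), the sketch below fits the simplest possible generative model, a multivariate Gaussian, to a small "real" table of two correlated columns and then samples a much larger synthetic table with similar statistics. The column names and numbers are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

# A small "real" table: two correlated numeric columns (made-up values).
n_real = 500
income = rng.normal(60_000, 15_000, n_real)
spend = 0.3 * income + rng.normal(0, 3_000, n_real)
real = np.column_stack([income, spend])

# Fit the simplest possible generative model: a multivariate Gaussian.
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# Sample as much synthetic data as we want from the fitted model.
synthetic = rng.multivariate_normal(mean, cov, size=10_000)

print("real column means:     ", real.mean(axis=0).round(0))
print("synthetic column means:", synthetic.mean(axis=0).round(0))
print("real correlation:      ", round(np.corrcoef(real, rowvar=False)[0, 1], 2))
print("synthetic correlation: ", round(np.corrcoef(synthetic, rowvar=False)[0, 1], 2))
```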

One powerful thing about this generative modeling approach for synthesizing data is that enterprises can now build a customized, local model for their own data. Generative AI automates what used to be a manual process.

Q: What are some benefits of using synthetic data, and which use-cases and applications are they particularly well-suited for?

A: One fundamental application which has grown tremendously over the past decade is using synthetic data to test software applications. There is data-driven logic behind many software applications, so you need data to test that software and its functionality. In the past, people have resorted to manually generating data, but now we can use generative models to create as much data as we need.

Users can also create specific data for application testing. Say I work for an e-commerce company. I can generate synthetic data that mimics real customers who live in Ohio and made transactions pertaining to one particular product in February or March.

Because synthetic data aren’t drawn from real situations, they are also privacy-preserving. One of the biggest problems in software testing has been getting access to sensitive real data for testing software in non-production environments, due to privacy concerns. Another immediate benefit is in performance testing. You can create a billion transactions from a generative model and test how fast your system can process them.

Another application where synthetic data hold a lot of promise is in training machine-learning models. Sometimes, we want an AI model to help us predict an event that is less frequent. A bank may want to use an AI model to predict fraudulent transactions, but there may be too few real examples to train a model that can identify fraud accurately. Synthetic data provide data augmentation — additional data examples that are similar to the real data. These can significantly improve the accuracy of AI models.

Also, sometimes users don’t have time or the financial resources to collect all the data. For instance, collecting data about customer intent would require conducting many surveys. If you end up with limited data and then try to train a model, it won’t perform well. You can augment by adding synthetic data to train those models better.

Q: What are some of the risks or potential pitfalls of using synthetic data, and are there steps users can take to prevent or mitigate those problems?

A: One of the biggest questions people often have in their minds is, if the data are synthetically created, why should I trust them? Determining whether you can trust the data often comes down to evaluating the overall system where you are using them.

There are a lot of aspects of synthetic data we have been able to evaluate for a long time. For instance, there are existing methods to measure how close synthetic data are to real data, and we can measure their quality and whether they preserve privacy. But there are other important considerations if you are using those synthetic data to train a machine-learning model for a new use case. How would you know the data are going to lead to models that still make valid conclusions?
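
One common way to quantify how close a synthetic column is to its real counterpart is a two-sample Kolmogorov-Smirnov comparison, which is the spirit of many off-the-shelf metrics. The sketch below is a generic illustration with made-up data, not the Synthetic Data Metrics Library itself; it reports a 0-to-1 similarity score as one minus the KS statistic.

```python
import numpy as np
from scipy.stats import ks_2samp

def column_similarity(real_col, synthetic_col):
    """Return 1 - KS statistic: 1.0 means the empirical distributions match,
    values near 0 mean they are far apart."""
    result = ks_2samp(real_col, synthetic_col)
    return 1.0 - result.statistic

rng = np.random.default_rng(0)
real = rng.normal(50, 10, 2_000)
good_synth = rng.normal(50, 10, 2_000)   # generator that matches the real column
poor_synth = rng.normal(70, 25, 2_000)   # generator that misses badly

print(f"well-matched synthetic column:  {column_similarity(real, good_synth):.2f}")
print(f"poorly matched synthetic column: {column_similarity(real, poor_synth):.2f}")
```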

New efficacy metrics are emerging, and the emphasis is now on efficacy for a particular task. You must really dig into your workflow to ensure the synthetic data you add to the system still allow you to draw valid conclusions. That is something that must be done carefully on an application-by-application basis.

Bias can also be an issue. Since it is created from a small amount of real data, the same bias that exists in the real data can carry over into the synthetic data. Just like with real data, you would need to purposefully make sure the bias is removed through different sampling techniques, which can create balanced datasets. It takes some careful planning, but you can calibrate the data generation to prevent the proliferation of bias.

To help with the evaluation process, our group created the Synthetic Data Metrics Library. We worried that people would use synthetic data in their environment and it would give different conclusions in the real world. We created a metrics and evaluation library to ensure checks and balances. The machine learning community has faced a lot of challenges in ensuring models can generalize to new situations. The use of synthetic data adds a whole new dimension to that problem.

I expect that the old systems of working with data, whether to build software applications, answer analytical questions, or train models, will dramatically change as we get more sophisticated at building these generative models. A lot of things we have never been able to do before will now be possible.

Soft materials hold onto “memories” of their past, for longer than previously thought

Wed, 09/03/2025 - 12:00am

If your hand lotion is a bit runnier than usual coming out of the bottle, it might have something to do with the goop’s “mechanical memory.”

Soft gels and lotions are made by mixing ingredients until they form a stable and uniform substance. But even after a gel has set, it can hold onto “memories,” or residual stress, from the mixing process. Over time, the material can give in to these embedded stresses and slide back into its former, premixed state. Mechanical memory is, in part, why hand lotion separates and gets runny over time. 

Now, an MIT engineer has devised a simple way to measure the degree of residual stress in soft materials after they have been mixed, and found that common products like hair gel and shaving cream have longer mechanical memories, holding onto residual stresses for longer periods of time than manufacturers might have assumed.

In a study appearing today in Physical Review Letters, Crystal Owens, a postdoc in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), presents a new protocol for measuring residual stress in soft, gel-like materials, using a standard benchtop rheometer.

Applying this protocol to everyday soft materials, Owens found that if a gel is made by mixing it in one direction, then once it settles into a stable and uniform state, it effectively holds onto the memory of the direction in which it was mixed. Even after several days, the gel will hold some internal stress that, if released, will cause the gel to shift in the direction opposite to how it was initially mixed, reverting to its earlier state.

“This is one reason different batches of cosmetics or food behave differently even if they underwent ‘identical’ manufacturing,” Owens says. “Understanding and measuring these hidden stresses during processing could help manufacturers design better products that last longer and perform more predictably.”

A soft glass

Hand lotion, hair gel, and shaving cream all fall under the category of “soft glassy materials” — materials that exhibit properties of both solids and liquids.

“Anything you can pour into your hand and it forms a soft mound is going to be considered a soft glass,” Owens explains. “In materials science, it’s considered a soft version of something that has the same amorphous structure as glass.”

In other words, a soft glassy material is a strange amalgam of a solid and a liquid. It can be poured out like a liquid, and it can hold its shape like a solid. Once they are made, these materials exist in a delicate balance between solid and liquid. And Owens wondered: For how long?

“What happens to these materials after very long times? Do they finally relax or do they never relax?” Owens says. “From a physics perspective, that’s a very interesting concept: What is the essential state of these materials?”

Twist and hold

In the manufacturing of soft glassy materials such as hair gel and shampoo, ingredients are first mixed into a uniform product. Quality control engineers then let a sample sit for about a minute — a period of time that they assume is enough to allow any residual stresses from the mixing process to dissipate. In that time, the material should settle into a steady, stable state, ready for use.

But Owens suspected that the materials may hold some degree of stress from the production process long after they’ve appeared to settle.

“Residual stress is a low level of stress that’s trapped inside a material after it’s come to a steady state,” Owens says. “This sort of stress has not been measured in these sorts of materials.”

To test her hypothesis, she carried out experiments with two common soft glassy materials: hair gel and shaving cream. She made measurements of each material in a rheometer — an instrument consisting of two rotating plates that can twist and press a material together at precisely controlled pressures and forces that relate directly to the material’s internal stresses and strains.

In her experiments, she placed each material in the rheometer and spun the instrument’s top plate around to mix the material. Then she let the material settle, and then settle some more — much longer than one minute. During this time, she observed the amount of force it took the rheometer to hold the material in place. She reasoned that the greater the rheometer’s force, the more it must be counteracting any stress within the material that would otherwise cause it to shift out of its current state.
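
For context, and not as part of Owens’ published protocol, the holding torque a parallel-plate rheometer records can be converted to an approximate shear stress at the plate rim using the standard relation tau ~ 2M / (pi R^3). The sketch below applies that textbook formula to hypothetical numbers.

```python
import math

def residual_stress_from_torque(torque_n_m, plate_radius_m):
    """Approximate shear stress at the rim of a parallel-plate rheometer:
    tau ~= 2 * M / (pi * R**3). The torque and radius used below are
    hypothetical, not values from the study."""
    return 2 * torque_n_m / (math.pi * plate_radius_m ** 3)

# Example: a 20 micronewton-meter holding torque on a 25 mm diameter plate.
tau = residual_stress_from_torque(torque_n_m=20e-6, plate_radius_m=0.0125)
print(f"residual shear stress ~ {tau:.1f} Pa")  # about 6.5 Pa
```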

Over multiple experiments using this new protocol, Owens found that different types of soft glassy materials held a significant amount of residual stress, long after most researchers would assume the stress had dissipated. What’s more, she found that the degree of stress that a material retained was a reflection of the direction in which it was initially mixed, and when it was mixed.

“The material can effectively ‘remember’ which direction it was mixed, and how long ago,” Owens says. “And it turns out they hold this memory of their past, a lot longer than we used to think.”

In addition to the protocol she has developed to measure residual stress, Owens has developed a model to estimate how a material will change over time, given the degree of residual stress that it holds. Using this model, she says scientists might design materials with “short-term memory,” or very little residual stress, such that they remain stable over longer periods.

One material where she sees room for such improvement is asphalt — a substance that is first mixed, then poured in molten form over a surface where it then cools and settles over time. She suspects that residual stresses from the mixing of asphalt may contribute to cracks forming in pavement over time. Reducing these stresses at the start of the process could lead to longer-lasting, more resilient roads.

“People are inventing new types of asphalt all the time to be more eco-friendly, and all of these will have different levels of residual stress that will need some control,” she says. “There’s plenty of room to explore.”

This research was supported, in part, by MIT’s Postdoctoral Fellowship for Engineering Excellence and an MIT MathWorks Fellowship.

3 Questions: On biology and medicine’s “data revolution”

Tue, 09/02/2025 - 5:45pm

Caroline Uhler is an Andrew (1956) and Erna Viterbi Professor of Engineering at MIT; a professor of electrical engineering and computer science in the Institute for Data, Systems, and Society (IDSS); and director of the Eric and Wendy Schmidt Center at the Broad Institute of MIT and Harvard, where she is also a core institute member and part of the scientific leadership team. 

Uhler is interested in all the methods by which scientists can uncover causality in biological systems, ranging from causal discovery on observed variables to causal feature learning and representation learning. In this interview, she discusses machine learning in biology, areas that are ripe for problem-solving, and cutting-edge research coming out of the Schmidt Center.

Q: The Eric and Wendy Schmidt Center has four distinct areas of focus structured around four natural levels of biological organization: proteins, cells, tissues, and organisms. What, within the current landscape of machine learning, makes now the right time to work on these specific problem classes?

A: Biology and medicine are currently undergoing a “data revolution.” The availability of large-scale, diverse datasets — ranging from genomics and multi-omics to high-resolution imaging and electronic health records — makes this an opportune time. Inexpensive and accurate DNA sequencing is a reality, advanced molecular imaging has become routine, and single cell genomics is allowing the profiling of millions of cells. These innovations — and the massive datasets they produce — have brought us to the threshold of a new era in biology, one where we will be able to move beyond characterizing the units of life (such as all proteins, genes, and cell types) to understanding the “programs of life,” such as the logic of gene circuits and cell-cell communication that underlies tissue patterning and the molecular mechanisms that underlie the genotype-phenotype map.

At the same time, in the past decade, machine learning has seen remarkable progress with models like BERT, GPT-3, and ChatGPT demonstrating advanced capabilities in text understanding and generation, while vision transformers and multimodal models like CLIP have achieved human-level performance in image-related tasks. These breakthroughs provide powerful architectural blueprints and training strategies that can be adapted to biological data. For instance, transformers can model genomic sequences similar to language, and vision models can analyze medical and microscopy images.

Importantly, biology is poised to be not just a beneficiary of machine learning, but also a significant source of inspiration for new ML research. Much like agriculture and breeding spurred modern statistics, biology has the potential to inspire new and perhaps even more profound avenues of ML research. Unlike fields such as recommender systems and internet advertising, where there are no natural laws to discover and predictive accuracy is the ultimate measure of value, in biology, phenomena are physically interpretable, and causal mechanisms are the ultimate goal. Additionally, biology boasts genetic and chemical tools that enable perturbational screens on an unparalleled scale compared to other fields. These combined features make biology uniquely suited to both benefit greatly from ML and serve as a profound wellspring of inspiration for it.

Q: Taking a somewhat different tack, what problems in biology are still really resistant to our current tool set? Are there areas, perhaps specific challenges in disease or in wellness, which you feel are ripe for problem-solving?

A: Machine learning has demonstrated remarkable success in predictive tasks across domains such as image classification, natural language processing, and clinical risk modeling. However, in the biological sciences, predictive accuracy is often insufficient. The fundamental questions in these fields are inherently causal: How does a perturbation to a specific gene or pathway affect downstream cellular processes? What is the mechanism by which an intervention leads to a phenotypic change? Traditional machine learning models, which are primarily optimized for capturing statistical associations in observational data, often fail to answer such interventional queries. There is a strong need for biology and medicine to also inspire new foundational developments in machine learning. 

The field is now equipped with high-throughput perturbation technologies — such as pooled CRISPR screens, single-cell transcriptomics, and spatial profiling — that generate rich datasets under systematic interventions. These data modalities naturally call for the development of models that go beyond pattern recognition to support causal inference, active experimental design, and representation learning in settings with complex, structured latent variables. From a mathematical perspective, this requires tackling core questions of identifiability, sample efficiency, and the integration of combinatorial, geometric, and probabilistic tools. I believe that addressing these challenges will not only unlock new insights into the mechanisms of cellular systems, but also push the theoretical boundaries of machine learning.

With respect to foundation models, a consensus in the field is that we are still far from creating a holistic foundation model for biology across scales, similar to what ChatGPT represents in the language domain — a sort of digital organism capable of simulating all biological phenomena. While new foundation models emerge almost weekly, these models have thus far been specialized for a specific scale and question, and focus on one or a few modalities.

Significant progress has been made in predicting protein structures from their sequences. This success has highlighted the importance of iterative machine learning challenges, such as CASP (Critical Assessment of Structure Prediction), which have been instrumental in benchmarking state-of-the-art algorithms for protein structure prediction and driving their improvement.

The Schmidt Center is organizing challenges to increase awareness in the ML field and make progress in the development of methods to solve causal prediction problems that are so critical for the biomedical sciences. With the increasing availability of single-gene perturbation data at the single-cell level, I believe predicting the effect of single or combinatorial perturbations, and which perturbations could drive a desired phenotype, are solvable problems. With our Cell Perturbation Prediction Challenge (CPPC), we aim to provide the means to objectively test and benchmark algorithms for predicting the effect of new perturbations.

Another area where the field has made remarkable strides is disease diagnostics and patient triage. Machine learning algorithms can integrate different sources of patient information (data modalities), generate missing modalities, identify patterns that may be difficult for us to detect, and help stratify patients based on their disease risk. While we must remain cautious about potential biases in model predictions, the danger of models learning shortcuts instead of true correlations, and the risk of automation bias in clinical decision-making, I believe this is an area where machine learning is already having a significant impact.

Q: Let’s talk about some of the headlines coming out of the Schmidt Center recently. What current research do you think people should be particularly excited about, and why? 

A: In collaboration with Dr. Fei Chen at the Broad Institute, we have recently developed a method for the prediction of unseen proteins’ subcellular location, called PUPS. Many existing methods can only make predictions based on the specific protein and cell data on which they were trained. PUPS, however, combines a protein language model with an image in-painting model to utilize both protein sequences and cellular images. We demonstrate that the protein sequence input enables generalization to unseen proteins, and the cellular image input captures single-cell variability, enabling cell-type-specific predictions. The model learns how relevant each amino acid residue is for the predicted sub-cellular localization, and it can predict changes in localization due to mutations in the protein sequences. Since proteins’ function is strictly related to their subcellular localization, our predictions could provide insights into potential mechanisms of disease. In the future, we aim to extend this method to predict the localization of multiple proteins in a cell and possibly understand protein-protein interactions.

Together with Professor G.V. Shivashankar, a long-time collaborator at ETH Zürich, we have previously shown how simple images of cells stained with fluorescent DNA-intercalating dyes to label the chromatin can yield a lot of information about the state and fate of a cell in health and disease, when combined with machine learning algorithms. Recently, we have furthered this observation and proved the deep link between chromatin organization and gene regulation by developing Image2Reg, a method that enables the prediction of unseen genetically or chemically perturbed genes from chromatin images. Image2Reg utilizes convolutional neural networks to learn an informative representation of the chromatin images of perturbed cells. It also employs a graph convolutional network to create a gene embedding that captures the regulatory effects of genes based on protein-protein interaction data, integrated with cell-type-specific transcriptomic data. Finally, it learns a map between the resulting physical and biochemical representation of cells, allowing us to predict the perturbed gene modules based on chromatin images.

Furthermore, we recently finalized the development of MORPH, a method for predicting the outcomes of unseen combinatorial gene perturbations and identifying the types of interactions occurring between the perturbed genes. MORPH can guide the design of the most informative perturbations for lab-in-a-loop experiments. In addition, its attention-based framework provably enables the method to identify causal relations among the genes, providing insights into the underlying gene regulatory programs. Finally, thanks to its modular structure, we can apply MORPH to perturbation data measured in various modalities, including not only transcriptomics, but also imaging. We are very excited about the potential of this method to enable efficient exploration of the perturbation space and advance our understanding of cellular programs, bridging causal theory to important applications in both basic research and therapeutics.

New gift expands mental illness studies at Poitras Center for Psychiatric Disorders Research

Tue, 09/02/2025 - 5:20pm

One in every eight people — 970 million globally — lives with mental illness, according to the World Health Organization, with depression and anxiety being the most common mental health conditions worldwide. Existing therapies for complex psychiatric disorders like depression, anxiety, and schizophrenia have limitations, and federal funding to address these shortcomings is growing increasingly uncertain.

Patricia and James Poitras ’63 have committed $8 million to the Poitras Center for Psychiatric Disorders Research to launch pioneering research initiatives aimed at uncovering the brain basis of major mental illness and accelerating the development of novel treatments.

“Federal funding rarely supports the kind of bold, early-stage research that has the potential to transform our understanding of psychiatric illness. Pat and I want to help fill that gap — giving researchers the freedom to follow their most promising leads, even when the path forward isn’t guaranteed,” says James Poitras, who is chair of the McGovern Institute for Brain Research board.

Their latest gift builds upon their legacy of philanthropic support for psychiatric disorders research at MIT, which now exceeds $46 million.

“With deep gratitude for Jim and Pat’s visionary support, we are eager to launch a bold set of studies aimed at unraveling the neural and cognitive underpinnings of major mental illnesses,” says Professor Robert Desimone, director of the McGovern Institute, home to the Poitras Center. “Together, these projects represent a powerful step toward transforming how we understand and treat mental illness.”

A legacy of support

Soon after joining the McGovern Institute Leadership Board in 2006, the Poitrases made a $20 million commitment to establish the Poitras Center for Psychiatric Disorders Research at MIT. The center’s goal, to improve human health by addressing the root causes of complex psychiatric disorders, is deeply personal to them both.

“We had decided many years ago that our philanthropic efforts would be directed towards psychiatric research. We could not have imagined then that this perfect synergy between research at MIT’s McGovern Institute and our own philanthropic goals would develop,” recalls Patricia. 

The center supports research at the McGovern Institute and collaborative projects with institutions such as the Broad Institute of MIT and Harvard, McLean Hospital, Mass General Brigham, and other clinical research centers. Since its establishment in 2007, the center has enabled advances in psychiatric research including the development of a machine learning “risk calculator” for bipolar disorder, the use of brain imaging to predict treatment outcomes for anxiety, and studies demonstrating that mindfulness can improve mental health in adolescents.

For the past decade, the Poitrases have also fueled breakthroughs in the lab of McGovern investigator and MIT Professor Feng Zhang, backing the invention of powerful CRISPR systems and other molecular tools that are transforming biology and medicine. Their support has enabled the Zhang team to engineer new delivery vehicles for gene therapy, including vehicles capable of carrying genetic payloads that were once out of reach. The lab has also advanced innovative RNA-guided gene engineering tools such as NovaIscB, published in Nature Biotechnology in May 2025. These revolutionary genome editing and delivery technologies hold promise for the next generation of therapies needed for serious psychiatric illness.

In addition to fueling research in the center, the Poitras family has gifted two endowed professorships — the James and Patricia Poitras Professor of Neuroscience at MIT, currently held by Feng Zhang, and the James W. (1963) and Patricia T. Poitras Professor of Brain and Cognitive Sciences at MIT, held by Guoping Feng — and an annual postdoctoral fellowship at the McGovern Institute.

New initiatives at the Poitras Center

The Poitras family’s latest commitment to the Poitras Center will launch an ambitious set of new projects that bring together neuroscientists, clinicians, and computational experts to probe the underpinnings of complex psychiatric disorders, including schizophrenia, anxiety, and depression. These efforts reflect the center’s core mission: to speed scientific discovery and therapeutic innovation in the field of psychiatric brain disorders research.

McGovern cognitive neuroscientists Evelina Fedorenko PhD ’07, an associate professor, and Nancy Kanwisher ’80, PhD ’86, the Walter A. Rosenblith Professor of Cognitive Neuroscience — in collaboration with psychiatrist Ann Shinn of McLean Hospital — will explore how altered inner speech and reasoning contribute to the symptoms of schizophrenia. They will collect functional MRI data from individuals diagnosed with schizophrenia and matched controls as they perform reasoning tasks. The goal is to identify the brain activity patterns that underlie impaired reasoning in schizophrenia, a core cognitive disruption in the disorder.

A complementary line of investigation will focus on the role of inner speech — the “voice in our head” that shapes thought and self-awareness. The team will conduct a large-scale online behavioral study of neurotypical individuals to analyze how inner speech characteristics correlate with schizophrenia-spectrum traits. This will be followed by neuroimaging work comparing brain architecture among individuals with strong or weak inner voices and people with schizophrenia, with the aim of discovering neural markers linked to self-talk and disrupted cognition.

A different project led by McGovern neuroscientist and MIT Associate Professor Mark Harnett and 2024–2026 Poitras Center Postdoctoral Fellow Cynthia Rais focuses on how ketamine — an increasingly used antidepressant — alters brain circuits to produce rapid and sustained improvements in mood. Despite its clinical success, ketamine’s mechanisms of action remain poorly understood. The Harnett lab is using sophisticated tools to track how ketamine affects synaptic communication and large-scale brain network dynamics, particularly in models of treatment-resistant depression. By mapping these changes at both the cellular and systems levels, the team hopes to reveal how ketamine lifts mood so quickly — and inform the development of safer, longer-lasting antidepressants.

Guoping Feng is leveraging a new animal model of depression to uncover the brain circuits that drive major depressive disorder. The new animal model provides a powerful system for studying the intricacies of mood regulation. Feng’s team is using state-of-the-art molecular tools to identify the specific genes and cell types involved in this circuit, with the goal of developing targeted treatments that can fine-tune these emotional pathways.

“This is one of the most promising models we have for understanding depression at a mechanistic level,” says Feng, who is also associate director of the McGovern Institute. “It gives us a clear target for future therapies.”

Another novel approach to treating mood disorders comes from the lab of James DiCarlo, the Peter de Florez Professor of Neuroscience at MIT, who is exploring the brain’s visual-emotional interface as a therapeutic tool for anxiety. The amygdala, a key emotional center in the brain, is heavily influenced by visual input. DiCarlo’s lab is using advanced computational models to design visual scenes that may subtly shift emotional processing in the brain — essentially using sight to regulate mood. Unlike traditional therapies, this strategy could offer a noninvasive, drug-free option for individuals suffering from anxiety.

Together, these projects exemplify the kind of interdisciplinary, high-impact research that the Poitras Center was established to support.

“Mental illness affects not just individuals, but entire families who often struggle in silence and uncertainty,” adds Patricia Poitras. “Our hope is that Poitras Center scientists will continue to make important advancements and spark novel treatments for complex mental health disorders and, most of all, give families living with these conditions a renewed sense of hope for the future.”

New particle detector passes the “standard candle” test

Tue, 09/02/2025 - 1:00pm

A new and powerful particle detector just passed a critical test in its goal to decipher the ingredients of the early universe.

The sPHENIX detector is the newest experiment at Brookhaven National Laboratory’s Relativistic Heavy Ion Collider (RHIC) and is designed to precisely measure products of high-speed particle collisions. From the aftermath, scientists hope to reconstruct the properties of quark-gluon plasma (QGP) — a white-hot soup of subatomic particles known as quarks and gluons that is thought to have sprung into existence in the few microseconds following the Big Bang. Just as quickly, the mysterious plasma disappeared, cooling and combining to form the protons and neutrons that make up today’s ordinary matter.

Now, the sPHENIX detector has made a key measurement that proves it has the precision to help piece together the primordial properties of quark-gluon plasma.

In a paper in the Journal of High Energy Physics, scientists including physicists at MIT report that sPHENIX precisely measured the number and energy of particles that streamed out from gold ions that collided at close to the speed of light.

Straight ahead

This test is considered in physics to be a “standard candle,” meaning that the measurement is a well-established constant that can be used to gauge a detector’s precision.

In particular, sPHENIX successfully measured the number of charged particles that are produced when two gold ions collide, and determined how this number changes when the ions collide head-on, versus just glancing by. The detector’s measurements revealed that head-on collisions produced 10 times more charged particles, which were also 10 times more energetic, compared to less straight-on collisions.

“This indicates the detector works as it should,” says Gunther Roland, professor of physics at MIT, who is a member and former spokesperson for the sPHENIX Collaboration. “It’s as if you sent a new telescope up in space after you’ve spent 10 years building it, and it snaps the first picture. It’s not necessarily a picture of something completely new, but it proves that it’s now ready to start doing new science.”

“With this strong foundation, sPHENIX is well-positioned to advance the study of the quark-gluon plasma with greater precision and improved resolution,” adds Hao-Ren Jheng, a graduate student in physics at MIT and a lead co-author of the new paper. “Probing the evolution, structure, and properties of the QGP will help us reconstruct the conditions of the early universe.”

The paper’s co-authors are all members of the sPHENIX Collaboration, which comprises over 300 scientists from multiple institutions around the world, including Roland, Jheng, and physicists at MIT’s Bates Research and Engineering Center.

“Gone in an instant”

Particle colliders such as Brookhaven’s RHIC are designed to accelerate particles at “relativistic” speeds, meaning close to the speed of light. When these particles are flung around in opposite, circulating beams and brought back together, any smash-ups that occur can release an enormous amount of energy. In the right conditions, this energy can very briefly exist in the form of quark-gluon plasma — the same stuff that sprung out of the Big Bang.

Just as in the early universe, quark-gluon plasma doesn’t hang around for very long in particle colliders. If and when QGP is produced, it exists for just 10 to the minus 22 seconds, roughly a tenth of a sextillionth of a second. In this moment, quark-gluon plasma is incredibly hot, up to several trillion degrees Celsius, and behaves as a “perfect fluid,” moving as one entity rather than as a collection of random particles. Almost immediately, this exotic behavior disappears, and the plasma cools and transitions into more ordinary particles such as protons and neutrons, which stream out from the main collision.

“You never see the QGP itself — you just see its ashes, so to speak, in the form of the particles that come from its decay,” Roland says. “With sPHENIX, we want to measure these particles to reconstruct the properties of the QGP, which is essentially gone in an instant.”

“One in a billion”

The sPHENIX detector is the next generation of Brookhaven’s original Pioneering High Energy Nuclear Interaction eXperiment, or PHENIX, which measured collisions of heavy ions generated by RHIC. In 2021, sPHENIX was installed in place of its predecessor, as a faster and more powerful version, designed to detect quark-gluon plasma’s more subtle and ephemeral signatures.

The detector itself is about the size of a two-story house and weighs around 1,000 tons. It sits at the intersection of RHIC’s two main collider beams, where relativistic particles, accelerated from opposite directions, meet and collide, producing particles that fly out into the detector. The sPHENIX detector is able to catch and measure 15,000 particle collisions per second, thanks to its novel, layered components, including the MVTX, or micro-vertex — a subdetector that was designed, built, and installed by scientists at MIT’s Bates Research and Engineering Center.

Together, the detector’s systems enable sPHENIX to act as a giant 3D camera that can track the number, energy, and paths of individual particles during an explosion of particles generated by a single collision.

“SPHENIX takes advantage of developments in detector technology since RHIC switched on 25 years ago, to collect data at the fastest possible rate,” says MIT postdoc Cameron Dean, who was a main contributor to the new study’s analysis. “This allows us to probe incredibly rare processes for the first time.”

In the fall of 2024, scientists ran the detector through the “standard candle” test to gauge its speed and precision. Over three weeks, they gathered data from sPHENIX as the main collider accelerated and smashed together beams of gold ions traveling at nearly the speed of light. Their analysis of the data showed that sPHENIX accurately measured the number of charged particles produced in individual gold ion collisions, as well as the particles’ energies. What’s more, the detector was sensitive to a collision’s “head-on-ness,” and could observe that head-on collisions produced more particles with greater energy, compared to less direct collisions.

“This measurement provides clear evidence that the detector is functioning as intended,” Jheng says.

“The fun for sPHENIX is just beginning,” Dean adds. “We are currently back colliding particles and expect to do so for several more months. With all our data, we can look for the one-in-a-billion rare process that could give us insights on things like the density of QGP, the diffusion of particles through ultra-dense matter, and how much energy it takes to bind different particles together.”

This work was supported, in part, by the U.S. Department of Energy Office of Science, and the National Science Foundation.

Advancing career and academic ambitions with MITx MicroMasters Program in Finance

Fri, 08/29/2025 - 1:35pm

For a long time, Satik Movsesyan envisioned a future of working in finance and also pursuing a full-time master’s degree program at the MIT Sloan School of Management. She says the MITx MicroMasters Program in Finance provides her with the ideal opportunity to directly enhance her career with courses developed and delivered by MIT Sloan faculty.

Movsesyan first began actively pursuing ways to connect with the MIT community as a first-year student in her undergraduate program at the American University of Armenia, where she majored in business with a concentration in accounting and finance. That’s when she discovered the MicroMasters Program in Finance. Led by MIT Open Learning and MIT Sloan, the program offers learners an opportunity to advance in the finance field through a rigorous, comprehensive online curriculum comprising foundational courses, mathematical methods, and advanced modeling. During her senior year, she started taking courses in the program, beginning with 15.516x (Financial Accounting).

“I saw completing the MicroMasters program as a way to accelerate my time at MIT offline, as well as to prepare me for the academic rigor,” says Movsesyan. “The program provides a way for me to streamline my studies, while also working toward transforming capital markets here in Armenia — in a way, also helping me to streamline my career.”

Movsesyan initially started as an intern at C-Quadrat Ampega Asset Management Armenia and was promoted to her current role of financial analyst. The firm is one of two pension asset managers in Armenia. Movsesyan credits the MicroMasters program with helping her to make deeper inferences in terms of analytical tasks and empowering her to create more enhanced dynamic models to support the efficient allocation of assets. Her learning has enabled her to build different valuation models for financial instruments. She is currently developing a portfolio management tool for her company.

“Although the courses are grounded deeply in theory, they never lack a perfect applicability component, which makes them very useful,” says Movsesyan. “Having MIT’s MicroMasters on a CV adds credibility as a professional, and your input becomes more valued by the employer.”

Movsesyan says that the program has helped her to develop resilience, as well as critical and analytical thinking. Her long-term goal is to become a portfolio manager and ultimately establish an asset management company, targeted at offering an extensive range of funds based on diverse risk-return preferences of investors, while promoting transparent and sustainable investment practices. 

“The knowledge I’ve gained from the variety of courses is a perfect blend which supports me day-to-day in building solutions to existing problems in asset management,” says Movsesyan.

In addition to being a learner in the program, Movsesyan serves as a community teaching assistant (CTA). After taking 15.516x, she became a CTA for that course, working with learners around the world. She says that this role of helping and supporting others requires constantly immersing herself in the course content, which also results in challenging herself and mastering the material.

“I think my story with the MITx MicroMasters Program is proof that no matter where you are — even if you’re in a small, developing country with limited resources — if you truly want to do something, you can achieve what you want,” says Movsesyan. “It’s an example for students around the world who also have transformative ideas and determination to take action. They can be a part of the MIT community.”

Understanding shocks to welfare systems

Thu, 08/28/2025 - 4:00pm

In an unhappy coincidence, the Covid-19 pandemic and Angie Jo’s doctoral studies in political science both began in 2019. Paradoxically, this global catastrophe helped define her primary research thrust.

As countries reacted with unprecedented fiscal measures to protect their citizens from economic collapse, Jo MCP ’19 discerned striking patterns among these interventions: Nations typically seen as the least generous on social welfare were suddenly deploying the most dramatic emergency responses.

“I wanted to understand why countries like the U.S., which famously offer minimal state support, suddenly mobilize an enormous emergency response to a crisis — only to let it vanish after the crisis passes,” says Jo.

Driven by this interest, Jo launched into a comparative exploration of welfare states that forms the backbone of her doctoral research. Her work examines how different types of welfare regimes respond to collective crises, and whether these responses lead to lasting institutional reforms or merely temporary patches.

A mismatch in investments

Jo’s research focuses on a particular subset of advanced industrialized democracies — countries like the United States, United Kingdom, Canada, and Australia — that political economists classify as “liberal welfare regimes.” These nations stand in contrast to the “social democratic welfare regimes” exemplified by Scandinavian countries.

“In everyday times, citizens in countries like Denmark or Sweden are already well-protected by a deep and comprehensive welfare state,” Jo explains. “When something like Covid hits, these countries were largely able to use the social policy tools and administrative infrastructure they already had, such as subsidized childcare and short-time work schemes that prevent mass layoffs.”

Liberal welfare regimes, however, exhibit a different pattern. During normal periods, “government assistance is viewed by many as the last resort,” Jo observes. “It’s means-tested and minimal, and the responsibility to manage risk is put on the individual.”

Yet when Covid struck, these same governments “spent historically unprecedented amounts on emergency aid to citizens, including stimulus checks, expanded unemployment insurance, child tax credits, grants, and debt forbearance that might normally have faced backlash from many Americans as government ‘handouts.’”

This stark contrast — minimal investment in social safety nets during normal times followed by massive crisis spending — lies at the heart of Jo’s inquiry. “What struck me was the mismatch: The U.S. invests so little in social welfare at baseline, but when crisis hits, it can suddenly unleash massive aid — just not in ways that stick. So what happens when the next crisis comes?”

From architecture to political economy

Jo took a winding path to studying welfare states in crisis. Born in South Korea, she moved with her family to California at age 3 as her parents sought an American education for their children. After moving back to Korea for high school, she attended Harvard University, where she initially focused on art and architecture.

“I thought I’d be an artist,” Jo recalls, “but I always had many interests, and I was very aware of different countries and different political systems, because we were moving around a lot.”

While studying architecture at Harvard, Jo’s academic focus pivoted.

“I realized that most of the decisions around how things get built, whether it’s a building or a city or infrastructure, are made by the government or by powerful private actors,” she explains. “The architect is the artist’s hand that is commissioned to execute, but the decisions behind it, I realized, were what interested me more.”

After a year working in macroeconomics research at a hedge fund, Jo found herself drawn to questions in political economy. “While I didn’t find the zero-sum game of finance compelling, I really wanted to understand the interactions between markets and governments that lay behind the trades,” she says.

Jo decided to pursue a master’s degree in city planning at MIT, where she studied the political economy of master-planning new cities as a form of industrial policy in China and South Korea, before transitioning to the political science PhD program. Her research focus shifted dramatically when the Covid-19 pandemic struck.

“It was the first time I realized, wow, these wealthy Western democracies have serious problems, too,” Jo says. “They are not dealing well with this pandemic and the structural inequalities and the deep tensions that have always been part of some of these societies, but are being tested even further by the enormity of this shock.”

The costs of crisis response

One of Jo’s key insights challenges conventional wisdom about fiscal conservatism. The assumption that keeping government small saves money in the long run may be fundamentally flawed when considering crisis response.

“What I’m exploring in my research is the irony that the less you invest in a capable, effective and well-resourced government, the more that backfires when a crisis inevitably hits and you have to patch up the holes,” Jo argues. “You’re not saving money; you’re deferring the cost.”

This inefficiency becomes particularly apparent when examining how different countries deployed aid during Covid. Countries like Denmark, with robust data systems connecting health records, employment information, and family data, could target assistance with precision. The United States, by contrast, relied on blunter instruments.

“If your system isn’t built to deliver aid in normal times, it won’t suddenly work well under pressure,” Jo explains. “The U.S. had to invent entire programs from scratch overnight — and many were clumsy, inefficient, or regressive.”

There is also a political aspect to this constraint. “Not only do liberal welfare countries lack the infrastructure to address crises, they are often governed by powerful constituencies that do not want to build it — they deliberately choose to enact temporary benefits that are precisely designed to fade,” Jo argues. “This perpetuates a cycle where short-term compensations are employed from crisis to crisis, constraining the permanent expansion of the welfare state.”

Missed opportunities

Jo’s dissertation also examines whether crises provide opportunities for institutional reform. Her second paper focuses on the 2008 financial crisis in the United States, and the Hardest Hit Fund, a program that allocated federal money to state housing finance agencies to prevent foreclosures.

“I ask why, with hundreds of millions in federal aid and few strings attached, state agencies ultimately helped so few underwater homeowners shed unmanageable debt burdens,” Jo says. “The money and the mandate were there — the transformative capacity wasn’t.”

Some states used the funds to pursue ambitious policy interventions, such as restructuring mortgage debt to permanently reduce homeowners’ principal and interest burdens. However, most opted for temporary solutions like helping borrowers make up missed payments, while preserving their original contract. Partisan politics, financial interests, and status quo bias are most likely responsible for these varying state strategies, Jo believes.

She sees this as “another case of the choice that governments have between throwing money at the problem as a temporary Band-Aid solution, or using a crisis as an opportunity to pursue more ambitious, deeper reforms that help people more sustainably in the long run.”

The significance of crisis response research

For Jo, understanding how welfare states respond to crises is not just an academic exercise, but a matter of profound human consequence.

“When there’s an event like the financial crisis or Covid, the scale of suffering and the welfare gap that emerges is devastating,” Jo emphasizes. “I believe political science should be actively studying these rare episodes, rather than disregarding them as once-in-a-century anomalies.”

Her research carries implications for how we think about welfare state design and crisis preparedness. As Jo notes, the most vulnerable members of society — “people who are unbanked, undocumented, people who have low or no tax liability because they don’t make enough, immigrants or those who don’t speak English or don’t have access to the internet or are unhoused” — are often invisible to relief systems.

As Jo prepares for her career in academia, she is motivated to apply her political science training to address such failures. “We’re going to have more crises, whether pandemics, AI, climate disasters, or financial shocks,” Jo warns. “Finding better ways to cover those people is essential, and is not something that our current welfare state — or our politics — are designed to handle.”

MIT researchers develop AI tool to improve flu vaccine strain selection

Thu, 08/28/2025 - 11:50am

Every year, global health experts are faced with a high-stakes decision: Which influenza strains should go into the next seasonal vaccine? The choice must be made months in advance, long before flu season even begins, and it can often feel like a race against the clock. If the selected strains match those that circulate, the vaccine will likely be highly effective. But if the prediction is off, protection can drop significantly, leading to (potentially preventable) illness and strain on health care systems.

This challenge became even more familiar to scientists during the Covid-19 pandemic, when new variants emerged, time and again, just as vaccines were being rolled out. Influenza behaves like a similarly rowdy cousin, mutating constantly and unpredictably. That makes it hard to stay ahead, and therefore harder to design vaccines that remain protective.

To reduce this uncertainty, scientists at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the MIT Abdul Latif Jameel Clinic for Machine Learning in Health set out to make vaccine selection more accurate and less reliant on guesswork. They created an AI system called VaxSeer, designed to predict dominant flu strains and identify the most protective vaccine candidates, months ahead of time. The tool uses deep learning models trained on decades of viral sequences and lab test results to simulate how the flu virus might evolve and how the vaccines will respond.

Traditional evolution models often analyze the effect of single amino acid mutations independently. “VaxSeer adopts a large protein language model to learn the relationship between dominance and the combinatorial effects of mutations,” explains Wenxian Shi, a PhD student in MIT’s Department of Electrical Engineering and Computer Science, researcher at CSAIL, and lead author of a new paper on the work. “Unlike existing protein language models that assume a static distribution of viral variants, we model dynamic dominance shifts, making it better suited for rapidly evolving viruses like influenza.”

An open-access report on the study was published today in Nature Medicine.

The future of flu

VaxSeer has two core prediction engines: one that estimates how likely each viral strain is to spread (dominance), and another that estimates how effectively a vaccine will neutralize that strain (antigenicity). Together, they produce a predicted coverage score: a forward-looking measure of how well a given vaccine is likely to perform against future viruses.

The score ranges from negative infinity to 0: the closer the score is to 0, the better the antigenic match between the vaccine strains and the circulating viruses. (You can imagine it as the negative of some kind of “distance.”)
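As a rough illustration only (a sketch of the general idea, not VaxSeer’s actual formula), such a score can be pictured as a dominance-weighted, negated antigenic “distance”; every number below is invented for the example.

```python
import numpy as np

def coverage_score(predicted_dominance, antigenic_distance):
    """Higher (closer to 0) means a better expected antigenic match."""
    dominance = np.asarray(predicted_dominance, dtype=float)
    dominance = dominance / dominance.sum()        # treat dominance like a probability
    distance = np.asarray(antigenic_distance, dtype=float)
    return float(-np.sum(dominance * distance))    # always <= 0; 0 is a perfect match

# Two hypothetical vaccine candidates scored against three predicted circulating strains.
dominance = [0.6, 0.3, 0.1]
print(round(coverage_score(dominance, antigenic_distance=[0.5, 1.2, 2.0]), 2))  # -0.86
print(round(coverage_score(dominance, antigenic_distance=[1.5, 0.4, 3.0]), 2))  # -1.32
```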

In a 10-year retrospective study, the researchers evaluated VaxSeer’s recommendations against those made by the World Health Organization (WHO) for two major flu subtypes: A/H3N2 and A/H1N1. For A/H3N2, VaxSeer’s choices outperformed the WHO’s in nine out of 10 seasons, based on retrospective empirical coverage scores (a surrogate for vaccine effectiveness, calculated from the observed dominance in past seasons and experimental HI test results). The team used this metric to evaluate vaccine selections because effectiveness data are only available for vaccines actually given to the population.

For A/H1N1, it outperformed or matched the WHO in six out of 10 seasons. In one notable case, for the 2016 flu season, VaxSeer identified a strain that wasn’t chosen by the WHO until the following year. The model’s predictions also showed strong correlation with real-world vaccine effectiveness estimates, as reported by the CDC, Canada’s Sentinel Practitioner Surveillance Network, and Europe’s I-MOVE program. VaxSeer’s predicted coverage scores aligned closely with public health data on flu-related illnesses and medical visits prevented by vaccination.

So how exactly does VaxSeer make sense of all these data? Intuitively, the model first estimates how rapidly a viral strain spreads over time using a protein language model, and then determines its dominance by accounting for competition among different strains.

Once the model has calculated these quantities, they’re plugged into a mathematical framework based on ordinary differential equations to simulate viral spread over time. For antigenicity, the system estimates how well a given vaccine strain will perform in a common lab test called the hemagglutination inhibition assay, which measures how effectively antibodies can inhibit the virus from binding to human red blood cells and is a widely used proxy for antigenic match.
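A toy version of that competition step might look like the following; the growth rates and initial strain mix are placeholders standing in for the model’s learned estimates, and this is an illustration of the general idea rather than the paper’s implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp

fitness = np.array([1.0, 1.3, 0.8])           # placeholder growth rates for three strains

def competition(t, shares):
    mean_fitness = shares @ fitness            # average fitness of the current strain mix
    return shares * (fitness - mean_fitness)   # faster-growing strains gain share

shares0 = np.array([0.7, 0.2, 0.1])            # today's observed strain frequencies
sol = solve_ivp(competition, t_span=(0.0, 12.0), y0=shares0, t_eval=[0.0, 6.0, 12.0])
print(np.round(sol.y[:, -1], 2))               # predicted dominance 12 time units out
```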

Outpacing evolution

“By modeling how viruses evolve and how vaccines interact with them, AI tools like VaxSeer could help health officials make better, faster decisions — and stay one step ahead in the race between infection and immunity,” says Shi. 

VaxSeer currently focuses only on the flu virus’s HA (hemagglutinin) protein, the major antigen of influenza. Future versions could incorporate other proteins like NA (neuraminidase), and factors like immune history, manufacturing constraints, or dosage levels. Applying the system to other viruses would also require large, high-quality datasets that track both viral evolution and immune responses — data that aren’t always publicly available. The team, however, is currently working on methods that can predict viral evolution in low-data regimes by building on relationships between viral families.

“Given the speed of viral evolution, current therapeutic development often lags behind. VaxSeer is our attempt to catch up,” says Regina Barzilay, the School of Engineering Distinguished Professor for AI and Health at MIT, AI lead of Jameel Clinic, and CSAIL principal investigator. 

“This paper is impressive, but what excites me perhaps even more is the team’s ongoing work on predicting viral evolution in low-data settings,” says Assistant Professor Jon Stokes of the Department of Biochemistry and Biomedical Sciences at McMaster University in Hamilton, Ontario. “The implications go far beyond influenza. Imagine being able to anticipate how antibiotic-resistant bacteria or drug-resistant cancers might evolve, both of which can adapt rapidly. This kind of predictive modeling opens up a powerful new way of thinking about how diseases change, giving us the opportunity to stay one step ahead and design clinical interventions before escape becomes a major problem.”

Shi and Barzilay wrote the paper with MIT CSAIL postdoc Jeremy Wohlwend ’16, MEng ’17, PhD ’25 and recent CSAIL affiliate Menghua Wu ’19, MEng ’20, PhD ’25. Their work was supported, in part, by the U.S. Defense Threat Reduction Agency and MIT Jameel Clinic.

New self-assembling material could be the key to recyclable EV batteries

Thu, 08/28/2025 - 5:00am

Today’s electric vehicle boom is tomorrow’s mountain of electronic waste. And while myriad efforts are underway to improve battery recycling, many EV batteries still end up in landfills.

A research team from MIT wants to help change that with a new kind of self-assembling battery material that quickly breaks apart when submerged in a simple organic liquid. In a new paper published in Nature Chemistry, the researchers showed the material can work as the electrolyte in a functioning, solid-state battery cell and then revert back to its original molecular components in minutes.

The approach offers an alternative to shredding the battery into a mixed, hard-to-recycle mass. Instead, because the electrolyte serves as the battery’s connecting layer, when the new material returns to its original molecular form, the entire battery disassembles to accelerate the recycling process.

“So far in the battery industry, we’ve focused on high-performing materials and designs, and only later tried to figure out how to recycle batteries made with complex structures and hard-to-recycle materials,” says the paper’s first author Yukio Cho PhD ’23. “Our approach is to start with easily recyclable materials and figure out how to make them battery-compatible. Designing batteries for recyclability from the beginning is a new approach.”

Joining Cho on the paper are PhD candidate Cole Fincher, Ty Christoff-Tempesta PhD ’22, Kyocera Professor of Ceramics Yet-Ming Chiang, Visiting Associate Professor Julia Ortony, Xiaobing Zuo, and Guillaume Lamour.

Better batteries

There’s a scene in one of the “Harry Potter” films where Professor Dumbledore cleans a dilapidated home with the flick of the wrist and a spell. Cho says that image stuck with him as a kid. (What better way to clean your room?) When he saw a talk by Ortony on engineering molecules so that they could assemble into complex structures and then revert back to their original form, he wondered if it could be used to make battery recycling work like magic.

That would be a paradigm shift for the battery industry. Today, batteries require harsh chemicals, high heat, and complex processing to recycle. There are three main parts of a battery: the positively charged cathode, the negatively charged anode, and the electrolyte that shuttles lithium ions between them. The electrolytes in most lithium-ion batteries are highly flammable and degrade over time into toxic byproducts that require specialized handling.

To simplify the recycling process, the researchers decided to make a more sustainable electrolyte. For that, they turned to a class of molecules that self-assemble in water, named aramid amphiphiles (AAs), whose chemical structures and stability mimic those of Kevlar. The researchers further designed the AAs to contain polyethylene glycol (PEG), which can conduct lithium ions, on one end of each molecule. When the molecules are exposed to water, they spontaneously form nanoribbons with ion-conducting PEG surfaces and bases that imitate the robustness of Kevlar through tight hydrogen bonding. The result is a mechanically stable nanoribbon structure that conducts ions across its surface.

“The material is composed of two parts,” Cho explains. “The first part is this flexible chain that gives us a nest, or host, for lithium ions to jump around. The second part is this strong organic material component that is used in the Kevlar, which is a bulletproof material. Those make the whole structure stable.”

When added to water, the molecules self-assemble into millions of nanoribbons that can be hot-pressed into a solid-state material.

“Within five minutes of being added to water, the solution becomes gel-like, indicating there are so many nanofibers formed in the liquid that they start to entangle each other,” Cho says. “What’s exciting is we can make this material at scale because of the self-assembly behavior.”

The team tested the material’s strength and toughness, finding it could endure the stresses associated with making and running the battery. They also constructed a solid-state battery cell that used lithium iron phosphate for the cathode and lithium titanium oxide as the anode, both common materials in today’s batteries. The nanoribbons moved lithium ions successfully between the electrodes, but a side-effect known as polarization limited the movement of lithium ions into the battery’s electrodes during fast bouts of charging and discharging, hampering its performance compared to today’s gold-standard commercial batteries.

“The lithium ions moved along the nanofiber all right, but getting the lithium ion from the nanofibers to the metal oxide seems to be the most sluggish point of the process,” Cho says.

When they immersed the battery cell into organic solvents, the material immediately dissolved, with each part of the battery falling away for easier recycling. Cho compared the materials’ reaction to cotton candy being submerged in water.

“The electrolyte holds the two battery electrodes together and provides the lithium-ion pathways,” Cho says. “So, when you want to recycle the battery, the entire electrolyte layer can fall off naturally and you can recycle the electrodes separately.”

Validating a new approach

Cho says the material is a proof of concept that demonstrates the recycle-first approach.

“We don’t want to say we solved all the problems with this material,” Cho says. “Our battery performance was not fantastic because we used only this material as the entire electrolyte for the paper, but what we’re picturing is using this material as one layer in the battery electrolyte. It doesn’t have to be the entire electrolyte to kick off the recycling process.”

Cho also sees a lot of room for optimizing the material’s performance with further experiments.

Now, the researchers are exploring ways to integrate these kinds of materials into existing battery designs as well as implementing the ideas into new battery chemistries.

“It’s very challenging to convince existing vendors to do something very differently,” Cho says. “But with new battery materials that may come out in five or 10 years, it could be easier to integrate this into new designs in the beginning.”

Cho also believes the approach could help reshore lithium supplies by reusing materials from batteries that are already in the U.S.

“People are starting to realize how important this is,” Cho says. “If we can start to recycle lithium-ion batteries from battery waste at scale, it’ll have the same effect as opening lithium mines in the U.S. Also, each battery requires a certain amount of lithium, so extrapolating out the growth of electric vehicles, we need to reuse this material to avoid massive lithium price spikes.”

The work was supported, in part, by the National Science Foundation and the U.S. Department of Energy.

Why countries trade with each other while fighting

Thu, 08/28/2025 - 12:00am

In World War II, Britain was fighting for its survival against German aerial bombardment. Yet Britain was importing dyes from Germany at the same time. This sounds curious, to put it mildly. How can two countries at war with each other also be trading goods?

Examples of this abound, actually. Britain also traded with its enemies for almost all of World War I. India and Pakistan conducted trade with each other during the First Kashmir War, from 1947 to 1949, and during the India-Pakistan War of 1965. Croatia and then-Yugoslavia traded with each other while fighting in 1992.

“States do in fact trade with their enemies during wars,” says MIT political scientist Mariya Grinberg. “There is a lot of variation in which products get traded, and in which wars, and there are differences in how long trade lasts into a war. But it does happen.”

Indeed, as Grinberg has found, state leaders tend to calculate whether trade can give them an advantage by boosting their own economies while not supplying their enemies with anything too useful in the near term.

“At its heart, wartime trade is all about the tradeoff between military benefits and economic costs,” Grinberg says. “Severing trade denies the enemy access to your products that could increase their military capabilities, but it also incurs a cost to you because you’re losing trade and neutral states could take over your long-term market share.” Therefore, many countries try trading with their wartime foes.

Grinberg explores this topic in a groundbreaking new book, the first one on the subject, “Trade in War: Economic Cooperation Across Enemy Lines,” published this month by Cornell University Press. It is also the first book by Grinberg, an assistant professor of political science at MIT.

Calculating time and utility

“Trade in War” has its roots in research Grinberg started as a doctoral student at the University of Chicago, where she noticed that wartime trade was a phenomenon not yet incorporated into theories of state behavior.

Grinberg wanted to learn about it comprehensively, so, as she quips, “I did what academics usually do: I went to the work of historians and said, ‘Historians, what have you got for me?’”

Modern wartime trading began during the Crimean War, which pitted Russia against France, Britain, the Ottoman Empire, and other allies. Before the war’s start in 1854, France had paid for many Russian goods that could not be shipped because ice in the Baltic Sea was late to thaw. To rescue these goods, France then persuaded Britain and Russia to adopt “neutral rights,” codified in the 1856 Declaration of Paris, which formalized the idea that goods in wartime could be shipped via neutral parties (sometimes acting as intermediaries for warring countries).

“This mental image that everyone has, that we don’t trade with our enemies during war, is actually an artifact of the world without any neutral rights,” Grinberg says. “Once we develop neutral rights, all bets are off, and now we have wartime trade.”

Overall, Grinberg’s systematic analysis of wartime trade shows that it needs to be understood on the level of particular goods. During wartime, states calculate how much it would hurt their own economies to stop trade of certain items; how useful specific products would be to enemies during war, and in what time frame; and how long a war is going to last.

“There are two conditions under which we can see wartime trade,” Grinberg says. “Trade is permitted when it does not help the enemy win the war, and it’s permitted when ending it would damage the state’s long-term economic security, beyond the current war.”

Therefore a state might export diamonds, knowing an adversary would need to resell such products over time to finance any military activities. Conversely, states will not trade products that can quickly convert into military use.

“The tradeoff is not the same for all products,” Grinberg says. “All products can be converted into something of military utility, but they vary in how long that takes. If I’m expecting to fight a short war, things that take a long time for my opponent to convert into military capabilities won’t help them win the current war, so they’re safer to trade.” Moreover, she adds, “States tend to prioritize maintaining their long-term economic stability, as long as the stakes don’t hit too close to home.”
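To make that calculus concrete, here is a toy decision rule in code; the product attributes, numbers, and threshold are invented for illustration and are not drawn from the book.

```python
from dataclasses import dataclass

@dataclass
class Product:
    name: str
    months_to_convert_to_military_use: float   # how long the enemy would need to weaponize it
    long_term_market_value: float              # what severing trade would cost the home economy

def permit_trade(product: Product, expected_war_months: float,
                 market_loss_threshold: float = 1.0) -> bool:
    # Safe to trade only if the enemy cannot convert it in time to affect this war...
    no_near_term_military_benefit = (
        product.months_to_convert_to_military_use > expected_war_months
    )
    # ...and cutting it off would damage the state's long-term economic security.
    severing_hurts_economy = product.long_term_market_value > market_loss_threshold
    return no_near_term_military_benefit and severing_hurts_economy

# Dyes are slow to weaponize, so in a war expected to last three months they stay tradable.
print(permit_trade(Product("dyes", 12, 5.0), expected_war_months=3))   # True
print(permit_trade(Product("fuel", 1, 8.0), expected_war_months=3))    # False
```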

This calculus helps explain some seemingly inexplicable wartime trade decisions. In 1917, three years into World War I, Germany started trading dyes to Britain. As it happens, dyes have military uses, for example as coatings for equipment. And World War I, infamously, was lasting far beyond initial expectations. But as of 1917, German planners thought the introduction of unrestricted submarine warfare would bring the war to a halt in their favor within a few months, so they approved the dye exports. That calculation was wrong, but it fits the framework Grinberg has developed.

States: Usually wrong about the length of wars

“Trade in War” has received praise from other scholars in the field. Michael Mastanduno of Dartmouth College has said the book “is a masterful contribution to our understanding of how states manage trade-offs across economics and security in foreign policy.”

For her part, Grinberg notes that her work holds multiple implications for international relations — one being that trade relationships do not prevent hostilities from unfolding, as some have theorized.

“We can’t expect even strong trade relations to deter a conflict,” Grinberg says. “On the other hand, when we learn our assumptions about the world are not necessarily correct, we can try to find different levers to deter war.”

Grinberg has also observed that states are not good, by any measure, at projecting how long they will be at war.

“States very infrequently get forecasts about the length of war right,” Grinberg says. That fact has formed the basis of a second, ongoing Grinberg book project.

“Now I’m studying why states go to war unprepared, why they think their wars are going to end quickly,” Grinberg says. “If people just read history, they will learn almost all of human history works against this assumption.”

At the same time, Grinberg thinks there is much more that scholars could learn specifically about trade and economic relations among warring countries — and hopes her book will spur additional work on the subject.

“I’m almost certain that I’ve only just begun to scratch the surface with this book,” she says. 

Locally produced proteins help mitochondria function

Wed, 08/27/2025 - 4:45pm

Our cells produce a variety of proteins, each with a specific role that, in many cases, means that they need to be in a particular part of the cell where that role is needed. One of the ways that cells ensure certain proteins end up in the right location at the right time is through localized translation, a process that ensures that proteins are made — or translated — close to where they will be needed. MIT professor of biology and Whitehead Institute for Biomedical Research member Jonathan Weissman and colleagues have studied localized translation in order to understand how it affects cell functions and allows cells to quickly respond to changing conditions.

Now, Weissman, who is also a Howard Hughes Medical Institute Investigator, and postdoc in his lab Jingchuan Luo have expanded our knowledge of localized translation at mitochondria, structures that generate energy for the cell. In an open-access paper published today in Cell, they share a new tool, LOCL-TL, for studying localized translation in close detail, and describe the discoveries it enabled about two classes of proteins that are locally translated at mitochondria.

The importance of localized translation at mitochondria relates to their unusual origin. Mitochondria were once bacteria that lived within our ancestors’ cells. Over time, the bacteria lost their autonomy and became part of the larger cells, which included migrating most of their genes into the larger cell’s genome in the nucleus. Cells evolved processes to ensure that proteins needed by mitochondria that are encoded in genes in the larger cell’s genome get transported to the mitochondria. Mitochondria retain a few genes in their own genome, so production of proteins from the mitochondrial genome and that of the larger cell’s genome must be coordinated to avoid mismatched production of mitochondrial parts. Localized translation may help cells to manage the interplay between mitochondrial and nuclear protein production — among other purposes.

How to detect local protein production

For a protein to be made, genetic code stored in DNA is read into RNA, and then the RNA is read or translated by a ribosome, a cellular machine that builds a protein according to the RNA code. Weissman’s lab previously developed a method to study localized translation by tagging ribosomes near a structure of interest, and then capturing the tagged ribosomes in action and observing the proteins they are making. This approach, called proximity-specific ribosome profiling, allows researchers to see what proteins are being made where in the cell. The challenge that Luo faced was how to tweak this method to capture only ribosomes at work near mitochondria.

Ribosomes work quickly, so a ribosome that gets tagged while making a protein at the mitochondria can move on to making other proteins elsewhere in the cell in a matter of minutes. The only way researchers can guarantee that the ribosomes they capture are still working on proteins made near the mitochondria is if the experiment happens very quickly.

Weissman and colleagues had previously solved this time sensitivity problem in yeast cells with a ribosome-tagging tool called BirA that is activated by the presence of the molecule biotin. BirA is fused to the cellular structure of interest, and tags ribosomes it can touch — but only once activated. Researchers keep the cell depleted of biotin until they are ready to capture the ribosomes, to limit the time when tagging occurs. However, this approach does not work with mitochondria in mammalian cells because they need biotin to function normally, so it cannot be depleted.

Luo and Weissman adapted the existing tool to respond to blue light instead of biotin. The new tool, LOV-BirA, is fused to the mitochondrion’s outer membrane. Cells are kept in the dark until the researchers are ready. Then they expose the cells to blue light, activating LOV-BirA to tag ribosomes. They give it a few minutes and then quickly extract the ribosomes. This approach proved very accurate at capturing only ribosomes working at mitochondria.

The researchers then used a method originally developed by the Weissman lab to extract the sections of RNA inside of the ribosomes. This allows them to see exactly how far along in the process of making a protein the ribosome is when captured, which can reveal whether the entire protein is made at the mitochondria, or whether it is partly produced elsewhere and only gets completed at the mitochondria.
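As a rough illustration of how such footprint data can be read (a sketch that assumes footprints have already been mapped to coding-sequence coordinates; the gene length and positions are invented), each captured ribosome’s progress can be expressed as a fraction of the full coding sequence:

```python
import numpy as np

def translation_progress(footprint_positions_nt, cds_length_nt):
    """Return each footprint's progress through the protein as a fraction from 0 to 1."""
    positions = np.asarray(footprint_positions_nt, dtype=float)
    return positions / cds_length_nt

# A hypothetical 2,250-nt (750-codon) gene whose mitochondria-captured footprints
# cluster well past codon 250, consistent with import beginning mid-translation.
positions = np.array([820, 1104, 1490, 1733, 2010])
progress = translation_progress(positions, cds_length_nt=2250)
print(np.round(progress, 2))                              # [0.36 0.49 0.66 0.77 0.89]
print("median codon:", int(np.median(positions) // 3))    # median codon: 496
```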

“One advantage of our tool is the granularity it provides,” Luo says. “Being able to see what section of the protein is locally translated helps us understand more about how localized translation is regulated, which can then allow us to understand its dysregulation in disease and to control localized translation in future studies.”

Two protein groups are made at mitochondria

Using these approaches, the researchers found that about 20 percent of the genes needed in mitochondria that are located in the main cellular genome are locally translated at mitochondria. These proteins can be divided into two distinct groups with different evolutionary histories and mechanisms for localized translation.

One group consists of relatively long proteins, each containing more than 400 amino acids or protein building blocks. These proteins tend to be of bacterial origin — present in the ancestor of mitochondria — and they are locally translated in both mammalian and yeast cells, suggesting that their localized translation has been maintained through a long evolutionary history.

Like many mitochondrial proteins encoded in the nucleus, these proteins contain a mitochondrial targeting sequence (MTS), a ZIP code that tells the cell where to bring them. The researchers discovered that most proteins containing an MTS also contain a nearby inhibitory sequence that prevents transportation until they are done being made. This group of locally translated proteins lacks the inhibitory sequence, so they are brought to the mitochondria during their production.

Production of these longer proteins begins anywhere in the cell, and then after approximately the first 250 amino acids are made, they get transported to the mitochondria. While the rest of the protein gets made, it is simultaneously fed into a channel that brings it inside the mitochondrion. This ties up the channel for a long time, limiting import of other proteins, so cells can only afford to do this simultaneous production and import for select proteins. The researchers hypothesize that these bacterial-origin proteins are given priority as an ancient mechanism to ensure that they are accurately produced and placed within mitochondria.

The second locally translated group consists of short proteins, each less than 200 amino acids long. These proteins are more recently evolved, and correspondingly, the researchers found that the mechanism for their localized translation is not shared by yeast. Their mitochondrial recruitment happens at the RNA level. Two sequences within regulatory sections of each RNA molecule that do not encode the final protein instead code for the cell’s machinery to recruit the RNAs to the mitochondria.

The researchers searched for molecules that might be involved in this recruitment, and identified the RNA binding protein AKAP1, which exists at mitochondria. When they eliminated AKAP1, the short proteins were translated indiscriminately around the cell. This provided an opportunity to learn more about the effects of localized translation, by seeing what happens in its absence. When the short proteins were not locally translated, this led to the loss of various mitochondrial proteins, including those involved in oxidative phosphorylation, our cells’ main energy generation pathway.

In future research, Weissman and Luo will delve deeper into how localized translation affects mitochondrial function and dysfunction in disease. The researchers also intend to use LOCL-TL to study localized translation in other cellular processes, including in relation to embryonic development, neural plasticity, and disease.

“This approach should be broadly applicable to different cellular structures and cell types, providing many opportunities to understand how localized translation contributes to biological processes,” Weissman says. “We’re particularly interested in what we can learn about the roles it may play in diseases including neurodegeneration, cardiovascular diseases, and cancers.”

SHASS announces appointments of new program and section heads for 2025-26

Wed, 08/27/2025 - 4:30pm

The MIT School of Humanities, Arts, and Social Sciences announced leadership changes in three of its academic units for the 2025-26 academic year.

“We have an excellent cohort of leaders coming in,” says Agustín Rayo, the Kenan Sahin Dean of the School of Humanities, Arts, and Social Sciences. “I very much look forward to working with them and welcoming them into the school's leadership team.”

Sandy Alexandre will serve as head of MIT Literature. Alexandre is an associate professor of literature and served as co-head of the section in 2024-25. Her research spans the late 19th-century to present-day Black American literature and culture. Her first book, “The Properties of Violence: Claims to Ownership in Representations of Lynching,” uses the history of American lynching violence as a framework to understand matters concerning displacement, property ownership, and the American pastoral ideology in a literary context. Her work thoughtfully explores how literature envisions ecologies of people, places, and objects as recurring echoes of racial violence, resonating across the long arc of U.S. history. She earned a bachelor’s degree in English language and literature from Dartmouth College and a master’s and PhD in English from the University of Virginia.

Manduhai Buyandelger will serve as director of the Program in Women’s and Gender Studies. A professor of anthropology, Buyandelger’s research seeks to find solutions for achieving more-integrated (and less-violent) lives for humans and non-humans by examining the politics of multi-species care and exploitation, urbanization, and how diverse material and spiritual realities interact and shape the experiences of different beings. By examining urban multi-species coexistence in different places in Mongolia, the United States, Japan, and elsewhere, her study probes possibilities for co-cultivating an integrated multi-species existence. She is also developing an anthro-engineering project with the MIT Department of Nuclear Science and Engineering (NSE) to explore pathways to decarbonization in Mongolia by examining user-centric design and responding to political and cultural constraints on clean-energy issues. She offers a transdisciplinary course with NSE, 21A.S01 (Anthro-Engineering: Decarbonization at the Million Person Scale), in collaboration with her colleagues in Mongolia’s capital, Ulaanbaatar. She has written two books on religion, gender, and politics in post-socialist Mongolia: “Tragic Spirits: Shamanism, Gender, and Memory in Contemporary Mongolia” (University of Chicago Press, 2013) and “A Thousand Steps to the Parliament: Constructing Electable Women in Mongolia” (University of Chicago Press, 2022). Her essays have appeared in American Ethnologist, Journal of Royal Anthropological Association, Inner Asia, and Annual Review of Anthropology. She earned a BA in literature and linguistics and an MA in philology from the National University of Mongolia, and a PhD in social anthropology from Harvard University.

Eden Medina PhD ’05 will serve as head of the Program in Science, Technology, and Society. A professor of science, technology, and society, Medina studies the relationship of science, technology, and processes of political change in Latin America. She is the author of “Cybernetic Revolutionaries: Technology and Politics in Allende's Chile” (MIT Press, 2011), which won the 2012 Edelstein Prize for best book on the history of technology and the 2012 Computer History Museum Prize for best book on the history of computing. Her co-edited volume “Beyond Imported Magic: Essays on Science, Technology, and Society in Latin America” (MIT Press, 2014) received the Amsterdamska Award from the European Society for the Study of Science and Technology (2016). In addition to her writings, Medina co-curated the exhibition “How to Design a Revolution: The Chilean Road to Design,” which opened in 2023 at the Centro Cultural La Moneda in Santiago, Chile, and is currently on display at the design museum Disseny Hub in Barcelona, Spain. She holds a PhD in the history and social study of science and technology from MIT and a master’s degree in studies of law from Yale Law School. She worked as an electrical engineer prior to starting her graduate studies.

Fikile Brushett named director of MIT chemical engineering practice school

Wed, 08/27/2025 - 4:15pm

Fikile R. Brushett, a Ralph Landau Professor of Chemical Engineering Practice, was named director of MIT’s David H. Koch School of Chemical Engineering Practice, effective July 1. In this role, Brushett will lead one of MIT’s most innovative and distinctive educational programs.

Brushett joined the chemical engineering faculty in 2012 and has been a deeply engaged member of the department. An internationally recognized leader in the field of energy storage, his research advances the science and engineering of electrochemical technologies for a sustainable energy economy. He is particularly interested in the fundamental processes that define the performance, cost, and lifetime of present-day and next-generation electrochemical systems. In addition to his research, Brushett has served as a first-year undergraduate advisor, as a member of the department’s graduate admissions committee, and on MIT’s Committee on the Undergraduate Program.

“Fik’s scholarly excellence and broad service position him perfectly to take on this new challenge,” says Kristala L. J. Prather, the Arthur D. Little Professor and head of the Department of Chemical Engineering (ChemE). “His role as practice school director reflects not only his technical expertise, but his deep commitment to preparing students for meaningful, impactful careers. I’m confident he will lead the practice school with the same spirit of excellence and innovation that has defined the program for generations.”

Brushett succeeds T. Alan Hatton, a Ralph Landau Professor of Chemical Engineering Practice Post-Tenure, who directed the practice school for 36 years. For many, Hatton’s name is synonymous with the program. When he became director in 1989, only a handful of major chemical companies hosted stations.

“I realized that focusing on one industry segment was not sustainable and did not reflect the breadth of a chemical engineering education,” Hatton recalls. “So I worked to modernize the experience for students and have it reflect the many ways chemical engineers practice in the modern world.”

Under Hatton’s leadership, the practice school expanded globally and across industries, providing students with opportunities to work on diverse technologies in a wide range of locations. He pioneered the model of recruiting new companies each year, allowing many more firms to participate while also spreading costs across a broader sponsor base. He also introduced an intensive, hands-on project management course at MIT during Independent Activities Period, which has become a valuable complement to students’ station work and future careers.

Value for students and industry

The practice school benefits not only students, but also the companies that host them. By embedding teams directly into manufacturing plants and R&D centers, businesses gain fresh perspectives on critical technical challenges, coupled with the analytical rigor of MIT-trained problem-solvers. Many sponsors report that projects completed by practice school students have yielded measurable cost savings, process improvements, and even new opportunities for product innovation.

For manufacturing industries, where efficiency, safety, and sustainability are paramount, the program provides actionable insights that help companies strengthen competitiveness and accelerate growth. The model creates a unique partnership: students gain real-world training, while companies benefit from MIT expertise and the creativity of the next generation of chemical engineers.

A century of hands-on learning

Founded in 1916 by MIT chemical engineering alumnus Arthur D. Little and Professor William Walker, with funding from George Eastman of Eastman Kodak, the practice school was designed to add a practical dimension to chemical engineering education. The first five sites — all in the Northeast — focused on traditional chemical industries working on dyes, abrasives, solvents, and fuels.

Today, the program remains unique in higher education. Students consult with companies worldwide across fields ranging from food and pharmaceuticals to energy and finance, tackling some of industry’s toughest challenges. More than a hundred years after its founding, the practice school continues to embody MIT’s commitment to hands-on, problem-driven learning that transforms both students and the industries they serve.

The practice school experience is part of ChemE’s MSCEP and PhD/ScDCEP programs. After coursework for each program is completed, a student attends practice school stations at host company sites. A group of six to 10 students spends two months each at two stations; each station experience includes teams of two or three students working on a month-long project, for which they prepare formal talks, a scope of work, and a final report for the host company. Recent stations include Evonik in Marl, Germany; AstraZeneca in Gaithersburg, Maryland; EGA in Dubai, UAE; AspenTech in Bedford, Massachusetts; and Shell Technology Center and Dimensional Energy in Houston, Texas.

New method could monitor corrosion and cracking in a nuclear reactor

Wed, 08/27/2025 - 3:30pm

MIT researchers have developed a technique that enables real-time, 3D monitoring of corrosion, cracking, and other material failure processes inside a nuclear reactor environment.

This could allow engineers and scientists to design safer nuclear reactors that also deliver higher performance for applications like electricity generation and naval vessel propulsion.

During their experiments, the researchers utilized extremely powerful X-rays to mimic the behavior of neutrons interacting with a material inside a nuclear reactor.

They found that adding a buffer layer of silicon dioxide between the material and its substrate, and keeping the material under the X-ray beam for a longer period of time, improves the stability of the sample. This allows for real-time monitoring of material failure processes.

By reconstructing 3D image data on the structure of a material as it fails, researchers could design more resilient materials that can better withstand the stress caused by irradiation inside a nuclear reactor.

“If we can improve materials for a nuclear reactor, it means we can extend the life of that reactor. It also means the materials will take longer to fail, so we can get more use out of a nuclear reactor than we do now. The technique we’ve demonstrated here allows us to push the boundary in understanding how materials fail in real time,” says Ericmoore Jossou, who holds shared appointments in the Department of Nuclear Science and Engineering (NSE), where he is the John Clark Hardwick Professor; the Department of Electrical Engineering and Computer Science (EECS); and the MIT Schwarzman College of Computing.

Jossou, senior author of a study on this technique, is joined on the paper by lead author David Simonne, an NSE postdoc; Riley Hultquist, a graduate student in NSE; Jiangtao Zhao, of the European Synchrotron; and Andrea Resta, of Synchrotron SOLEIL. The research was published Tuesday in the journal Scripta Materialia.

“Only with this technique can we measure strain with a nanoscale resolution during corrosion processes. Our goal is to bring such novel ideas to the nuclear science community while using synchrotrons both as an X-ray probe and radiation source,” adds Simonne.

Real-time imaging

Studying real-time failure of materials used in advanced nuclear reactors has long been a goal of Jossou’s research group.

Usually, researchers can only learn about such material failures after the fact, by removing the material from its environment and imaging it with a high-resolution instrument.

“We are interested in watching the process as it happens. If we can do that, we can follow the material from beginning to end and see when and how it fails. That helps us understand a material much better,” he says.

They simulate the process by firing an extremely focused X-ray beam at a sample to mimic the environment inside a nuclear reactor. The researchers must use a special type of high-intensity X-ray, which is only found in a handful of experimental facilities worldwide.

For these experiments they studied nickel, a material incorporated into alloys that are commonly used in advanced nuclear reactors. But before they could start the X-ray equipment, they had to prepare a sample.

To do this, the researchers used a process called solid state dewetting, which involves putting a thin film of the material onto a substrate and heating it to an extremely high temperature in a furnace until it transforms into single crystals.

“We thought making the samples was going to be a walk in the park, but it wasn’t,” Jossou says.

As the nickel heated up, it interacted with the silicon substrate and formed a new chemical compound, essentially derailing the entire experiment. After much trial-and-error, the researchers found that adding a thin layer of silicon dioxide between the nickel and substrate prevented this reaction.

But when crystals formed on top of the buffer layer, they were highly strained. This means the individual atoms had moved slightly to new positions, causing distortions in the crystal structure.

Phase retrieval algorithms can typically recover the 3D size and shape of a crystal in real-time, but if there is too much strain in the material, the algorithms will fail.

However, the team was surprised to find that keeping the X-ray beam trained on the sample for a longer period of time caused the strain to slowly relax, due to the silicon dioxide buffer layer. After a few extra minutes of X-rays, the sample was stable enough that they could use phase retrieval algorithms to accurately recover the 3D shape and size of the crystal.
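For a sense of what such a reconstruction step involves, below is a minimal sketch of one classic phase-retrieval loop (error reduction), with placeholder inputs; it illustrates the general technique rather than the specific algorithms used in this study.

```python
import numpy as np

def error_reduction(measured_magnitude, support, n_iter=200):
    """Recover a complex object whose Fourier magnitude matches the measured data.

    measured_magnitude : square root of the measured diffraction intensities
    support            : boolean mask marking where the crystal may be nonzero
    """
    rng = np.random.default_rng(0)
    # Start from the measured magnitudes combined with random phases.
    phase = np.exp(1j * 2 * np.pi * rng.random(measured_magnitude.shape))
    obj = np.fft.ifftn(measured_magnitude * phase)

    for _ in range(n_iter):
        # Fourier-space constraint: keep the current phase, impose the measured magnitude.
        field = np.fft.fftn(obj)
        field = measured_magnitude * np.exp(1j * np.angle(field))
        obj = np.fft.ifftn(field)
        # Real-space constraint: the object must vanish outside its support.
        obj = obj * support

    return obj  # complex-valued; in Bragg imaging, its phase relates to lattice strain
```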

“No one had been able to do that before. Now that we can make this crystal, we can image electrochemical processes like corrosion in real time, watching the crystal fail in 3D under conditions that are very similar to inside a nuclear reactor. This has far-reaching impacts,” he says.

They also experimented with other substrates, such as niobium-doped strontium titanate, and found that only a silicon wafer buffered with silicon dioxide created this unique effect.

An unexpected result

As they fine-tuned the experiment, the researchers discovered something else.

They could also use the X-ray beam to precisely control the amount of strain in the material, which could have implications for the development of microelectronics.

In the microelectronics community, engineers often introduce strain to deform a material’s crystal structure in a way that boosts its electrical or optical properties.

“With our technique, engineers can use X-rays to tune the strain in microelectronics while they are manufacturing them. While this was not our goal with these experiments, it is like getting two results for the price of one,” he adds.

In the future, the researchers want to apply this technique to more complex materials like steel and other metal alloys used in nuclear reactors and aerospace applications. They also want to see how changing the thickness of the silicon dioxide buffer layer impacts their ability to control the strain in a crystal sample.

“This discovery is significant for two reasons. First, it provides fundamental insight into how nanoscale materials respond to radiation — a question of growing importance for energy technologies, microelectronics, and quantum materials. Second, it highlights the critical role of the substrate in strain relaxation, showing that the supporting surface can determine whether particles retain or release strain when exposed to focused X-ray beams,” says Edwin Fohtung, an associate professor at the Rensselaer Polytechnic Institute, who was not involved with this work.

This work was funded, in part, by the MIT Faculty Startup Fund and the U.S. Department of Energy. The sample preparation was carried out, in part, at the MIT.nano facilities.

Professor Emeritus Rainer Weiss, influential physicist who forged new paths to understanding the universe, dies at 92

Tue, 08/26/2025 - 6:50pm

MIT Professor Emeritus Rainer Weiss ’55, PhD ’62, a renowned experimental physicist and Nobel laureate whose groundbreaking work confirmed a longstanding prediction about the nature of the universe, passed away on Aug. 25. He was 92.

Weiss conceived of the Laser Interferometer Gravitational-Wave Observatory (LIGO) for detecting ripples in space-time known as gravitational waves, and was later a leader of the team that built LIGO and achieved the first-ever detection of gravitational waves. He shared the Nobel Prize in Physics for this work in 2017. Together with international collaborators, he and his colleagues at LIGO would go on to detect many more of these cosmic reverberations, opening up a new way for scientists to view the universe.

During his remarkable career, Weiss also developed a more precise atomic clock and figured out how to measure the spectrum of the cosmic microwave background via a weather balloon. He later co-founded and advanced the NASA Cosmic Background Explorer project, whose measurements helped support the Big Bang theory describing the expansion of the universe.

“Rai leaves an indelible mark on science and a gaping hole in our lives,” says Nergis Mavalvala PhD ’97, dean of the MIT School of Science and the Curtis and Kathleen Marble Professor of Astrophysics. As a doctoral student with Weiss in the 1990s, Mavalvala worked with him to build an early prototype of a gravitational-wave detector as part of her PhD thesis. “He will be so missed but has also gifted us a singular legacy. Every gravitational wave event we observe will remind us of him, and we will smile. I am indeed heartbroken, but also so grateful for having him in my life, and for the incredible gifts he has given us — of passion for science and discovery, but most of all to always put people first,” she says.

A member of the MIT physics faculty since 1964, Weiss was known as a committed mentor and teacher, as well as a dedicated researcher. 

“Rai’s ingenuity and insight as an experimentalist and a physicist were legendary,” says Deepto Chakrabarty, the William A. M. Burden Professor in Astrophysics and head of the Department of Physics. “His no-nonsense style and gruff manner belied a very close, supportive and collaborative relationship with his students, postdocs, and other mentees. Rai was a thoroughly MIT product.”

“Rai held a singular position in science: He was the creator of two fields — measurements of the cosmic microwave background and of gravitational waves. His students have gone on to lead both fields and carried Rai’s rigor and decency to both. He not only created a huge part of important science, he also populated them with people of the highest caliber and integrity,” says Peter Fisher, the Thomas A. Frank Professor of Physics and former head of the physics department.

Enabling a new era in astrophysics

LIGO is a system of two identical detectors located 1,865 miles apart. By sending finely tuned lasers back and forth through the detectors, scientists can detect perturbations caused by gravitational waves, whose existence was predicted by Albert Einstein. These discoveries illuminate ancient collisions and other events in the early universe, and have confirmed Einstein’s theory of general relativity. Today, the LIGO Scientific Collaboration involves hundreds of scientists at MIT, Caltech, and other universities, and, together with the Virgo and KAGRA observatories in Italy and Japan, makes up the global LVK Collaboration — but five decades ago, the instrument concept was an MIT class exercise conceived by Weiss.

As he told MIT News in 2017, in generating the initial idea, Weiss wondered: “What’s the simplest thing I can think of to show these students that you could detect the influence of a gravitational wave?”

To realize the audacious design, Weiss teamed up in 1976 with physicist Kip Thorne, who, based in part on conversations with Weiss, soon seeded the creation of a gravitational wave experiment group at Caltech. The two formed a collaboration between MIT and Caltech, and in 1979, the late Scottish physicist Ronald Drever, then of the University of Glasgow, joined the effort at Caltech. The three scientists — who became the co-founders of LIGO — worked to refine the dimensions and scientific requirements for an instrument sensitive enough to detect a gravitational wave. Barry Barish later joined the team at Caltech, helping to secure funding and bring the detectors to completion.

After receiving support from the National Science Foundation, LIGO broke ground in the mid-1990s, constructing interferometric detectors in Hanford, Washington, and in Livingston, Louisiana. 

Years later, when he shared the Nobel Prize with Thorne and Barish for his work on LIGO, Weiss noted that hundreds of colleagues had helped to push forward the search for gravitational waves.

“The discovery has been the work of a large number of people, many of whom played crucial roles,” Weiss said at an MIT press conference. “I view receiving this [award] as sort of a symbol of the various other people who have worked on this.”

He continued: “This prize and others that are given to scientists is an affirmation by our society of [the importance of] gaining information about the world around us from reasoned understanding of evidence.”

“While I have always been amazed and guided by Rai’s ingenuity, integrity, and humility, I was most impressed by his breadth of vision and ability to move between worlds,” says Matthew Evans, the MathWorks Professor of Physics. “He could seamlessly shift from the smallest technical detail of an instrument to the global vision for a future observatory. In the last few years, as the idea for a next-generation gravitational-wave observatory grew, Rai would often be at my door, sharing ideas for how to move the project forward on all levels. These discussions ranged from quantum mechanics to global politics, and Rai’s insights and efforts have set the stage for the future.”

A lifelong fascination with hard problems

Weiss was born in 1932 in Berlin. His family fled Nazi Germany to Prague and then emigrated to New York City, where Weiss grew up with a love for classical music and electronics, earning money by fixing radios.

He enrolled at MIT, then dropped out of school in his junior year, only to return shortly after, taking a job as a technician in the former Building 20. There, Weiss met physicist Jerrold Zacharias, who encouraged him as he completed his undergraduate degree in 1955 and his PhD in 1962.

Weiss spent some time at Princeton University as a postdoc in the legendary group led by Robert Dicke, where he developed experiments to test gravity. He returned to MIT as an assistant professor in 1964, starting a new research group in the Research Laboratory of Electronics dedicated to cosmology and gravitation.

Weiss received numerous awards and honors in addition to the Nobel Prize, including the Medaille de l’ADION, the 2006 Gruber Prize in Cosmology, and the 2007 Einstein Prize of the American Physical Society. He was a fellow of the American Association for the Advancement of Science, the American Academy of Arts and Sciences, and the American Physical Society, as well as a member of the National Academy of Sciences. In 2016, Weiss received a Special Breakthrough Prize in Fundamental Physics, the Gruber Prize in Cosmology, the Shaw Prize in Astronomy, and the Kavli Prize in Astrophysics, all shared with Drever and Thorne. He also shared the Princess of Asturias Award for Technical and Scientific Research with Thorne, Barry Barish of Caltech, and the LIGO Scientific Collaboration.

Weiss is survived by his wife, Rebecca; his daughter, Sarah, and her husband, Tony; his son, Benjamin, and his wife, Carla; and a grandson, Sam, and his wife, Constance. Details about a memorial are forthcoming.

This article may be updated.

Simpler models can outperform deep learning at climate prediction

Tue, 08/26/2025 - 9:00am

Environmental scientists are increasingly using enormous artificial intelligence models to make predictions about changes in weather and climate, but a new study by MIT researchers shows that bigger models are not always better.

The team demonstrates that, in certain climate scenarios, much simpler, physics-based models can generate more accurate predictions than state-of-the-art deep-learning models.

Their analysis also reveals that a benchmarking technique commonly used to evaluate machine-learning techniques for climate predictions can be distorted by natural variations in the data, like fluctuations in weather patterns. This could lead someone to believe a deep-learning model makes more accurate predictions when that is not the case.

The researchers developed a more robust way of evaluating these techniques, which shows that, while simple models are more accurate when estimating regional surface temperatures, deep-learning approaches can be the best choice for estimating local rainfall.

They used these results to enhance a simulation tool known as a climate emulator, which can rapidly simulate the effects of human activities on the future climate.

The researchers see their work as a “cautionary tale” about the risk of deploying large AI models for climate science. While deep-learning models have shown incredible success in domains such as natural language, climate science rests on a proven set of physical laws and approximations, and the challenge is how to incorporate those into AI models.

“We are trying to develop models that are going to be useful and relevant for the kinds of things that decision-makers need going forward when making climate policy choices. While it might be attractive to use the latest, big-picture machine-learning model on a climate problem, what this study shows is that stepping back and really thinking about the problem fundamentals is important and useful,” says study senior author Noelle Selin, a professor in the MIT Institute for Data, Systems, and Society (IDSS) and the Department of Earth, Atmospheric and Planetary Sciences (EAPS), and director of the Center for Sustainability Science and Strategy.

Selin’s co-authors are lead author Björn Lütjens, a former EAPS postdoc who is now a research scientist at IBM Research; senior author Raffaele Ferrari, the Cecil and Ida Green Professor of Oceanography in EAPS and co-director of the Lorenz Center; and Duncan Watson-Parris, assistant professor at the University of California at San Diego. Selin and Ferrari are also co-principal investigators of the Bringing Computation to the Climate Challenge project, out of which this research emerged. The paper appears today in the Journal of Advances in Modeling Earth Systems.

Comparing emulators

Because the Earth’s climate is so complex, running a state-of-the-art climate model to predict how pollution levels will impact environmental factors like temperature can take weeks on the world’s most powerful supercomputers.

Scientists often create climate emulators, simpler approximations of a state-of-the-art climate model, which are faster and more accessible. A policymaker could use a climate emulator to see how alternative assumptions about greenhouse gas emissions would affect future temperatures, helping them develop regulations.

But an emulator isn’t very useful if it makes inaccurate predictions about the local impacts of climate change. While deep learning has become increasingly popular for emulation, few studies have explored whether these models perform better than tried-and-true approaches.

The MIT researchers performed such a study. They compared a traditional technique called linear pattern scaling (LPS) with a deep-learning model using a common benchmark dataset for evaluating climate emulators.

Their results showed that LPS outperformed the deep-learning models at predicting nearly all the parameters they tested, including temperature and precipitation.
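For context, linear pattern scaling fits each grid cell's response as a linear function of the global-mean temperature change, then predicts local fields from a global-mean trajectory alone. The Python sketch below illustrates that general idea under assumed array shapes; it is not the benchmark code used in the study.

    import numpy as np

    def fit_linear_pattern_scaling(global_mean_T, local_fields):
        # global_mean_T: (n_samples,) global-mean temperature change
        # local_fields:  (n_samples, n_lat, n_lon) local variable, e.g., surface temperature
        n, nlat, nlon = local_fields.shape
        X = np.column_stack([global_mean_T, np.ones(n)])  # slope and intercept design matrix
        Y = local_fields.reshape(n, -1)                   # flatten the spatial grid
        coef, *_ = np.linalg.lstsq(X, Y, rcond=None)      # least-squares fit for every grid cell at once
        return coef[0].reshape(nlat, nlon), coef[1].reshape(nlat, nlon)

    def predict_local_field(global_mean_T, slope, intercept):
        # Local response reconstructed from the global-mean trajectory alone.
        return global_mean_T[:, None, None] * slope + intercept

Because a fit like this has only two parameters per grid cell, it is fast to train and hard to overfit, which helps explain why such simple baselines can hold up against much larger models.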

“Large AI methods are very appealing to scientists, but they rarely solve a completely new problem, so implementing an existing solution first is necessary to find out whether the complex machine-learning approach actually improves upon it,” says Lütjens.

Some initial results seemed to fly in the face of the researchers’ domain knowledge. The powerful deep-learning model should have been more accurate when making predictions about precipitation, since those data don’t follow a linear pattern.

They found that the large amount of natural variability in climate model runs can cause the deep-learning model to perform poorly on unpredictable long-term oscillations, like El Niño/La Niña. This skews the benchmarking scores in favor of LPS, which averages out those oscillations.
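A toy example makes the pitfall concrete. In the hypothetical setup below, five runs of the same scenario differ only by internal variability, modeled here as random noise; an emulator that captures only the forced trend looks worse when scored against a single noisy run than when scored against the ensemble mean, even though it predicts everything that is predictable. The numbers and noise level are invented for illustration.

    import numpy as np

    def rmse(pred, target):
        return np.sqrt(np.mean((pred - target) ** 2))

    rng = np.random.default_rng(0)
    forced_signal = np.linspace(0.0, 2.0, 100)                   # idealized forced warming trend
    runs = forced_signal + 0.3 * rng.standard_normal((5, 100))   # five runs differing only by internal variability

    emulator = forced_signal  # an emulator that captures only the predictable forced response

    print(rmse(emulator, runs[0]))            # scored against one noisy run: looks artificially poor
    print(rmse(emulator, runs.mean(axis=0)))  # scored against the ensemble mean: reflects true skill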

Constructing a new evaluation

From there, the researchers constructed a new evaluation with more data that address natural climate variability. With this new evaluation, the deep-learning model performed slightly better than LPS for local precipitation, but LPS was still more accurate for temperature predictions.

“It is important to use the modeling tool that is right for the problem, but in order to do that you also have to set up the problem the right way in the first place,” Selin says.

Based on these results, the researchers incorporated LPS into a climate emulation platform to predict local temperature changes in different emission scenarios.

“We are not advocating that LPS should always be the goal. It still has limitations. For instance, LPS doesn’t predict variability or extreme weather events,” Ferrari adds.

Rather, they hope their results emphasize the need to develop better benchmarking techniques, which could provide a fuller picture of which climate emulation technique is best suited for a particular situation.

“With an improved climate emulation benchmark, we could use more complex machine-learning methods to explore problems that are currently very hard to address, like the impacts of aerosols or estimations of extreme precipitation,” Lütjens says.

Ultimately, more accurate benchmarking techniques will help ensure policymakers are making decisions based on the best available information.

The researchers hope others build on their analysis, perhaps by studying additional improvements to climate emulation methods and benchmarks. Such research could explore impact-oriented metrics like drought indicators and wildfire risks, or new variables like regional wind speeds.

This research is funded, in part, by Schmidt Sciences, LLC, and is part of the MIT Climate Grand Challenges team for “Bringing Computation to the Climate Challenge.”

On the joys of being head of house at McCormick Hall

Tue, 08/26/2025 - 9:00am

While sharing a single cup of coffee, Raul Radovitzky, the Jerome C. Hunsaker Professor in the Department of Aeronautics and Astronautics, and his wife Flavia Cardarelli, senior administrative assistant in the Institute for Data, Systems, and Society, recently discussed the love they have for their “nighttime jobs” living in McCormick Hall as faculty heads of house, and explained why it is so gratifying for them to be a part of this community.

The couple, married for 32 years, first met playing in a sandbox at the age of 3 in Argentina (but didn't start dating until they were in their 20s). Radovitzky has been a part of the MIT ecosystem since 2001, while Cardarelli began working at MIT in 2006. They became heads of house at McCormick Hall, the only all-female residence hall on campus, in 2015, and recently applied to extend their stay.

“Our head-of-house role is always full of surprises. We never know what we’ll encounter, but we love it. Students think we do this just for them, but in truth, it’s very rewarding for us as well. It keeps us on our toes and brings a lot of joy,” says Cardarelli. “We like to think of ourselves as the cool aunt and uncle for the students,” Radovitzky adds.

Heads of house at MIT influence many areas of students’ development by acting as advisors and mentors to their residents. Additionally, they work closely with the residence hall’s student government, as well as staff from the Division of Student Life, to foster their community’s culture.

Vice Chancellor for Student Life Suzy Nelson explains, “Our faculty heads of house have the long view at MIT and care deeply about students’ academic and personal growth. We are fortunate to have such dedicated faculty who serve in this way. The heads of house enhance the student experience in so many ways — whether it is helping a student with a personal problem, hosting Thanksgiving dinner for students who were not able to go home, or encouraging students to get involved in new activities, they are always there for students.”

“Our heads of house help our students fully participate in residential life. They model civil discourse at community dinners, mentor and tutor residents, and encourage residents to try new things. With great expertise and aplomb, they formally and informally help our students become their whole selves,” says Chancellor Melissa Nobles.

“I love teaching, I love conducting research with my group, and I enjoy serving as a head of house. The community aspect is deeply meaningful to me. MIT has become such a central part of our lives. Our kids are both MIT graduates, and we are incredibly proud of them. We do have a life outside of MIT — weekends with friends and family, personal activities — but MIT is a big part of who we are. It’s more than a job; it’s a community. We live on campus, and while it can be intense and demanding, we really love it,” says Radovitzky.

Jessica Quaye ’20, a former resident of McCormick Hall, says, “What sets McCormick apart is the way Raul and Flavia transform the four dorm walls into a home for everyone. You might come to McCormick alone, but you never leave alone. If you ran into them somewhere on campus, you could be sure that they would call out to you and wave excitedly. You could invite Raul and Flavia to your concerts and they would show up to support your extracurricular endeavors. They built an incredible family that carries the fabric of MIT with a blend of academic brilliance, a warm open-door policy, and unwavering support for our extracurricular pursuits.”

Soundbytes

Q: What first drew you to the heads of house role?

Radovitzky: I had been aware of the role since I arrived at MIT, and over time, I started to wonder if it might be something we’d consider. When our kids were young, it didn’t seem feasible — we lived in the suburbs, and life there was good. But I always had an innate interest in building stronger connections with the student community.

Later, several colleagues encouraged us to apply. I discussed it with the family. Everyone was excited about it. Our teenagers were thrilled by the idea of living on a college campus. We applied together, submitting a letter as a family explaining why we were so passionate about it. We interviewed at McCormick, Baker, and MacGregor. When we were offered McCormick, I’ll admit — I was nervous. I wasn’t sure I’d be the right fit for an all-female residence.

Cardarelli: We would have been nervous no matter where we ended up, but McCormick felt like home. It suited us in ways we didn’t anticipate. Raul, for instance, discovered he had a real rapport with the students, telling goofy jokes, making karaoke playlists, and learning about Taylor Swift and Nicki Minaj.

Radovitzky: It’s true! I never knew I’d become an expert at picking karaoke playlists. But we found our rhythm here, and it’s been deeply rewarding.

Q: What makes the McCormick community special?

Radovitzky: McCormick has a unique spirit. I can step out of our apartment and be greeted by 10 smiling faces. That energy is contagious. It’s not just about events or programming — it’s about building trust. We’ve built traditions around that, like our “make your own pizza” nights in our apartment, a wonderful McCormick event we inherited from our predecessors. We host four sessions each spring in which students roll out dough and choose toppings, and we chat as we cook and eat together. Everyone remembers the pizza nights — they’re mentioned in every testimonial.

Cardarelli: We’ve been lucky to have amazing graduate resident assistants and area directors every year. They’re essential partners in building community and play a key role in supporting the students on their floors. They help with everything — from tutoring to events to walking students to urgent care if needed.

Radovitzky: In the fall, we take our residents to Crane Beach and host a welcome brunch. Karaoke in our apartment is a big hit too, and a unique way to make them comfortable coming to our apartment from day one. We do it three times a year — during orientation, and again each semester.

Cardarelli: We also host monthly barbecues open to all dorms and run McFast, our first-year tutoring program. Raul started by tutoring physics and math, four hours a week. Now, upperclass students lead most of the sessions. It’s great for both academic support and social connection.

Radovitzky: We also have an Independent Activities Period pasta night tradition. We cook for around 100 students, using four sauces that Flavia makes from scratch — bolognese, creamy mushroom, marinara, and pesto. Students love it.

Q: What’s unique about working in an all-female residence hall?

Cardarelli: I’ve helped students hem dresses, bake, and even apply makeup. It’s like having hundreds of daughters.

Radovitzky: The students here are incredibly mature and engaged. They show real interest in us as people. Many of the activities and connections we’ve built wouldn’t be possible in a different setting. Every year during “de-stress night,” I get my nails painted every color and have a face mask on. During “Are You Smarter Than an MIT Professor,” they dunk me in a water tank.

Engineering fantasy into reality

Tue, 08/26/2025 - 12:00am

Growing up in the suburban town of Spring, Texas, just outside of Houston, Erik Ballesteros couldn’t help but be drawn in by the possibilities for humans in space.

It was the early 2000s, and NASA’s space shuttle program was the main transport for astronauts to the International Space Station (ISS). Ballesteros’ hometown was less than an hour from Johnson Space Center (JSC), where NASA’s mission control center and astronaut training facility are based. And as often as they could, he and his family would drive to JSC to check out the center’s public exhibits and presentations on human space exploration.

For Ballesteros, the highlight of these visits was always the tram tour, which brings visitors to JSC’s Astronaut Training Facility. There, the public can watch astronauts test out spaceflight prototypes and practice various operations in preparation for living and working on the International Space Station.

“It was a really inspiring place to be, and sometimes we would meet astronauts when they were doing signings,” he recalls. “I’d always see the gates where the astronauts would go back into the training facility, and I would think: One day I’ll be on the other side of that gate.”

Today, Ballesteros is a PhD student in mechanical engineering at MIT, and has already made good on his childhood goal. Before coming to MIT, he interned on multiple projects at JSC, working in the training facility to help test new spacesuit materials, portable life support systems, and a propulsion system for a prototype Mars rocket. He also helped train astronauts to operate the ISS’ emergency response systems.

Those early experiences steered him to MIT, where he hopes to make a more direct impact on human spaceflight. He and his advisor, Harry Asada, are building a system that will quite literally provide helping hands to future astronauts. The system, dubbed SuperLimbs, consists of a pair of wearable robotic arms that extend out from a backpack, similar to the fictional Inspector Gadget, or Doctor Octopus (“Doc Ock,” to comic book fans). Ballesteros and Asada are designing the robotic arms to be strong enough to lift an astronaut back up if they fall. The arms could also crab-walk around a spacecraft’s exterior as an astronaut inspects or makes repairs.

Ballesteros is collaborating with engineers at the NASA Jet Propulsion Laboratory to refine the design, which he plans to introduce to astronauts at JSC in the next year or two, for practical testing and user feedback. He says his time at MIT has helped him make connections across academia and in industry that have fueled his life and work.

“Success isn’t built by the actions of one, but rather it’s built on the shoulders of many,” Ballesteros says. “Connections — ones that you not just have, but maintain — are so vital to being able to open new doors and keep great ones open.”

Getting a jumpstart

Ballesteros didn’t always seek out those connections. As a kid, he counted down the minutes until the end of school, when he could go home to play video games and watch movies, “Star Wars” being a favorite. He also loved to create and had a talent for cosplay, tailoring intricate, life-like costumes inspired by cartoon and movie characters.

In high school, he took an introductory engineering class that challenged students to build robots from kits and pit them against each other, BattleBots-style. Ballesteros built a robotic ball that moved by shifting an internal weight, similar to Star Wars’ fictional, sphere-shaped BB-8.

“It was a good introduction, and I remember thinking, this engineering thing could be fun,” he says.

After graduating high school, Ballesteros attended the University of Texas at Austin, where he pursued a bachelor’s degree in aerospace engineering. What would typically be a four-year degree stretched into an eight-year period during which Ballesteros combined college with multiple work experiences, taking on internships at NASA and elsewhere. 

In 2013, he interned at Lockheed Martin, where he contributed to various aspects of jet engine development. That experience unlocked a number of other aerospace opportunities. After a stint at NASA’s Kennedy Space Center, he went on to Johnson Space Center, where, as part of a co-op program called Pathways, he returned every spring or summer over the next five years, to intern in various departments across the center.

While the time at JSC gave him a huge amount of practical engineering experience, Ballesteros still wasn’t sure if it was the right fit. Along with his childhood fascination with astronauts and space, he had always loved cinema and the special effects behind it. In 2018, he took a year off from the NASA Pathways program to intern at Disney, where he spent the spring semester working as a safety engineer, performing safety checks on Disney rides and attractions.

During this time, he got to know a few people in Imagineering — the research and development group that creates, designs, and builds rides, theme parks, and attractions. That summer, the group took him on as an intern, and he worked on the animatronics for upcoming rides, which involved translating certain scenes in a Disney movie into practical, safe, and functional scenes in an attraction.

“In animation, a lot of things they do are fantastical, and it was our job to find a way to make them real,” says Ballesteros, who loved every moment of the experience and hoped to be hired as an Imagineer after the internship came to an end. But he had one year left in his undergraduate degree and had to move on.

After graduating from UT Austin in December 2019, Ballesteros accepted a position at NASA’s Jet Propulsion Laboratory in Pasadena, California. He started at JPL in February of 2020, working on some last adjustments to the Mars Perseverance rover. A few months later, after JPL shifted to remote work during the Covid pandemic, Ballesteros was assigned to a project to develop a self-diagnosing spacecraft monitoring system. While working with that team, he met an engineer who was a former lecturer at MIT. As a practical suggestion, she nudged Ballesteros to consider pursuing a master’s degree, to add more value to his CV.

“She opened up the idea of going to grad school, which I hadn’t ever considered,” he says.

Full circle

In 2021, Ballesteros arrived at MIT to begin a master’s program in mechanical engineering. While interviewing with potential advisors, he immediately hit it off with Harry Asada, the Ford Professor of Engineering and director of the d'Arbeloff Laboratory for Information Systems and Technology. Years ago, Asada had pitched JPL an idea for wearable robotic arms to aid astronauts, which they quickly turned down. But Asada held onto the idea, and proposed that Ballesteros take it on as a feasibility study for his master’s thesis.

The project would require bringing a seemingly sci-fi idea into practical, functional form, for use by astronauts in future space missions. For Ballesteros, it was the perfect challenge. SuperLimbs became the focus of his master’s degree, which he earned in 2023. His initial plan was to return to industry, degree in hand. But he chose to stay at MIT to pursue a PhD, so that he could continue his work with SuperLimbs in an environment where he felt free to explore and try new things.

“MIT is like nerd Hogwarts,” he says. “One of the dreams I had as a kid was about the first day of school, and being able to build and be creative, and it was the happiest day of my life. And at MIT, I felt like that dream became reality.”

Ballesteros and Asada are now further developing SuperLimbs. The team recently re-pitched the idea to engineers at JPL, who reconsidered, and have since struck up a partnership to help test and refine the robot. In the next year or two, Ballesteros hopes to bring a fully functional, wearable design to Johnson Space Center, where astronauts can test it out in space-simulated settings.

In addition to his formal graduate work, Ballesteros has found a way to have a bit of Imagineer-like fun. He is a member of the MIT Robotics Team, which designs, builds, and runs robots in various competitions and challenges. Within this club, Ballesteros has formed a sub-club of sorts, called the Droid Builders, which aims to build animatronic droids from popular movies and franchises.

“I thought I could use what I learned from Imagineering and teach undergrads how to build robots from the ground up,” he says. “Now we’re building a full-scale WALL-E that could be fully autonomous. It’s cool to see everything come full circle.”
