MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

Understanding ammonia energy’s tradeoffs around the world

Tue, 01/13/2026 - 12:00am

Many people are optimistic about ammonia’s potential as an energy source and carrier of hydrogen, and though large-scale adoption would require major changes to the way it is currently manufactured, ammonia does have a number of advantages. For one thing, ammonia is energy-dense and carbon-free. It is also already produced at scale and shipped around the world, primarily for use in fertilizer.

Though current manufacturing processes give ammonia an enormous carbon footprint, cleaner ways to make ammonia do exist. A better understanding of how to guide the ammonia fuel industry’s continued development could help reduce carbon emissions and energy costs and improve regional energy balances.

In a new paper, MIT Energy Initiative (MITEI) researchers created the largest combined dataset showing the economic and environmental impact of global ammonia supply chains under different scenarios. They examined potential ammonia flows across 63 countries and considered a variety of country-specific economic parameters as well as low- and no-carbon ammonia production technologies. The results should help researchers, policymakers, and industry stakeholders calculate the cost and lifecycle emissions of different ammonia production technologies and trade routes.

“This is the most comprehensive work on the global ammonia landscape,” says senior author Guiyan Zang, a research scientist at MITEI. “We developed many of these frameworks at MIT to be able to make better cost-benefit analyses. Hydrogen and ammonia are the only two types of fuel with no carbon at scale. If we want to use fuel to generate power and heat, but not release carbon, hydrogen and ammonia are the only options, and ammonia is easier to transport and lower-cost.”

The study provides the clearest view yet of the tradeoffs associated with various ammonia production technologies. The researchers found, for instance, that a full transition to ammonia produced using conventional processes paired with carbon capture could cut global greenhouse gas emissions by nearly 71 percent for a 23.2 percent cost increase. A transition to electrolyzed ammonia produced using renewable energy could reduce greenhouse gas emissions by 99.7 percent for a 46 percent cost increase.

“Before this, there were no harmonized datasets quantifying the impacts of this transition,” says lead author Woojae Shin, a postdoc at MITEI. “Everyone is talking about ammonia as a super important hydrogen carrier in the future, and also ammonia can be directly used in power generation or fertilizer and other industrial uses. But we needed this dataset. It’s filling a major knowledge gap.”

The paper appears in Energy &amp; Environmental Science. Former MITEI postdocs Haoxiang Lai and Gasim Ibrahim are also co-authors.

Filling a data gap

Today ammonia is mainly produced through the Haber-Bosch process, which in 2020 was responsible for about 1.8 percent of global greenhouse gas emissions. Although current ammonia production is energy-intensive and polluting (referred to as gray ammonia), ammonia can also be produced sustainably using renewable sources (green ammonia) or with natural gas and carbon sequestration (blue ammonia).

As ammonia has increasingly attracted interest as a carbon-free energy source and a medium for hydrogen transport, it’s become more important to quantify the costs and life-cycle emissions associated with various ammonia production technologies, as well as ammonia storage and shipping routes. But existing studies were too narrowly focused.

“The previous studies and datasets were fragmented,” Shin says. “They focused on specific regions or single technologies, like gray ammonia only, or blue ammonia only. They would also only cover the cost or the greenhouse emissions of ammonia in isolation. Finally, they use different scopes and methodologies. It meant you couldn’t make global comparisons or draw definitive conclusions.”

To build their database, the MIT researchers combined data from dozens of studies analyzing specific technologies, regions, economic parameters, and trade flows. They also used frameworks they previously developed to calculate the total cost of ammonia production in each country and estimated lifecycle greenhouse gas emissions across the supply chain, factoring in storage and shipping between different regions.

Emissions calculations included activities related to feedstock extraction, production, transport, and import processing. Major cost factors included each country’s renewable and grid electricity prices, natural gas prices, and location. Other factors like interest rates and equity premiums were also included.

The researchers used their calculations to find ammonia costs and life cycle emissions across six ammonia production technologies. For the U.S. average, they found the lowest production cost came from using a popular form of the Haber-Bosch process known as natural gas steam methane reforming (SMR) without carbon capture and storage (gray ammonia), at 48 cents per kilogram of ammonia. Unfortunately, that economic advantage came with the highest greenhouse gas emissions, at 2.46 kilograms of CO2 equivalent per kilogram of ammonia. In contrast, SMR with carbon capture and storage achieved an approximately 61 percent reduction in emissions while incurring a 29 percent increase in production costs.

Another method that uses natural gas as a feedstock, auto-thermal reforming (ATR) with air combustion, exhibited a 10 percent higher cost than conventional SMR when combined with carbon capture and storage, while generating emissions of 0.75 kilograms of CO2 equivalent per kilogram of ammonia, making it a more cost-effective decarbonization option than SMR with carbon capture and storage.

Among production pathways including carbon capture (blue ammonia), a variation of ATR that uses oxygen combustion and carbon capture had the lowest emissions, with a production cost of about 57 cents per kilogram of ammonia. Producing ammonia with electricity generally had higher production costs than blue ammonia pathways. When nuclear energy, rather than grid electricity, powers ammonia production, greenhouse gas emissions are near zero, at 0.03 kilograms of CO2 equivalent per kilogram of ammonia produced.
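To make the relationships among these figures easier to follow, here is a minimal Python sketch (ours, not the researchers’ model) that tabulates the pathways using only the numbers quoted above. Values derived from quoted percentages rather than stated directly are flagged in comments, and figures the article does not give are left as None.

```python
# Illustrative sketch of the U.S.-average pathway comparison reported above.
# Only the figures quoted in this article are used; derived values are marked.

GRAY_COST = 0.48  # $/kg NH3, SMR without carbon capture (quoted)
GRAY_GHG = 2.46   # kg CO2e per kg NH3 (quoted)

pathways = {
    # name: (cost in $/kg, emissions in kg CO2e per kg)
    "SMR, no capture (gray)":     (GRAY_COST, GRAY_GHG),
    "SMR + CCS":                  (GRAY_COST * 1.29, GRAY_GHG * (1 - 0.61)),  # derived from quoted +29% cost, -61% GHG
    "ATR (air) + CCS":            (GRAY_COST * 1.10, 0.75),  # cost derived from quoted +10%; emissions quoted
    "ATR (oxygen) + CCS":         (0.57, None),  # cost quoted; "lowest blue emissions," figure not given
    "Nuclear-powered production": (None, 0.03),  # emissions quoted; cost not given
}

for name, (cost, ghg) in pathways.items():
    cost_s = f"${cost:.2f}/kg" if cost is not None else "n/a"
    ghg_s = f"{ghg:.2f} kg CO2e/kg" if ghg is not None else "n/a"
    print(f"{name:28s} cost {cost_s:>9s}   emissions {ghg_s}")
```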

Across the 63 countries studied, major cost and emissions differences were driven by energy costs, sources of energy for the grid, and financing environments. China emerged as an optimal future supplier of green ammonia to many countries, while the Middle East also offered competitive low-carbon ammonia production pathways. Generally, blue ammonia pathways are most attractive for countries with low-cost natural gas resources, and ammonia made using grid electricity proved more expensive and more carbon-intensive than conventionally produced ammonia.

From data to policy

Low-carbon ammonia use is projected to grow dramatically by 2050, with that ammonia procured via global trade. Japan and Korea, for example, have included ammonia in their national energy strategies and conducted trials using ammonia to generate power. They even offer economic credits for verified CO2 reductions from clean ammonia projects.

“Ammonia researchers, producers, as well as government officials require this data to understand the impact of different technologies and global supply corridors,” Shin says.

The authors also believe industry stakeholders and other researchers will get a lot of value from their database, which allows users to explore the impact of changing specific parameters.

“We collaborate with companies, and they need to know the full costs and lifecycle emissions associated with different options,” Zang says. “Governments can also use this to compare options and set future policies. Any country producing ammonia needs to know which countries they can deliver to economically.”

The research was supported by the MIT Energy Initiative’s Future Energy Systems Center.

This new tool could tell us how consciousness works

Mon, 01/12/2026 - 1:00pm

Consciousness is famously a “hard problem” of science: We don’t precisely know how the physical matter in our brains translates into thoughts, sensations, and feelings. But an emerging research tool called transcranial focused ultrasound may enable researchers to learn more about the phenomenon.

The technology has entered use in recent years, but it isn’t yet fully integrated into research. Now, two MIT researchers are planning experiments with it, and have published a new paper they term a “roadmap” for using the tool to study consciousness.

“Transcranial focused ultrasound will let you stimulate different parts of the brain in healthy subjects, in ways you just couldn’t before,” says Daniel Freeman, an MIT researcher and co-author of a new paper on the subject. “This is a tool that’s not just useful for medicine or even basic science, but could also help address the hard problem of consciousness. It can probe where in the brain are the neural circuits that generate a sense of pain, a sense of vision, or even something as complex as human thought.”

Transcranial focused ultrasound is noninvasive and reaches deeper into the brain, with greater resolution, than other forms of brain stimulation, such as transcranial magnetic or electrical stimulation.

“There are very few reliable ways of manipulating brain activity that are safe but also work,” says Matthias Michel, an MIT philosopher who studies consciousness and co-authored the new work.

The paper, “Transcranial focused ultrasound for identifying the neural substrate of conscious perception,” is published in Neuroscience and Biobehavioral Reviews. The authors are Freeman, a technical staff member at MIT Lincoln Laboratory; Brian Odegaard, an assistant professor of psychology at the University of Florida; Seung-Schik Yoo, an associate professor of radiology at Brigham and Women’s Hospital and Harvard Medical School; and Michel, an associate professor in MIT’s Department of Linguistics and Philosophy.

Pinpointing causality

Brain research is especially difficult because of the challenge of studying healthy individuals. Apart from neurosurgery, there are very limited ways to gain knowledge of the deepest structures in the human brain. From outside the head, noninvasive approaches like MRI and other kinds of ultrasound yield some imaging information, while the electroencephalogram (EEG) shows electrical activity in the brain. By contrast, with transcranial focused ultrasound, acoustic waves are transmitted through the skull, focusing down to a target area of a few millimeters, allowing specific brain structures to be stimulated to study the resulting effect. It could therefore be a productive tool for robust experiments.

“It truly is the first time in history that one can modulate activity deep in the brain, centimeters from the scalp, examining subcortical structures with high spatial resolution,” Freeman says. “There’s a lot of interesting emotional circuits that are deep in the brain, but until now you couldn’t manipulate them outside of the operating room.”

Crucially, the technology may help researchers determine cause-and-effect patterns, precisely because its ultrasound waves modulate brain activity. Many studies of consciousness today measure brain activity in relation to, say, visual stimuli, since visual processing is among the core components of consciousness. But it’s not necessarily clear whether the brain activity being measured represents the generation of consciousness or a mere consequence of it. By manipulating the brain’s activity, researchers can better grasp which processes help constitute consciousness and which are byproducts of it.

“Transcranial focused ultrasound gives us a solution to that problem,” says Michel.

The “roadmap” laid out in the new paper aims to help distinguish between two main conceptions of consciousness. Broadly, the “cognitivist” conception holds that the neural activity that generates conscious experience must involve higher-level mental processes, such as reasoning or self-reflection. These processes link information from many different parts of the brain into a coherent whole, likely using the frontal cortex of the brain.

By contrast, the “non-cognitivist” idea of consciousness takes the position that conscious experience does not require such cognitive machinery; instead, specific patterns of neural activity give rise directly to particular subjective experiences, without the need for sophisticated interpretive processes. In this view, brain activity responsible for consciousness may be more localized, at the back of the cortex or in subcortical structures at the back of the brain.

To use transcranial focused ultrasound productively, the researchers lay out a series of more specific questions that experiments might address: What is the role of the prefrontal cortex in conscious perception? Is perception generated locally, or are brain-wide networks required? If consciousness arises across distant regions of the brain, how are perceptions from those areas linked into one unified experience? And what is the role of subcortical structures in conscious activity?

By modulating brain activity in experiments involving, say, visual stimuli, researchers could draw closer to answers about the brain areas that are necessary in the production of conscious thought. The same goes for studies of, for instance, pain, another core sensation linked with consciousness. We pull our hand back from a hot stove before the pain hits us. But how is the conscious sensation of pain generated, and where in the brain does that happen?

“It’s a basic science question, how is pain generated in the brain,” Freeman says. “And it’s surprising there is such uncertainty … Pain could stem from cortical areas, or it could be deeper brain structures. I’m interested in therapies, but I’m also curious if subcortical structures may play a bigger role than appreciated. It could be the physical manifestation of pain is subcortical. That’s a hypothesis. But now we have a tool to examine it.”

Experiments ahead

Freeman and Michel are not just abstractly charting a course for others to follow; they are planning forthcoming experiments centered on stimulation of the visual cortex, before moving on to higher-level areas in the frontal cortex. While methods of recording brain activity, such as EEG, reveal areas that are visually responsive, these new experiments aim to build a more complete, causal picture of the entire process of visual perception and its associated brain activity.

“It’s one thing to say if these neurons responded electrically. It’s another thing to say if a person saw light,” Freeman says.

Michel, for his part, is also playing an active role in generating further interest in studies of consciousness at MIT. Along with Earl Miller, the Picower Professor of Neuroscience in MIT’s Department of Brain and Cognitive Sciences, Michel is a co-founder of the MIT Consciousness Club, a cross-disciplinary effort to spur further academic study of consciousness, on campus and at other Boston-area institutions.

The MIT Consciousness Club is supported in part by MITHIC, the MIT Human Insight Collaborative, an initiative backed by the School of Humanities, Arts, and Social Sciences. The program aims to hold monthly events, while grappling with the cutting edge of consciousness research.

At the moment, Michel believes, the cutting edge very much involves transcranial focused ultrasound.

“It’s a new tool, so we don’t really know to what extent it’s going to work,” Michel says. “But I feel there’s low risk and high reward. Why wouldn’t you take this path?”

The research for the paper was supported by the U.S. Department of the Air Force. 

Fueling research in nuclear thermal propulsion

Sun, 01/11/2026 - 12:00am

Going to the moon was one thing; going to Mars will be quite another. The distance alone is intimidating. While the moon is 238,855 miles away, the distance to Mars is between 33 million and 249 million miles. The propulsion systems that got us to the moon just won’t work.

Taylor Hampson, a master’s student in the Department of Nuclear Science and Engineering (NSE), is well aware of the problem. It’s one of the many reasons he’s excited about his NASA-sponsored research into nuclear thermal propulsion (NTP).

The technique uses nuclear energy to heat a propellant, like hydrogen, to an extremely high temperature and expel it through a nozzle. The resultant thrust can significantly reduce travel times to Mars, compared to chemical rockets. “You can get double the efficiency, or more, from a nuclear propulsion engine with the same thrust. Besides, being in microgravity is not ideal for astronauts, so you want to get them there faster, which is a strong motivation for using nuclear propulsion over the chemical equivalents,” Hampson says.

Understanding nuclear thermal propulsion

It’s worth taking a quick survey of rocket propulsion techniques to understand where Hampson’s work fits.

There are three broad types of rocket propulsion: chemical, where thrust is achieved by the combustion of rocket propellants; electrical, where electric fields accelerate charged particles to high velocities to achieve thrust; and nuclear, where nuclear energy delivers needed propulsion.

Nuclear propulsion, which is used only in space, not to get to space, falls into two categories: nuclear electric propulsion, which uses nuclear energy to generate electricity and accelerate the propellant; and nuclear thermal propulsion, Hampson’s focus, which heats a propellant using nuclear power. A significant advantage of NTP is that it can deliver double the efficiency (or more) of the chemical equivalent for the same thrust. A disadvantage: cost and regulatory hurdles. “Sure, you can get double the efficiency or more from a nuclear propulsion engine, but there hasn’t been a mission case that has needed it enough to justify the higher cost,” Hampson says.
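That “double the efficiency” is typically quantified as specific impulse, and its payoff is easiest to see in the Tsiolkovsky rocket equation. The sketch below uses representative values we have assumed for illustration (roughly 450 seconds of specific impulse for a hydrogen-oxygen chemical engine, 900 seconds for NTP, and hypothetical vehicle masses); it is not drawn from Hampson’s work.

```python
import math

G0 = 9.81  # standard gravity, m/s^2

def delta_v(isp_s: float, m0_kg: float, mf_kg: float) -> float:
    """Tsiolkovsky rocket equation: ideal velocity change from burning
    a vehicle down from initial mass m0 to final mass mf."""
    return isp_s * G0 * math.log(m0_kg / mf_kg)

# Illustrative assumptions (not from the article): ~450 s specific impulse
# for a hydrogen-oxygen chemical engine, ~900 s for NTP, and a hypothetical
# vehicle that burns from 100 t down to 40 t.
m0, mf = 100_000.0, 40_000.0  # kg
print(f"Chemical (Isp 450 s): {delta_v(450, m0, mf) / 1000:.1f} km/s")
print(f"NTP      (Isp 900 s): {delta_v(900, m0, mf) / 1000:.1f} km/s")
# Doubling specific impulse doubles delta-v for the same propellant load;
# equivalently, a fixed delta-v needs only the square root of the mass ratio.
```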

Until now.

With a human mission to Mars becoming a very real possibility — NASA plans on sending astronauts to Mars as early as the 2030s — NTP might soon come under the spotlight.

"It's almost futuristic"

Growing up on Florida’s Space Coast and watching space shuttle launches stoked Hampson’s early interest in science. Though he loved many other subjects, including history and math, it wasn’t until his senior year that Hampson cast his lot with engineering. While space exploration got him hooked on aerospace engineering, Hampson was also intrigued by the possibility of nuclear engineering as a way to a greener future.

Wracked by indecision, he applied to schools in both fields and completed his undergraduate degree in aerospace engineering at Georgia Tech. There, a series of internships at space technology companies like Blue Origin and Stoke Space, along with participation in Georgia Tech’s rocket team, cemented Hampson’s love for rocket propulsion.

Looking to pursue graduate studies, MIT seemed like the next logical step. “I think MIT has the best combination of nuclear and aerospace education, and is really strong in the field of testing nuclear fuels,” Hampson says. Facilities in the MIT Reactor enable testing of nuclear fuel under conditions they would see in a nuclear propulsion engine. It helped that Koroush Shirvan, associate professor of NSE and Atlantic Richfield Career Development Professor in Energy Studies, was working on nuclear thermal propulsion efforts with NASA while focusing most of his efforts on the testing of nuclear fuels.

At MIT, Hampson works under the advisement of Shirvan. Hampson has had the chance to pursue further research in a project he started with an internship at NASA: studies of a nuclear thermal propulsion engine. “Nuclear propulsion is itself advanced, and I’m working on what comes after that. It’s almost futuristic,” he says.

Modeling the effects of nuclear thermal propulsion

While the premise of NTP sounds promising, its execution will likely not be straightforward. For one thing, an NTP rocket engine won’t start up and shut down like simple combustion engines. The startup is complex because a rapid increase in temperature can cause material failures. And the engines can take longer to shut down because of heat from nuclear decay. As a result, the components have to be cooled continuously until enough fission products decay away that little heat remains, Hampson says.

Hampson is modeling the entirety of the rocket engine system — the tank, the pump, and more — to understand how these and many other parameters work together. Evaluating the entire engine is important because different configurations of parts (and even the fuel) can affect performance. To simplify calculations and to have simulations run faster, he’s working with a relatively simple one-dimensional model. Using it, Hampson can follow the effects of variables on parameters like temperature and pressure on each of the components throughout the engine operation.
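As a rough flavor of what a one-dimensional engine model tracks, the toy sketch below (ours, with assumed numbers, and far simpler than Hampson’s coupled thermo-neutronic model) marches hydrogen propellant through a few stations, updating pressure and temperature at each.

```python
# Toy 1-D station model: tank -> pump -> reactor -> nozzle.
# All numbers are illustrative assumptions, not values from the article.
import math

CP = 14_300.0  # J/(kg*K), approximate specific heat of hydrogen (assumption)
GAMMA = 1.4    # approximate ratio of specific heats (assumption)

# Station 0: tank (hypothetical liquid-hydrogen-like storage conditions)
T, p = 25.0, 0.3e6         # K, Pa

# Pump: raise pressure to chamber level (pump temperature rise neglected)
p = 7.0e6                  # Pa, assumed chamber pressure

# Reactor: nuclear heat addition sets the chamber temperature
T = 2700.0                 # K, representative NTP chamber temperature (assumption)

# Nozzle: ideal isentropic expansion to a near-vacuum exit pressure
p_exit = 1.0e3             # Pa (assumption)
v_exhaust = math.sqrt(2 * CP * T * (1 - (p_exit / p) ** ((GAMMA - 1) / GAMMA)))
print(f"Ideal exhaust velocity: {v_exhaust:.0f} m/s "
      f"(specific impulse ~{v_exhaust / 9.81:.0f} s)")
```

With these assumed numbers the sketch lands near the 900-second specific-impulse class often cited for NTP; a real model like Hampson’s would also couple in neutronics, transient startup and decay heat, and component-level losses.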

“The challenge is in coupling the thermodynamic effects with the neutronic effects,” he says.

Ready for more challenges ahead

After years of indecision, delaying practically every academic decision until the last minute, Hampson seems to have zeroed in on what he expects to be his life’s work — inspired by the space shuttle launches many years ago — and hopes to pursue doctoral studies after graduation.

Hampson always welcomes a challenge, and it’s what motivates him to run. While training for the Boston Marathon, he fractured his leg, an injury that surfaced when he was running yet another race, the Beantown Marathon. He’s not bowed by the incident. “I learned that you’re a lot more capable than you think,” Hampson says, “although you have to ask yourself about the cost,” he laughs. (He was on crutches for weeks afterward.)

A thirst for a challenge is also one of the many reasons he chose to research thermal nuclear propulsion. It helps that the research indulges his love for the field. “Relatively speaking, it’s a field in need of much more advancement; there are many more unsolved problems,” he says. 

MIT named to prestigious 2026 honor roll for mental health services

Fri, 01/09/2026 - 12:20pm

MIT is often recognized as one of the leading institutions of higher learning not only in the United States, but in the world, by several publications, including U.S. News & World Report, QS World University Rankings, Times Higher Education, and Forbes.

Now, MIT also has the distinction of being one of just 30 colleges and universities out of hundreds recognized by The Princeton Review’s 2026 Mental Health Services Honor Roll for providing exemplary mental health and well-being services to its students. This is the second year in a row that MIT has received this honor.

The honor roll was created to be a resource for enrolled students and prospective students who may seek such services when applying to colleges. The survey asked more than a dozen questions about training for students, faculty, and staff; provisions for making new policies and procedures; peer-to-peer offerings; screenings and referral services available to all students; residence hall mental health resources; and other criteria, such as current online information that is updated and accessible.

Overall, the 2025 survey findings for all participating institutions are noteworthy, with The Princeton Review reporting double-digit increases in campus counseling, wellness, and student support programs compared with its 2024 survey results. Earning a place on the honor roll underscores MIT’s commitment to providing exceptional services for graduate and undergraduate students alike.

Karen Singleton, deputy chief health officer and chief of mental health and counseling services at MIT Health, says, “This honor highlights the hard work and collaboration that we do here at MIT to support students in their well-being journey. This is a recognition of how we are doing those things effectively, and a recognition of MIT’s investment in these support services.”

MIT Health has 36 clinicians on staff to meet the needs of the community, and it recently added an easy online scheduling system at the request of students.

Many mental health and well-being services are offered through several departments housed in the Division of Student Life (DSL). They often collaborate with MIT Health and partners across the Institute, including in the Division of Graduate and Undergraduate Education, to provide the best services for the best outcomes for MIT students. 

Support resources in DSL are highly utilized and valued by students. For instance, 82 percent of the Class of 2025 had visited Student Support Services (S3) at least once before graduating, and on a regular satisfaction survey, 91 percent of students who visited S3 said they would return if needed.

“Student Support Services supports over 80 percent of all undergraduates by the time they graduate, and over 60 percent each year. Our offices, including ORSEL, GradSupport, S3, SMHC, the CARE Team, and Residential and Community Life work incredibly well together to support our students,” says Kate McCarthy, senior associate dean of support, wellbeing, and belonging.

“The magic in our support system is the deeply collaborative nature of it. There are many different places students can enter the support network, and each of these teams works closely together to ensure students get connected to the help they need. We always say that students shouldn’t think too much about where they turn … if they get to one of us, they get to all of us,” says David Randall, dean of student life.

Division of Student Life Vice Chancellor Suzy Nelson adds, “It is an honor to see MIT included among colleges and universities recognized for excellent mental health services. Promoting student well-being is central to our mission and guides so much of what we do. This recognition reflects the work of many in our community who are dedicated to creating a campus environment where students can thrive academically and personally.”

3 Questions: How AI could optimize the power grid

Fri, 01/09/2026 - 12:00am

Artificial intelligence has captured headlines recently for its rapidly growing energy demands, and particularly the surging electricity usage of data centers that enable the training and deployment of the latest generative AI models. But it’s not all bad news — some AI tools have the potential to reduce some forms of energy consumption and enable cleaner grids.

One of the most promising applications is using AI to optimize the power grid, which would improve efficiency, increase resilience to extreme weather, and enable the integration of more renewable energy. To learn more, MIT News spoke with Priya Donti, the Silverman Family Career Development Professor in the MIT Department of Electrical Engineering and Computer Science (EECS) and a principal investigator at the Laboratory for Information and Decision Systems (LIDS), whose work focuses on applying machine learning to optimize the power grid.

Q: Why does the power grid need to be optimized in the first place?

A: We need to maintain an exact balance between the amount of power that is put into the grid and the amount that comes out at every moment in time. But on the demand side, we have some uncertainty. Power companies don’t ask customers to pre-register the amount of energy they are going to use ahead of time, so some estimation and prediction must be done.

Then, on the supply side, there is typically some variation in costs and fuel availability that grid managers need to be responsive to. That has become an even bigger issue because of the integration of energy from time-varying renewable sources, like solar and wind, where uncertainty in the weather can have a major impact on how much power is available. Then, at the same time, depending on how power is flowing in the grid, there is some power lost through resistive heat on the power lines. So, as a grid operator, how do you make sure all that is working all the time? That is where optimization comes in.

Q: How can AI be most useful in power grid optimization?

A: One way AI can be helpful is to use a combination of historical and real-time data to make more precise predictions about how much renewable energy will be available at a certain time. This could lead to a cleaner power grid by allowing us to handle and better utilize these resources.

AI could also help tackle the complex optimization problems that power grid operators must solve to balance supply and demand in a way that also reduces costs. These optimization problems are used to determine which power generators should produce power, how much they should produce, and when they should produce it, as well as when batteries should be charged and discharged, and whether we can leverage flexibility in power loads. These optimization problems are so computationally expensive that operators use approximations so they can solve them in a feasible amount of time. But these approximations are often wrong, and when we integrate more renewable energy into the grid, they are thrown off even further. AI can help by providing more accurate approximations in a faster manner, which can be deployed in real time to help grid operators responsively and proactively manage the grid.
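For a concrete, if drastically simplified, flavor of such a problem, here is a minimal economic-dispatch sketch in Python using scipy. The generators, costs, and demand are hypothetical, and real operator models add on/off (integer) commitment decisions, network constraints, and time coupling.

```python
# Minimal economic dispatch: choose generator outputs to meet demand at
# least cost. Hypothetical fleet and prices, for illustration only.
from scipy.optimize import linprog

# name: (marginal cost in $/MWh, capacity in MW)
generators = {
    "wind":       (0.0, 120.0),   # near-zero marginal cost, weather-limited
    "gas_ccgt":   (45.0, 300.0),
    "gas_peaker": (90.0, 150.0),
}
demand_mw = 400.0

costs = [c for c, _ in generators.values()]
bounds = [(0.0, cap) for _, cap in generators.values()]

# Equality constraint: total generation must equal demand at this instant.
result = linprog(
    c=costs,
    A_eq=[[1.0] * len(generators)],
    b_eq=[demand_mw],
    bounds=bounds,
    method="highs",
)

for name, mw in zip(generators, result.x):
    print(f"{name:10s} -> {mw:6.1f} MW")
print(f"total cost: ${result.fun:,.0f}/h")
```

The solver fills the cheapest units first (wind, then the combined-cycle plant), which is the basic logic grid operators scale up to thousands of units and constraints.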

AI could also be useful in the planning of next-generation power grids. Planning for power grids requires one to use huge simulation models, so AI can play a big role in running those models more efficiently. The technology can also help with predictive maintenance by detecting where anomalous behavior on the grid is likely to happen, reducing inefficiencies that come from outages. More broadly, AI could also be applied to accelerate experimentation aimed at creating better batteries, which would allow the integration of more energy from renewable sources into the grid.

Q: How should we think about the pros and cons of AI, from an energy sector perspective?

A: One important thing to remember is that AI refers to a heterogeneous set of technologies. There are different types and sizes of models that are used, and different ways that models are used. If you are using a model that is trained on a smaller amount of data with a smaller number of parameters, that is going to consume much less energy than a large, general-purpose model.

In the context of the energy sector, there are a lot of places where, if you use these application-specific AI models for the applications they are intended for, the cost-benefit tradeoff works out in your favor. In these cases, the applications are enabling benefits from a sustainability perspective — like incorporating more renewables into the grid and supporting decarbonization strategies.

Overall, it’s important to think about whether the types of investments we are making into AI are actually matched with the benefits we want from AI. On a societal level, I think the answer to that question right now is “no.” There is a lot of development and expansion of a particular subset of AI technologies, and these are not the technologies that will have the biggest benefits across energy and climate applications. I’m not saying these technologies are useless, but they are incredibly resource-intensive, while also not being responsible for the lion’s share of the benefits that could be felt in the energy sector.

I’m excited to develop AI algorithms that respect the physical constraints of the power grid so that we can credibly deploy them. This is a hard problem to solve. If an LLM says something that is slightly incorrect, as humans, we can usually correct for that in our heads. But if you make the same magnitude of a mistake when you are optimizing a power grid, that can cause a large-scale blackout. We need to build models differently, but this also provides an opportunity to benefit from our knowledge of how the physics of the power grid works.

And more broadly, I think it’s critical that those of us in the technical community put our efforts toward fostering a more democratized system of AI development and deployment, and that it’s done in a way that is aligned with the needs of on-the-ground applications.

2.009 mechanical engineering students embrace “cycles”

Thu, 01/08/2026 - 5:10pm

MIT’s senior capstone course 2.009 (Product Engineering Processes), an iconic class known colloquially on campus as “two double-oh nine,” emulates what engineers experience while working as part of a design team at a product development firm. The annual prototype launch is a colorful and exciting culmination of a semester’s worth of work.

“This fall, 97 students split into six teams entered the rapid-fire cycle of product engineering, looping between ideas, prototypes, failures, fixes, and breakthroughs,” said Josh Wiesman, 2.009 lecturer, in the program’s opening remarks. “They pushed themselves out of their comfort zone and learned to oscillate between creativity and technical rigor. Thermal, fluids, mechanics, materials, instrumentation — everything you can imagine came back around in new and unexpected ways.”

Wiesman’s remarks hinted at this year’s theme, which co-instructor Peko Hosoi, the Pappalardo Professor of Mechanical Engineering, reminded spectators was “Cycles!”

“Engineering doesn’t move in a straight line,” Hosoi elaborated. “It loops, it resets, accelerates, and builds momentum, just like our students.” She continued, “Tonight, we’re celebrating the energy, grit, and creativity that comes from embracing those cycles.”

Starting with ideation, the teams ventured out to talk to people from a variety of walks of life and uncover what Hosoi referred to as “exciting problems worth solving.” From there — with mentors, access to makerspaces, and a budget to turn their ideas into working products — the teams, each represented by a color, spent 13 weeks designing, building, and drafting a business plan for their product.

Spectators packed Kresge Auditorium on Dec. 8, waving colorful pompoms and cheering on the teams, with thousands more watching online. The six teams demonstrated their prototypes and shared business plans, with breaks between presentations featuring dance and musical performances by MIT Ridonkulous, MIT Ohms, and MIT Live, and short animated films created by the 2.009 team which, this year, incorporated popular movie references.

A recording of the event livestream is available on the 2.009 website, which includes full demonstrations of the product prototypes discussed below, along with audience questions.

Green Team

In the United States, some 350,000 people suffer cardiac arrest each year. Immediate intervention by bystanders can be the difference between life and death. The Green Team presented HeartBridge, an automated CPR device.

“For every minute someone who needs it goes without effective CPR, their chance of survival decreases by roughly 10 percent,” Green Team presenters told the audience. But, they added, CPR is exhausting at the recommended speed and compression depth, with research showing decreases in effectiveness of manual compressions after just three minutes.

HeartBridge is a durable mechanical device that administers steady compressions to a patient and provides textual, visual, and auditory cues to users.

Purple Team

The Purple Team painted the picture of a quintessential fall activity in New England, inviting the audience to imagine “it’s a beautiful Saturday in October, and you decide to go apple picking.” At family-run orchards, thousands of apples fall to the ground each season, creating more than just a mess. Rotting apples invite pests or can spread fungus, decreasing crop yield.

AgriSweep, the Purple Team’s prototype, is a hydraulically powered tractor attachment that collects fallen apples into a produce bin, saving time and labor costs, decreasing the need for sprays, and potentially generating revenue for farmers who sell the windfalls for hard cider, livestock feed, or compost.

Nodding to the video references punctuating the show, the team closed their presentation with a reference to an iconic film with an MIT connection: “How do you like them apples?”

Red Team

Hand embroidery is a popular pastime, but drawing or transferring patterns can be time-consuming or messy. The Red Team aims to solve this problem with their product, Scribbly, a “user-friendly and software-free printer” designed to let hobbyists create their own designs and make transfers easier.

The machine, which can accommodate a variety of fabrics and embroidery hoop sizes up to 10 inches in diameter, reads design files from a USB drive, then transfers the image via a pen that can be “erased” with heat if the user wants to change the design.

To demonstrate their product, the team created a transfer pattern of the MIT Department of Mechanical Engineering logo.

Blue Team

Boating safety was top-of-mind for the Blue Team. Propeller-related injuries are a big concern for recreational boaters. Fixed propeller guards, or prop guards, are the most common solution but have drawbacks, including reducing fuel efficiency and decreasing maneuverability. DORI, the Blue Team prototype, is a deployable prop guard that is stowed above the waterline and can be lowered into place when needed.

Yellow Team

The Yellow Team tackled a problem faced by “pond skating enthusiasts and people who maintain their own backyard rinks,” namely, rough patches, bumps, and uneven ice. Their product, Polar, is a compact device that smooths out backyard surfaces to improve skate-ability.

The system includes a chassis on a welded steel frame with a motorized drivetrain, a cutter to shave the ice surface, and an onboard water distribution system with heating mechanism and drip bar for resurfacing.

Pink Team

The final team of the night, the Pink Team, conquered a challenge rooted in one of the most demanding and real-world contexts: rescue diving. In a drowning emergency, rescue divers have just minutes to save a life. Using a retractable strap, carabiner, and locking mechanism, the Pink Team’s product, HydroHold, attaches directly to a diver’s buoyancy control device and offers a hands-free way to secure a drowning victim during a rescue mission.

The product was developed following consultations with divers from local fire departments, the state police, and Woods Hole Oceanographic Institution. “When we took these prototypes to rescue divers, we heard them ask for two things over and over,” the presenters said. “Something simple, and something safe.”

Rather than choosing complexity, Hosoi told the audience, the Pink Team pursued refinement. “They kept testing with users, shaping the interface, and polishing the details until everything felt natural.”

Wiesman added that the product is a reminder that “powerful engineering isn’t about flashy things … sometimes it’s about reducing friction, elevating usability, and building something that just works when it matters.”

Thank you and goodnight

The night ended with a final “thank you” song celebrating the products, the teams, and all the contributors who make the class possible because, as Hosoi said, “It really does take a team to make this class ‘cycle’ forward.” 

The clever AI-generated tribute, which weaves in the names of class participants and instructors, while rhyming “pizza with pepperoni” and “pond-sized Zamboni,” can also be watched in its entirety at the end of the livestream recording, following the product demonstrations. 

Decoding the Arctic to predict winter weather

Thu, 01/08/2026 - 4:55pm

Every autumn, as the Northern Hemisphere moves toward winter, Judah Cohen starts to piece together a complex atmospheric puzzle. Cohen, a research scientist in MIT’s Department of Civil and Environmental Engineering (CEE), has spent decades studying how conditions in the Arctic set the course for winter weather throughout Europe, Asia, and North America. His research dates back to his postdoctoral work with Bacardi and Stockholm Water Foundations Professor Dara Entekhabi that looked at snow cover in the Siberian region and its connection with winter forecasting.

Cohen’s outlook for the 2025–26 winter highlights a season shaped by indicators emerging from the Arctic, interpreted with the help of a new generation of artificial intelligence tools that fill in the full atmospheric picture.

Looking beyond the usual climate drivers

Winter forecasts rely heavily on El Niño–Southern Oscillation (ENSO) diagnostics, which are the tropical Pacific Ocean and atmosphere conditions that influence weather around the world. However, Cohen notes that ENSO is relatively weak this year.

“When ENSO is weak, that’s when climate indicators from the Arctic become especially important,” Cohen says.

Cohen monitors high-latitude diagnostics in his subseasonal forecasting, such as October snow cover in Siberia, early-season temperature changes, Arctic sea-ice extent, and the stability of the polar vortex. “These indicators can tell a surprisingly detailed story about the upcoming winter,” he says. 

One of Cohen’s most consistent predictors is October’s weather in Siberia. This year, while the Northern Hemisphere overall experienced an unusually warm October, Siberia was colder than normal, with early snowfall. “Cold temperatures paired with early snow cover tend to strengthen the formation of cold air masses that can later spill into Europe and North America,” says Cohen — weather patterns that are historically linked to more frequent cold spells later in winter.

Warm ocean temperatures in the Barents–Kara Sea and an “easterly” phase of the quasi-biennial oscillation also suggest a potentially weaker polar vortex in early winter. When this disturbance couples with surface conditions in December, it leads to lower-than-normal temperatures across parts of Eurasia and North America earlier in the season.

AI subseasonal forecasting

While AI weather models have made impressive strides in short-range (one- to 10-day) forecasts, those advances have not yet carried over to longer periods. Subseasonal prediction, covering two to six weeks, remains one of the toughest challenges in the field.

That gap is why this year could be a turning point for subseasonal weather forecasting. A team of researchers working with Cohen won first place for the fall season in the 2025 AI WeatherQuest subseasonal forecasting competition, held by the European Centre for Medium-Range Weather Forecasts (ECMWF). The challenge evaluates how well AI models capture temperature patterns over multiple weeks, where forecasting has been historically limited.

The winning model combined machine-learning pattern recognition with the same Arctic diagnostics Cohen has refined over decades. The system demonstrated significant gains in multi-week forecasting, surpassing leading AI and statistical baselines.

“If this level of performance holds across multiple seasons, it could represent a real step forward for subseasonal prediction,” Cohen says.

The model also detected a potential cold surge in mid-December for the U.S. East Coast much earlier than usual, weeks before such signals typically emerge. The forecast was widely publicized in the media in real time. If validated, Cohen explains, it would show how combining Arctic indicators with AI could extend the lead time for predicting impactful weather.

“Flagging a potential extreme event three to four weeks in advance would be a watershed moment,” he adds. “It would give utilities, transportation systems, and public agencies more time to prepare.”

What this winter may hold

Cohen’s model shows a greater chance of colder-than-normal conditions across parts of Eurasia and central North America later in the winter, with the strongest anomalies likely mid-season.

“We’re still early, and patterns can shift,” Cohen says. “But the ingredients for a colder winter pattern are there.”

As Arctic warming speeds up, its impact on winter behavior is becoming more evident, making it increasingly important to understand these connections for energy planning, transportation, and public safety. Cohen’s work shows that the Arctic holds untapped subseasonal forecasting power, and AI may help unlock it for time frames that have long been challenging for traditional models.

In November, Cohen even appeared as a clue in The Washington Post crossword, a small sign of how widely his research has entered public conversations about winter weather.

“For me, the Arctic has always been the place to watch,” he says. “Now AI is giving us new ways to interpret its signals.”

Cohen will continue to update his outlook throughout the season on his blog.

Eighteen MIT faculty honored as “Committed to Caring” for 2025-27

Thu, 01/08/2026 - 4:35pm

At MIT, a strong spirit of mentorship shapes how students learn, collaborate, and imagine the future. In a time of accelerating change — from breakthroughs in artificial intelligence to the evolving realities of global research and work — guidance for technical challenges and personal growth is more important than ever. 

The Committed to Caring (C2C) program recognizes the outstanding professors who extend this dedication beyond the classroom, nurturing resilience, curiosity, and compassion in a new generation of innovators. The latest cohort of C2C honorees exemplify these values, demonstrating the lasting impact that faculty can have on students’ academic and personal journeys.

The Committed to Caring program is a student-driven initiative that has celebrated exceptional mentorship since 2014. In this cycle, 18 MIT professors have been selected as recipients of the C2C award for 2025-27, joining the ranks of nearly 100 previous honorees. 

The following faculty members comprise the 2025-27 Committed to Caring cohort:

  • Iwnetim Abate, Department of Materials Science and Engineering
  • Abdullah Almaatouq, MIT Sloan School of Management
  • Marc A. Baldo, Department of Electrical Engineering and Computer Science
  • Anantha P. Chandrakasan, Department of Electrical Engineering and Computer Science
  • Anna-Christina Eilers, Department of Physics
  • Herbert Einstein, Department of Civil and Environmental Engineering
  • Dennis M. Freeman, Department of Electrical Engineering and Computer Science
  • Daniel Hidalgo, Department of Political Science
  • Erin Kara, Department of Physics
  • Laura Lewis, Department of Electrical Engineering and Computer Science
  • Lina Necib, Department of Physics
  • Sara Prescott, Department of Biology
  • Ellen Roche, Department of Mechanical Engineering
  • Loza Tadesse, Department of Mechanical Engineering
  • Haruko Murakami Wainwright, Department of Nuclear Science and Engineering
  • Fan Wang, Department of Brain and Cognitive Sciences
  • Forest White, Department of Biological Engineering
  • Bin Zhang, Department of Chemistry

Since its launch, the C2C program has placed students at the heart of its nomination process. Graduate students across all departments are invited to share letters recognizing faculty whose mentorship has made a lasting impact on their academic and personal journeys. A selection committee, consisting of both graduate students and staff, reviews nominations to identify those who have meaningfully strengthened the graduate community at MIT.

The selection committee this year included: Zoë Wright (Office of Graduate Education, or OGE), Ryan Rideau, Elizabeth Guttenberg (OGE), Beth Marois (OGE), Sharikka Finley-Moise (OGE), Indrani Saha (History, Theory, and Criticism of Art and Architecture, OGE), Chen Liang (graduate student, MIT Sloan School of Management), Jasmine Aloor (graduate student, Department of Aeronautics and Astronautics), Leila Hudson (graduate student, Department of Electrical Engineering and Computer Science), and Chair Suraiya Baluch (OGE).

“I wanted to be part of this committee after nominating my own professor in the last cycle, and the experience has been incredibly meaningful,” says Aloor. “I was continually amazed by the ways that so many professors show deep care for their students behind the scenes … What stood out to me most was the breadth of ways these faculty members support their students, check in on them, provide mentorship, and cultivate lifelong bonds, despite being successful and pressed for time as leaders at the top Institute in the world.”

Guttenberg agrees, saying, “Even when these gestures appear simple, they leave a profound and lasting impact on students’ lives and help cultivate the thriving academic community we value.”

Nomination letters illustrate how the efforts of these MIT faculty reflect a deep and enduring commitment to their students’ growth, well-being, and sense of purpose. Their advisees praise these educators for their consistent impact beyond lectures and labs, and for fostering inclusion, support, and genuine connection. Their care and guidance cultivate spaces where students are encouraged not only to excel academically, but also to develop confidence, balance, and a clearer vision of their goals.

Liang underlined that the selection experience “has shown me how many faculty at MIT … help students grow into thoughtful, independent researchers and, just as importantly, into fuller versions of themselves in the world.”

In the months ahead, a series of articles will showcase the honorees in pairs, with a reception this April to recognize their lasting impact. By highlighting these faculty, the Committed to Caring program continues to celebrate and strengthen MIT’s culture of mentorship, respect, and collaboration. 

Pills that communicate from the stomach could improve medication adherence

Thu, 01/08/2026 - 5:00am

In an advance that could help ensure people are taking their medication on schedule, MIT engineers have designed a pill that can report when it has been swallowed.

The new reporting system, which can be incorporated into existing pill capsules, contains a biodegradable radio frequency antenna. After it sends out the signal that the pill has been consumed, most components break down in the stomach while a tiny RF chip passes out of the body through the digestive tract.

This type of system could be useful for monitoring transplant patients who need to take immunosuppressive drugs, or people with infections such as HIV or TB, who need treatment for an extended period of time, the researchers say.

“The goal is to make sure that this helps people receive the therapy they need to help maximize their health,” says Giovanni Traverso, an associate professor of mechanical engineering at MIT, a gastroenterologist at Brigham and Women’s Hospital, and an associate member of the Broad Institute of MIT and Harvard.

Traverso is the senior author of the new study, which appears today in Nature Communications. Mehmet Girayhan Say, an MIT research scientist, and Sean You, a former MIT postdoc, are the lead authors of the paper.

A pill that communicates

Patients’ failure to take their medicine as prescribed is a major challenge that contributes to hundreds of thousands of preventable deaths and billions of dollars in health care costs annually.

To make it easier for people to take their medication, Traverso’s lab has worked on delivery capsules that can remain in the digestive tract for days or weeks, releasing doses at predetermined times. However, this approach may not be compatible with all drugs.

“We’ve developed systems that can stay in the body for a long time, and we know that those systems can improve adherence, but we also recognize that for certain medications, we can’t change the pill,” Traverso says. “The question becomes: What else can we do to help the person and help their health care providers ensure that they’re receiving the medication?”

In their new study, the researchers focused on a strategy that would allow doctors to more closely monitor whether patients are taking their medication. Using radio frequency — a type of signal that can be easily detected from outside the body and is safe for humans — they designed a capsule that can communicate after the patient has swallowed it.

There have been previous efforts to develop RF-based signaling devices for medication capsules, but those were all made from components that don’t break down easily in the body and would need to travel through the digestive system.

To minimize the potential risk of any blockage of the GI tract, the MIT team decided to create an RF-based system that would be bioresorbable, meaning that it can be broken down and absorbed by the body. The antenna that sends out the RF signal is made from zinc, and it is embedded into a cellulose particle.

“We chose these materials recognizing their very favorable safety profiles and also environmental compatibility,” Traverso says.

The zinc-cellulose antenna is rolled up and placed inside a capsule along with the drug to be delivered. The outer layer of the capsule is made from gelatin coated with a layer of cellulose and either molybdenum or tungsten, which blocks any RF signal from being emitted.

Once the capsule is swallowed, the coating breaks down, releasing the drug along with the RF antenna. The antenna can then pick up an RF signal sent from an external receiver and, working with a small RF chip, send back a signal to confirm that the capsule was swallowed. This communication happens within 10 minutes of the pill being swallowed.

The RF chip, which is about 400 by 400 micrometers, is an off-the-shelf chip that is not biodegradable and would need to be excreted through the digestive tract. All of the other components would break down in the stomach within a week.

“The components are designed to break down over days using materials with well-established safety profiles, such as zinc and cellulose, which are already widely used in medicine,” Say says. “Our goal is to avoid long-term accumulation while enabling reliable confirmation that a pill was taken, and longer-term safety will continue to be evaluated as the technology moves toward clinical use.”

Promoting adherence

Tests in an animal model showed that the RF signal was successfully transmitted from inside the stomach and could be read by an external receiver from up to 2 feet away. If developed for use in humans, the researchers envision designing a wearable device that could receive the signal and then transmit it to the patient’s health care team.

The researchers now plan to do further preclinical studies and hope to soon test the system in humans. One patient population that could benefit greatly from this type of monitoring is people who have recently had organ transplants and need to take immunosuppressant drugs to make sure their body doesn’t reject the new organ.

“We want to prioritize medications that, when non-adherence is present, could have a really detrimental effect for the individual,” Traverso says.

Other populations that could benefit include people who have recently had a stent inserted and need to take medication to help prevent blockage of the stent, people with chronic infectious diseases such as tuberculosis, and people with neuropsychiatric disorders whose conditions may impair their ability to take their medication.

The research was funded by Novo Nordisk, MIT’s Department of Mechanical Engineering, the Division of Gastroenterology at Brigham and Women’s Hospital, and the U.S. Advanced Research Projects Agency for Health (ARPA-H), which notes that the views and conclusions contained in this article are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the United States Government.

This work was carried out, in part, through the use of MIT.nano’s facilities.

Celebrating worm science

Wed, 01/07/2026 - 4:40pm

For decades, scientists with big questions about biology have found answers in a tiny worm. That worm — a millimeter-long creature called Caenorhabditis elegans — has helped researchers uncover fundamental features of how cells and organisms work. The impact of that work is enormous: Discoveries made using C. elegans have been recognized with four Nobel Prizes and have led to the development of new treatments for human disease.

In a perspective piece published in the November 2025 issue of the journal PNAS, 11 biologists including Robert Horvitz, the David H. Koch (1962) Professor of Biology at MIT, celebrate Nobel Prize-winning advances made through research in C. elegans. The authors discuss how that work has led to advances for human health, and highlight how a uniquely collaborative community among worm researchers has fueled the field.

MIT scientists are well represented in that community: The prominent worm biologists who coauthored the PNAS paper include former MIT graduate students Andrew Fire PhD ’83 and Paul Sternberg PhD ’84, now at Stanford University and Caltech, respectively; and two past members of Horvitz’s lab, Victor Ambros ’75, PhD ’79, who is now at the University of Massachusetts Medical School, and former postdoc Gary Ruvkun of Massachusetts General Hospital. Ann Rougvie at the University of Minnesota is the paper’s corresponding author.

“This tiny worm is beautiful — elegant both in its appearance and in its many contributions to our understanding of the biological universe in which we live,” says Horvitz, who in 2002 was awarded the Nobel Prize in Physiology or Medicine, along with colleagues Sydney Brenner and John Sulston, for discoveries that helped explain how genes regulate programmed cell death and organ development. 

Early worm discoveries

Those discoveries were among the early successes in C. elegans research, made by pioneering scientists who recognized the power of the microscopic roundworm. C. elegans offers many advantages for researchers: The worms are easy to grow and maintain in labs; their transparent bodies make cells and internal processes readily visible under a microscope; they are cellularly very simple (e.g., they have only 302 nerve cells, compared with about 100 billion in a human); and their genomes can be readily manipulated to study gene function.

Most importantly, many of the molecules and processes that operate in C. elegans have been retained throughout evolution, meaning discoveries made using the worm can have direct relevance to other organisms, including humans. 

“Many aspects of biology are ancient and evolutionarily conserved,” says Horvitz, who is also a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research, as well as an investigator at the Howard Hughes Medical Institute. “Such shared mechanisms can be most readily revealed by analyzing organisms that are highly tractable in the laboratory.”

In the 1960s, Brenner, a molecular biologist who was curious about how animals’ nervous systems develop and function, recognized that C. elegans offered unique opportunities to study these processes. Once he began developing the worm into a model for laboratory studies, it did not take long for other biologists to join him to take advantage of the new system.

In the 1970s, the unique features of the worm allowed Sulston to track the transformation of a fertilized egg into an adult animal, tracing the origins of each of the adult worm’s 959 cells. His studies revealed that in every developing worm, cells divide and mature in predictable ways. He also learned that some of the cells created during development do not survive into adulthood, and are instead eliminated by a process termed programmed cell death.

By seeking mutations that perturbed the process of programmed cell death, Horvitz and his colleagues identified key regulators of that process, which is sometimes referred to as apoptosis. These regulators, which both promote and oppose apoptosis, turned out to be vital for programmed cell death across the animal kingdom.

In humans, apoptosis shapes developing organs, refines brain circuits, and optimizes other tissue structures. It also modulates our immune systems and eliminates cells that are in danger of becoming cancerous. The human version of CED-9, the anti-apoptotic regulator that Horvitz’s team discovered in worms, is BCL-2. Researchers have shown that activating apoptotic cell death by blocking BCL-2 is an effective treatment for certain blood cancers. Today, researchers are also exploring new ways of treating immune disorders and neurodegenerative disease by manipulating apoptosis pathways.

Collaborative worm community

Horvitz and his colleagues’ discoveries about apoptosis helped demonstrate that understanding C. elegans biology has direct relevance to human biology and disease. Since then, a vibrant and closely connected community of worm biologists — including many who trained in Horvitz’s lab — has continued to carry out impactful work. In their PNAS article, Horvitz and his coauthors highlight that early work, as well as the Nobel Prize-winning work of:

  • Andrew Fire and Craig Mello, whose discovery of an RNA-based system of gene silencing led to powerful new tools to manipulate gene activity. The innate process they discovered in worms, known as RNA interference, is now used as the basis of six FDA-approved therapeutics for genetic disorders, silencing faulty genes to stop their harmful effects.
  • Martin Chalfie, who used a fluorescent protein made by jellyfish to visualize and track specific cells in C. elegans, helping launch the development of a set of tools that transformed biologists’ ability to observe molecules and processes that are important for both health and disease.
  • Victor Ambros and Gary Ruvkun, who discovered a class of molecules called microRNAs that regulate gene activity not just in worms, but in all multicellular organisms. This prize-winning work was started when Ambros and Ruvkun were postdocs in Horvitz’s lab. Humans rely on more than 1,000 microRNAs to ensure our genes are used at the right times and places. Disruptions to microRNAs have been linked to neurological disorders, cancer, cardiovascular disease, and autoimmune disease, and researchers are now exploring how these small molecules might be used for diagnosis or treatment.

Horvitz and his coauthors stress that while the worm itself made these discoveries possible, so too did a host of resources that facilitate collaboration within the worm community and enable its scientists to build upon the work of others. Scientists who study C. elegans have embraced this open, collaborative spirit since the field’s earliest days, Horvitz says, citing the Worm Breeder’s Gazette, an early newsletter where scientists shared their observations, methods, and ideas.

Today, scientists who study C. elegans — whether the organism is the centerpiece of their lab or they are looking to supplement studies of other systems — contribute to and rely on online resources like WormAtlas and WormBase, as well as the Caenorhabditis Genetics Center, to share data and genetic tools. Horvitz says these resources have been crucial to his own lab’s work; his team uses them every day.

Just as molecules and processes discovered in C. elegans have pointed researchers toward important pathways in human cells, the worm has also been a vital proving ground for developing methods and approaches later deployed to study more complex organisms. For example, C. elegans, with its 302 neurons, was the first animal for which neuroscientists successfully mapped all of the connections of the nervous system. The resulting wiring diagram, or connectome, has guided countless experiments exploring how neurons work together to process information and control behavior. Informed by both the power and limitations of the C. elegans connectome, scientists are now mapping more complex circuitry, such as the 139,000-neuron brain of the fruit fly, whose connectome was completed in 2024.

C. elegans remains a mainstay of biological research, including in neuroscience. Scientists worldwide are using the worm to explore new questions about neural circuits, neurodegeneration, development, and disease. Horvitz’s lab continues to turn to C. elegans to investigate the genes that control animal development and behavior. His team is now using the worm to explore how animals develop a sense of time and transmit that information to their offspring.

Also at MIT, Steven Flavell’s team in the Department of Brain and Cognitive Sciences and The Picower Institute for Learning and Memory is using the worm to investigate how neural connectivity, activity, and modulation integrate internal states, such as hunger, with sensory information, such as the smell of food, to produce sometimes long-lasting behaviors. (Flavell is Horvitz’s academic grandson, having trained with one of Horvitz’s postdoctoral trainees.)

As new technologies accelerate the pace of scientific discovery, Horvitz and his colleagues are confident that the humble worm will bring more unexpected insights.

Stone Center on Inequality and Shaping the Future of Work launches at MIT

Wed, 01/07/2026 - 3:30pm

The James M. and Cathleen D. Stone Center on Inequality and Shaping the Future of Work officially launched on Nov. 3, 2025, bringing together scholars, policymakers, and practitioners to explore critical questions about economic opportunity, technology, and democracy.

Co-directed by MIT professors Daron Acemoglu, David Autor, and Simon Johnson, the new Stone Center analyzes the forces that contribute to growing income and wealth inequality through the erosion of job quality and labor market opportunities for workers without a college degree. The center identifies innovative ways to move the economy onto a more equitable trajectory.

MIT Provost Anantha Chandrakasan opened the launch event by emphasizing the urgency and importance of the center’s mission. “As artificial intelligence tools become more powerful, and as they are deployed more broadly,” he said, “we will need to strive to ensure that people from all kinds of backgrounds can find opportunity in the economy.”

Here are some of the key takeaways from participants in the afternoon’s discussions on wealth inequality, liberalism, and pro-worker AI.

Wealth inequality is driven by private business and public policy

Owen Zidar of Princeton University stressed that owners of businesses like car dealerships, construction firms, and franchises make up a significant portion of the top 1 percent. “For every public company CEO that gets a lot of attention,” he explained, “there are a thousand private business owners who have at least $25 million in wealth.” These business owners have outsized political influence through overrepresentation, lobbying, and donations.

Atif Mian of Princeton University connected high inequality to the U.S. debt crisis, arguing that massive savings at the top aren’t being channeled into productive investment. Instead, falling interest rates push the government to run increasingly large fiscal deficits.

To mitigate wealth inequality, speakers highlighted policy proposals including rolling back the 20 percent deduction for private business owners and increasing taxes on wealth.

However, policies must be carefully designed. Antoinette Schoar of the MIT Sloan School of Management explained how mortgage subsidy policies after the 2008 financial crisis actually worsened inequality by disadvantaging poorer potential homeowners.

Governments must provide basic public goods and economic security

Marc Dunkelman of the Watson School of International and Public Affairs at Brown University identified excessive red tape as a key problem for modern liberal democracy. “We can’t build high-speed rail. You can’t build enough housing,” he explained. “That spurs ordinary people who want government to work into the populist camp. We did this to ourselves.”

Josh Cohen of Apple University and the University of California at Berkeley emphasized that liberalism must deliver shared prosperity and fair opportunities, not just protect individual freedoms. When people lack economic security, they may turn to leaders who abandon liberal principles altogether.

Liberal democracy needs to adapt while keeping its core values

Helena Rosenblatt of the City University of New York Graduate Center noted that liberalism and democracy have not always been allies. Historically, “civil equality was very important, but not political equality,” she said. “Liberals were very wary of the masses.”

Speakers emphasized that liberalism’s challenge today is maintaining its commitments to limiting authoritarian power and protecting fundamental freedoms, while addressing its failures.

Doing so, in Dunkelman’s view, would mean working to “eliminate the sowing [of] the seeds of populism by making government properly balance individual rights and the will of the many.”

People-centric politics requires regulating social media

In his keynote at the launch, U.S. Representative Jake Auchincloss (Massachusetts 4th District) connected these notions of government effectiveness and public trust to the influence of technology. He emphasized the need to regulate social media platforms.

“In my opinion, media is upstream of culture, which is upstream of politics,” he said. “If we want a better culture, and certainly if we want a better politics, we need a better media.”

Auchincloss proposed that regulation should include holding social media companies liable for content and banning targeted advertising to minors.

He also echoed the urgency and importance of the center’s research agenda, particularly to understand whether AI will augment or replace labor.

“My bias has always been: Technology creates more jobs,” he said. “Maybe it’s different this time. Maybe I’m wrong.”

Augmentation is key to pro-worker AI — but it may require alternative AI architectures

Stone Center co-director Daron Acemoglu argued that expanding what humans can do, rather than automating their tasks, is essential for achieving pro-worker AI.

However, Acemoglu cautioned that this won’t happen by itself, noting that the business models of tech companies and their focus on artificial general intelligence are not aligned with a pro-worker vision for AI. This vision may require public investment in alternative AI architectures focused on “domain-specific, reliable knowledge.”

Ethan Mollick of the Wharton School of the University of Pennsylvania noted that AI labs are explicitly trying to “replace people at everything” and are “absolutely convinced that they can do this in the very near term.”

Meanwhile, companies have “no model for AI adoption,” Mollick explained. “There is absolute confusion.” Even so, “there’s enough money at stake [that] the machine keeps moving forward,” underscoring the urgency of intervention.

In a glimpse of what such intervention could look like, Zana Buçinca of Microsoft shared research findings that accounting for workers’ values and cognition in AI design can enable better complementarity.

“The impact of AI on human work is not destiny,” she emphasized. “It’s design.”

A new lens on humanity

Wed, 01/07/2026 - 2:20pm

When the MIT Human Insight Collaborative (MITHIC) launched in fall 2024, it was designed to elevate scholars at the frontiers of human-centered research and education, and to provide them with resources to pursue their most innovative and ambitious ideas. 

At the inaugural MITHIC Annual Event on Nov. 17, 2025, faculty from across the Institute shared the progress and impact of the projects they’ve advanced this past year with support from the presidential initiative. 

In opening remarks, MIT President Sally Kornbluth noted the “incredible range of opportunities for faculty and students to ask new questions and arrive at better, bolder, and more nuanced answers, grounded in the wisdom of the humanities, arts, and social sciences,” that MITHIC has sparked in its first year. 

Kornbluth highlighted the Living Climate Futures Lab as an example of the kind of work MITHIC was designed to support. “The lab works with people in communities from Massachusetts to Mongolia who are grappling with the impacts of climate change on their daily lives — on health and food security, housing, and jobs,” she said. The initiative, which was the focus of a panel discussion during the event, received MITHIC’s inaugural Faculty-Driven Initiative (FDI) seed grant.

“Like all the projects that MITHIC supports, the Living Climate Futures Lab also embodies MIT’s singular brand of excellence: collaborative, hands-on, and deeply relevant to the world and the people around us,” added Kornbluth.

MIT Provost Anantha Chandrakasan welcomed the audience, noting that “MITHIC is off to a strong start, advancing work across the Institute that broadens our perspective on global challenges.

“MITHIC is about inspiring our community to think differently and work together in new ways. It is about embedding human-centered thinking throughout our research, innovation, and education,” added Chandrakasan, who serves as co-chair of MITHIC.

Keynote speaker Rick Locke, the John C. Head III Dean of the MIT Sloan School of Management, spoke to the “Human Side of Enterprise,” zeroing in on the challenges and opportunities that will determine the future of management education — and how MIT Sloan can position itself at the forefront. In practice, that means the work of MIT Sloan and MITHIC can shape how new technologies like artificial intelligence will reconfigure industries and careers. 

Of equal importance, Locke said, will be how new enterprises are created and run, how people work and live, how business practices become more sustainable, and how national economies develop and adapt.

“MIT has a history of charting and paving pathways to an exciting and productive future of work that not only includes humans, but makes the most of our humanity. Together we can invent this future,” said Locke, who earned his doctorate in MIT’s Department of Political Science and later served as head of the department.

After his address, Locke joined Agustín Rayo, the Kenan Sahin Dean of the School of Humanities, Arts, and Social Sciences and co-chair of MITHIC, for a fireside chat.

Bringing the classics back to life

In a session exploring innovations in MIT education, Kieran Setiya, the Peter de Florez Professor of Philosophy, detailed what he and his colleagues are calling a “Great Books” initiative. 

As part of a three-year pilot, faculty in the Department of Linguistics and Philosophy have developed a two-semester sequence that focuses on books that reward repeated reading. The courses are loosely integrated and offered as electives, filling what Setiya calls an “urgent need for students to grapple with expansive questions about human nature, human knowledge, ethics, society, and politics” at a time of rapid social and technological change.

As students explore the work of authors like Plato and Aristotle, Homer and Virgil, Virginia Woolf, W.E.B. Du Bois, and Simone de Beauvoir, they develop a deeper understanding of history, culture, and social change. These attributes, Setiya says, “will make students better people and better citizens. We’re not just preparing MIT students to land high-paying jobs, but to solve human problems and to make the world a better place.”

AI and its impact

During a session on the use of AI, Esther Duflo, the Abdul Latif Jameel Professor of Poverty Alleviation and Development Economics, shared research she is working on in India with co-project lead Marzyeh Ghassemi, associate professor and the Germeshausen Career Development Professor in the Department of Electrical Engineering and Computer Science (EECS). 

Duflo explained that the team is using AI to identify undiagnosed “silent” heart attacks, aiming to improve diagnosis and treatment of heart disease, the country’s No. 1 cause of death. The research team harnessed a cheap diagnostic tool — a handheld electrocardiogram (ECG) device — to collect data on 6,000 patients who visited local health camps, with the goal of predicting their risk of a heart attack. 

They then paired the initial data with follow-up data from a cardiac ultrasound, which could confirm whether a patient had experienced a silent heart attack. The researchers used this paired data and their own novel algorithm to train the ECG devices to more accurately assess a patient’s risk. The results are encouraging: 

“What is remarkable compared to existing tests is that it catches young people who are less likely to have had a silent heart attack, but still have a high risk. Right now, those young people are completely excluded from the current screening, because it’s basically based only on age,” Duflo said.
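
The setup can be pictured as standard supervised learning on paired data. The following is a minimal Python sketch, not the team’s actual algorithm: random numbers stand in for ECG features, and the ultrasound-confirmed outcome serves as the label a classifier learns to predict.

    # Minimal sketch of paired-data supervision (illustrative only; the
    # study's features, labels, and novel algorithm are not shown here).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(6000, 32))       # stand-in handheld-ECG features
    y = rng.integers(0, 2, size=6000)     # ultrasound-confirmed outcome

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    risk = clf.predict_proba(X_te)[:, 1]  # per-patient risk score
    print("example risk scores:", risk[:3].round(2))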

Reconstructing the music of the past

The day also featured a musical demonstration using three different replicas of an ancient Paracas whistle that a team from MIT recreated in collaboration with the Museum of Fine Arts, Boston (MFA).

It was a practical example of how Mark Rau, an assistant professor in music and theater arts with a shared appointment in EECS, and Benjamin Sabatini, a senior postdoc in the Department of Materials Science and Engineering, are using CT scan technology to create models of ancient instruments, measure their vibrations and acoustic parameters, and produce functional reproductions. 

The team offered a step-by-step overview of the process they’ve used to assess the instruments and create the 3D-printed plaster molds, working alongside Jared Katz, the Pappalardo Curator of Musical Instruments at the MFA, resulting in a playable replica of an instrument used centuries ago. 

“What we’re really excited about is getting these kinds of replicas in the hands of students and musicians, and having experimental engagements. We’re also really excited about the printed replicas that allow the collection to be activated in new ways,” Katz explained.

The event featured Q&A opportunities throughout the day, as well as a closing reception. MITHIC’s second call for proposals this fall yielded nearly 80 submissions, which are under review for funding in 2026. 

A new call for proposals for the SHASS+ Connectivity Fund will be held in spring 2026. SHASS+ supports projects led by a SHASS scholar and a collaborator from another part of the Institute. Another call for proposals for the next FDI seed grant will also take place in spring 2026. 

Fewer layovers, better-connected airports, more firm growth

Wed, 01/07/2026 - 5:00am

Waiting in an airport for a connecting flight is often tedious. A new study by MIT researchers shows it’s bad for business, too.

Looking at air travel and multinational firm formation over a 30-year period, the researchers measured how much a strong network of airline connections matters for economic growth. They found that multinational firms are more likely to locate their subsidiaries in cities they can reach with direct flights, and that this trend is particularly pronounced in knowledge industries. The degree to which a city is embedded within a larger network of high-use flights matters notably for business expansion too.

The team examined 142 countries over the period from 1993 through 2023 and concluded that pairs of cities reachable only by flights with one stopover had 20 percent fewer multinational firm subsidiaries than cities with direct flights. If two changes of planes were needed to connect cities, they had 34 percent fewer subsidiaries. That equates to 1.8 percent and 3.0 percent fewer new firms per year, respectively.

“What we found is how much it matters for a city to be embedded within the global air transportation network,” says Ambra Amico, an MIT researcher and co-author of a new paper detailing the study’s results. “And we also highlight the importance of this for knowledge-intensive business sectors.”

Siqi Zheng, an MIT professor and co-author of the paper, adds: “We found a very strong empirical result about the relationship of parent and subsidiary firms, and how much connectivity matters. The important role that connectivity plays to facilitate face-to-face interactions, build trust, and reduce information asymmetry between such firms is crucial.”

The paper, “Air Connectivity Boosts Urban Attractiveness for Global Firms,” is published today in Nature Cities.

The co-authors are Amico, a postdoc at the MIT-Singapore Alliance for Research and Technology (SMART); Fabio Duarte, associate director of MIT’s Senseable City Lab; Wen-Chi Liao, a visiting associate professor at the MIT Center for Real Estate (CRE) and an associate professor at NUS Business School at the National University of Singapore; and Zheng, the STL Champion Professor of Urban and Real Estate Sustainability at CRE and MIT’s Department of Urban Studies and Planning.

The study analyzes 7.5 million firms in 800 cities with airports, linked by more than 400,000 international flight routes. The research focused only on multinational firms, and thus international flights, excluding domestic flights in large countries.

To conduct the analysis and build their new database, the researchers used flight data from the International Civil Aviation Organization as well as firm data from the Orbis database, run by Moody’s, which has company data for over 469 million firms globally. That includes ownership data, allowing the researchers to track relationships between companies. The study included firms located within 37 miles (60 kilometers) of an airport, and accounted for additional factors influencing new-firm location, including city size.

By analyzing industry types, the researchers observed that air connectivity matters relatively more in knowledge industries, such as finance, where face-to-face activity seems to matter more. A knowledge-industry firm whose auditors periodically show up on site to conduct work, for example, can lower costs by being more reachable.

“We were fascinated by the heterogeneity across industries,” Liao says. “The results are intuitive, but it surprised us that the pattern is so consistent. If the nature of the industry requires face-to-face interaction, air connectivity matters more.” By contrast, for manufacturing, he notes, road infrastructure and ocean shipping will matter relatively more.

To be sure, there are multiple ways to define how connected a city is within the global air transportation network, and the study examines how specific measures relate to firm growth. One measure is what the paper calls “degree centrality,” or how many other places a city is connected to by direct flights. Over a 10-year period, a 10 percent increase in a city’s degree centrality leads to a 4.3 percent increase in the number of subsidiaries located there.

However, another kind of connectedness is even more strongly associated with subsidiary growth. It’s not just how many cities one place is linked to, but in turn, how many direct connections those linked cities themselves have. This turns out to be the strongest predictor of subsidiary growth.

“What matters is not just how many neighbor [directly linked] cities you have,” Duarte says. “It’s important to choose strategically which ones you’re connected to, as well. If you tell me who you are connected to, I tell you how successful your city will be.”
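
Both measures are standard network statistics. As a rough illustration (not the paper’s code or data), they can be computed on a toy route graph with Python’s networkx library:

    import networkx as nx

    # Toy air-route graph (illustrative only): nodes are cities and
    # edges are direct international flights.
    G = nx.Graph([
        ("Boston", "London"), ("Boston", "Tokyo"),
        ("London", "Singapore"), ("London", "Dubai"),
        ("Tokyo", "Singapore"), ("Dubai", "Singapore"),
    ])

    # Degree centrality: the share of other cities reachable nonstop.
    degree = nx.degree_centrality(G)

    # Average neighbor degree: how well-connected a city's direct
    # connections themselves are, the stronger predictor in the study.
    neighbor_deg = nx.average_neighbor_degree(G)

    for city in sorted(G):
        print(f"{city}: degree={degree[city]:.2f}, "
              f"avg neighbor degree={neighbor_deg[city]:.2f}")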

Intriguingly, the relationship between direct flights and multinational firm growth patterns has held up throughout the 30-year study period, despite the rise of teleconferencing, the Covid-19 pandemic, shifts in global growth, and other factors.

“There is consistency across a 30-year period, which is not something to underestimate,” Amico says. “We needed face-to-face interaction 30 years ago, 20 years ago, and 10 years ago, and we need it now, despite all the big changes we have seen.”

Indeed, Zheng adds, “Ironically, I think even with trade and geopolitical frictions, it’s more and more important to have face-to-face interactions to build trust for global trade and business. You still need to reach an actual place and see your business partners, so air connectivity really influences how global business copes with global uncertainties.”

The research was supported by the National Research Foundation of Singapore within the Office of the Prime Minister of Singapore, under its Campus for Research Excellence and Technological Enterprise program, and the MIT Asia Real Estate Initiative. 

3 Questions: Why meritocracy is hard to achieve

Tue, 01/06/2026 - 5:15pm

Can an organization ever be truly meritocratic? That’s a question Emilio J. Castilla, the NTU Professor of Management at the MIT Sloan School of Management, explores in his new book, “The Meritocracy Paradox: Where Talent Management Strategies Go Wrong and How to Fix Them” (Columbia University Press, 2025). Castilla, who is co-director of MIT’s Institute for Work and Employment Research (IWER), researches how talent is managed inside organizations and why — even with the best intentions — workplace practices often fail to deliver fairness and effectiveness.

Castilla’s book brings together decades of research to explain why organizations struggle to achieve meritocracy in practice — and what leaders can do to build fairer, more effective, and higher-performing workplaces. In the following Q&A, he unpacks how bias can unintentionally seep into hiring, evaluation, promotion, and reward systems, and offers concrete strategies to counteract these dynamics and design processes that recognize and support merit.

Q: One central argument of your book is that true meritocracy is not easy for organizations to achieve in practice. Why is that? 

A: A large body of research has found that bias and unfairness can creep into the workplace, affecting talent management processes such as who gets interviewed for jobs, who gets hired, what kind of performance evaluations employees receive, and how employees are rewarded. So it’s not easy for an organization to be truly meritocratic.

In fact, research I conducted with Stephen Benard found that, ironically, emphasizing that an organization is a meritocracy may lead decision-makers to behave in more biased ways. Specifically, in our study, we found that when participants were told they were making decisions for an organization that emphasized meritocracy, they were more likely to recommend higher bonuses for male employees than for their equally-performing female peers, compared to when meritocracy wasn’t emphasized. We called this phenomenon the “paradox of meritocracy,” and it may stem from managers paying less attention to monitoring their own biases when they are assured the organization is fair.

A study I conducted with Aruna Ranganathan PhD ’14 further showed that managers’ understanding of what constitutes “merit” varies widely even within the same organization. There is no universally agreed-upon definition, and our research found that managers often apply the concept of merit in ways that reflect their own experiences as employees. This variability can lead to inconsistent, and sometimes inequitable, outcomes.

Q: What are some of the things organizations can do to make their talent management practices more meritocratic?

A: The encouraging news is that making your organization’s talent management processes fairer and more meritocratic doesn’t have to be complex or expensive. It does, however, require buy-in from top management. The key factors, my research in organizations has shown, are organizational transparency and accountability.

To improve organizational transparency, you need to be very explicit and open about the criteria and procedures you use in talent management processes such as hiring, evaluation, promotion, and reward decisions. That’s because research has shown that having clear and specific merit-based criteria and well-defined processes can help reduce biases.

On the accountability side, you need to have at least one person responsible for monitoring the organization’s talent management processes and outcomes to ensure fairness and effectiveness. In practice, companies often give this responsibility to a group from different parts of the organization. Research has shown that knowing that your decisions will be reviewed by others causes managers to think carefully about their decisions — something that can reduce the impact of unconscious biases in the workplace.

Q: How realistic is it to think that organizations can ever be true meritocracies, and why do you nonetheless believe meritocracy is worth striving for?

A: It’s true that organizations are unlikely to ever be perfectly meritocratic. Still, striving for meritocracy and fairness in your talent management strategies is beneficial, and you should be aware of the pitfalls. Employers that hire, reward, and advance the most talented and hard-working employees, regardless of their demographic background, are likely to benefit in the long run. That’s the promise and enduring appeal of meritocracy.

Many in the United States may not realize that one of the world’s earliest formal meritocracies emerged in China during the Qin and Han dynasties more than 2,000 years ago. As early as 200 B.C.E., the Chinese empire began developing a system of civil service exams in order to identify and appoint competent and talented officials to help administer government operations throughout the empire.

Those Chinese emperors were on to something. Once an organization reaches a certain size, leaders won’t achieve the most effective performance if they make talent management decisions based on non-meritocratic factors such as nepotism, aristocracy/social class, corruption, or friendship. When it comes to choosing a guiding principle for people management decisions within an organization, meritocracy beats a lot of the alternatives.

Positioning Massachusetts as a hub for climate tech and economic development

Tue, 01/06/2026 - 4:55pm

Massachusetts is uniquely positioned to become a leader in climate tech, said Emily Reichert MBA ’12, the CEO of the Massachusetts Clean Energy Center (MassCEC) and former CEO of Greentown Labs, to members of the MIT community at a seminar in November. 

Reichert outlined the interconnectedness of economic development and clean energy innovation in MassCEC’s efforts to advance the energy transition and address climate change, as part of the MITEI Presents: Advancing the Energy Transition speaker series. An MIT Sloan School of Management alumna, Reichert stepped down as the agency’s CEO in late November; the MITEI talk was her final presentation in that role.

“There’s not another [agency] in the country exactly like us focused on innovation and economic development for clean energy and climate tech,” stated Reichert. Created in 2008, MassCEC is the state’s economic development agency dedicated to the growth of the clean energy and climate tech sector. Reichert stressed that economic development is just as much about businesses as it is about the jobs they create.

The organization’s economic development plan is built on its knowledge of the commonwealth’s infrastructure, talent capabilities, academic resources, startup culture, and regional strengths. Reichert explained that there are four areas at the core of MassCEC’s work.

First, bringing emerging climate-tech ideas out of the laboratory and into the world. To do this, MassCEC provides grants and internships, and runs a small investment fund that co-invests with other investors in the area. “We are increasingly focusing on the longer-term growth trajectory of these young companies,” said Reichert, adding that the hope is for these startups to stay, grow, and create jobs in Massachusetts.

Second, MassCEC aims to accelerate decarbonization by taking commercialized technologies and helping to get them into as many homes and businesses as possible. This can often require specialized knowledge of Massachusetts’ infrastructure, given that the state has relatively older buildings and unique structures, such as triple-deckers. One example is finding a way to make charging technology available to electric vehicle owners when they don’t have a single-family home with a garage.

MassCEC is also focused on enabling the large-scale deployment of offshore wind. “It’ll be 400,000 homes that are powered by the clean energy that’s being generated by offshore wind right off the coast of Martha’s Vineyard. MassCEC’s role is to support the port infrastructure from which we marshal those offshore wind projects,” stated Reichert. “We also support innovation that is needed to do all the things that support the offshore wind industry, in general.”

Finally, Reichert reiterated that MassCEC’s overarching goal is to support clean energy workforce development through job creation, as well as professional development opportunities such as providing internships, training for high school and community college students, and supporting students returning to school for a second career in clean energy.

Reichert emphasized that Massachusetts is particularly well-equipped to house this level of climate-tech innovation since the state is already a leader in the life sciences. The Healey-Driscoll administration charged MassCEC with spearheading the state’s Climatetech Economic Development Strategy and Implementation Plan, a 10-year strategy to position Massachusetts as a global climate tech leader and drive a more equitable and resilient climate future.

To complement this plan and further position the state as an epicenter for energy innovation, the Healey-Driscoll administration also signed the Mass Leads Act, which established the Climatetech Tax Incentive Program, an annual tax incentive to be administered by MassCEC. “This is the money piece,” said Reichert. “How we do it. How we implement it.”

To unlock Massachusetts’ full potential, MassCEC uses a regional approach to take advantage of the strengths held in each area of the state. “We have a fantastic ecosystem. We have more startups per capita than any other state,” said Reichert. The quantity of startups is in large part due to the strengths of the Greater Boston region, with its strong venture capital community and good research institutions, said Reichert, who also highlighted MIT as a key factor. MIT spinout companies like Sublime Systems, Commonwealth Fusion Systems, Boston Metal, and The Engine are all part of MassCEC’s ecosystem.

For the agency, retaining talent in Massachusetts is just as important as supporting its development. “How can we help companies to do their processes, find their facilities, build their facilities, do their demonstrations, do their testing, and find the talent?” asked Reichert. “How can we reduce the time and money barriers to all of that, and therefore make it as easy as possible and as inexpensive as possible for the company to stay here and grow here?”

Reichert expressed her confidence in climate-tech innovation’s ability to endure the changing energy landscape. “The rest of the world is going in this direction. We can decide not to compete as a country, or we can decide that we want to compete and that we want to be part of the future,” said Reichert. “Innovation isn’t going anywhere. I think when you have places like MIT, who are very focused on climate innovation and the energy transition, that activity helps move the ball forward.”

This speaker series highlights energy experts and leaders at the forefront of the scientific, technological, and policy solutions needed to transform our energy systems. Visit MITEI’s Events page for more information on this and additional events.

AI-generated sensors open new paths for early cancer detection

Tue, 01/06/2026 - 5:00am

Detecting cancer in the earliest stages could dramatically reduce cancer deaths because cancers are usually easier to treat when caught early. To help achieve that goal, MIT and Microsoft researchers are using artificial intelligence to design molecular sensors for early detection.

The researchers developed an AI model to design peptides (short proteins) that are targeted by enzymes called proteases, which are overactive in cancer cells. Nanoparticles coated with these peptides can act as sensors that give off a signal if cancer-linked proteases are present anywhere in the body.

Depending on which proteases are detected, doctors would be able to diagnose the particular type of cancer that is present. These signals could be detected using a simple urine test that could even be done at home.

“We’re focused on ultra-sensitive detection in diseases like the early stages of cancer, when the tumor burden is small, or early on in recurrence after surgery,” says Sangeeta Bhatia, the John and Dorothy Wilson Professor of Health Sciences and Technology and of Electrical Engineering and Computer Science at MIT, and a member of MIT’s Koch Institute for Integrative Cancer Research and the Institute for Medical Engineering and Science (IMES).

Bhatia and Ava Amini ’16, a principal researcher at Microsoft Research and a former graduate student in Bhatia’s lab, are the senior authors of the study, which appears today in Nature Communications. Carmen Martin-Alonso PhD ’23, a founding scientist at Amplifyer Bio, and Sarah Alamdari, a senior applied scientist at Microsoft Research, are the paper’s lead authors.

Amplifying cancer signals

More than a decade ago, Bhatia’s lab came up with the idea of using protease activity as a marker of early cancer. The human genome encodes about 600 proteases, which are enzymes that can cut through other proteins, including structural proteins such as collagen. They are often overactive in cancer cells, as they help the cells escape their original locations by cutting through proteins of the extracellular matrix, which normally holds cells in place.

The researchers’ idea was to coat nanoparticles with peptides that can be cleaved by a specific protease. These particles could then be ingested or inhaled. As they traveled through the body, if they encountered any cancer-linked proteases, the peptides on the particles would be cleaved.

Those cleaved peptides would then be excreted in the urine, where they could be detected using a paper strip similar to a pregnancy test strip. Measuring those signals would reveal the overactivity of proteases deep within the body.

“We have been advancing the idea that if you can make a sensor out of these proteases and multiplex them, then you could find signatures of where these proteases were active in diseases. And since the peptide cleavage is an enzymatic process, it can really amplify a signal,” Bhatia says.

The researchers have used this approach to demonstrate diagnostic sensors for lung, ovarian, and colon cancers.

However, in those studies, the researchers used a trial-and-error process to identify peptides that would be cleaved by certain proteases. In most cases, the peptides they identified could be cleaved by more than one protease, which meant that the signals that were read could not be attributed to a specific enzyme.

Nonetheless, using “multiplexed” arrays of many different peptides yielded distinctive sensor signatures that were diagnostic in animal models of many different types of cancer, even if the precise identity of the proteases responsible for the cleavage remained unknown.

In their new study, the researchers moved beyond the traditional trial-and-error process by developing a novel AI system, named CleaveNet, to design peptide sequences that could be cleaved efficiently and specifically by target proteases of interest.

Users can prompt CleaveNet with design criteria, and CleaveNet will generate candidate peptides likely to fit those criteria. In this way, CleaveNet enables users to tune the efficiency and specificity of peptides generated by the model, opening a path to improving the sensors’ diagnostic power.

“If we know that a particular protease is really key to a certain cancer, and we can optimize the sensor to be highly sensitive and specific to that protease, then that gives us a great diagnostic signal,” Amini says. “We can leverage the power of computation to try to specifically optimize for these efficiency and selectivity metrics.”

For a peptide that contains 10 amino acids, there are about 10 trillion possible sequences. Using AI to search that immense space allows useful sequences to be predicted, tested, and identified much faster than humans could find them, while also considerably reducing experimental costs.
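
That figure follows from simple combinatorics: with the 20 standard amino acids possible at each of 10 positions, the space contains 20^10 sequences, as the quick check below confirms.

    # 20 standard amino acids at each of 10 peptide positions
    n_sequences = 20 ** 10
    print(f"{n_sequences:,}")  # 10,240,000,000,000, about 10 trillion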

Predicting enzyme activity

To create CleaveNet, the researchers developed a protein language model to predict the amino acid sequences of peptides, analogous to how large language models can predict sequences of text. For the training data, they used publicly available data on about 20,000 peptides and their interactions with different proteases from a family known as matrix metalloproteinases (MMPs).

Using these data, the researchers trained one model to generate peptide sequences that are predicted to be cleaved by proteases. These sequences could then be fed into another model that predicted how efficiently each peptide would be cleaved by any protease of interest.

To demonstrate this approach, the researchers focused on a protease called MMP13, which cancer cells use to cut through collagen and help them metastasize from their original locations. Prompting CleaveNet with MMP13 as a target allowed the models to design peptides that could be cut by MMP13 with considerable selectivity and efficiency. This cleavage profile is particularly useful for diagnostic and therapeutic applications.
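
The generate-then-score loop can be pictured in a few lines of code. The sketch below is an illustrative mock-up, not CleaveNet itself: the random sampler and scorer stand in for its trained generative and cleavage-prediction models, and the off-target proteases (MMP2 and MMP9, two other members of the MMP family) are assumed examples.

    import random

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

    def sample_peptide(length=10):
        """Stand-in for the generative model: propose a candidate."""
        return "".join(random.choices(AMINO_ACIDS, k=length))

    def predicted_efficiency(peptide, protease):
        """Stand-in for the cleavage-prediction model: score how
        efficiently `protease` is predicted to cleave `peptide`."""
        return random.random()

    def design_selective_peptides(target="MMP13",
                                  off_targets=("MMP2", "MMP9"),
                                  n_candidates=1000, top_k=5):
        """Generate candidates, then keep those scored as efficiently
        cleaved by the target but not by off-target proteases."""
        scored = []
        for _ in range(n_candidates):
            pep = sample_peptide()
            on = predicted_efficiency(pep, target)
            off = max(predicted_efficiency(pep, p) for p in off_targets)
            scored.append((on - off, pep))  # selectivity margin
        scored.sort(reverse=True)
        return scored[:top_k]

    for margin, pep in design_selective_peptides():
        print(f"{pep}: selectivity margin {margin:.2f}")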

“When we set the model up to generate sequences that would be efficient and selective for MMP13, it actually came up with peptides that had never been observed in training, and yet these novel sequences did turn out to be both efficient and selective,” Martin-Alonso says. “That was very exciting to see.”

This kind of selectivity could help to reduce the number of different peptides needed to diagnose a given type of cancer, to identify novel biomarkers, and to provide insight into specific biological pathways for study and therapeutic testing, the researchers say.

Bhatia’s lab is currently part of an ARPA-H-funded project to create reporters for an at-home diagnostic kit that could potentially detect and distinguish between 30 different types of cancer, in early stages of disease, based on measurements of protease activity. These sensors could include detection of not only MMP-mediated cleavage, but also cleavage by other enzymes such as serine proteases and cysteine proteases. 

Peptides designed using CleaveNet could also be incorporated into cancer therapeutics such as antibody treatments. Using a specific peptide to attach a therapeutic such as a cytokine or small molecule drug to a targeting antibody could enable the medicine to be released only when the peptides are exposed to proteases in the tumor environment, improving efficacy and reducing side effects.

Beyond direct applications in diagnostics and therapeutics, combining efforts from the ARPA-H work with this modeling framework could enable the creation of a comprehensive “protease activity atlas” that spans multiple protease classes and cancers. Such a resource could further accelerate research in early cancer detection, protease biology, and AI models for peptide design.

The research was funded by La Caixa Foundation, the Ludwig Center at MIT, and the Marble Center for Cancer Nanomedicine.

Sean Luk: Addressing the urgent need for better immunotherapy

Tue, 01/06/2026 - 12:00am

In elementary school, Sean Luk loved donning an oversized lab coat and helping her mom pipette chemicals at Johns Hopkins University. A few years later, she started a science blog and became fascinated by immunoengineering, which is now her concentration as a biological engineering major at MIT.

Her grandparents’ battles with cancer made Luk, now a senior, realize how urgently patients need advancements in immunotherapy, which leverages a patient’s immune system to fight tumors or pathogens.

“The idea of creating something that is actually able to improve human health is what really drives me now. You want to fight that sense of helplessness when you see a loved one suffering through this disease, and it just further motivates me to be excellent at what I do,” Luk says.

A varsity athlete and entrepreneur as well as a researcher, Luk thrives when bringing people together for a common cause.

Working with immunotherapies

Luk was introduced to immunotherapies in high school after she listened to a seminar about using components of the immune system, such as antibodies and cytokines, to improve graft tolerance.

“The complexity of the immune system really fascinated me, and it is incredible that we can build antibodies in a very logical way to address disease,” Luk says.

She worked in several Johns Hopkins labs as a high school student in Maryland, and a professor there connected her to MIT Professor Dane Wittrup. Luk has worked in the Wittrup lab throughout her time at MIT. One of her main projects involves developing ultra-stable cyclic peptide drugs to help treat autoimmune diseases, which could potentially be taken orally instead of injected.

Luk has been a co-author on two published articles and has become increasingly interested in the intersection between computational and experimental protein design. Currently, she is working on engineering an interferon gamma construct that preferentially targets myeloid cells in the tumor microenvironment.

“We're trying to target and reprogram the immunosuppressive myeloid cells surrounding the cancer cells, so that they can license T cells to attack cancer cells and kickstart the cancer immunity cycle,” she explains.

Communication for all

Through her work in high school with Best Buddies, an organization that aims to promote one-on-one friendships between students with and without intellectual and developmental disabilities, Luk became passionate about empowering people with special needs. At MIT, she started a project focusing on children with Down syndrome, with support from the Sandbox Innovation Fund.

“Through talking to a lot of parents and caretakers, the biggest issue that people with Down syndrome face is communication. And when you think about it, communication is crucial to everything that we do,” Luk says. “We want to communicate our thoughts. We want to be able to interact with our peers. And if people are unable to do that, it’s isolating, it’s frustrating.”

Her solution was to co-found EasyComm, an online game platform that helps children with Down syndrome work on verbal communication.

“We thought it would be a great way to improve their verbal communication skills while having fun and incentivize that kind of learning through gamification,” Luk says. She and her co-founder recently filed a provisional patent and plan to make the platform available to a wider audience.

A global perspective

Luk grew up in Hong Kong before moving to Maryland in the fifth grade. She’s always been athletic; in Hong Kong, she was a competitive jump roper. At just 9 years old, she won bronze in the Asian Jump Rope Championships among children 14 years old and younger. At 7 years old, she started playing soccer on her brother’s team, despite being the only girl. She says the sport was considered “manly” in Hong Kong, and girls were discouraged from joining, but her coaches and family were supportive.

Moving to the U.S. meant that her time in competitive jump roping was cut short, and Luk focused more on soccer. Her U.S. team felt far more intense than boys’ soccer in Hong Kong, but the Luk family was in it together, Luk says. She credits her success to the combination of the hard-working nature she learned in Hong Kong and the innovation and experiences she was exposed to in the U.S.

“We had a really close bond within the family,” Luk says. “Figuring out taxes for my dad and our family, like driving and houses and all that stuff, it was totally new. But I think we really took it in stride, just adjusting as we went.”

Luk continued soccer throughout high school and eventually committed to play on the MIT team. She likes that the team allows players to prioritize academics while still being competitive. Last season, she was elected captain.

“It’s really a pleasure to be captain, and it’s challenging, but it’s also very rewarding when you see the team be cohesive. When you see the team out there winning games through grit,” Luk says.

During her first year at MIT, Luk got back in touch with her old soccer coach from Hong Kong, who then worked on the national team. After sending over some tape, she was offered a spot on the U-20 national team, and played in the U-20 Asian Football Championship qualifiers.

“It was so, so cool to be able to represent Hong Kong because I played soccer all my life but it just carries a different weight to it when you’re wearing your country’s jersey,” Luk says.

Besides her cross-cultural background, Luk is also proud of her international experiences playing soccer, staying with host families and doing lab work in Copenhagen, Denmark; Stuttgart, Germany; and Ancona, Italy. She speaks English, Cantonese, and Mandarin fluently.

“Aside from the textbook academic knowledge, I feel like a global perspective is so important when you’re trying to collaborate with other people from different walks of life,” Luk says. “When you’re just thinking about science or the impact that you can have in general, it’s important to realize you don’t have all the answers and to learn from the world outside your little bubble.”

MIT scientists investigate memorization risk in the age of clinical AI

Mon, 01/05/2026 - 4:55pm

What is patient privacy for? The Hippocratic Oath, thought to be one of the earliest and most widely known medical ethics texts in the world, reads: “Whatever I see or hear in the lives of my patients, whether in connection with my professional practice or not, which ought not to be spoken of outside, I will keep secret, as considering all such things to be private.” 

As privacy becomes increasingly scarce in the age of data-hungry algorithms and cyberattacks, medicine is one of the few remaining domains where confidentiality remains central to practice, enabling patients to trust their physicians with sensitive information.

But a paper co-authored by MIT researchers investigates how artificial intelligence models trained on de-identified electronic health records (EHRs) can memorize patient-specific information. The work, which was recently presented at the 2025 Conference on Neural Information Processing Systems (NeurIPS), recommends a rigorous testing setup to ensure targeted prompts cannot reveal information, emphasizing that leakage must be evaluated in a health care context to determine whether it meaningfully compromises patient privacy.

Foundation models trained on EHRs should normally generalize knowledge to make better predictions, drawing upon many patient records. But in “memorization,” the model draws upon a single patient’s record to deliver its output, potentially violating patient privacy. Notably, foundation models are already known to be prone to data leakage.

“Knowledge in these high-capacity models can be a resource for many communities, but adversarial attackers can prompt a model to extract information on training data,” says Sana Tonekaboni, a postdoc at the Eric and Wendy Schmidt Center at the Broad Institute of MIT and Harvard and first author of the paper. Given the risk that foundation models could also memorize private data, she notes, “this work is a step towards ensuring there are practical evaluation steps our community can take before releasing models.”

To conduct research on the potential risk EHR foundation models could pose in medicine, Tonekaboni approached MIT Associate Professor Marzyeh Ghassemi, who is a principal investigator at the Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic) and a member of the Computer Science and Artificial Intelligence Lab. Ghassemi, a faculty member in the MIT Department of Electrical Engineering and Computer Science and Institute for Medical Engineering and Science, runs the Healthy ML group, which focuses on robust machine learning in health.

Just how much information does a bad actor need to expose sensitive data, and what are the risks associated with the leaked information? To assess this, the research team developed a series of tests that they hope will lay the groundwork for future privacy evaluations. The tests are designed to measure different types of leakage and to gauge the practical risk to patients across increasing tiers of attacker knowledge.

“We really tried to emphasize practicality here; if an attacker has to know the date and value of a dozen laboratory tests from your record in order to extract information, there is very little risk of harm. If I already have access to that level of protected source data, why would I need to attack a large foundation model for more?” says Ghassemi. 

With the inevitable digitization of medical records, data breaches have become more commonplace. In the past 24 months, the U.S. Department of Health and Human Services has recorded 747 breaches of health information, each affecting 500 or more individuals, with the majority categorized as hacking/IT incidents.

Patients with unique conditions are especially vulnerable, given how easy it is to pick them out. “Even with de-identified data, it depends on what sort of information you leak about the individual,” Tonekaboni says. “Once you identify them, you know a lot more.”

In their structured tests, the researchers found that the more information an attacker has about a particular patient, the more likely the model is to leak information. They also demonstrated how to distinguish cases of model generalization from patient-level memorization in order to properly assess privacy risk.

The paper also emphasized that some leaks are more harmful than others. For instance, a model revealing a patient’s age or demographics is a more benign leak than one revealing sensitive information, such as an HIV diagnosis or a history of alcohol abuse.

Because patients with unique conditions are so easy to pick out, the researchers note, they may require higher levels of protection. The team plans to expand the work to be more interdisciplinary, bringing in clinicians, privacy experts, and legal experts.

“There’s a reason our health data is private,” Tonekaboni says. “There’s no reason for others to know about it.”

This work was supported by the Eric and Wendy Schmidt Center at the Broad Institute of MIT and Harvard; the Wallenberg AI, Autonomous Systems and Software Program funded by the Knut and Alice Wallenberg Foundation; the U.S. National Science Foundation (NSF); a Gordon and Betty Moore Foundation award; a Google Research Scholar award; and the AI2050 Program at Schmidt Sciences. Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute.

New research may help scientists predict when a humid heat wave will break

Mon, 01/05/2026 - 12:00am

A long stretch of humid heat followed by intense thunderstorms is a weather pattern historically seen mostly in and around the tropics. But climate change is making humid heat waves and extreme storms more common in traditionally temperate midlatitude regions such as the midwestern U.S., which has seen episodes of unusually high heat and humidity in recent summers.

Now, MIT scientists have identified a key condition in the atmosphere that determines how hot and humid a midlatitude region can get, and how intense related storms can become. The results may help climate scientists gauge a region’s risk for humid heat waves and extreme storms as the world continues to warm.

In a study appearing this week in the journal Science Advances, the MIT team reports that a region’s maximum humid heat and storm intensity are limited by the strength of an “atmospheric inversion” — a weather condition in which a layer of warm air settles over cooler air.

Inversions are known to act as an atmospheric blanket that traps pollutants at ground level. Now, the MIT researchers have found that atmospheric inversions also trap and build up heat and moisture at the surface, particularly in midlatitude regions. The more persistent an inversion, the more heat and humidity a region can accumulate at the surface, which can lead to more oppressive, longer-lasting humid heat waves.

And, when an inversion eventually weakens, the accumulated heat energy is released as convection, which can whip up the hot and humid air into intense thunderstorms and heavy rainfall.

The team says this effect is especially relevant for midlatitude regions, where atmospheric inversions are common. In the U.S., regions to the east of the Rocky Mountains often experience inversions of this kind, with relatively warm air aloft sitting over cooler air near the surface.

As climate change further warms the atmosphere in general, the team suspects that inversions may become more persistent and harder to break. This could mean more frequent humid heat waves and more intense storms for places that are not accustomed to such extreme weather.

“Our analysis shows that the eastern and midwestern regions of the U.S. and the eastern Asian regions may be new hotspots for humid heat in the future climate,” says study author Funing Li, a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS).

“As the climate warms, theoretically the atmosphere will be able to hold more moisture,” adds co-author and EAPS Assistant Professor Talia Tamarin-Brodsky. “Which is why new regions in the midlatitudes could experience moist heat waves that will cause stress that they weren’t used to before.”

Air energetics

The atmosphere’s layers generally get colder with altitude. In these typical conditions, when a heat wave comes through a region, it warms the air at ground level. Since warm air is lighter than cold air, it will eventually rise, like a hot air balloon, prompting colder air to sink. This rise and fall of air sets off convection, like bubbles in boiling water. When the warm air reaches colder altitudes, its moisture condenses into droplets that rain out, typically as a thunderstorm, which can often relieve a heat wave.

For their new study, Li and Tamarin-Brodsky wondered: What would it take to get air at the surface to convect and ultimately end a heat wave? Put another way: What sets the limit to how hot a region can get before air begins to convect to eventually rain?

The team treated the question as a problem of energy. Heat is energy that can be thought of in two forms: the energy that comes from dry heat (i.e., temperature), and the energy that comes from latent, or moist, heat. The scientists reasoned that, for a given portion or “parcel” of air, there is some amount of moisture that, when condensed, contributes to that air parcel’s total energy. Depending on how much energy an air parcel has, it could start to convect, rise up, and eventually rain out.

“Imagine putting a balloon around a parcel of air and asking, will it stay in the same place, will it go up, or will it sink?” Tamarin-Brodsky says. “It’s not just about warm air that’s lifting. You also have to think about the moisture that’s there. So we consider the energetics of an air parcel while taking into account the moisture in that air. Then we can find the maximum ‘moist energy’ that can accumulate near the surface before the air becomes unstable and convects.”
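
One standard way to formalize the “moist energy” of a parcel, used here only as an illustration since the article does not spell out the study’s exact expression, is the moist static energy, which combines the dry and latent contributions the researchers describe:

```latex
% Moist static energy of an air parcel (a standard textbook quantity;
% the article does not give the study's exact formulation):
h \;=\; \underbrace{c_p T}_{\text{dry heat}}
   \;+\; \underbrace{g z}_{\text{potential energy}}
   \;+\; \underbrace{L_v q}_{\text{latent (moist) heat}}
```

Here T is temperature, z is height, q is specific humidity, c_p is the specific heat of air, g is gravity, and L_v is the latent heat of vaporization. Roughly speaking, a surface parcel can punch through the layers above once its h exceeds the saturation moist static energy of the air aloft.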

Heat barrier

As they worked through their analysis, the researchers found that the maximum amount of moist energy, or the highest level of heat and humidity that the air can hold, is set by the presence and strength of an atmospheric inversion. In cases where atmospheric layers are inverted (when a layer of warm or light air settles over colder or heavier, ground-level air), the air has to accumulate more heat and moisture in order for an air parcel to build up enough energy to rise and break through the inversion layer. The more persistent the inversion, the hotter and more humid the air must get before it can rise and convect.

Their analysis suggests that an atmospheric inversion can increase a region’s capacity to hold heat and humidity. How high this heat and humidity can get depends on how stable the inversion is: a blanket of warm air that parks over a region without moving allows far more humid heat to build up than one that is quickly removed. When the air eventually convects, the accumulated heat and moisture generate stronger, more intense storms.
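
The capping effect can be sketched numerically with the moist static energy defined earlier. The toy calculation below uses assumed sounding values (not the study’s data or code): a surface parcel stays capped while its moist static energy is below the saturation moist static energy of the warm layer aloft, and convects once enough heat and moisture have accumulated.

```python
# A toy calculation (hypothetical sounding values, not the study's data
# or code) of the capping criterion: a surface parcel is held down until
# its moist static energy (MSE) exceeds the saturation MSE of the warm
# inversion layer aloft.

CP = 1004.0   # specific heat of dry air at constant pressure, J/(kg K)
G = 9.81      # gravitational acceleration, m/s^2
LV = 2.5e6    # latent heat of vaporization of water, J/kg

def mse(T, z, q):
    """Moist static energy: T in kelvin, z in meters, q in kg/kg."""
    return CP * T + G * z + LV * q

# Assumed warm layer at 2 km, at 294 K with saturation specific
# humidity of about 17 g/kg; its saturation MSE acts as the "cap".
h_cap = mse(T=294.0, z=2000.0, q=0.017)

# Surface parcels (z = 0) as a humid heat wave builds:
for T_sfc, q_sfc in [(303.0, 0.016), (306.0, 0.019), (309.0, 0.022)]:
    h_sfc = mse(T_sfc, 0.0, q_sfc)
    verdict = "breaks the cap (storm)" if h_sfc >= h_cap else "still capped"
    print(f"T = {T_sfc - 273.15:.0f} C, q = {q_sfc * 1e3:.0f} g/kg: {verdict}")
```

A stronger or warmer layer aloft raises h_cap, letting more heat and humidity pile up before the first storm fires, consistent with the blanket picture described above.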

“This increasing inversion has two effects: more severe humid heat waves, and less frequent but more extreme convective storms,” Tamarin-Brodsky says.

Inversions in the atmosphere form in various ways. At night, the surface that warmed during the day cools by radiating heat to space, making the air in contact with it cooler and denser than the air above. This creates a shallow layer in which temperature increases with height, called a nocturnal inversion. Inversions can also form when a shallow layer of cool marine air moves inland from the ocean and slides beneath warmer air over the land, leaving cool air near the surface and warmer air above. In some cases, persistent inversions can form when air heated over sun-warmed mountains is carried over colder low-lying regions, so that a warm layer aloft caps cooler air near the ground.

“The Great Plains and the Midwest have had many inversions historically due to the Rocky Mountains,” Li says. “The mountains act as an efficient elevated heat source, and westerly winds carry this relatively warm air downstream into the central and midwestern U.S., where it can help create a persistent temperature inversion that caps colder air near the surface.”

“In a future climate for the Midwest, they may experience both more severe thunderstorms and more extreme humid heat waves,” Tamarin-Brodsky says. “Our theory gives an understanding of the limit for humid heat and severe convection for these communities that will be future heat wave and thunderstorm hotspots.”

This research is part of the MIT Climate Grand Challenge on Weather and Climate Extremes. Support was provided by Schmidt Sciences.

One pull of a string is all it takes to deploy these complex structures

Tue, 12/23/2025 - 12:00am

MIT researchers have developed a new method for designing 3D structures that can be transformed from a flat configuration into their curved, fully formed shape with only a single pull of a string.

This technique could enable the rapid deployment of a temporary field hospital at the site of a disaster such as a devastating tsunami — a situation where quick medical action is essential to save lives.

The researchers’ approach converts a user-specified 3D structure into a flat shape composed of interconnected tiles. The algorithm uses a two-step method to find the path with minimal friction for a string that can be tightened to smoothly actuate the structure.

The actuation mechanism is easily reversible: if the string is released, the structure quickly returns to its flat configuration. This could enable complex 3D structures to be stored and transported more efficiently and at lower cost.

In addition, the designs generated by their system are agnostic to the fabrication method, so complete structures can be produced using 3D printing, CNC milling, molding, or other techniques.

This method could enable the creation of transportable medical devices, foldable robots that can flatten to enter hard-to-reach spaces, or even modular space habitats that can be actuated by robots working on the surface of Mars.

“The simplicity of the whole actuation mechanism is a real benefit of our approach. The user just needs to provide their intended design, and then our method optimizes it in such a way that it holds the shape after just one pull on the string, so the structure can be deployed very easily. I hope people will be able to use this method to create a wide variety of different, deployable structures,” says Akib Zaman, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this new method.

He is joined on the paper by MIT graduate student Jacqueline Aslarus; postdoc Jiaji Li; Associate Professor Stefanie Mueller, leader of the Human-Computer Interaction (HCI) Engineering Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author Mina Konaković Luković, an assistant professor and leader of the Algorithmic Design Group in CSAIL. The research was presented at the Association for Computing Machinery’s SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia.

From ancient art to an algorithm

Creating deployable structures from flat pieces simplifies on-site assembly and could be especially useful in constructing emergency shelters after natural disasters. On a smaller scale, items like foldable bike helmets could improve the safety of riders who would otherwise be unable to carry a bulky helmet.

But converting flat, deployable objects into their 3D shape often requires specialized equipment or multiple steps, and the actuation mechanism is typically difficult to reverse.

“Because of these challenges, deployable structures tend to be manually designed and quite simple, geometrically. But if we can create more complex geometries, while simplifying the actuation mechanism, we could enhance the capabilities of these deployables,” Zaman says.

To do this, the researchers created a method that automatically converts a user’s 3D design into a flat structure composed of tiles, connected by rotating hinges at the corners, which can be fully actuated by pulling a single string one time.

Their method breaks a user design into a grid of quadrilateral tiles inspired by kirigami, the ancient Japanese art of paper cutting. In kirigami, cutting a material in certain ways encodes it with unique mechanical properties. In this case, the researchers use kirigami to create an auxetic mechanism: a structure that gets thicker when stretched and thinner when compressed.
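
For intuition about auxetic behavior, the toy script below works through the classic “rotating squares” kirigami mechanism, a generic auxetic pattern rather than the paper’s exact tiling: as the hinged squares rotate open, the spacing grows in both directions at once, so the sheet widens as it is stretched.

```python
import math

# Generic rotating-squares kirigami (an illustration of auxetics, not
# the paper's tiling). As each hinged square rotates by theta (0 to 45
# degrees), the center-to-center spacing of adjacent squares grows as
# d = a * (cos(theta) + sin(theta)), equally in x and y, so the pattern
# expands in BOTH directions when pulled in one.

a = 1.0  # side length of each square tile (arbitrary units)

for deg in [0, 15, 30, 45]:
    theta = math.radians(deg)
    d = a * (math.cos(theta) + math.sin(theta))
    print(f"hinge angle {deg:2d} deg -> spacing {d:.3f} a, "
          f"area factor {d * d:.3f}")
```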

After encoding the 3D geometry into a flat set of auxetic tiles, the algorithm computes the minimum number of points that the tightening string must lift to fully deploy the 3D structure. Then, it finds the shortest path that connects those lift points, while including all areas of the object’s boundary that must be connected to guide the structure into its 3D configuration. It does these calculations in such a way that the optimal string path minimizes friction, enabling the structure to be smoothly actuated with just one pull.
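
The string-routing step can likewise be sketched. The code below is not the authors’ algorithm, only a plausible miniature of the friction-minimization idea: by the classical capstan relation, the tension needed to pull a string through a channel grows like exp(mu times the accumulated turning angle), so among candidate routes through the required lift points, the one with the least total turning is easiest to actuate with a single pull.

```python
import math
import itertools

# A minimal sketch (not the authors' implementation) of friction-aware
# string routing: given the "lift points" the string must pass through,
# choose a visiting order that minimizes a capstan-style friction proxy,
# exp(MU * total_turning_angle).

MU = 0.3  # assumed coefficient of friction between string and channel

def turn_angle(a, b, c):
    """Absolute change in heading at point b along the path a -> b -> c."""
    h1 = math.atan2(b[1] - a[1], b[0] - a[0])
    h2 = math.atan2(c[1] - b[1], c[0] - b[0])
    d = abs(h2 - h1)
    return min(d, 2 * math.pi - d)

def tension_ratio(path):
    """Capstan estimate of pull-in vs. pull-out tension along a path."""
    total = sum(turn_angle(path[i - 1], path[i], path[i + 1])
                for i in range(1, len(path) - 1))
    return math.exp(MU * total)

def best_route(lift_points, start):
    """Brute-force the visiting order with the lowest friction proxy
    (fine for the handful of lift points in a small design)."""
    best = None
    for perm in itertools.permutations(lift_points):
        path = [start, *perm]
        cost = tension_ratio(path)
        if best is None or cost < best[0]:
            best = (cost, path)
    return best

# Example: four hypothetical lift points on a flat tile layout.
cost, route = best_route([(1, 0), (2, 1), (0, 2), (2, 3)], start=(0, 0))
print(f"best route: {route}, required tension ratio = {cost:.2f}")
```

In the paper’s setting the route must also pass through the boundary tiles, a constraint the researchers observed and later proved necessary (see below); this sketch omits it.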

“Our method makes it easy for the user. All they have to do is input their design, and our algorithm automatically takes care of the rest. Then all the user needs to do is to fabricate the tiles exactly the way it has been computed by the algorithm,” Zaman says.

For instance, one could fabricate a structure using a multi-material 3D printer that prints the hinges of the tiles with a flexible material and the other surfaces with a hard material.

A scale-independent method

One of the biggest challenges the researchers faced was figuring out how to model the string route, and the friction within the string channel, closely enough to match physical reality.

“While playing with a few fabricated models, we observed that closing boundary tiles is a must to enable a successful deployment and the string must be routed through them. Later, we proved this observation mathematically. Then, we looked back at an age-old physics equation and used it to formulate the optimization problem for friction minimization,” he says.

They built their automatic algorithm into an interactive user interface that allows one to design and optimize configurations to generate manufacturable objects.

The researchers used their method to design several objects of different sizes, from personalized medical items including a splint and a posture corrector to an igloo-like portable structure. They also fabricated a deployable, human-scale chair they designed using their method.

This method is scale independent, so it could be used to create tiny deployable objects that are injected and actuated inside the body, or architectural structures, like the frame of a building, that are deployed and actuated on-site using cranes.

In the future, the researchers want to further explore the design of tiny structures, while also tackling the engineering challenges involved in creating architectural installations, such as determining the ideal cable thickness and the necessary strength of the hinges. In addition, they want to create a self-deploying mechanism, so the structures do not need to be actuated by a human or robot.

This research is funded, in part, by an MIT Research Support Committee Award.
