MIT Latest News

MIx helps innovators tackle challenges in national security
Startups and government defense agencies have historically seemed like polar opposites. Startups thrive on speed and risk, while defense agencies are more cautious. Over the past few years, however, things have changed. Many startups are eager to work with these organizations, which are always looking for innovative solutions to their hardest problems.
To help bridge that gap while advancing research along the way, MIT Lecturer Gene Keselman launched MIT’s Mission Innovation X (MIx) along with Sertac Karaman, a professor in the MIT Department of Aeronautics and Astronautics, and Fiona Murray, the William Porter Professor of Entrepreneurship at the MIT Sloan School of Management. MIx develops educational programming, supports research at MIT, and facilitates connections among government organizations, startups, and researchers.
“Startups know how to commercialize their tech, but they don’t necessarily know how to work with the government, and especially how to understand the needs of defense customers,” explains MIx Senior Program Manager Keenan Blatt. “There are a lot of different challenges when it comes to engaging with defense, not only from a procurement cycle and timeline perspective, but also from a culture perspective.”
MIx’s work helps innovators secure crucial early funding while giving defense agencies access to cutting-edge technologies, boosting America’s security capabilities in the process. Through the work, MIx has also become a thought leader in the emerging “dual-use” space, in which researchers and founders make strategic choices to advance technologies that have both civilian and defense applications.
Gene Keselman, the executive director of MIx as well as managing director of MIT’s venture studio Proto Ventures and a colonel in the U.S. Air Force Reserve, believes MIT is uniquely positioned to deliver on MIx’s mission.
“It’s not a coincidence MIx is happening at MIT,” says Keselman, adding that supporting national security “is part of MIT’s ethos.”
A history of service
MIx’s work has deep roots at the Institute.
“MIT has worked with the Department of Defense since at least the 1940s, but really going back to its founding years,” says Karaman, who is also the director of MIT’s Laboratory for Information and Decision Systems (LIDS), a research group with its own long history of working with the government.
“The difference today,” adds Murray, who teaches courses on building deep tech ventures and regional innovation ecosystems and is the vice chair of NATO's Innovation Fund, “is that defense departments and others looking to support the defense, security, and resilience agenda are looking to several innovation ecosystem stakeholders — universities, startup ventures, and venture capitalists — for solutions. Not only from the large prime contractors. We have learned this lesson from Ukraine, but the same ecosystem logic is at the core of our MIx offer.”
MIx was born out of the MIT Innovation Initiative in response to interest Keselman saw from researchers and defense officials in expanding MIT’s work with the defense and global security communities. About seven years ago, he hired Katie Person, who left MIT last year to become a battalion commander, to handle all that interest as a program manager with the initiative. MIx activities, like mentoring and educating founders, began shortly after, and MIx officially launched at MIT in 2021.
“It was a good example of the ways in which MIT responds to its students’ interests and external demand,” Keselman says.
One source of early interest was from startup founders who wanted to know how to work with the defense industry and commercialize technology that could have dual commercial and defense applications. That led the team to launch the Dual Use Ventures course, which helps startup founders and other innovators work with defense agencies. The course has since been offered annually during MIT’s Independent Activities Period (IAP) and tailored for NATO’s Defense Innovation Accelerator for the North Atlantic (DIANA).
Personnel from agencies including U.S. Special Operations Command were also interested in working with MIT students, which led the MIx team to develop course 15.362/6.9160 (Engineering Innovation: Global Security Systems), which is taken each spring by students across MIT and Harvard University.
“There are the government organizations that want to be more innovative and work with startups, and there are startups that want to get access to funding from government and have government as a customer,” Keselman says. “We’re kind of the middle layer, facilitating connections, educating, and partnering on research.”
MIx research activities give undergraduate and graduate student researchers opportunities to work on pressing problems in the real world, and the MIT community has responded eagerly: More than 150 students applied for MIx’s openings in this summer’s Undergraduate Research Opportunities Program.
"We’re helping push the boundaries of what’s possible and explore the frontiers of technology, but do it in a way that is publishable," says MIx Head Research Scientist A.J. Perez ’13, MEng ’14, PhD ’23. “More broadly, we want to unlock as much support for students and researchers at MIT as possible to work on problems that we know matter to defense agencies.”
Early wins
Some of MIx’s most impactful research so far has come in partnership with startups. For example, MIx helped the startup Picogrid secure a small business grant from the U.S. Air Force to build an early wildfire detection system. As part of the grant, MIT students built a computer vision model for Picogrid’s devices that can detect smoke in the sky, proving the technical feasibility of the system and describing a promising new pathway in the field of machine learning.
In another recent project with the MIT alumni-founded startup Nominal, MIT students helped improve and automate post-flight data analysis for the U.S. Air Force’s Test Pilot School.
MIx’s work connecting MIT’s innovators and the wider innovation ecosystem with defense agencies has already begun to bear fruit, and many members of MIx believe early collaborations are a sign of things to come.
“We haven’t even scratched the surface of the potential for MIx,” says Karaman. “This could be the start of something much bigger.”
LLMs factor in unrelated information when recommending medical treatments
A large language model (LLM) deployed to make treatment recommendations can be tripped up by nonclinical information in patient messages, like typos, extra white space, missing gender markers, or the use of uncertain, dramatic, and informal language, according to a study by MIT researchers.
They found that making stylistic or grammatical changes to messages increases the likelihood an LLM will recommend that a patient self-manage their reported health condition rather than come in for an appointment, even when that patient should seek medical care.
Their analysis also revealed that these nonclinical variations in text, which mimic how people really communicate, are more likely to change a model’s treatment recommendations for female patients, resulting in a higher percentage of women who were erroneously advised not to seek medical care, according to human doctors.
This work “is strong evidence that models must be audited before use in health care — which is a setting where they are already in use,” says Marzyeh Ghassemi, an associate professor in the MIT Department of Electrical Engineering and Computer Science (EECS), a member of the Institute for Medical Engineering and Science and the Laboratory for Information and Decision Systems, and senior author of the study.
These findings indicate that LLMs take nonclinical information into account for clinical decision-making in previously unknown ways, bringing to light the need for more rigorous studies of LLMs before they are deployed for high-stakes applications like making treatment recommendations, the researchers say.
“These models are often trained and tested on medical exam questions but then used in tasks that are pretty far from that, like evaluating the severity of a clinical case. There is still so much about LLMs that we don’t know,” adds Abinitha Gourabathina, an EECS graduate student and lead author of the study.
They are joined on the paper, which will be presented at the ACM Conference on Fairness, Accountability, and Transparency, by graduate student Eileen Pan and postdoc Walter Gerych.
Mixed messages
Large language models like OpenAI’s GPT-4 are being used to draft clinical notes and triage patient messages in health care facilities around the globe, in an effort to streamline some tasks to help overburdened clinicians.
A growing body of work has explored the clinical reasoning capabilities of LLMs, especially from a fairness point of view, but few studies have evaluated how nonclinical information affects a model’s judgment.
Interested in how gender impacts LLM reasoning, Gourabathina ran experiments where she swapped the gender cues in patient notes. She was surprised that formatting errors in the prompts, like extra white space, caused meaningful changes in the LLM responses.
To explore this problem, the researchers designed a study in which they altered the model’s input data by swapping or removing gender markers, adding colorful or uncertain language, or inserting extra space and typos into patient messages.
Each perturbation was designed to mimic text that might be written by someone in a vulnerable patient population, based on psychosocial research into how people communicate with clinicians.
For instance, extra spaces and typos simulate the writing of patients with limited English proficiency or those with less technological aptitude, and the addition of uncertain language represents patients with health anxiety.
“The medical datasets these models are trained on are usually cleaned and structured, and not a very realistic reflection of the patient population. We wanted to see how these very realistic changes in text could impact downstream use cases,” Gourabathina says.
They used an LLM to create perturbed copies of thousands of patient notes while ensuring the text changes were minimal and preserved all clinical data, such as medications and previous diagnoses. Then they evaluated four LLMs, including the large, commercial model GPT-4 and a smaller LLM built specifically for medical settings.
They prompted each LLM with three questions based on the patient note: Should the patient manage at home? Should the patient come in for a clinic visit? Should a medical resource, such as a lab test, be allocated to the patient?
The researchers compared the LLM recommendations to real clinical responses.
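For readers who want a concrete picture of this setup, the sketch below outlines the kind of evaluation loop the study describes. It is a simplified, hypothetical illustration rather than the researchers’ code: the study generated its perturbations with an LLM, while the two perturbation rules here are hand-written stand-ins, and ask_llm is a placeholder for whichever model is being queried.

```python
# Hypothetical sketch of the evaluation loop: apply a nonclinical perturbation
# to a patient message, ask a model the same three triage questions, and check
# whether any of the answers change. Not the study's actual code.

def add_extra_whitespace(message: str) -> str:
    """Example rule-based perturbation: inject extra spaces between words."""
    return message.replace(" ", "  ")

def add_uncertain_language(message: str) -> str:
    """Example rule-based perturbation: prepend hedging, anxious phrasing."""
    return "I'm not sure this is worth bothering you about, but " + message

TRIAGE_QUESTIONS = [
    "Should the patient manage at home?",
    "Should the patient come in for a clinic visit?",
    "Should a medical resource, such as a lab test, be allocated to the patient?",
]

def triage(ask_llm, patient_message: str) -> list[str]:
    """Ask a model the three questions; ask_llm is a placeholder callable."""
    return [ask_llm(f"{patient_message}\n\n{q} Answer yes or no.")
            for q in TRIAGE_QUESTIONS]

def recommendation_shift(ask_llm, patient_message: str, perturb) -> list[bool]:
    """Flag which of the three recommendations changed after perturbation."""
    original = triage(ask_llm, patient_message)
    perturbed = triage(ask_llm, perturb(patient_message))
    return [o != p for o, p in zip(original, perturbed)]
```

Aggregated over thousands of notes, a change in any of these answers for a perturbed message is the kind of shift that shows up in the study as an increase in self-management recommendations.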
Inconsistent recommendations
They saw inconsistencies in treatment recommendations and significant disagreement among the LLMs when they were fed perturbed data. Across the board, the LLMs exhibited a 7 to 9 percent increase in self-management suggestions for all nine types of altered patient messages.
This means LLMs were more likely to recommend that patients not seek medical care when messages contained typos or gender-neutral pronouns, for instance. The use of colorful language, like slang or dramatic expressions, had the biggest impact.
They also found that models made about 7 percent more errors for female patients and were more likely to recommend that female patients self-manage at home, even when the researchers removed all gender cues from the clinical context.
Many of the worst results, like patients told to self-manage when they have a serious medical condition, likely wouldn’t be captured by tests that focus on the models’ overall clinical accuracy.
“In research, we tend to look at aggregated statistics, but there are a lot of things that are lost in translation. We need to look at the direction in which these errors are occurring — not recommending visitation when you should is much more harmful than doing the opposite,” Gourabathina says.
The inconsistencies caused by nonclinical language become even more pronounced in conversational settings where an LLM interacts with a patient, which is a common use case for patient-facing chatbots.
But in follow-up work, the researchers found that these same changes in patient messages don’t affect the accuracy of human clinicians.
“In our follow-up work under review, we further find that large language models are fragile to changes that human clinicians are not,” Ghassemi says. “This is perhaps unsurprising — LLMs were not designed to prioritize patient medical care. LLMs are flexible and performant enough on average that we might think this is a good use case. But we don’t want to optimize a health care system that only works well for patients in specific groups.”
The researchers want to expand on this work by designing natural language perturbations that capture other vulnerable populations and better mimic real messages. They also want to explore how LLMs infer gender from clinical text.
Researchers present bold ideas for AI at MIT Generative AI Impact Consortium kickoff event
Launched in February of this year, the MIT Generative AI Impact Consortium (MGAIC), a presidential initiative led by MIT’s Office of Innovation and Strategy and administered by the MIT Stephen A. Schwarzman College of Computing, issued a call for proposals, inviting researchers from across MIT to submit ideas for innovative projects studying high-impact uses of generative AI models.
The call received 180 submissions from nearly 250 faculty members, spanning all of MIT’s five schools and the college. The overwhelming response across the Institute exemplifies the growing interest in AI and follows in the wake of MIT’s Generative AI Week and call for impact papers. Fifty-five proposals were selected for MGAIC’s inaugural seed grants, with several more selected to be funded by the consortium’s founding company members.
Over 30 funding recipients presented their proposals to the greater MIT community at a kickoff event on May 13. Anantha P. Chandrakasan, chief innovation and strategy officer and dean of the School of Engineering, who is head of the consortium, welcomed the attendees and thanked the consortium’s founding industry members.
“The amazing response to our call for proposals is an incredible testament to the energy and creativity that MGAIC has sparked at MIT. We are especially grateful to our founding members, whose support and vision helped bring this endeavor to life,” adds Chandrakasan. “One of the things that has been most remarkable about MGAIC is that this is a truly cross-Institute initiative. Deans from all five schools and the college collaborated in shaping and implementing it.”
Vivek F. Farias, the Patrick J. McGovern (1959) Professor at the MIT Sloan School of Management and co-faculty director of the consortium with Tim Kraska, associate professor of electrical engineering and computer science in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), emceed the afternoon of five-minute lightning presentations.
Presentation highlights include:
“AI-Driven Tutors and Open Datasets for Early Literacy Education,” presented by Ola Ozernov-Palchik, a research scientist at the McGovern Institute for Brain Research, proposed refining AI tutors for pK-7 students to potentially decrease literacy disparities.
“Developing jam_bots: Real-Time Collaborative Agents for Live Human-AI Musical Improvisation,” presented by Anna Huang, assistant professor of music and assistant professor of electrical engineering and computer science, and Joe Paradiso, the Alexander W. Dreyfoos (1954) Professor in Media Arts and Sciences at the MIT Media Lab, aims to enhance human-AI musical collaboration in real-time for live concert improvisation.
“GENIUS: GENerative Intelligence for Urban Sustainability,” presented by Norhan Bayomi, a postdoc at the MIT Environmental Solutions Initiative and a research assistant in the Urban Metabolism Group, aims to address the lack of a standardized approach for evaluating and benchmarking cities’ climate policies.
Georgia Perakis, the John C Head III Dean (Interim) of the MIT Sloan School of Management and professor of operations management, operations research, and statistics, who serves as co-chair of the GenAI Dean’s oversight group with Dan Huttenlocher, dean of the MIT Schwarzman College of Computing, ended the event with closing remarks that emphasized “the readiness and eagerness of our community to lead in this space.”
“This is only the beginning,” she continued. “We are at the front edge of a historic moment — one where MIT has the opportunity, and the responsibility, to shape the future of generative AI with purpose, with excellence, and with care.”
Introducing the L. Rafael Reif Innovation Corridor
The open space connecting Hockfield Court with Massachusetts Avenue, in the heart of MIT’s campus, is now the L. Rafael Reif Innovation Corridor, in honor of the Institute’s 17th president. At a dedication ceremony Monday, Reif’s colleagues, friends, and family gathered to honor his legacy and unveil a marker for the walkway that was previously known as North Corridor or “the Outfinite.”
“It’s no accident that the space we dedicate today is not a courtyard, but a corridor — a channel for people and ideas to flow freely through the heart of MIT, and to carry us outward, to the limits of our aspirations,” said Sally Kornbluth, who succeeded Reif as MIT president in 2023.
“With his signature combination of new-world thinking and old-world charm, and as a grand thinker and doer, Rafael left an indelible mark on MIT,” Kornbluth said. “As a permanent testament to his service and his achievements in service to MIT, the nation, and the world, we now dedicate this space as the L. Rafael Reif Innovation Corridor.”
Reif served as president for more than 10 years, following seven years as provost. He has been at MIT since 1980, when he joined the faculty as an assistant professor of electrical engineering.
“Through all those roles, what stood out most was his humility, his curiosity, and his remarkable ability to speak with clarity and conviction,” said Corporation Chair Mark Gorenberg, who opened the ceremony. “Under his leadership, MIT not only stayed true to its mission, it thrived, expanding its impact and strengthening its global voice.”
Gorenberg introduced Abraham J. Siegel Professor of Management and professor of operations research Cindy Barnhart, who served as chancellor, then provost, during Reif’s term as president. Barnhart, who will be stepping down as provost on July 1, summarized the many highlights from Reif’s presidency, such as the establishment of the MIT Schwarzman College of Computing, the revitalization of Kendall Square, and the launch of The Engine, as well as the construction or modernization of many buildings, from the Wright Brothers Wind Tunnel to the new Edward and Joyce Linde Music Building, among other accomplishments.
“Beyond space, Rafael’s bold thinking and passion extends to MIT’s approach to education,” Barnhart continued, describing how Reif championed the building of OpenCourseWare, MITx, and edX. She also noted his support for the health and well-being of the MIT community, through efforts such as addressing student sexual misconduct and forming the MindHandHeart initiative. He also hosted dance parties and socials, joined students in the dining halls for dinner, chatted with faculty and staff over breakfasts and at forums, and more.
“At gatherings over the years, Rafael’s wife, Chris, was there by his side,” Barnhart noted, adding, “I’d like to take this opportunity to acknowledge her and thank her for her welcoming and gracious spirit.”
In summary, “I am grateful to Rafael for his visionary leadership and for his love of MIT and its people,” Barnhart said as she presented Reif with a 3D-printed replica of the Maclaurin buildings (MIT Buildings 3, 4, and 10), which was created through a collaboration between the Glass Lab, Edgerton Center, and Project Manus.
Next, Institute Professor Emeritus John Harbison played an interlude on the piano, and a musical ensemble reprised the “Rhumba for Rafael,” which Harbison composed for Reif’s inauguration in 2012.
When Reif took the podium, he reflected on the location of the corridor and its significance to early chapters in his own career; his first office and lab were in Building 13, overlooking what is now the eponymous walkway.
He also considered the years ahead: “The people who pass through this corridor in the future will surely experience the unparalleled excitement of being young at MIT, with the full expectation of upending the world to improve it,” he said.
Faculty and staff walking through the corridor may experience the “undimmed excitement” of working and studying alongside extraordinary students and colleagues, and the “deep satisfaction of having created infinite memories here throughout a long career.”
“Even if none of them gives me a thought,” Reif continued, “I would like to believe that my spirit will be here, watching them with pride as they continue the never-ending mission of creating a better world.”
Island rivers carve passageways through coral reefs
Volcanic islands, such as the islands of Hawaii and the Caribbean, are surrounded by coral reefs that encircle them in a labyrinthine, living ring. A coral reef is punctured at points by reef passes — wide channels that cut through the coral and serve as conduits for ocean water and nutrients to filter in and out. These watery passageways provide circulation throughout a reef, helping to maintain the health of corals by flushing out freshwater and transporting key nutrients.
Now, MIT scientists have found that reef passes are shaped by island rivers. In a study appearing today in the journal Geophysical Research Letters, the team shows that the locations of reef passes along coral reefs line up with where rivers funnel out from an island’s coast.
Their findings provide the first quantitative evidence of rivers forming reef passes. Scientists and explorers had speculated that this may be the case: Where a river on a volcanic island meets the coast, the freshwater and sediment it carries flows toward the reef, where a strong enough flow can tunnel into the surrounding coral. This idea has been proposed from time to time but never quantitatively tested, until now.
“The results of this study help us to understand how the health of coral reefs depends on the islands they surround,” says study author Taylor Perron, the Cecil and Ida Green Professor of Earth, Atmospheric and Planetary Sciences at MIT.
“A lot of discussion around rivers and their impact on reefs today has been negative because of human impact and the effects of agricultural practices,” adds lead author Megan Gillen, a graduate student in the MIT-WHOI Joint Program in Oceanography. “This study shows the potential long-term benefits rivers can have on reefs, which I hope reshapes the paradigm and highlights the natural state of rivers interacting with reefs.”
The study’s other co-author is Andrew Ashton of the Woods Hole Oceanographic Institution.
Drawing the lines
The new study is based on the team’s analysis of the Society Islands, a chain of islands in the South Pacific Ocean that includes Tahiti and Bora Bora. Gillen, who joined the MIT-WHOI program in 2020, was interested in exploring connections between coral reefs and the islands they surround. With limited options for on-site work during the Covid-19 pandemic, she and Perron looked to see what they could learn through satellite images and maps of island topography. They did a quick search using Google Earth and zeroed in on the Society Islands for their uniquely visible reef and island features.
“The islands in this chain have these iconic, beautiful reefs, and we kept noticing these reef passes that seemed to align with deeply embayed portions of the coastline,” Gillen says. “We started asking ourselves, is there a correlation here?”
Viewed from above, the coral reefs that circle some islands bear what look to be notches, like cracks that run straight through a ring. These breaks in the coral are reef passes — large channels that run tens of meters deep and can be wide enough for some boats to pass through. On first look, Gillen noticed that the most obvious reef passes seemed to line up with flooded river valleys — depressions in the coastline that have been eroded over time by island rivers that flow toward the ocean. She wondered whether and to what extent island rivers might shape reef passes.
“People have examined the flow through reef passes to understand how ocean waves and seawater circulate in and out of lagoons, but there have been no claims of how these passes are formed,” Gillen says. “Reef pass formation has been mentioned infrequently in the literature, and people haven’t explored it in depth.”
Reefs unraveled
To get a detailed view of the topography in and around the Society Islands, the team used data from the NASA Shuttle Radar Topography Mission — two radar antennae that flew aboard the space shuttle in 2000 and measured the topography across 80 percent of the Earth’s surface.
The researchers used the mission’s topographic data in the Society Islands to create a map of every drainage basin along the coast of each island, to get an idea of where major rivers flow or once flowed. They also marked the locations of every reef pass in the surrounding coral reefs. They then essentially “unraveled” each island’s coastline and reef into a straight line, and compared the locations of basins versus reef passes.
“Looking at the unwrapped shorelines, we find a significant correlation in the spatial relationship between these big river basins and where the passes line up,” Gillen says. “So we can say that statistically, the alignment of reef passes and large rivers does not seem random. The big rivers have a role in forming passes.”
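The flavor of that statistical check can be illustrated with a small, self-contained sketch. It is hypothetical, not the authors’ analysis: the positions are invented distances along an unwrapped coastline, and the null model simply places the same number of passes at random and asks how often they land as close to river outlets as the observed ones do.

```python
# Hypothetical sketch of an "unwrapped coastline" alignment test: are reef
# passes closer to river-basin outlets than randomly placed passes would be?
# Positions are distances (in km) along a coastline unrolled into a line.

import random

def nearest_distance(point: float, references: list[float], coast_length: float) -> float:
    """Distance along a closed (wrapped) coastline to the nearest reference point."""
    return min(min(abs(point - r), coast_length - abs(point - r)) for r in references)

def mean_nearest(points, references, coast_length):
    return sum(nearest_distance(p, references, coast_length) for p in points) / len(points)

def alignment_p_value(passes, outlets, coast_length, trials=10_000, seed=0):
    """Fraction of random pass placements at least as well aligned as observed."""
    rng = random.Random(seed)
    observed = mean_nearest(passes, outlets, coast_length)
    hits = 0
    for _ in range(trials):
        fake = [rng.uniform(0.0, coast_length) for _ in passes]
        if mean_nearest(fake, outlets, coast_length) <= observed:
            hits += 1
    return observed, hits / trials

# Made-up example: a 60 km coastline with three river outlets and three passes.
outlets = [5.0, 22.0, 41.0]
passes = [6.2, 20.5, 43.0]
obs, p = alignment_p_value(passes, outlets, 60.0)
print(f"mean nearest distance: {obs:.1f} km, Monte Carlo p-value: {p:.3f}")
```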
As for how rivers shape the coral conduits, the team has two ideas, which they call, respectively, reef incision and reef encroachment. In reef incision, they propose that reef passes can form in times when the sea level is relatively low, such that the reef is exposed above the sea surface and a river can flow directly over the reef. The water and sediment carried by the river can then erode the coral, progressively carving a path through the reef.
When sea level is relatively higher, the team suspects a reef pass can still form, through reef encroachment. Coral reefs naturally live close to the water surface, where there is light and opportunity for photosynthesis. When sea levels rise, corals naturally grow upward and inward toward an island, to try to “catch up” to the water line.
“Reefs migrate toward the islands as sea levels rise, trying to keep pace with changing average sea level,” Gillen says.
However, part of the encroaching reef can end up in old river channels that were previously carved out by large rivers and that are lower than the rest of the island coastline. The corals in these river beds end up deeper than light can extend into the water column, and inevitably drown, leaving a gap in the form of a reef pass.
“We don’t think it’s an either/or situation,” Gillen says. “Reef incision occurs when sea levels fall, and reef encroachment happens when sea levels rise. Both mechanisms, occurring over dozens of cycles of sea-level rise and island evolution, are likely responsible for the formation and maintenance of reef passes over time.”
The team also looked to see whether there were differences in reef passes in older versus younger islands. They observed that younger islands were surrounded by more reef passes that were spaced closer together, versus older islands that had fewer reef passes that were farther apart.
As islands age, they subside, or sink, into the ocean, which reduces the amount of land that funnels rainwater into rivers. Eventually, rivers are too weak to keep the reef passes open, at which point, the ocean likely takes over, and incoming waves could act to close up some passes.
Gillen is exploring ideas for how rivers, or river-like flow, can be engineered to create paths through coral reefs in ways that would promote circulation and benefit reef health.
“Part of me wonders: If you had a more persistent flow, in places where you don’t naturally have rivers interacting with the reef, could that potentially be a way to increase health, by incorporating that river component back into the reef system?” Gillen says. “That’s something we’re thinking about.”
This research was supported, in part, by the WHOI Watson and Von Damm fellowships.
MIT engineers uncover a surprising reason why tissues are flexible or rigid
Water makes up around 60 percent of the human body. More than half of this water sloshes around inside the cells that make up organs and tissues. Much of the remaining water flows in the nooks and crannies between cells, much like seawater between grains of sand.
Now, MIT engineers have found that this “intercellular” fluid plays a major role in how tissues respond when squeezed, pressed, or physically deformed. Their findings could help scientists understand how cells, tissues, and organs physically adapt to conditions such as aging, cancer, diabetes, and certain neuromuscular diseases.
In a paper appearing today in Nature Physics, the researchers show that when a tissue is pressed or squeezed, it is more compliant and relaxes more quickly when the fluid between its cells flows easily. When the cells are packed together and there is less room for intercellular flow, the tissue as a whole is stiffer and resists being pressed or squeezed.
The findings challenge conventional wisdom, which has assumed that a tissue’s compliance depends mainly on what’s inside, rather than around, a cell. Now that the researchers have shown that intercellular flow determines how tissues will adapt to physical forces, the results can be applied to understand a wide range of physiological conditions, including how muscles withstand exercise and recover from injury, and how a tissue’s physical adaptability may affect the progression of aging, cancer, and other medical conditions.
The team envisions the results could also inform the design of artificial tissues and organs. For instance, in engineering artificial tissue, scientists might optimize intercellular flow within the tissue to improve its function or resilience. The researchers suspect that intercellular flow could also be a route for delivering nutrients or therapies, either to heal a tissue or eradicate a tumor.
“People know there is a lot of fluid between cells in tissues, but how important that is, in particular in tissue deformation, is completely ignored,” says Ming Guo, associate professor of mechanical engineering at MIT. “Now we really show we can observe this flow. And as the tissue deforms, flow between cells dominates the behavior. So, let’s pay attention to this when we study diseases and engineer tissues.”
Guo is a co-author of the new study, which includes lead author and MIT postdoc Fan Liu PhD ’24, along with Bo Gao and Hui Li of Beijing Normal University, and Liran Lei and Shuainan Liu of Peking Union Medical College.
Pressed and squeezed
The tissues and organs in our body are constantly undergoing physical deformations, from the large stretch and strain of muscles during motion to the small and steady contractions of the heart. In some cases, how easily tissues adapt to deformation can relate to how quickly a person can recover from, for instance, an allergic reaction, a sports injury, or a brain stroke. However, exactly what sets a tissue’s response to deformation is largely unknown.
Guo and his group at MIT looked into the mechanics of tissue deformation, and the role of intercellular flow in particular, following a study they published in 2020. In that study, they focused on tumors and observed the way in which fluid can flow from the center of a tumor out to its edges, through the cracks and crevices between individual tumor cells. They found that when a tumor was squeezed or pressed, the intercellular flow increased, acting as a conveyor belt to transport fluid from the center to the edges. Intercellular flow, they found, could fuel tumor invasion into surrounding regions.
In their new study, the team looked to see what role this intercellular flow might play in other, noncancerous tissues.
“Whether you allow the fluid to flow between cells or not seems to have a major impact,” Guo says. “So we decided to look beyond tumors to see how this flow influences how other tissues respond to deformation.”
A fluid pancake
Guo, Liu, and their colleagues studied the intercellular flow in a variety of biological tissues, including cells derived from pancreatic tissue. They carried out experiments in which they first cultured small clusters of tissue, each measuring less than a quarter of a millimeter wide and numbering tens of thousands of individual cells. They placed each tissue cluster in a custom-designed testing platform that the team built specifically for the study.
“These microtissue samples are in this sweet zone where they are too large to see with atomic force microscopy techniques and too small for bulkier devices,” Guo says. “So, we decided to build a device.”
The researchers adapted a high-precision microbalance that measures minute changes in weight. They combined this with a step motor that is designed to press down on a sample with nanometer precision. The team placed tissue clusters one at a time on the balance and recorded each cluster’s changing weight as it relaxed from a sphere into the shape of a pancake in response to the compression. The team also took videos of the clusters as they were squeezed.
For each type of tissue, the team made clusters of varying sizes. They reasoned that if the tissue’s response is ruled by the flow between cells, then the bigger a tissue, the longer it should take for water to seep through, and therefore, the longer it should take the tissue to relax. If instead the response is determined by the structure of the tissue rather than the fluid, relaxation should take the same amount of time regardless of size.
Over multiple experiments with a variety of tissue types and sizes, the team observed a similar trend: The bigger the cluster, the longer it took to relax, indicating that intercellular flow dominates a tissue’s response to deformation.
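One way to make that size argument concrete is to fit relaxation time against cluster size and look at the exponent: roughly constant times would point to a structure-dominated response, while times that grow with size are what a flow-dominated response predicts. The sketch below does this with invented numbers and a plain log-log least-squares fit; it is illustrative only and does not use the study’s data or methods.

```python
# Illustrative sketch: fit relaxation time vs. cluster size to a power law
# tau ~ a * L**b via log-log least squares. An exponent b near zero would
# suggest a size-independent (structure-dominated) response; a clearly
# positive b is what a fluid-flow-dominated response would produce.
# The numbers below are made up for illustration, not data from the study.

import math

sizes_um = [50, 100, 150, 200, 250]          # hypothetical cluster diameters (micrometers)
relax_s = [12.0, 46.0, 110.0, 190.0, 300.0]  # hypothetical relaxation times (seconds)

def power_law_fit(x, y):
    """Return (prefactor a, exponent b) for y = a * x**b via log-log regression."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(lx, ly)) / sum((xi - mx) ** 2 for xi in lx)
    a = math.exp(my - b * mx)
    return a, b

a, b = power_law_fit(sizes_um, relax_s)
print(f"tau ~ {a:.3g} * L^{b:.2f}  (b well above 0 suggests flow-dominated relaxation)")
```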
“We show that this intercellular flow is a crucial component to be considered in the fundamental understanding of tissue mechanics and also applications in engineering living systems,” Liu says.
Going forward, the team plans to look into how intercellular flow influences brain function, particularly in disorders such as Alzheimer’s disease.
“Intercellular or interstitial flow can help you remove waste and deliver nutrients to the brain,” Liu adds. “Enhancing this flow in some cases might be a good thing.”
“As this work shows, as we apply pressure to a tissue, fluid will flow,” Guo says. “In the future, we can think of designing ways to massage a tissue to allow fluid to transport nutrients between cells.”
“Cold spray” 3D printing technique proves effective for on-site bridge repair
More than half of the nation’s 623,218 bridges are experiencing significant deterioration. Through an in-field case study conducted in western Massachusetts, a team led by the University of Massachusetts at Amherst in collaboration with researchers from the MIT Department of Mechanical Engineering (MechE) has just successfully demonstrated that 3D printing may provide a cost-effective, minimally disruptive solution.
“Anytime you drive, you go under or over a corroded bridge,” says Simos Gerasimidis, associate professor of civil and environmental engineering at UMass Amherst and former visiting professor in the Department of Civil and Environmental Engineering at MIT, in a press release. “They are everywhere. It’s impossible to avoid, and their condition often shows significant deterioration. We know the numbers.”
The numbers, according to the American Society of Civil Engineers’ 2025 Report Card for America’s Infrastructure, are staggering: Across the United States, 49.1 percent of the nation’s 623,218 bridges are in “fair” condition and 6.8 percent are in “poor” condition. The projected cost to restore all of these failing bridges exceeds $191 billion.
A proof-of-concept repair took place last month on a small, corroded section of a bridge in Great Barrington, Massachusetts. The technique, called cold spray, can extend the life of beams, reinforcing them with newly deposited steel. The process accelerates particles of powdered steel in heated, compressed gas, and then a technician uses an applicator to spray the steel onto the beam. Repeated sprays create multiple layers, restoring thickness and other structural properties.
This method has proven to be an effective solution for other large structures like submarines, airplanes, and ships, but bridges present a problem on a greater scale. Unlike movable vessels, stationary bridges cannot be brought to the 3D printer — the printer must be brought on-site — and, to lessen systemic impacts, repairs must also be made with minimal disruptions to traffic, which the new approach allows.
“Now that we’ve completed this proof-of-concept repair, we see a clear path to a solution that is much faster, less costly, easier, and less invasive,” says Gerasimidis. “To our knowledge, this is a first. Of course, there is some R&D that needs to be developed, but this is a huge milestone to that.”
“This is a tremendous collaboration where cutting-edge technology is brought to address a critical need for infrastructure in the commonwealth and across the United States,” says John Hart, Class of 1922 Professor and head of MechE at MIT. Hart and Haden Quinlan, senior program manager in the Center for Advanced Production Technologies at MIT, are leading MIT’s efforts in the project. Hart is also faculty co-lead of the recently announced MIT Initiative for New Manufacturing.
“Integrating digital systems with advanced physical processing is the future of infrastructure,” says Quinlan. “We’re excited to have moved this technology beyond the lab and into the field, and grateful to our collaborators in making this work possible.”
UMass says the Massachusetts Department of Transportation (MassDOT) has been a valued research partner, helping to identify the problem and providing essential support for the development and demonstration of the technology. Technical guidance and funding support were provided by the MassDOT Highway Division and the Research and Technology Transfer Program.
Equipment for this project was supported through the Massachusetts Manufacturing Innovation Initiative, a statewide program led by the Massachusetts Technology Collaborative (MassTech)’s Center for Advanced Manufacturing that helps bridge the gap between innovation and commercialization in hard tech manufacturing.
“It’s a very Massachusetts success story,” Gerasimidis says. “It involves MassDOT being open-minded to new ideas. It involves UMass and MIT putting [together] the brains to do it. It involves MassTech to bring manufacturing back to Massachusetts. So, I think it’s a win-win for everyone involved here.”
The bridge in Great Barrington is scheduled for demolition in a few years. After demolition occurs, the recently sprayed beams will be taken back to UMass for testing and measurement, to study how well the deposited steel powder adhered to the structure in the field compared with a controlled lab setting, whether it corroded further after it was sprayed, and what its mechanical properties are.
This demonstration builds on several years of research by the UMass and MIT teams, including development of a “digital thread” approach to scan corroded beam surfaces and determine material deposition profiles, alongside laboratory studies of cold spray and other additive manufacturing approaches that are suited to field deployment.
Altogether, this work is a collaborative effort among UMass Amherst, MIT MechE, MassDOT, the Massachusetts Technology Collaborative (MassTech), the U.S. Department of Transportation, and the Federal Highway Administration. Research reports are available on the MassDOT website.
When Earth iced over, early life may have sheltered in meltwater ponds
When the Earth froze over, where did life shelter? MIT scientists say one refuge may have been pools of melted ice that dotted the planet’s icy surface.
In a study appearing today in Nature Communications, the researchers report that 635 million to 720 million years ago, during periods known as “Snowball Earth,” when much of the planet was covered in ice, some of our ancient cellular ancestors could have waited things out in meltwater ponds.
The scientists found that eukaryotes — complex cellular lifeforms that eventually evolved into the diverse multicellular life we see today — could have survived the global freeze by living in shallow pools of water. These small, watery oases may have persisted atop relatively shallow ice sheets present in equatorial regions. There, the ice surface could accumulate dark-colored dust and debris from below, which enhanced its ability to melt into pools. At temperatures hovering around 0 degrees Celsius, the resulting meltwater ponds could have served as habitable environments for certain forms of early complex life.
The team drew its conclusions based on an analysis of modern-day meltwater ponds. Today in Antarctica, small pools of melted ice can be found along the margins of ice sheets. The conditions along these polar ice sheets are similar to what likely existed along ice sheets near the equator during Snowball Earth.
The researchers analyzed samples from a variety of meltwater ponds located on the McMurdo Ice Shelf in an area that was first described by members of Robert Falcon Scott's 1903 expedition as “dirty ice.” The MIT researchers discovered clear signatures of eukaryotic life in every pond. The communities of eukaryotes varied from pond to pond, revealing a surprising diversity of life across the setting. The team also found that salinity plays a key role in the kind of life a pond can host: Ponds that were more brackish or salty had more similar eukaryotic communities, which differed from those in ponds with fresher waters.
“We’ve shown that meltwater ponds are valid candidates for where early eukaryotes could have sheltered during these planet-wide glaciation events,” says lead author Fatima Husain, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “This shows us that diversity is present and possible in these sorts of settings. It’s really a story of life’s resilience.”
The study’s MIT co-authors include Schlumberger Professor of Geobiology Roger Summons and former postdoc Thomas Evans, along with Jasmin Millar of Cardiff University, Anne Jungblut at the Natural History Museum in London, and Ian Hawes of the University of Waikato in New Zealand.
Polar plunge
“Snowball Earth” is the colloquial term for periods of time in Earth history during which the planet iced over. It is often used as a reference to the two consecutive, multi-million-year glaciation events which took place during the Cryogenian Period, which geologists refer to as the time between 635 and 720 million years ago. Whether the Earth was more of a hardened snowball or a softer “slushball” is still up for debate. But scientists are certain of one thing: Most of the planet was plunged into a deep freeze, with average global temperatures of minus 50 degrees Celsius. The question has been: How and where did life survive?
“We’re interested in understanding the foundations of complex life on Earth. We see evidence for eukaryotes before and after the Cryogenian in the fossil record, but we largely lack direct evidence of where they may have lived during,” Husain says. “The great part of this mystery is, we know life survived. We’re just trying to understand how and where.”
There are a number of ideas for where organisms could have sheltered during Snowball Earth, including in certain patches of the open ocean (if such environments existed), in and around deep-sea hydrothermal vents, and under ice sheets. In considering meltwater ponds, Husain and her colleagues pursued the hypothesis that surface ice meltwaters may also have been capable of supporting early eukaryotic life at the time.
“There are many hypotheses for where life could have survived and sheltered during the Cryogenian, but we don’t have excellent analogs for all of them,” Husain notes. “Above-ice meltwater ponds occur on Earth today and are accessible, giving us the opportunity to really focus in on the eukaryotes which live in these environments.”
Small pond, big life
For their new study, the researchers analyzed samples taken from meltwater ponds in Antarctica. In 2018, Summons and colleagues from New Zealand traveled to a region of the McMurdo Ice Shelf in East Antarctica, known to host small ponds of melted ice, each just a few feet deep and a few meters wide. There, water freezes all the way to the seafloor, in the process trapping dark-colored sediments and marine organisms. Wind-driven loss of ice from the surface creates a sort of conveyor belt that brings this trapped debris to the surface over time, where it absorbs the sun’s warmth, causing ice to melt, while surrounding debris-free ice reflects incoming sunlight, resulting in the formation of shallow meltwater ponds.
The bottom of each pond is lined with mats of microbes that have built up over years to form layers of sticky cellular communities.
“These mats can be a few centimeters thick, colorful, and they can be very clearly layered,” Husain says.
These microbial mats are made up of cyanobacteria, prokaryotic, single-celled photosynthetic organisms that lack a cell nucleus or other organelles. While these ancient microbes are known to survive within some of the harshest environments on Earth, including meltwater ponds, the researchers wanted to know whether eukaryotes — complex organisms that evolved a cell nucleus and other membrane-bound organelles — could also weather similarly challenging circumstances. Answering this question would take more than a microscope, as the defining characteristics of the microscopic eukaryotes present among the microbial mats are too subtle to distinguish by eye.
To characterize the eukaryotes, the team analyzed the mats for specific lipids they make called sterols, as well as genetic components called ribosomal ribonucleic acid (rRNA), both of which can be used to identify organisms with varying degrees of specificity. These two independent sets of analyses provided complementary fingerprints for certain eukaryotic groups. As part of the team’s lipid research, they found many sterols and rRNA genes closely associated with specific types of algae, protists, and microscopic animals among the microbial mats. The researchers were able to assess the types and relative abundance of lipids and rRNA genes from pond to pond, and found the ponds hosted a surprising diversity of eukaryotic life.
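As a rough illustration of what comparing ponds can look like, the sketch below computes a standard community dissimilarity (Bray-Curtis) between made-up relative-abundance profiles. The pond names, groupings, and numbers are invented, and the study’s actual comparisons were based on sterol lipids and rRNA gene data rather than these toy profiles.

```python
# Hypothetical sketch of comparing eukaryotic community profiles between ponds.
# Profiles are relative abundances of a few broad groups; Bray-Curtis
# dissimilarity (0 = identical communities, 1 = no overlap) is one standard
# way to compare them. All values and pond labels are invented.

ponds = {
    "brackish_A": {"algae": 0.50, "protists": 0.30, "micro_animals": 0.20},
    "brackish_B": {"algae": 0.45, "protists": 0.35, "micro_animals": 0.20},
    "fresh_C":    {"algae": 0.15, "protists": 0.60, "micro_animals": 0.25},
}

def bray_curtis(p, q):
    """Bray-Curtis dissimilarity between two relative-abundance profiles."""
    groups = set(p) | set(q)
    shared = sum(min(p.get(g, 0.0), q.get(g, 0.0)) for g in groups)
    total = sum(p.values()) + sum(q.values())
    return 1.0 - 2.0 * shared / total

names = list(ponds)
for i, first in enumerate(names):
    for second in names[i + 1:]:
        d = bray_curtis(ponds[first], ponds[second])
        print(f"{first} vs {second}: dissimilarity = {d:.2f}")
```

In this toy example the two brackish ponds come out far more similar to each other than either is to the fresh pond, mirroring the salinity pattern the team reports.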
“No two ponds were alike,” Husain says. “There are repeating casts of characters, but they’re present in different abundances. And we found diverse assemblages of eukaryotes from all the major groups in all the ponds studied. These eukaryotes are the descendants of the eukaryotes that survived the Snowball Earth. This really highlights that meltwater ponds during Snowball Earth could have served as above-ice oases that nurtured the eukaryotic life that enabled the diversification and proliferation of complex life — including us — later on.”
This research was supported, in part, by the NASA Exobiology Program, the Simons Collaboration on the Origins of Life, and a MISTI grant from MIT-New Zealand.
QS ranks MIT the world’s No. 1 university for 2025-26
MIT has again been named the world’s top university by the QS World University Rankings, which were announced today. This is the 14th year in a row MIT has received this distinction.
The full 2026 edition of the rankings — published by Quacquarelli Symonds, an organization specializing in education and study abroad — can be found at TopUniversities.com. The QS rankings are based on factors including academic reputation, employer reputation, citations per faculty, student-to-faculty ratio, proportion of international faculty, and proportion of international students.
MIT was also ranked the world’s top university in 11 of the subject areas ranked by QS, as announced in March of this year.
The Institute received a No. 1 ranking in the following QS subject areas: Chemical Engineering; Civil and Structural Engineering; Computer Science and Information Systems; Data Science and Artificial Intelligence; Electrical and Electronic Engineering; Linguistics; Materials Science; Mechanical, Aeronautical, and Manufacturing Engineering; Mathematics; Physics and Astronomy; and Statistics and Operational Research.
MIT also placed second in seven subject areas: Accounting and Finance; Architecture/Built Environment; Biological Sciences; Business and Management Studies; Chemistry; Earth and Marine Sciences; and Economics and Econometrics.
Memory safety is at a tipping point
Social Security numbers stolen. Public transport halted. Hospital systems frozen until ransoms are paid. These are some of the damaging consequences of insecure memory in computer systems.
Over the past decade, public awareness of such cyberattacks has intensified, as their impacts have harmed individuals, corporations, and governments. Today, this awareness is coinciding with technologies that are finally mature enough to eliminate vulnerabilities in memory safety.
"We are at a tipping point — now is the right time to move to memory-safe systems," says Hamed Okhravi, a cybersecurity expert in MIT Lincoln Laboratory’s Secure Resilient Systems and Technology Group.
In an op-ed earlier this year in Communications of the ACM, Okhravi joined 20 other luminaries in the field of computer security to lay out a plan for achieving universal memory safety. They argue for a standardized framework as an essential next step to adopting memory-safety technologies throughout all forms of computer systems, from fighter jets to cell phones.
Memory-safety vulnerabilities occur when a program performs unintended or erroneous operations in memory. Such operations are prevalent, accounting for an estimated 70 percent of software vulnerabilities. If attackers gain access to memory, they can potentially steal sensitive information, alter program execution, or even take control of the computer system.
These vulnerabilities exist largely because common software programming languages, such as C or C++, are inherently memory-insecure. A simple error by a software engineer, perhaps a single line among a system’s millions of lines of code, could be enough for an attacker to exploit. In recent years, new memory-safe languages, such as Rust, have been developed. But rewriting legacy systems in new, memory-safe languages can be costly and complicated.
Okhravi focuses on the national security implications of memory-safety vulnerabilities. For the U.S. Department of Defense (DoD), whose systems comprise billions of lines of legacy C or C++ code, memory safety has long been a known problem. The National Security Agency (NSA) and the federal government have recently urged technology developers to eliminate memory-safety vulnerabilities from their products. Security concerns extend beyond military systems to widespread consumer products.
"Cell phones, for example, are not immediately important for defense or war-fighting, but if we have 200 million vulnerable cell phones in the nation, that’s a serious matter of national security," Okhravi says.
Memory-safe technology
In recent years, several technologies have emerged to help patch memory vulnerabilities in legacy systems. As the guest editor for a special issue of IEEE Security and Privacy, Okhravi solicited articles from top contributors in the field to highlight these technologies and the ways they can build on one another.
Some of these memory-safety technologies have been developed at Lincoln Laboratory, with sponsorship from DoD agencies. These technologies include TRACER and TASR, which are software products for Windows and Linux systems, respectively, that reshuffle the location of code in memory each time a program accesses it, making it very difficult for attackers to find exploits. These moving-target solutions have since been licensed by cybersecurity and cloud services companies.
"These technologies are quick wins, enabling us to make a lot of immediate impact without having to rebuild the whole system. But they are only a partial solution, a way of securing legacy systems while we are transitioning to safer languages," Okhravi says.
Innovative work is underway to make that transition easier. For example, the TRACTOR program at the U.S. Defense Advanced Research Projects Agency is developing artificial intelligence tools to automatically translate legacy C code to Rust. Lincoln Laboratory researchers will test and evaluate the translator for use in DoD systems.
Okhravi and his coauthors acknowledged in their op-ed that the timeline for full adoption of memory-safe systems is long — likely decades. It will require the deployment of a combination of new hardware, software, and techniques, each with its own adoption path, costs, and disruptions. Organizations should prioritize mission-critical systems first.
"For example, the most important components in a fighter jet, such as the flight-control algorithm or the munition-handling logic, would be made memory-safe, say, within five years," Okhravi says. Subsystems less important to critical functions would have a longer time frame.
Use of memory-safe programming languages at Lincoln Laboratory
As Lincoln Laboratory continues its leadership in advancing memory-safety technologies, the Secure Resilient Systems and Technology Group has prioritized adopting memory-safe programming languages. "We’ve been investing in the group-wide use of Rust for the past six years as part of our broader strategy to prototype cyber-hardened mission systems and high-assurance cryptographic implementations for the DoD and intelligence community," says Roger Khazan, who leads the group. "Memory safety is fundamental to trustworthiness in these systems."
Rust’s strong guarantees around memory safety, along with its speed and ability to catch bugs early during development, make it especially well-suited for building secure and reliable systems. The laboratory has been using Rust to prototype and transition secure components for embedded, distributed, and cryptographic systems where resilience, performance, and correctness are mission-critical.
These efforts support both immediate U.S. government needs and a longer-term transformation of the national security software ecosystem. "They reflect Lincoln Laboratory’s broader mission of advancing technology in service to national security, grounded in technical excellence, innovation, and trust," Khazan adds.
A technology-agnostic framework
As new computer systems are designed, developers need a framework of memory-safety standards guiding them. Today, attempts to request memory safety in new systems are hampered by the lack of a clear set of definitions and practices.
Okhravi emphasizes that this standardized framework should be technology-agnostic and provide specific timelines with sets of requirements for different types of systems.
"In the acquisition process for the DoD, and even the commercial sector, when we are mandating memory safety, it shouldn’t be tied to a specific technology. It should be generic enough that different types of systems can apply different technologies to get there," he says.
Filling this gap requires not only building industry consensus on technical approaches, but also collaborating with government and academia to bring the effort to fruition.
The need for collaboration was an impetus for the op-ed, and Okhravi says that the consortium of experts will push for standardization from their positions across industry, government, and academia. Contributors to the paper represent a wide range of institutes, from the University of Cambridge and SRI International to Microsoft and Google. Together, they are building momentum to finally root out memory vulnerabilities and the costly damages associated with them.
"We are seeing this cost-risk trade-off mindset shifting, partly because of the maturation of technology and partly because of such consequential incidents,” Okhravi says. "We hear all the time that such-and-such breach cost billions of dollars. Meanwhile, making the system secure might have cost 10 million dollars. Wouldn’t we have been better off making that effort?"
The MIT Press acquires University Science Books from AIP Publishing
The MIT Press announces the acquisition of textbook publisher University Science Books from AIP Publishing, a subsidiary of the American Institute of Physics (AIP).
University Science Books was founded in 1978 to publish intermediate- and advanced-level science and reference books by respected authors, produced to the highest design and production standards and priced as affordably as possible. Over the years, USB’s authors have acquired international followings, and its textbooks in chemistry, physics, and astronomy have been recognized as the gold standard in their respective disciplines. USB was acquired by AIP Publishing in 2021.
Bestsellers include John Taylor’s “Classical Mechanics,” the No. 1 adopted text for undergraduate mechanics courses in the United States and Canada, and his “Introduction to Error Analysis”; and Don McQuarrie’s “Physical Chemistry: A Molecular Approach” (commonly known as “Big Red”), the second-most adopted physical chemistry textbook in the U.S.
“We are so pleased to have found a new home for USB’s prestigious list of textbooks in the sciences,” says Alix Vance, CEO of AIP Publishing. “With its strong STEM focus, academic rigor, and high production standards, the MIT Press is the perfect partner to continue the publishing legacy of University Science Books.”
“This acquisition is both a brand and content fit for the MIT Press,” says Amy Brand, director and publisher of the MIT Press. “USB’s respected science list will complement our long history of publishing foundational texts in computer science, finance, and economics.”
The MIT Press will take over the USB list as of July 1, with inventory transferring to Penguin Random House Publishing Services, the MIT Press’ sales and distribution partner.
For details regarding University Science Books titles, inventory, and how to order, please contact the MIT Press.
Established in 1962, The MIT Press is one of the largest and most distinguished university presses in the world and a leading publisher of books and journals at the intersection of science, technology, art, social science, and design.
AIP Publishing is a wholly owned not-for-profit subsidiary of AIP that supports the charitable, scientific, and educational purposes of AIP through scholarly publishing activities on its behalf and on behalf of its publishing partners.
Supercharged vaccine could offer strong protection with just one dose
Researchers at MIT and the Scripps Research Institute have shown that they can generate a strong immune response to HIV with just one vaccine dose, by adding two powerful adjuvants — materials that help stimulate the immune system.
In a study of mice, the researchers showed that this approach produced a much wider diversity of antibodies against an HIV antigen, compared to the vaccine given on its own or with just one of the adjuvants. The dual-adjuvant vaccine accumulated in the lymph nodes and remained there for up to a month, allowing the immune system to build up a much greater number of antibodies against the HIV protein.
This strategy could lead to the development of vaccines that only need to be given once, for infectious diseases including HIV or SARS-CoV-2, the researchers say.
“This approach is compatible with many protein-based vaccines, so it offers the opportunity to engineer new formulations for these types of vaccines across a wide range of different diseases, such as influenza, SARS-CoV-2, or other pandemic outbreaks,” says J. Christopher Love, the Raymond A. and Helen E. St. Laurent Professor of Chemical Engineering at MIT, and a member of the Koch Institute for Integrative Cancer Research and the Ragon Institute of MGH, MIT, and Harvard.
Love and Darrell Irvine, a professor of immunology and microbiology at the Scripps Research Institute, are the senior authors of the study, which appears today in Science Translational Medicine. Kristen Rodrigues PhD ’23 and Yiming Zhang PhD ’25 are the lead authors of the paper.
More powerful vaccines
Most vaccines are delivered along with adjuvants, which help to stimulate a stronger immune response to the antigen. One adjuvant commonly used with protein-based vaccines, including those for hepatitis A and B, is aluminum hydroxide, also known as alum. This adjuvant works by activating the innate immune response, helping the body to form a stronger memory of the vaccine antigen.
Several years ago, Irvine developed another adjuvant based on saponin, which is FDA-approved and derived from the bark of the Chilean soapbark tree. His work showed that nanoparticles containing both saponin and a molecule called MPLA, which promotes inflammation, worked better than saponin on its own. That nanoparticle, known as SMNP, is now being used as an adjuvant for an HIV vaccine that is currently in clinical trials.
Irvine and Love then tried combining alum and SMNP and showed that vaccines containing both of those adjuvants could generate even more powerful immune responses against either HIV or SARS-CoV-2.
In the new paper, the researchers wanted to explore why these two adjuvants work so well together to boost the immune response, specifically the B cell response. B cells produce antibodies that can circulate in the bloodstream and recognize a pathogen if the body is exposed to it again.
For this study, the researchers used an HIV protein called MD39 as their vaccine antigen, and anchored dozens of these proteins to each alum particle, along with SMNP.
After vaccinating mice with these particles, the researchers found that the vaccine accumulated in the lymph nodes — structures where B cells encounter antigens and undergo rapid mutations that generate antibodies with high affinity for a particular antigen. This process takes place within clusters of cells known as germinal centers.
The researchers showed that SMNP and alum helped the HIV antigen to penetrate through the protective layer of cells surrounding the lymph nodes without being broken down into fragments. The adjuvants also helped the antigens to remain intact in the lymph nodes for up to 28 days.
“As a result, the B cells that are cycling in the lymph nodes are constantly being exposed to the antigen over that time period, and they get the chance to refine their solution to the antigen,” Love says.
This approach may mimic what occurs during a natural infection, when antigens can remain in the lymph nodes for weeks, giving the body time to build up an immune response.
Antibody diversity
Single-cell RNA sequencing of B cells from the vaccinated mice revealed that the vaccine containing both adjuvants generated a much more diverse repertoire of B cells and antibodies. Mice that received the dual-adjuvant vaccine produced two to three times more unique B cells than mice that received just one of the adjuvants.
That increase in B cell number and diversity boosts the chances that the vaccine could generate broadly neutralizing antibodies — antibodies that can recognize a variety of strains of a given virus, such as HIV.
“When you think about the immune system sampling all of the possible solutions, the more chances we give it to identify an effective solution, the better,” Love says. “Generating broadly neutralizing antibodies is something that likely requires both the kind of approach that we showed here, to get that strong and diversified response, as well as antigen design to get the right part of the immunogen shown.”
Using these two adjuvants together could also contribute to the development of more potent vaccines against other infectious diseases, with just a single dose.
“What’s potentially powerful about this approach is that you can achieve long-term exposures based on a combination of adjuvants that are already reasonably well-understood, so it doesn’t require a different technology. It’s just combining features of these adjuvants to enable low-dose or potentially even single-dose treatments,” Love says.
The research was funded by the National Institutes of Health; the Koch Institute Support (core) Grant from the National Cancer Institute; the Ragon Institute of MGH, MIT, and Harvard; and the Howard Hughes Medical Institute.
New 3D chips could make electronics faster and more energy-efficient
The advanced semiconductor material gallium nitride will likely be key for the next generation of high-speed communication systems and the power electronics needed for state-of-the-art data centers.
Unfortunately, the high cost of gallium nitride (GaN) and the specialization required to incorporate this semiconductor material into conventional electronics have limited its use in commercial applications.
Now, researchers from MIT and elsewhere have developed a new fabrication process that integrates high-performance GaN transistors onto standard silicon CMOS chips in a way that is low-cost and scalable, and compatible with existing semiconductor foundries.
Their method involves building many tiny transistors on the surface of a GaN chip, cutting out each individual transistor, and then bonding just the necessary number of transistors onto a silicon chip using a low-temperature process that preserves the functionality of both materials.
The cost remains minimal since only a tiny amount of GaN material is added to the chip, but the resulting device can receive a significant performance boost from compact, high-speed transistors. In addition, by separating the GaN circuit into discrete transistors that can be spread over the silicon chip, the new technology is able to reduce the temperature of the overall system.
The researchers used this process to fabricate a power amplifier, an essential component in mobile phones, that achieves higher signal strength and efficiencies than devices with silicon transistors. In a smartphone, this could improve call quality, boost wireless bandwidth, enhance connectivity, and extend battery life.
Because their method fits into standard procedures, it could improve electronics that exist today as well as future technologies. Down the road, the new integration scheme could even enable quantum applications, as GaN performs better than silicon at the cryogenic temperatures essential for many types of quantum computing.
“If we can bring the cost down, improve the scalability, and, at the same time, enhance the performance of the electronic device, it is a no-brainer that we should adopt this technology. We’ve combined the best of what exists in silicon with the best possible gallium nitride electronics. These hybrid chips can revolutionize many commercial markets,” says Pradyot Yadav, an MIT graduate student and lead author of a paper on this method.
He is joined on the paper by fellow MIT graduate students Jinchen Wang and Patrick Darmawi-Iskandar; MIT postdoc John Niroula; senior authors Ulrich L. Rohde, a visiting scientist at the Microsystems Technology Laboratories (MTL), and Ruonan Han, an associate professor in the Department of Electrical Engineering and Computer Science (EECS) and member of MTL; and Tomás Palacios, the Clarence J. LeBel Professor of EECS, and director of MTL; as well as collaborators at Georgia Tech and the Air Force Research Laboratory. The research was recently presented at the IEEE Radio Frequency Integrated Circuits Symposium.
Swapping transistors
Gallium nitride is the second most widely used semiconductor in the world, just after silicon, and its unique properties make it ideal for applications such as lighting, radar systems and power electronics.
The material has been around for decades, but to get the most out of its performance, chips made of GaN need to be connected to digital chips made of silicon, also called CMOS chips. To enable this, some integration methods bond GaN transistors onto a CMOS chip by soldering the connections, but this limits how small the GaN transistors can be. The tinier the transistors, the higher the frequency at which they can work.
Other methods integrate an entire gallium nitride wafer on top of a silicon wafer, but using so much material is extremely costly, especially since the GaN is only needed in a few tiny transistors. The rest of the material in the GaN wafer is wasted.
“We wanted to combine the functionality of GaN with the power of digital chips made of silicon, but without having to compromise on either cost or bandwidth. We achieved that by adding super-tiny discrete gallium nitride transistors right on top of the silicon chip,” Yadav explains.
The new chips are the result of a multistep process.
First, a tightly packed collection of minuscule transistors is fabricated across the entire surface of a GaN wafer. Using very fine laser technology, the researchers cut each one down to just the size of the transistor, which is 240 by 410 microns, forming what they call a dielet. (A micron is one millionth of a meter.)
Each transistor is fabricated with tiny copper pillars on top, which the researchers use to bond it directly to the copper pillars on the surface of a standard silicon CMOS chip. Copper-to-copper bonding can be done at temperatures below 400 degrees Celsius, which is low enough to avoid damaging either material.
Current GaN integration techniques require bonds that utilize gold, an expensive material that needs much higher temperatures and stronger bonding forces than copper. Since gold can contaminate the tools used in most semiconductor foundries, it typically requires specialized facilities.
“We wanted a process that was low-cost, low-temperature, and low-force, and copper wins on all of those relative to gold. At the same time, it has better conductivity,” Yadav says.
A new tool
To enable the integration process, they created a specialized new tool that can carefully align the extremely tiny GaN transistors with the silicon chip. The tool uses a vacuum to hold the dielet as it moves over the silicon chip, zeroing in on the copper bonding interface with nanometer precision.
They use advanced microscopy to monitor the interface, and when the dielet is in the right position, they apply heat and pressure to bond the GaN transistor to the chip.
“For each step in the process, I had to find a new collaborator who knew how to do the technique that I needed, learn from them, and then integrate that into my platform. It was two years of constant learning,” Yadav says.
Once the researchers had perfected the fabrication process, they demonstrated it by developing power amplifiers, which are radio frequency circuits that boost wireless signals.
Their devices achieved higher bandwidth and better gain than devices made with traditional silicon transistors. Each compact chip has an area of less than half a square millimeter.
In addition, because the silicon chip they used in their demonstration is based on the Intel 16 22-nanometer FinFET process, with state-of-the-art metallization and passive device options, they were able to incorporate components often used in silicon circuits, such as neutralization capacitors. This significantly improved the gain of the amplifier, bringing it one step closer to enabling the next generation of wireless technologies.
“To address the slowdown of Moore’s Law in transistor scaling, heterogeneous integration has emerged as a promising solution for continued system scaling, reduced form factor, improved power efficiency, and cost optimization. Particularly in wireless technology, the tight integration of compound semiconductors with silicon-based wafers is critical to realizing unified systems of front-end integrated circuits, baseband processors, accelerators, and memory for next-generation antennas-to-AI platforms. This work makes a significant advancement by demonstrating 3D integration of multiple GaN chips with silicon CMOS and pushes the boundaries of current technological capabilities,” says Atom Watanabe, a research scientist at IBM who was not involved with this paper.
This work is supported, in part, by the U.S. Department of Defense through the National Defense Science and Engineering Graduate (NDSEG) Fellowship Program and CHIMES, one of the seven centers in JUMP 2.0, a Semiconductor Research Corporation Program by the Department of Defense and the Defense Advanced Research Projects Agency (DARPA). Fabrication was carried out using facilities at MIT.nano, the Air Force Research Laboratory, and Georgia Tech.
Combining technology, education, and human connection to improve online learning
MIT Morningside Academy for Design (MAD) Fellow Caitlin Morris is an architect, artist, researcher, and educator who has studied psychology and used online learning tools to teach herself coding and other skills. She’s a soft-spoken observer, with a keen interest in how people use space and respond to their environments. Combining her observational skills with active community engagement, she works at the intersection of technology, education, and human connection to improve digital learning platforms.
Morris grew up in rural upstate New York in a family of makers. She learned to sew, cook, and build things with wood at a young age. One of her earliest memories is of a small handsaw she made — with the help of her father, a professional carpenter. It had wooden handles on both sides to make sawing easier for her.
Later, when she needed to learn something, she’d turn to project-based communities, rather than books. She taught herself to code late at night, taking advantage of community-oriented platforms where people answer questions and post sketches, allowing her to see the code behind the objects people made.
“For me, that was this huge, wake-up moment of feeling like there was a path to expression that was not a traditional computer-science classroom,” she says. “I think that’s partly why I feel so passionate about what I’m doing now. That was the big transformation: having that community available in this really personal, project-based way.”
Subsequently, Morris has become involved in community-based learning in diverse ways: She’s a co-organizer of the MIT Media Lab’s Festival of Learning; she leads creative coding community meetups; and she’s been active in the open-source software community.
“My years of organizing learning and making communities — both in person and online — have shown me firsthand how powerful social interaction can be for motivation and curiosity,” Morris says. “My research is really about identifying which elements of that social magic are most essential, so we can design digital environments that better support those dynamics.”
Even in her artwork, Morris sometimes works with a collective. She’s contributed to the creation of about 10 large art installations that combine movement, sound, imagery, lighting, and other technologies to immerse the visitor in an experience evoking some aspect of nature, such as flowing water, birds in flight, or crowd kinetics. These marvelous installations are commanding and calming at the same time, possibly because they focus the mind, eye, and sometimes the ear.
She did much of this work with New York-based Hypersonic, a company of artists and technologists specializing in large kinetic installations in public spaces. Before that, she earned a BS in psychology and a BS in architectural building sciences from Rensselaer Polytechnic Institute, then an MFA in design and technology from the Parsons School of Design at The New School.
During, in between, after, and sometimes concurrently, she taught design, coding, and other technologies at the high school, undergraduate, and graduate-student levels.
“I think what kind of got me hooked on teaching was that the way I learned as a child was not the same as in the classroom,” Morris explains. “And I later saw this in many of my students. I got the feeling that the normal way of learning things was not working for them. And they thought it was their fault. They just didn’t really feel welcome within the traditional education model.”
Morris says that when she worked with those students, tossing aside tradition and instead saying — “You know, we’re just going to do this animation. Or we’re going to make this design or this website or these graphics, and we’re going to approach it in this totally different way” — she saw people “kind of unlock and be like, ‘Oh my gosh. I never thought I could do that.’
“For me, that was the hook, that’s the magic of it. Because I was coming from that experience of having to figure out those unlock mechanisms for myself, it was really exciting to be able to share them with other people, those unlock moments.”
For her doctoral work with the MIT Media Lab’s Fluid Interfaces Group, she’s focusing on the personal space and emotional gaps associated with learning, particularly online and AI-assisted learning. This research builds on her experience increasing human connection in both physical and virtual learning environments.
“I’m developing a framework that combines AI-driven behavioral analysis with human expert assessment to study social learning dynamics,” she says. “My research investigates how social interaction patterns influence curiosity development and intrinsic motivation in learning, with particular focus on understanding how these dynamics differ between real peers and AI-supported environments.”
The first step in her research is determining which elements of social interaction are not replaceable by an AI-based digital tutor. Following that assessment, her goal is to build a prototype platform for experiential learning.
“I’m creating tools that can simultaneously track observable behaviors — like physical actions, language cues, and interaction patterns — while capturing learners’ subjective experiences through reflection and interviews,” Morris explains. “This approach helps connect what people do with how they feel about their learning experience.
“I aim to make two primary contributions: first, analysis tools for studying social learning dynamics; and second, prototype tools that demonstrate practical approaches for supporting social curiosity in digital learning environments. These contributions could help bridge the gap between the efficiency of digital platforms and the rich social interaction that occurs in effective in-person learning.”
Her goals make Morris a perfect fit for the MIT MAD Fellowship. One statement in MAD’s mission is: “Breaking away from traditional education, we foster creativity, critical thinking, making, and collaboration, exploring a range of dynamic approaches to prepare students for complex, real-world challenges.”
Morris wants to help community organizations deal with the rapid AI-powered changes in education, once she finishes her doctorate in 2026. “What should we do with this ‘physical space versus virtual space’ divide?” she asks. That is the space currently captivating Morris’s thoughts.
Unpacking the bias of large language models
Research has shown that large language models (LLMs) tend to overemphasize information at the beginning and end of a document or conversation, while neglecting the middle.
This “position bias” means that, if a lawyer is using an LLM-powered virtual assistant to retrieve a certain phrase in a 30-page affidavit, the LLM is more likely to find the right text if it is on the initial or final pages.
MIT researchers have discovered the mechanism behind this phenomenon.
They created a theoretical framework to study how information flows through the machine-learning architecture that forms the backbone of LLMs. They found that certain design choices which control how the model processes input data can cause position bias.
Their experiments revealed that model architectures, particularly those affecting how information is spread across input words within the model, can give rise to or intensify position bias, and that training data also contribute to the problem.
In addition to pinpointing the origins of position bias, their framework can be used to diagnose and correct it in future model designs.
This could lead to more reliable chatbots that stay on topic during long conversations, medical AI systems that reason more fairly when handling a trove of patient data, and code assistants that pay closer attention to all parts of a program.
“These models are black boxes, so as an LLM user, you probably don’t know that position bias can cause your model to be inconsistent. You just feed it your documents in whatever order you want and expect it to work. But by understanding the underlying mechanism of these black-box models better, we can improve them by addressing these limitations,” says Xinyi Wu, a graduate student in the MIT Institute for Data, Systems, and Society (IDSS) and the Laboratory for Information and Decision Systems (LIDS), and first author of a paper on this research.
Her co-authors include Yifei Wang, an MIT postdoc; and senior authors Stefanie Jegelka, an associate professor of electrical engineering and computer science (EECS) and a member of IDSS and the Computer Science and Artificial Intelligence Laboratory (CSAIL); and Ali Jadbabaie, professor and head of the Department of Civil and Environmental Engineering, a core faculty member of IDSS, and a principal investigator in LIDS. The research will be presented at the International Conference on Machine Learning.
Analyzing attention
LLMs like Claude, Llama, and GPT-4 are powered by a type of neural network architecture known as a transformer. Transformers are designed to process sequential data, encoding a sentence into chunks called tokens and then learning the relationships between tokens to predict what word comes next.
These models have gotten very good at this because of the attention mechanism, which uses interconnected layers of data processing nodes to make sense of context by allowing tokens to selectively focus on, or attend to, related tokens.
But if every token can attend to every other token in a 30-page document, that quickly becomes computationally intractable. So, when engineers build transformer models, they often employ attention masking techniques which limit the words a token can attend to.
For instance, a causal mask allows a word to attend only to the words that came before it.
Engineers also use positional encodings to help the model understand the location of each word in a sentence, improving performance.
The MIT researchers built a graph-based theoretical framework to explore how these modeling choices (attention masks and positional encodings) could affect position bias.
“Everything is coupled and tangled within the attention mechanism, so it is very hard to study. Graphs are a flexible language to describe the dependent relationship among words within the attention mechanism and trace them across multiple layers,” Wu says.
Their theoretical analysis suggested that causal masking gives the model an inherent bias toward the beginning of an input, even when that bias doesn’t exist in the data.
If the earlier words are relatively unimportant for a sentence’s meaning, causal masking can cause the transformer to pay more attention to its beginning anyway.
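A toy calculation, offered here only as an illustrative sketch rather than the researchers' model, makes the effect concrete: even when every visible token is scored equally, a causal mask lets a token spread its attention only over the positions at or before it, so the earliest positions collect the most attention across the whole sequence.

```rust
// Illustrative sketch: with a causal mask and perfectly uniform scores,
// token i attends to positions 0..=i with weight 1/(i+1) each.
// Summing over all tokens shows how much attention each position receives.

fn main() {
    let n = 5; // sequence length for this toy example
    let mut total_attention = vec![0.0_f64; n];

    for i in 0..n {
        let visible = i + 1; // causal mask: token i sees positions 0..=i
        for j in 0..visible {
            total_attention[j] += 1.0 / visible as f64;
        }
    }

    // Earlier positions accumulate more attention even though no position
    // was favored by the scores themselves: a structural head start
    // for the beginning of the input.
    for (j, a) in total_attention.iter().enumerate() {
        println!("position {j}: total attention received = {a:.2}");
    }
}
```

For five tokens, this prints roughly 2.28 for the first position and 0.20 for the last, and that head start compounds as more attention layers are stacked on top of one another.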
“While it is often true that earlier words and later words in a sentence are more important, if an LLM is used on a task that is not natural language generation, like ranking or information retrieval, these biases can be extremely harmful,” Wu says.
As a model grows, with additional layers of attention mechanism, this bias is amplified because earlier parts of the input are used more frequently in the model’s reasoning process.
They also found that using positional encodings to link words more strongly to nearby words can mitigate position bias. The technique refocuses the model’s attention in the right place, but its effect can be diluted in models with more attention layers.
And these design choices are only one cause of position bias — some can come from training data the model uses to learn how to prioritize words in a sequence.
“If you know your data are biased in a certain way, then you should also fine-tune your model on top of adjusting your modeling choices,” Wu says.
Lost in the middle
After they’d established a theoretical framework, the researchers performed experiments in which they systematically varied the position of the correct answer in text sequences for an information retrieval task.
The experiments showed a “lost-in-the-middle” phenomenon, where retrieval accuracy followed a U-shaped pattern. Models performed best if the right answer was located at the beginning of the sequence. Performance declined the closer it got to the middle before rebounding a bit if the correct answer was near the end.
Ultimately, their work suggests that using a different masking technique, removing extra layers from the attention mechanism, or strategically employing positional encodings could reduce position bias and improve a model’s accuracy.
“By doing a combination of theory and experiments, we were able to look at the consequences of model design choices that weren’t clear at the time. If you want to use a model in high-stakes applications, you must know when it will work, when it won’t, and why,” Jadbabaie says.
In the future, the researchers want to further explore the effects of positional encodings and study how position bias could be strategically exploited in certain applications.
“These researchers offer a rare theoretical lens into the attention mechanism at the heart of the transformer model. They provide a compelling analysis that clarifies longstanding quirks in transformer behavior, showing that attention mechanisms, especially with causal masks, inherently bias models toward the beginning of sequences. The paper achieves the best of both worlds — mathematical clarity paired with insights that reach into the guts of real-world systems,” says Amin Saberi, professor and director of the Stanford University Center for Computational Market Design, who was not involved with this work.
This research is supported, in part, by the U.S. Office of Naval Research, the National Science Foundation, and an Alexander von Humboldt Professorship.
This compact, low-power receiver could give a boost to 5G smart devices
MIT researchers have designed a compact, low-power receiver for 5G-compatible smart devices that is about 30 times more resilient to a certain type of interference than some traditional wireless receivers.
The low-cost receiver would be ideal for battery-powered internet of things (IoT) devices like environmental sensors, smart thermostats, or other devices that need to run continuously for a long time, such as health wearables, smart cameras, or industrial monitoring sensors.
The researchers’ chip uses a passive filtering mechanism that consumes less than a milliwatt of static power while protecting both the input and output of the receiver’s amplifier from unwanted wireless signals that could jam the device.
Key to the new approach is a novel arrangement of precharged, stacked capacitors, which are connected by a network of tiny switches. These minuscule switches need much less power to be turned on and off than those typically used in IoT receivers.
The receiver’s capacitor network and amplifier are carefully arranged to leverage a phenomenon in amplification that allows the chip to use much smaller capacitors than would typically be necessary.
“This receiver could help expand the capabilities of IoT gadgets. Smart devices like health monitors or industrial sensors could become smaller and have longer battery lives. They would also be more reliable in crowded radio environments, such as factory floors or smart city networks,” says Soroush Araei, an electrical engineering and computer science (EECS) graduate student at MIT and lead author of a paper on the receiver.
He is joined on the paper by Mohammad Barzgari, a postdoc in the MIT Research Laboratory of Electronics (RLE); Haibo Yang, an EECS graduate student; and senior author Negar Reiskarimian, the X-Window Consortium Career Development Assistant Professor in EECS at MIT and a member of the Microsystems Technology Laboratories and RLE. The research was recently presented at the IEEE Radio Frequency Integrated Circuits Symposium.
A new standard
A receiver acts as the intermediary between an IoT device and its environment. Its job is to detect and amplify a wireless signal, filter out any interference, and then convert it into digital data for processing.
Traditionally, IoT receivers operate on fixed frequencies and suppress interference using a single narrow-band filter, which is simple and inexpensive.
But the new technical specifications of the 5G mobile network enable reduced-capability devices that are more affordable and energy-efficient. This opens a range of IoT applications to the faster data speeds and increased network capability of 5G. These next-generation IoT devices need receivers that can tune across a wide range of frequencies while still being cost-effective and low-power.
“This is extremely challenging because now we need to not only think about the power and cost of the receiver, but also flexibility to address numerous interferers that exist in the environment,” Araei says.
To reduce the size, cost, and power consumption of an IoT device, engineers can’t rely on the bulky, off-chip filters that are typically used in devices that operate on a wide frequency range.
One solution is to use a network of on-chip capacitors that can filter out unwanted signals. But these capacitor networks are prone to a special type of signal noise known as harmonic interference.
In prior work, the MIT researchers developed a novel switch-capacitor network that targets these harmonic signals as early as possible in the receiver chain, filtering out unwanted signals before they are amplified and converted into digital bits for processing.
Shrinking the circuit
Here, they extended that approach by using the novel switch-capacitor network as the feedback path in an amplifier with negative gain. This configuration leverages the Miller effect, a phenomenon that enables small capacitors to behave like much larger ones.
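In its textbook form (the actual component values in this chip are not reported here), the Miller effect says that a feedback capacitance C_f placed across an inverting amplifier with voltage gain -A appears at the input as a much larger effective capacitance:

$$ C_{\text{eff}} = C_f \, (1 + A) $$

For example, a 1-picofarad feedback capacitor behind a gain of 20 would present roughly 21 picofarads at the input, which is how a physically small capacitor can satisfy a narrow-band filtering requirement.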
“This trick lets us meet the filtering requirement for narrow-band IoT without physically large components, which drastically shrinks the size of the circuit,” Araei says.
Their receiver has an active area of less than 0.05 square millimeters.
One challenge the researchers had to overcome was determining how to apply enough voltage to drive the switches while keeping the overall power supply of the chip at only 0.6 volts.
In the presence of interfering signals, such tiny switches can turn on and off in error, especially if the voltage required for switching is extremely low.
To address this, the researchers came up with a novel solution, using a special circuit technique called bootstrap clocking. This method boosts the control voltage just enough to ensure the switches operate reliably while using less power and fewer components than traditional clock boosting methods.
Taken together, these innovations enable the new receiver to consume less than a milliwatt of power while blocking about 30 times more harmonic interference than traditional IoT receivers.
“Our chip also is very quiet, in terms of not polluting the airwaves. This comes from the fact that our switches are very small, so the amount of signal that can leak out of the antenna is also very small,” Araei adds.
Because their receiver is smaller than traditional devices and relies on switches and precharged capacitors instead of more complex electronics, it could be more cost-effective to fabricate. In addition, since the receiver design can cover a wide range of signal frequencies, it could be implemented on a variety of current and future IoT devices.
Now that they have developed this prototype, the researchers want to enable the receiver to operate without a dedicated power supply, perhaps by harvesting Wi-Fi or Bluetooth signals from the environment to power the chip.
This research is supported, in part, by the National Science Foundation.
Gaspare LoDuca named VP for information systems and technology and CIO
Gaspare LoDuca has been appointed MIT’s vice president for information systems and technology (IS&T) and chief information officer, effective Aug. 18. Currently vice president for information technology and CIO at Columbia University, LoDuca has held IT leadership roles in or related to higher education for more than two decades. He succeeds Mark Silis, who led IS&T from 2019 until 2024, when he left MIT to return to the entrepreneurial ecosystem in the San Francisco Bay area.
Executive Vice President and Treasurer Glen Shor announced the appointment today in an email to MIT faculty and staff.
“I believe that Gaspare will be an incredible asset to MIT, bringing wide-ranging experience supporting faculty, researchers, staff, and students and a highly collaborative style,” says Shor. “He is eager to start his work with our talented IS&T team to chart and implement their contributions to the future of information technology at MIT.”
LoDuca will lead the IS&T organization and oversee MIT’s information technology infrastructure and services that support its research and academic enterprise across student and administrative systems, network operations, cloud services, cybersecurity, and customer support. As co-chair of the Information Technology Governance Committee, he will guide the development of IT policy and strategy at the Institute. He will also play a key role in MIT’s effort to modernize its business processes and administrative systems, working in close collaboration with the Business and Digital Transformation Office.
“Gaspare brings to his new role extensive experience leading a complex IT organization,” says Provost Cynthia Barnhart, who served as one of Shor's advisors during the search process. “His depth of experience, coupled with his vision for the future state of information technology and digital transformation at MIT, are compelling, and I am excited to see the positive impact he will have here.”
“As I start my new role, I plan to learn more about MIT’s culture and community to ensure that any decisions or changes we make are shaped by the community’s needs and carried out in a way that fits the culture. I’m also looking forward to learning more about the research and work being done by students and faculty to advance MIT’s mission. It’s inspiring, and I’m eager to support their success,” says LoDuca.
In his role at Columbia, LoDuca has overseen the IT department, headed IT governance committees for school and department-level IT functions, and ensured the secure operation of the university’s enterprise-class systems since 2015. During his tenure, he has crafted a culture of customer service and innovation — building a new student information system, identifying emerging technologies for use in classrooms and labs, and creating a data-sharing platform for university researchers and a grants dashboard for principal investigators. He also revamped Columbia’s technology infrastructure and implemented tools to ensure the security and reliability of its technology resources.
Before joining Columbia, LoDuca was the technology managing director for the education practice at Accenture from 1998 to 2015. In that role, he helped universities to develop and implement technology strategies and adopt modern applications and systems. His projects included overseeing the implementation of finance, human resources, and student administration systems for clients such as Columbia University, University of Miami, Carnegie Mellon University, the University System of Georgia, and Yale University.
“At a research institution, there’s a wide range of activities happening every day, and our job in IT is to support them all while also managing cybersecurity risks. We need to be creative and thoughtful in our solutions, and consider the needs and expectations of our community,” he says.
LoDuca holds a bachelor’s degree in chemical engineering from Michigan State University. He and his wife are recent empty nesters, and are in the process of relocating to Boston.
Closing in on superconducting semiconductors
In 2023, data centers, which are essential for processing large quantities of information, accounted for about 4.4 percent (176 terawatt-hours) of total electricity consumption in the United States. Of that 176 TWh, approximately 100 TWh (57 percent) was used by CPU and GPU equipment. Energy requirements have escalated substantially in the past decade and will only continue to grow, making the development of energy-efficient computing crucial.
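The 57 percent figure is simply the ratio of those two quantities:

$$ \frac{100\ \text{TWh}}{176\ \text{TWh}} \approx 0.57 $$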
Superconducting electronics have arisen as a promising alternative for classical and quantum computing, although their full exploitation for high-end computing requires a dramatic reduction in the amount of wiring linking ambient temperature electronics and low-temperature superconducting circuits. To make systems that are both larger and more streamlined, replacing commonplace components such as semiconductors with superconducting versions could be of immense value. It’s a challenge that has captivated MIT Plasma Science and Fusion Center senior research scientist Jagadeesh Moodera and his colleagues, who described a significant breakthrough in a recent Nature Electronics paper, “Efficient superconducting diodes and rectifiers for quantum circuitry.”
Moodera was working on a stubborn, long-standing problem: the need to efficiently convert AC currents into DC currents on a chip while operating at the extremely cold cryogenic temperatures required for superconductors to work. In superconducting “energy-efficient rapid single flux quantum” (ERSFQ) circuits, for example, the AC-to-DC issue limits scalability and prevents their use in larger, more complex circuits. To respond to this need, Moodera and his team created superconducting rectifiers based on superconducting diodes (SDs) — devices that can convert AC to DC on the same chip. These rectifiers would allow for the efficient delivery of the DC current necessary to operate superconducting classical and quantum processors.
Quantum computer circuits can only operate at temperatures close to 0 kelvins (absolute zero), and the way power is supplied must be carefully controlled to limit the effects of interference introduced by too much heat or electromagnetic noise. Most unwanted noise and heat come from the wires connecting cold quantum chips to room-temperature electronics. Instead, using superconducting rectifiers to convert AC currents into DC within a cryogenic environment reduces the number of wires, cutting down on heat and noise and enabling larger, more stable quantum systems.
In a 2023 experiment, Moodera and his co-authors developed SDs that are made of very thin layers of superconducting material that display nonreciprocal (or unidirectional) flow of current and could be the superconducting counterpart to standard semiconductors. Even though SDs have garnered significant attention, especially since 2020, up until this point the research has focused only on individual SDs for proof of concept. The group’s 2023 paper outlined how they created and refined a method by which SDs could be scaled for broader application.
Now, by building a diode bridge circuit, they demonstrated the successful integration of four SDs and realized AC-to-DC rectification at cryogenic temperatures.
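For reference, and describing only the textbook behavior of an ideal four-diode bridge rather than measured values from the paper, the bridge flips the negative half of the AC waveform, so a sinusoidal input becomes a one-polarity output whose average value is set by the input amplitude:

$$ V_{\text{in}}(t) = V_0 \sin(\omega t) \;\longrightarrow\; V_{\text{out}}(t) = V_0 \,\lvert \sin(\omega t) \rvert, \qquad \langle V_{\text{out}} \rangle = \frac{2 V_0}{\pi} \approx 0.64\, V_0 $$

Smoothing and regulation come afterward, but this is the basic AC-to-DC conversion that the integrated superconducting diode bridge performs inside the cryogenic environment.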
The new approach described in their recent Nature Electronics paper will significantly cut down on the thermal and electromagnetic noise traveling from ambient into cryogenic circuitry, enabling cleaner operation. The SDs could also potentially serve as isolators/circulators, assisting in insulating qubit signals from external influence. The successful assimilation of multiple SDs into the first integrated SD circuit represents a key step toward making superconducting computing a commercial reality.
“Our work opens the door to the arrival of highly energy-efficient, practical superconductivity-based supercomputers in the next few years,” says Moodera. “Moreover, we expect our research to enhance qubit stability while boosting the quantum computing program, bringing its realization closer.” Given the multiple beneficial roles these components could play, Moodera and his team are already working toward integrating such devices into actual superconducting logic circuits, including dark matter detection circuits that are essential to the operation of experiments at CERN and LUX-ZEPLIN at the Berkeley National Lab.
This work was partially funded by MIT Lincoln Laboratory’s Advanced Concepts Committee, the U.S. National Science Foundation, U.S. Army Research Office, and U.S. Air Force Office of Scientific Research.
A brief history of the global economy, through the lens of a single barge
In 1989, New York City opened a new jail. But not on dry land. The city leased a barge, then called the “Bibby Resolution,” which had been topped with five stories of containers made into housing, and anchored it in the East River. For five years, the vessel lodged inmates.
A floating detention center is a curiosity. But then, the entire history of this barge is curious. Built in 1979 in Sweden, it housed British troops during the Falkland Islands war with Argentina, became worker housing for Volkswagen employees in West Germany, got sent to New York, also became a detention center off the coast of England, then finally was deployed as oil worker housing off the coast of Nigeria. The barge has had nine names, several owners, and flown the flags of five countries.
In this one vessel, then, we can see many currents: globalization, the transience of economic activity, and the hazy world of transactions many analysts and observers call “the offshore,” the lightly regulated sphere of economic activity that encourages short-term actions.
“The offshore presents a quick and potentially cheap solution to a crisis,” says MIT lecturer Ian Kumekawa. “It is not a durable solution. The story of the barge is the story of it being used as a quick fix in all sorts of crises. Then these expediences become the norm, and people get used to them and have an expectation that this is the way the world works.”
Now Kumekawa, a historian who started teaching as a lecturer at MIT earlier this year, explores the ship’s entire history in “Empty Vessel: The Global Economy in One Barge,” just published by Knopf and John Murray. In it, he traces the barge’s trajectory and the many economic and geopolitical changes that helped create the ship’s distinctive deployments around the world.
“The book is about a barge, but it’s also about the developing, emerging offshore world, where you see these layers of globalization, financialization, privatization, and the dissolution of territoriality and orders,” Kumekawa says. “The barge is a vehicle through which I can tell the story of those layers together.”
“Never meant to be permanent”
Kumekawa first found out about the vessel several years ago: New York City had obtained another floating detention center in the 1990s, which prompted him to start looking into the past of the older jail ship, the former “Bibby Resolution.” The more he found out about its distinctive past, the more curious he became.
“You start pulling on a thread, and you realize you can keep pulling,” Kumekawa says.
The barge Kumekawa follows in the book was built in Sweden in 1979 as the “Balder Scapa.” Even then, commerce was plenty globalized: The vessel was commissioned by a Norwegian shell company, with negotiations run by an expatriate Swedish shipping agent whose firm was registered in Panama and used a Miami bank.
The barge was built at an inflection point following the economic slowdown and oil shocks of the 1970s. Manufacturing was on the verge of declining in both Western Europe and the U.S.; about half as many people now work in manufacturing in those regions, compared to 1960. Companies were looking to find cheaper global locations for production, reinforcing the sense that economic activity was now less durable in any given place.
The barge became part of this transience. The five-story accommodation block was added in the early 1980s; in 1983 it was re-registered in the UK and sent to the Falkland Islands as a troop accommodation named the “COASTEL 3.” Then it was re-registered in the Bahamas and sent to Emden, West Germany, as housing for Volkswagen workers. The vessel then served its stints as inmate housing — first in New York, then off the coast of England from 1997 to 2005. By 2010, it had been re-re-re-registered, in St. Vincent and the Grenadines, and was housing oil workers off the coast of Nigeria.
“Globalization is more about flow than about stocks, and the barge is a great example of that,” Kumekawa says. “It’s always on the move, and never meant to be a permanent container. It’s understood people are going to be passing through.”
As Kumekawa explores in the book, this sense of social dislocation overlapped with the shrinking of state capacity, as many states increasingly encouraged companies to pursue globalized production and lightly regulated financial activities in numerous jurisdictions, in the hope it would enhance growth. And it has, albeit with unresolved questions about who the benefits accrue to, the social dislocation of workers, and more.
“In a certain sense it’s not an erosion of state power at all,” Kumekawa says. “These states are making very active choices to use offshore tools, to circumvent certain roadblocks.” He adds: “What happens in the 1970s and certainly in the 1980s is that the offshore comes into its own as an entity, and didn’t exist in the same way even in the 1950s and 1960s. There’s a money interest in that, and there’s a political interest as well.”
Abstract forces, real materials and people
Kumekawa is a scholar with a strong interest in economic history; his previous book, “The First Serious Optimist: A.C. Pigou and the Birth of Welfare Economics,” was published in 2017. This coming fall, Kumekawa will be team-teaching a class on the relationship between economics and history, along with MIT economists Abhijit Banerjee and Jacob Moscona.
Working on “Empty Vessel” also necessitated that Kumekawa use a variety of research techniques, from archival work to journalistic interviews with people who knew the vessel well.
“I had a wonderful set of conversations with the man who was the last bargemaster,” Kumekawa says. “He was the person in effect steering the vessel for many years. He was so aware of all of the forces at play — the market for oil, the prices of accommodations, the regulations, the fact no one had reinforced the frame.”
“Empty Vessel” has already received critical acclaim. Reviewing it in The New York Times, Jennifer Szalai writes that this “elegant and enlightening book is an impressive feat.”
For his part, Kumekawa also took inspiration from a variety of writings about ships, voyages, commerce, and exploration, recognizing that these vessels contain stories and vignettes that illuminate the wider world.
“Ships work very well as devices connecting the global and the local,” he says. Using the barge as the organizing principle of his book, Kumekawa adds, “makes a whole bunch of abstract processes very concrete. The offshore itself is an abstraction, but it’s also entirely dependent on physical infrastructure and physical places. My hope for the book is it reinforces the material dimension of these abstract global forces.”
Students and staff work together for MIT’s first “No Mow May”
In recent years, some grass lawns around the country have grown a little taller in springtime thanks to No Mow May, a movement launched by the U.K. nonprofit Plantlife in 2019 to raise awareness about the ecological impacts of the traditional, resource-intensive, manicured grass lawn. No Mow May encourages people to skip spring mowing to allow grass to grow tall and provide food and shelter for beneficial creatures, including bees, beetles, and other pollinators.
This year, MIT took part in the practice for the first time, with portions of the Kendall/MIT Open Space, Bexley Garden, and the Tang Courtyard forgoing mowing from May 1 through June 6 to make space for local pollinators, decrease water use, and encourage new thinking about the traditional lawn. MIT’s first No Mow May was the result of championing by the Graduate Student Council Sustainability Subcommittee (GSC Sustain) and made possible by the Office of the Vice Provost for Campus Space Management and Planning.
A student idea sprouts
Despite being a dense urban campus, MIT has no shortage of green spaces — from pocket gardens and community-managed vegetable plots to thousands of shade trees — and interest in these spaces continues to grow. In recent years, student-led initiatives supported by Institute leadership and operational staff have transformed portions of campus by increasing the number of native pollinator plants and expanding community gardens, like the Hive Garden. With No Mow May, these efforts stepped out of the garden and into MIT’s many grassy open spaces.
“The idea behind it was to raise awareness for more sustainable and earth-friendly lawn practices,” explains Gianmarco Terrones, GSC Sustain member. Those practices include reducing the burden of mowing, limiting use of fertilizers, and providing shelter and food for pollinators. “The insects that live in these spaces are incredibly important in terms of pollination, but they’re also part of the food chain for a lot of animals,” says Terrones.
Research has shown that holding off on mowing in spring, even in small swaths of green space, can have an impact. The early months of spring have the lowest number of flowers in regions like New England, and providing a resource and refuge — even for a short duration — can support fragile pollinators like bees. Additionally, No Mow May aims to help people rethink their yards and practices, which are not always beneficial for local ecosystems.
Signage at each No Mow site on campus highlighted information on local pollinators, the impact of the project, and questions for visitors to ask themselves. “Having an active sign there to tell people, ‘look around. How many butterflies do you see after six weeks of not mowing? Do you see more? Do you see more bees?’ can cause subtle shifts in people’s awareness of ecosystems,” says GSC Sustain member Mingrou Xie. A mowed barrier around each project also helped visitors know that areas of tall grass at No Mow sites are intentional.
Campus partners bring sustainable practices to life
To make MIT’s No Mow May possible, GSC Sustain members worked with the Office of the Vice Provost and the Open Space Working Group, co-chaired by Vice Provost for Campus Space Management and Planning Brent Ryan and Director of Sustainability Julie Newman. The Working Group, which also includes staff from Open Space Programming, Campus Planning, and faculty in the School of Architecture and Planning, helped to identify potential No Mow locations and develop strategies for educational signage and any needed maintenance. “Massachusetts is a biodiverse state, and No Mow May provides an exciting opportunity for MIT to support that biodiversity on its own campus,” says Ryan.
Students were eager for space on campus with high visibility, and the chosen locations of the Kendall/MIT Open Space, Bexley Garden, and the Tang Courtyard fit the bill. “We wanted to set an example and empower the community to feel like they can make a positive change to an environment they spend so much time in,” says Xie.
For GSC Sustain, that positive change also takes the form of the Native Plant Project, which they launched in 2022 to increase the number of Massachusetts-native pollinator plants on campus — plants like swamp milkweed, zigzag goldenrod, big leaf aster, and red columbine, with which native pollinators have co-evolved. Partnering with the Open Space Working Group, GSC Sustain is currently focused on two locations for new native plant gardens — the President’s Garden and the terrace gardens at the E37 Graduate Residence. “Our short-term goal is to increase the number of native [plants] on campus, but long term we want to foster a community of students and staff interested in supporting sustainable urban gardening,” says Xie.
Campus as a test bed continues to grow
After just a few weeks of growing, the campus No Mow May locations sprouted buttercups, mouse ear chickweed, and small tree saplings, highlighting the diversity waiting dormant in the average lawn. Terrones also notes other discoveries: “It’s been exciting to see how much the grass has sprung up these last few weeks. I thought the grass would all grow at the same rate, but as May has gone on the variations in grass height have become more apparent, leading to non-uniform lawns with a clearly unmanicured feel,” he says. “We hope that members of MIT noticed how these lawns have evolved over the span of a few weeks and are inspired to implement more earth-friendly lawn practices in their own homes/spaces.”
No Mow May and the Native Plant Project fit into MIT’s overall focus on creating resilient ecosystems that support and protect the MIT community and the beneficial critters that call it home. MIT Grounds Services has long included native plants in the mix of what is grown on campus, and native pollinator gardens, like the Hive Garden, have been developed and cared for in recent years through partnerships between students and Grounds Services. Grounds, along with the consultants who design and install campus landscape projects, strives to select plants that help meet sustainability goals, such as managing stormwater runoff and providing cooling. No Mow May can provide one more data point for the iterative process of choosing the best plants and practices for a unique microclimate like the MIT campus.
“We are always looking for new ways to use our campus as a test bed for sustainability,” says Director of Sustainability Julie Newman. “Community-led projects like No Mow May help us to learn more about our campus and share those lessons with the larger community.”
The Office of the Vice Provost, the Open Space Working Group, and GSC Sustain plan to reconnect in the fall for a formal debrief of the project and its successes. Given the positive community feedback, they will discuss possibilities for expanding or extending No Mow May in the future.