MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

This MIT spinout is taking biomolecule storage out of the freezer

Fri, 09/12/2025 - 12:00am

Ever since freezers were invented, the life sciences industry has been reliant on them. That’s because many patient samples, drug candidates, and other biologics must be stored and transported in powerful freezers or surrounded by dry ice to remain stable.

The problem was on full display during the Covid-19 pandemic, when truckloads of vaccines had to be discarded because they had thawed during transport. Today, the stakes are even higher. Precision medicine, from CAR-T cell therapies to tumor DNA sequencing that guides cancer treatment, depends on pristine biological samples. Yet a single power outage, shipping delay, or equipment failure can destroy irreplaceable patient samples, setting back treatment by weeks or halting it entirely. In remote areas and developing nations, the lack of reliable cold storage effectively locks out entire populations from these life-saving advances.

Cache DNA wants to set the industry free from freezers. At MIT, the company’s founders created a new way to store and preserve DNA molecules at room temperature. Now the company is building biomolecule preservation technologies that can be used in applications across health care, from routine blood tests and cancer screening to rare disease research and pandemic preparedness.

“We want to challenge the paradigm,” says Cache DNA co-founder and former MIT postdoc James Banal. “Biotech has been reliant on the cold chain for more than 50 years. Why hasn’t that changed? Meanwhile, the cost of DNA sequencing has plummeted from $3 billion for the first human genome to under $200 today. With DNA sequencing and synthesis becoming so cheap and fast, storage and transport have emerged as the critical bottlenecks. It’s like having a supercomputer that still requires punch cards for data input.”

As the company works to preserve biomolecules beyond DNA and scale the production of its kits, co-founders Banal and MIT Professor Mark Bathe believe their technology has the potential to unlock new health insights by making sample storage accessible to scientists around the world.

“Imagine if every human on Earth could contribute to a global biobank, not just those living near million-dollar freezer facilities,” Banal says. “That’s 8 billion biological stories instead of just a privileged few. The cures we’re missing might be hiding in the biomolecules of someone we’ve never been able to reach.”

From quantum computing to “Jurassic Park”

Banal came to MIT from Australia to work as a postdoc under Bathe, a professor in MIT’s Department of Biological Engineering. Banal worked primarily in the MIT-Harvard Center for Excitonics, through which he collaborated with researchers from across MIT.

“I worked on some really wacky stuff, like DNA nanotechnology and its intersection with quantum computing and artificial photosynthesis,” Banal recalls.

Another project focused on using DNA to store data. While computers store data as 0s and 1s, DNA can store the same information using the nucleotides A, T, G, and C, allowing for extremely dense storage of data: By one estimate, 1 gram of DNA can hold up to 215 petabytes of data.
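
To make the idea concrete, here is a minimal sketch, in Python, of how binary data can be mapped onto bases at two bits per nucleotide. The mapping is purely illustrative and is not Cache DNA’s encoding; practical DNA data-storage schemes add error correction and avoid sequences that are hard to synthesize or read.

```python
# Minimal illustrative sketch: map binary data to DNA bases, two bits per base.
# The 00->A, 01->C, 10->G, 11->T table is arbitrary; real schemes add error
# correction and constraints on sequence composition.

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

message = b"hello"
strand = encode(message)          # 'CGGACGCCCGTACGTACGTT'
assert decode(strand) == message  # round-trips losslessly
```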

After three years of work, in 2021, Banal and Bathe created a system that stored DNA-based data in tiny glass particles. They founded Cache DNA the same year, securing the intellectual property by working with MIT’s Technology Licensing Office, applying the technology to storing clinical nucleic acid samples as well as DNA data. Still, the technology was too nascent to be used for most commercial applications at the time.

Professor of chemistry Jeremiah Johnson had a different approach. His research had shown that certain plastics and rubbers could be made recyclable by adding cleavable molecular bonds. Johnson thought Cache DNA’s technology could be faster and more reliable using his amber-like polymers, similar to how researchers in the “Jurassic Park” movie recover ancient dinosaur DNA from a tree’s fossilized amber resin.

“It started basically as a fun conversation along the halls of Building 16,” Banal recalls. “He’d seen my work, and I was aware of the innovations in his lab.”

Banal immediately saw the potential. He was familiar with the burden of the cold chain. For his MIT experiments, he’d store samples in big freezers kept at -80 degrees Celsius. Samples would sometimes get lost in the freezer or be buried in the inevitable ice build-up. Even when they were perfectly preserved, samples could degrade as they thawed.

As part of a collaboration between Cache DNA and MIT, Banal, Johnson, and two researchers in Johnson’s lab developed a polymer that stores DNA at room temperature. In a nod to their inspiration, they demonstrated the approach by encoding DNA sequences with the “Jurassic Park” theme song.

The researchers’ polymers start out as a liquid that surrounds the material to be stored, then form a solid, glass-like block when heated. To release the DNA, the researchers could add a molecule called cysteamine and a special detergent. The researchers showed the process could store and access DNA sequences as long as 50,000 base pairs without causing damage.

“Real amber is not great at preservation. It’s porous and lets in moisture and air,” Banal says. “What we built is completely different: a dense polymer network that forms an impenetrable barrier around DNA. Think of it like vacuum-sealing, but at the molecular level. The polymer is so hydrophobic that water and enzymes that would normally destroy DNA simply can’t get through.”

As that research was taking shape, Cache DNA was learning from hospitals and research labs that sample storage was a huge problem. In places like Florida and Singapore, researchers said contending with the effects of humidity on samples was another constant headache. Other researchers across the globe wanted to know if the technology would help them collect samples outside of the lab.

“Hospitals told us they were running out of space,” Banal says. “They had to throw samples out, limit sample collection, and as a last-case scenario, they would use a decades-old storage technology that leads to degradation after a short period of time. It became a north star for us to solve those problems.”

A new tool for precision health

Last year, Cache DNA sent out more than 100 of its first alpha DNA preservation kits to researchers around the world.

“We didn’t tell researchers what to use it for, and our minds were blown by the use cases,” Banal says. “Some used it for collecting samples in the field where cold shipping wasn't feasible. Others evaluated it for long-term archival storage. The applications were different, but the problem was universal: They all needed reliable storage without the constraint of refrigeration.”

Cache DNA has developed an entire suite of preservation technologies that can be optimized for different storage scenarios. The company also recently received a grant from the National Science Foundation to expand its technology to preserve a broader swath of biomolecules, including RNA and proteins, which could yield new insights into health and disease.

“This important innovation helps eliminate the cold chain and has the potential to unlock millions of genetic samples globally for Cache DNA to empower personalized medicine,” Bathe says. “Eliminating the cold chain is half the equation. The other half is scaling from thousands to millions or even billions of nucleic acid samples. Together, this could enable the equivalent of a ‘Google Books’ for nucleic acids stored at room temperature, either for clinical samples in hospital settings and remote regions of the world, or alternatively to facilitate DNA data storage and retrieval at scale.”

“Freezers have dictated where science could happen,” Banal says. “Remove that constraint, and you start to crack open possibilities: island nations studying their unique genetics without samples dying in transit; every rare disease patient worldwide contributing to research, not just those near major hospitals; the 2 billion people without reliable electricity finally joining global health studies. Room-temperature storage isn’t the whole answer, but every cure starts with a sample that survived the journey.”

New RNA tool to advance cancer and infectious disease research and treatment

Thu, 09/11/2025 - 4:45pm

Researchers at the Antimicrobial Resistance (AMR) interdisciplinary research group of the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, have developed a powerful tool capable of scanning thousands of biological samples to detect transfer ribonucleic acid (tRNA) modifications — tiny chemical changes to RNA molecules that help control how cells grow, adapt to stress, and respond to diseases such as cancer and antibiotic-resistant infections. The tool opens up new possibilities for science, health care, and industry — from accelerating disease research and enabling more precise diagnostics to guiding the development of more effective medical treatments.

For this study, the SMART AMR team worked in collaboration with researchers at MIT, Nanyang Technological University in Singapore, the University of Florida, the University at Albany in New York, and Lodz University of Technology in Poland.

Addressing current limitations in RNA modification profiling

Cancer and infectious diseases are complicated health conditions in which cells are forced to function abnormally by mutations in their genetic material or by instructions from an invading microorganism. The SMART-led research team is among the world’s leaders in understanding how the epitranscriptome — the over 170 different chemical modifications of all forms of RNA — controls growth of normal cells and how cells respond to stressful changes in the environment, such as loss of nutrients or exposure to toxic chemicals. The researchers are also studying how this system is corrupted in cancer or exploited by viruses, bacteria, and parasites in infectious diseases.

Current molecular methods used to study the expansive epitranscriptome and the thousands of different types of modified RNA are often slow, labor-intensive, and costly, and they involve hazardous chemicals, all of which limits research capacity and speed.

To solve this problem, the SMART team developed a new tool that enables fast, automated profiling of tRNA modifications — molecular changes that regulate how cells survive, adapt to stress, and respond to disease. This capability allows scientists to map cell regulatory networks, discover novel enzymes, and link molecular patterns to disease mechanisms, paving the way for better drug discovery and development, and more accurate disease diagnostics. 

Unlocking the complexity of RNA modifications

SMART’s open-access research, recently published in Nucleic Acids Research and titled “tRNA modification profiling reveals epitranscriptome regulatory networks in Pseudomonas aeruginosa,” shows that the tool has already enabled the discovery of previously unknown RNA-modifying enzymes and the mapping of complex gene regulatory networks. These networks are crucial for cellular adaptation to stress and disease, providing important insights into how RNA modifications control bacterial survival mechanisms. 

Using robotic liquid handlers, researchers extracted tRNA from more than 5,700 genetically modified strains of Pseudomonas aeruginosa, a bacterium that causes infections such as pneumonia, urinary tract infections, bloodstream infections, and wound infections. Samples were enzymatically digested and analyzed by liquid chromatography-tandem mass spectrometry (LC-MS/MS), a technique that separates molecules based on their physical properties and identifies them with high precision and sensitivity. 

As part of the study, the process generated over 200,000 data points in a high-resolution approach that revealed new tRNA-modifying enzymes and gene networks controlling how cells respond and adapt to stress. For example, the data showed that the methylthiotransferase MiaB, one of the enzymes responsible for the tRNA modification ms2i6A, is sensitive to the availability of iron and sulfur and to metabolic changes when oxygen is low. Discoveries like this highlight how cells respond to environmental stresses, and could lead to future development of therapies or diagnostics.
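
To illustrate the kind of downstream analysis such a screen enables, here is a minimal sketch rather than SMART’s actual pipeline: normalize each strain’s LC-MS/MS peak areas, then flag mutant strains whose level of a given modification collapses relative to the wild type, the signature of a disrupted modification enzyme. The strain name and numbers below are hypothetical.

```python
# Illustrative sketch (not SMART's pipeline): given LC-MS/MS peak areas for each
# tRNA modification in each strain, normalize within a strain and flag mutants
# whose modification levels drop sharply relative to wild type (the signature of
# a disrupted modification enzyme). Strain names and values are hypothetical.

def flag_candidate_enzymes(peak_areas: dict[str, dict[str, float]],
                           wild_type: str = "WT",
                           fold_drop: float = 4.0) -> list[tuple[str, str]]:
    """peak_areas[strain][modification] -> raw peak area; returns (strain, modification) hits."""
    # Normalize each strain's areas so totals are comparable across runs.
    norm = {strain: {mod: area / sum(mods.values()) for mod, area in mods.items()}
            for strain, mods in peak_areas.items()}
    hits = []
    for strain, mods in norm.items():
        if strain == wild_type:
            continue
        for mod, level in mods.items():
            if norm[wild_type][mod] / max(level, 1e-12) >= fold_drop:
                hits.append((strain, mod))
    return hits

data = {
    "WT":       {"ms2i6A": 1000.0, "m1G": 800.0},
    "miaB::Tn": {"ms2i6A":   40.0, "m1G": 820.0},  # hypothetical knockout strain
}
print(flag_candidate_enzymes(data))  # [('miaB::Tn', 'ms2i6A')]
```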

SMART’s automated system was specially designed to profile tRNA modifications across thousands of samples rapidly and safely. Unlike traditional methods, this tool integrates robotics to automate sample preparation and analysis, eliminating the need for hazardous chemical handling and reducing costs. This advancement increases safety, throughput, and affordability, enabling routine large-scale use in research and clinical labs.

A faster and automated way to study RNA

As the first system capable of quantitative, system‑wide profiling of tRNA modifications at this scale, the tool provides a unique and comprehensive view of the epitranscriptome — the complete set of RNA chemical modifications within cells. This capability allows researchers to validate hypotheses about RNA modifications, uncover novel biology, and identify promising molecular targets for developing new therapies.

“This pioneering tool marks a transformative advance in decoding the complex language of RNA modifications that regulate cellular responses,” says Professor Peter Dedon, co-lead principal investigator at SMART AMR, professor of biological engineering at MIT, and corresponding author of the paper. “Leveraging AMR’s expertise in mass spectrometry and RNA epitranscriptomics, our research uncovers new methods to detect complex gene networks critical for understanding and treating cancer, as well as antibiotic-resistant infections. By enabling rapid, large-scale analysis, the tool accelerates both fundamental scientific discovery and the development of targeted diagnostics and therapies that will address urgent global health challenges.”

Accelerating research, industry, and health-care applications

This versatile tool has broad applications across scientific research, industry, and health care. It enables large-scale studies of gene regulation, RNA biology, and cellular responses to environmental and therapeutic challenges. The pharmaceutical and biotech industry can harness it for drug discovery and biomarker screening, efficiently evaluating how potential drugs affect RNA modifications and cellular behavior. This aids the development of targeted therapies and personalized medical treatments.

“This is the first tool that can rapidly and quantitatively profile RNA modifications across thousands of samples,” says Jingjing Sun, research scientist at SMART AMR and first author of the paper. “It has not only allowed us to discover new RNA-modifying enzymes and gene networks, but also opens the door to identifying biomarkers and therapeutic targets for diseases such as cancer and antibiotic-resistant infections. For the first time, large-scale epitranscriptomic analysis is practical and accessible.”

Looking ahead: advancing clinical and pharmaceutical applications

Moving forward, SMART AMR plans to expand the tool’s capabilities to analyze RNA modifications in human cells and tissues, moving beyond microbial models to deepen understanding of disease mechanisms in humans. Future efforts will focus on integrating the platform into clinical research to accelerate the discovery of biomarkers and therapeutic targets. The translation of the technology into an epitranscriptome-wide analysis tool that can be used in pharmaceutical and health-care settings will drive the development of more effective and personalized treatments.

The research conducted at SMART is supported by the National Research Foundation Singapore under its Campus for Research Excellence and Technological Enterprise program.

Technology originating at MIT leads to approved bladder cancer treatment

Thu, 09/11/2025 - 12:00am

At MIT, a few scribbles on a whiteboard can turn into a potentially transformational cancer treatment.

This scenario came to fruition this week when the U.S. Food and Drug Administration approved a system for treating an aggressive form of bladder cancer. More than a decade ago, the system started as an idea in the lab of MIT Professor Michael Cima at the Koch Institute for Integrative Cancer Research, enabled by funding from the National Institutes of Health and MIT’s Deshpande Center.

The work that started with a few researchers at MIT turned into a startup, TARIS Biomedical LLC, that was co-founded by Cima and David H. Koch Institute Professor Robert Langer, and acquired by Johnson & Johnson in 2019. In developing the core concept of a device for local drug delivery to the bladder — which represents a new paradigm in bladder cancer treatment — the MIT team approached drug delivery like an engineering problem.

“We spoke to urologists and sketched out the problems with past treatments to get to a set of design parameters,” says Cima, a David H. Koch Professor of Engineering and professor of materials science and engineering. “Part of our criteria was it had to fit into urologists’ existing procedures. We wanted urologists to know what to do with the system without even reading the instructions for use. That’s pretty much how it came out.”

To date, the system has been used in patients thousands of times. In one study involving people with high-risk, non-muscle-invasive bladder cancer whose disease had proven resistant to standard care, doctors could find no evidence of cancer in 82.4 percent of patients treated with the system. More than 50 percent of those patients were still cancer-free nine months after treatment.

The results are extremely gratifying for the team of researchers that worked on it at MIT, including Langer and Heejin Lee SM ’04, PhD ’09, who developed the system as part of his PhD thesis. And Cima says far more people deserve credit than just the ones who scribbled on his whiteboard all those years ago.

“Drug products like this take an enormous amount of effort,” says Cima. “There are probably more than 1,000 people that have been involved in developing and commercializing the system: the MIT inventors, the urologists they consulted, the scientists at TARIS, the scientists at Johnson & Johnson — and that’s not including all the patients who participated in clinical trials. I also want to emphasize the importance of the MIT ecosystem, and the importance of giving people the resources to pursue arguably crazy ideas. We need to continue to support those kinds of activities.”

In the mid 2000s, Langer connected Cima with a urologist at Boston Children’s Hospital who was seeking a new treatment for a painful bladder disease known as interstitial cystitis. The standard treatment required frequent drug infusions into a patient’s bladder through a catheter, which provided only temporary relief.

A group of researchers including Cima; Lee; Hong Linh Ho Duc SM ’05, PhD ’09; Grace Kim PhD ’08; and Karen Daniel PhD ’09 began speaking with urologists and people who had run failed clinical trials involving bladder treatments to understand what went wrong. All that information went on Cima’s whiteboard over the course of several weeks. Fortunately, Cima also scribbled “Do not erase!”

“We learned a lot in the process of writing everything down,” Cima says. “We learned what not to build and what to avoid.”

With the problem well-defined, Cima received a grant from MIT’s Deshpande Center for Technological Innovation, which allowed Lee to work on designing a better solution as part of his PhD thesis.

One of the key advances the group made was using a special alloy that gave the device “shape memory” so that it could be straightened out and inserted into the bladder through a catheter. Then it would fold up, preventing it from being expelled during urination.

The new design was able to slowly release drugs over a two-week period — far longer than any other approach — and could then be removed using a thin, flexible tube commonly used in urology, called a cystoscope. The progress was enough for Cima and Langer, who are both serial entrepreneurs, to found TARIS Biomedical and license the technology from MIT. Lee and three other MIT graduates joined the company.

“It was a real pleasure working with Mike Cima, our students, and colleagues on this novel drug delivery system, which is already changing patients’ lives,” Langer says. “It’s a great example of how research at the Koch Institute starts with basic science and engineering and ends up with new treatments for cancer patients.”

The FDA’s approval of the system for the treatment of certain patients with high-risk, non-muscle-invasive bladder cancer now means that patients with this disease may have a better treatment option. Moving forward, Cima hopes the system continues to be explored to treat other diseases.

A better understanding of debilitating head pain

Thu, 09/11/2025 - 12:00am

Everyone gets headaches. But not everyone gets cluster headache attacks, a debilitating malady producing acute pain that lasts an hour or two. Cluster headache attacks come in sets — hence the name — and leave people in complete agony, unable to function. A little under 1 percent of the U.S. population suffers from cluster headache.

But that’s just an outline of the matter. What’s it like to actually have a cluster headache?

“The pain of a cluster headache is such that you can’t sit still,” says MIT-based science journalist Tom Zeller, who has suffered from them for decades. “I’d liken it to putting your hand on a hot burner, except that you can’t take your hand off for an hour or two. Every headache is an emergency. You have to run or pace or rock. Think of another pain you had to dance through, but it just doesn’t stop. It’s that level of intensity, and it’s all happening inside your head.”

And then there is the pain of the migraine headache, which seems slightly less acute than a cluster attack, but longer-lasting, and similarly debilitating. Migraine attacks can be accompanied by extreme sensitivity to light and noise, vision issues, and nausea, among other neurological symptoms, leaving patients alone in dark rooms for hours or days. An estimated 1.2 billion people around the world, including 40 million in the U.S., struggle with migraine attacks.

These are not obscure problems. And yet: We don’t know exactly why migraine and cluster headache disorders occur, nor how to address them. Headaches have never been a prominent topic within modern medical research. How can something so pervasive be so overlooked?

Now Zeller examines these issues in an absorbing book, “The Headache: The Science of a Most Confounding Affliction — and a Search for Relief,” published this summer by Mariner Books. Zeller is the editor-in-chief and co-founder of Undark, a digital magazine on science and society published by the Knight Science Journalism Program at MIT.

One word, but different syndromes

“The Headache,” which is Zeller’s first book, combines a first-person narrative of his own suffering, accounts of the pain and dread that other headache sufferers feel, and thorough reporting on headache-based research in science and medicine. Zeller has experienced cluster headache attacks for 30-plus years, dating to when he was in his 20s.

“In some ways, I suppose I had been writing the book my whole adult life without knowing it,” Zeller says. Indeed, he had collected research material about these conditions for years while grappling with his own headache issues.

A key issue in the book is why society has not taken cluster headache and migraine problems more seriously — and relatedly, why the science of headache disorders is not more advanced. In fairness, as Zeller says, “Anything involving the brain or central nervous system is incredibly hard to study.”

More broadly, Zeller suggests in the book, we have conflated regular workaday headaches — the kind you may get from staring at a screen too long — with the far more severe and rather different disorders like cluster headache and migraine. (Some patients refer to cluster headache and migraine in the singular, not plural, to emphasize that this is an ongoing condition, not just successive headaches.)

“Headaches are annoying, and we tough it out,” Zeller says. “But we use the same exact word to talk about these other things,” namely, cluster headache and migraine. This has likely reinforced our general dismissal of severe headache disorders as a pressing and distinct medical problem. Instead, we often consider headache disorders, even severe ones, as something people should simply power through.

“There’s a certain sense of malingering we still attach to a migraine or [other] headache disorder, and I’m not sure that’s going away,” Zeller says.

Then too, about three-quarters of people who experience migraine attacks are women, which has quite plausibly led the ailment to “get short shrift historically,” as Zeller says. Or at least, in recent history: As Zeller chronicles in the book, an awareness of severe headache disorders goes back to ancient times, and it’s possible they have received less relative attention in modernity.

A new shift in medical thinking

In any case, for much of the 20th century, conventional medical wisdom held that migraine and cluster headache stemmed from changes or abnormalities in blood vessels. But in recent decades, as Zeller details, there has been a paradigm shift: These conditions are now seen as more neurological in origin.

A key breakthrough here was the 1980s discovery of a neurotransmitter called calcitonin gene-related peptide, or CGRP. As scientists have discovered, CGRP is released from nerve endings around blood vessels and helps produce migraine symptoms. This offered a new strategy — and target — for combating severe head pain. The first drugs to inhibit the effects of CGRP hit the market in 2018, and most researchers in the field are now focused on idiopathic headache as a neurological disorder, not a vascular problem.

“It’s the way science works,” Zeller says. “Changing course is not easy. It’s like turning a ship on a dime. The same applies to the study of headaches.”

Many medications aimed at blocking these neurotransmitters have since been developed, though only about 20 percent of patients seem to find permanent relief as a result. As Zeller chronicles, other patients feel benefits for about a year, before the effects of a medication wear off; many of them now try complicated combinations of medications.

Severe headache disorders also seem linked to hormonal changes in people, who often see an onset of these ailments in their teens, and a diminishing of symptoms later in life. So, while headache medicine has witnessed a recent breakthrough, much more work lies ahead.

Opening up a discussion

Amid all this, one set of questions still tugging at Zeller is evolutionary in nature: Why do humans experience headache disorders at all? There is no clear evidence that other species get severe headaches — or that the prevalence of severe headache conditions in society has ever diminished.

One hypothesis, Zeller notes, is that “having a highly attuned nervous system could have been a benefit in our more primitive state.” Such a system may have helped us survive, in the past, but at the cost of producing intense disorders in some people when the wiring goes a bit awry. We may learn more about this as neuro-based headache research continues.

“The Headache” has received widespread praise. Writing in The New Yorker, Jerome Groopman heralded the “rich material in the book,” noting that it “weaves together history, biology, a survey of current research, testimony from patients, and an agonizing account of Zeller’s own suffering.”

For his part, Zeller says he is appreciative of the attention “The Headache” has generated, as one of the most widely noted nonfiction books released this summer.

“It’s opened up room for a kind of conversation that doesn’t usually break through into the mainstream,” Zeller says. “I’m hearing from a lot of patients who just are saying, ‘Thank you for writing this.’ And that’s really gratifying. I’m most happy to hear from people who think it’s giving them a voice. I’m also hearing a lot from doctors and scientists. The moment has opened up for this discussion, and I’m grateful for that.”

MIT software tool turns everyday objects into animated, eye-catching displays

Wed, 09/10/2025 - 3:15pm

Whether you’re an artist, advertising specialist, or just looking to spruce up your home, turning everyday objects into dynamic displays is a great way to make them more visually engaging. For example, you could turn a kids’ book into a handheld cartoon of sorts, making the reading experience more immersive and memorable for a child.

Dynamic displays don’t have to rely on electronics, though. Barrier-grid animations (or scanimations) use printed materials instead: the visual trick involves sliding a patterned sheet across an image to create the illusion of a moving image. The secret of barrier-grid animations lies in the name: An overlay called a barrier (or grid), often resembling a picket fence, moves across, rotates around, or tilts toward an image to reveal frames in an animated sequence. That underlying picture is a combination of each still, sliced and interwoven to present a different snapshot depending on the overlay’s position.
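
As a rough illustration of the interlacing step, and only an illustration rather than the FabObscura implementation, each frame can contribute every Nth pixel column of the composite, so a grid with slits one column wide reveals a single frame at a time as it slides across.

```python
# Minimal sketch of frame interlacing for a barrier-grid animation (illustrative;
# not the FabObscura implementation). Each frame contributes every Nth pixel
# column, so a one-column-wide slit reveals one frame at a time as the grid slides.
import numpy as np

def interlace(frames: list[np.ndarray]) -> np.ndarray:
    """frames: equally sized (H, W, 3) images; returns the woven composite."""
    n = len(frames)
    composite = np.zeros_like(frames[0])
    for i, frame in enumerate(frames):
        composite[:, i::n] = frame[:, i::n]  # keep columns i, i+n, i+2n, ...
    return composite

# Example: three solid-gray "frames" woven into one striped composite.
h, w = 64, 90
frames = [np.full((h, w, 3), shade, dtype=np.uint8) for shade in (50, 150, 250)]
composite = interlace(frames)
```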

While tools exist to help artists create barrier-grid animations, they’re typically used to create barrier patterns that have straight lines. Building off of previous work in creating images that appear to move, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a tool that allows users to explore more unconventional designs. From zigzags to circular patterns, the team’s “FabObscura” software turns unique concepts into printable scanimations, helping users add dynamic animations to things like pictures, toys, and decor.

MIT Department of Electrical Engineering and Computer Science (EECS) PhD student and CSAIL researcher Ticha Sethapakdi SM ’19, a lead author on a paper presenting FabObscura, says that the system is a one-size-fits-all tool for customizing barrier-grid animations. This versatility extends to unconventional, elaborate overlay designs, like pointed, angled lines to animate a picture you might put on your desk, or the swirling, hypnotic appearance of a radial pattern you could spin over an image placed on a coin or a Frisbee.

“Our system can turn a seemingly static, abstract image into an attention-catching animation,” says Sethapakdi. “The tool lowers the barrier to entry to creating these barrier-grid animations, while helping users express a variety of designs that would’ve been very time-consuming to explore by hand.”

Behind these novel scanimations is a key finding: Barrier patterns can be expressed as any continuous mathematical function — not just straight lines. Users can type these equations into a text box within the FabObscura program, and then see how it graphs out the shape and movement of a barrier pattern. If you wanted a traditional horizontal pattern, you’d enter a constant function, where the output is the same no matter the input, much like drawing a straight line across a graph. For a wavy design, you’d use a sine function, which is smooth and resembles a mountain range when plotted out. The system’s interface includes helpful examples of these equations to guide users toward their preferred pattern.
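
A minimal sketch of that idea follows; it is illustrative only, and FabObscura’s actual rendering is more sophisticated. A user-supplied function shifts the slit pattern row by row: a constant function yields the classic straight picket-fence barrier, while a sine function yields a wavy one.

```python
# Illustrative sketch (not FabObscura's renderer): build a barrier mask whose
# slit pattern is shifted row by row according to a user-supplied function f(y).
import math
import numpy as np

def barrier_mask(width: int, height: int, f, n_frames: int, phase: int = 0) -> np.ndarray:
    """Boolean mask: True where the barrier is transparent (a slit)."""
    mask = np.zeros((height, width), dtype=bool)
    for y in range(height):
        shift = int(round(f(y)))  # horizontal offset of the slits in this row
        for x in range(width):
            mask[y, x] = (x + shift + phase) % n_frames == 0
    return mask

straight = barrier_mask(90, 64, lambda y: 0, n_frames=3)                    # constant function
wavy     = barrier_mask(90, 64, lambda y: 6 * math.sin(y / 8), n_frames=3)  # sine function
```

Stepping the phase argument through 0, 1, and 2 and laying the mask over an image interlaced from three frames plays those frames back in sequence.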

A simple interface for elaborate ideas

FabObscura works for all known types of barrier-grid animations, supporting a variety of user interactions. The system enables the creation of a display with an appearance that changes depending on your viewpoint. FabObscura also allows you to create displays that you can animate by sliding or rotating a barrier over an image.

To produce these designs, users can upload a folder of frames of an animation (perhaps a few stills of a horse running), or choose from a few preset sequences (like an eye blinking), and specify the angle at which the barrier will move. After previewing your design, you can fabricate the barrier and picture onto separate transparent sheets (or print the image on paper) using a standard 2D printer, such as an inkjet. Your image can then be placed and secured on flat, handheld items such as picture frames, phones, and books.

You can enter separate equations if you want two sequences on one surface, which the researchers call “nested animations.” Depending on how you move the barrier, you’ll see a different story being told. For example, CSAIL researchers created a car that rotates when you move its sheet vertically, but transforms into a spinning motorcycle when you slide the grid horizontally.

These customizations lead to unique household items, too. The researchers designed an interactive coaster that you can switch from displaying a “coffee” icon to symbols of a martini and a glass of water by pressing your fingers down on the edges of its surface. The team also spruced up a jar of sunflower seeds, producing a flower animation on the lid that blooms when twisted off.

Artists, including graphic designers and printmakers, could also use this tool to make dynamic pieces without needing to connect any wires. The tool saves them crucial time to explore creative, low-power designs, such as a clock with a mouse that runs along as it ticks. FabObscura could produce animated food packaging, or even reconfigurable signage for places like construction sites or stores that notify people when a particular area is closed or a machine isn’t working.

Keep it crisp

FabObscura’s barrier-grid creations do come with certain trade-offs. While nested animations are novel and more dynamic than a single-layer scanimation, their visual quality isn’t as strong. The researchers wrote design guidelines to address these challenges, recommending users upload fewer frames for nested animations to keep the interlaced image simple and stick to high-contrast images for a crisper presentation.

In the future, the researchers intend to expand what users can upload to FabObscura, such as letting the program pull the best frames from a dropped-in video file. This would lead to even more expressive barrier-grid animations.

FabObscura might also step into a new dimension: 3D. While the system is currently optimized for flat, handheld surfaces, CSAIL researchers are considering implementing their work into larger, more complex objects, possibly using 3D printers to fabricate even more elaborate illusions.

Sethapakdi wrote the paper with several CSAIL affiliates: Zhejiang University PhD student and visiting researcher Mingming Li; MIT EECS PhD student Maxine Perroni-Scharf; MIT postdoc Jiaji Li; MIT associate professors Arvind Satyanarayan and Justin Solomon; and senior author and MIT Associate Professor Stefanie Mueller, leader of the Human-Computer Interaction (HCI) Engineering Group at CSAIL. Their work will be presented at the ACM Symposium on User Interface Software and Technology (UIST) this month.

Demo Day features hormone-tracking sensors, desalination systems, and other innovations

Wed, 09/10/2025 - 3:00pm

Kresge Auditorium came alive Friday as MIT entrepreneurs took center stage to share their progress in the delta v startup accelerator program.

Now in its 14th year, delta v Demo Day represents the culmination of a summer in which students work full-time on new ventures under the guidance of the Martin Trust Center for MIT Entrepreneurship.

It also doubles as a celebration, with Trust Center Managing Director (and consummate hype man) Bill Aulet setting the tone early with his patented high-five run through the audience and leap on stage for opening remarks.

“All these students have performed a miracle,” Aulet told the crowd. “One year ago, they were sitting in the audience like all of you. One year ago, they probably didn’t even have an idea or a technology. Maybe they did, but they didn’t have a team, a clear vision, customer models, or a clear path to impact. But today they’re going to blow your mind. They have products — real products — a founding team, a clear mission, customer commitments or letters of intent, legitimate business models, and a path to greatness and impact. In short, they will have achieved escape velocity.”

The two-hour event filled Kresge Auditorium, with a line out the door for good measure, and was followed by a party under a tent on the Kresge lawn. Each presentation began with a short video introducing the company before a student took the stage to expand on the problem they were solving and what their team has learned from talks with potential customers.

In total, 22 startups showcased their ventures and early business milestones in rapid-fire presentations.

Rick Locke, the new dean of the MIT Sloan School of Management, said events like Demo Day are why he came back to the Institute after serving in various roles between 1988 and 2013.

“What’s great about this event is how it crystallizes the spirit of MIT: smart people doing important work, doing it by rolling up their sleeves, doing it with a certain humility but also a vision, and really making a difference in the world,” Locke told the audience. “You can feel the positivity, the energy, and the buzz here tonight. That’s what the world needs more of.”

A program with a purpose

This year’s Demo Day featured 70 students from across MIT, with 16 startups working out of the Trust Center on campus and six working from New York City. Through the delta v program, the students were guided by mentors, received funding, and worked through an action-oriented curriculum full-time between June and September. Aulet also noted that the students presenting benefitted from entrepreneurial support resources from across the Institute.

The odds are in the startups’ favor: A 2022 study found that 69 percent of businesses from the program were still operating five years later. Alumni companies had raised roughly $1 billion in funding.

Demo Day marks the end of delta v and serves to inspire next year’s cohort of entrepreneurs.

“Turn on a screen or look anywhere around you, and you'll see issues with climate, sustainability, health care, the future of work, economic disparities, and more,” Aulet said. “It can all be overwhelming. These entrepreneurs bring light to dark times. Entrepreneurs don’t see problems. As the great Biggie Smalls from Brooklyn said, ‘Turn a negative into a positive.’ That’s what entrepreneurs do.”

Startups in action

Startups in this year’s cohort presented solutions in biotech and health care, sustainability, financial services, energy, and more.

One company, Gees, is helping women with hormonal conditions like polycystic ovary syndrome (PCOS) through a saliva-based sensor that tracks key hormones, giving them personalized insights to manage their symptoms.

“Over 200 million women live with PCOS worldwide,” said MIT postdoc and co-founder Walaa Khushaim. “If it goes unmanaged, it can lead to even more serious diseases. The good news is that 80 percent of cases can be managed with lifestyle changes. The problem is women trying to change their lifestyle are left in the dark, unsure if what they are doing is truly helping.”

Gees’ sensor is noninvasive and easier to use than current sensors that track hormones. It provides feedback in minutes from the comfort of users’ homes. The sensor connects to an app that shows results and trends to help women stay on track. The company already has more than 500 sign-ups for its wait list.

Another company, Kira, has created an electrochemical system to increase the efficiency and access of water desalination. The company is aiming to help companies manage their brine wastewater that is often dumped, pumped underground, or trucked off to be treated.

“At Kira, we’re working toward a system that produces zero liquid waste and only solid salts,” says PhD student Jonathan Bessette SM ’22.

Kira says its system increases the amount of clean water created by industrial processes, reduces the amount of brine wastewater, and optimizes the energy flows of factories. The company says next year it will deploy a system at the largest groundwater desalination plant in the U.S.

A variety of other startups presented at the event:

AutoAce builds AI agents for car dealerships, automating repetitive tasks with a 24/7 voice agent that answers inbound service calls and books appointments.

Carbion uses a thermochemical process to convert biomass into battery-grade graphite at half the temperature of traditional synthetic methods.

Clima Technologies has developed an AI building engineer that enables facilities managers to “talk” to their buildings in real-time, allowing teams to conduct 24/7 commissioning, act on fault diagnostics, minimize equipment downtime, and optimize controls.

Cognify uses AI to predict customer interactions with digital platforms, simulating customer behavior to deliver insights into which designs resonate with customers, where friction exists in user journeys, and how to build a user experience that converts.

Durability uses computer vision and AI to analyze movement, predict injury risks, and guide recovery for athletes.

EggPlan uses a simple blood test and proprietary model to assess eligibility for egg freezing with fertility clinics. If users do not have a baby, their fees are returned, making the process risk-free.

Forma Systems developed optimization software for manufacturers to make smarter, faster decisions about things like materials use while reducing their climate impact.

Ground3d is a social impact organization building a digital tool for crowdsourcing hyperlocal environmental data, beginning with street-level documentation of flooding events in New York City. The platform could help residents with climate resilience and advocacy.

GrowthFactor helps retailers scale their footprint with a fractional real estate analyst while using an AI-powered platform to maximize their chance of commercial success.

Kyma uses AI-powered patient engagement to integrate data from wearables, smart scales, sensors, and continuous glucose monitors to track behaviors and draft physician-approved, timely reminders.

LNK Energies is solving the heavy-duty transport industry’s emissions problem with liquid organic hydrogen carriers (LOHCs): safe, room-temperature liquids compatible with existing diesel infrastructure.

Mendhai Health offers a suite of digital tools to help women improve pelvic health and rehabilitate before and after childbirth.

Nami has developed an automatic, reusable drinkware cleaning station that delivers a hot, soapy, pressurized wash in under 30 seconds.

Pancho helps restaurants improve margins with an AI-powered food procurement platform that uses real-time price comparison, dispute tracking, and smart ordering.

Qadence offers older adults a co-pilot that assesses mobility and fall risk, then delivers tailored guidance to improve balance, track progress, and extend recovery beyond the clinic.

Sensopore offers an at-home diagnostic device to help families test for everyday illnesses at home, get connected with a telehealth doctor, and have prescriptions shipped to their door, reducing clinical visits.

Spheric Bio has developed a personal occlusion device to improve a common surgical procedure used to treat strokes.

Tapestry uses conversational AI to chat with attendees before events and connect them with the right people for more meaningful conversations.

Torque automates financial analysis across private equity portfolios to help investment professionals make better strategic decisions.

Trazo helps interior designers and architects collaborate and iterate on technical drawings and 3D designs for new construction or remodeling projects.

DOE selects MIT to establish a Center for the Exascale Simulation of Coupled High-Enthalpy Fluid–Solid Interactions

Wed, 09/10/2025 - 11:45am

The U.S. Department of Energy’s National Nuclear Security Administration (DOE/NNSA) recently announced that it has selected MIT to establish a new research center dedicated to advancing the predictive simulation of extreme environments, such as those encountered in hypersonic flight and atmospheric re-entry. The center will be part of the fourth phase of NNSA's Predictive Science Academic Alliance Program (PSAAP-IV), which supports frontier research advancing the predictive capabilities of high-performance computing for open science and engineering applications relevant to national security mission spaces.

The Center for the Exascale Simulation of Coupled High-Enthalpy Fluid–Solid Interactions (CHEFSI) — a joint effort of the MIT Center for Computational Science and Engineering, the MIT Schwarzman College of Computing, and the MIT Institute for Soldier Nanotechnologies (ISN) — plans to harness cutting-edge exascale supercomputers and next-generation algorithms to simulate with unprecedented detail how extremely hot, fast-moving gaseous and solid materials interact. The understanding of these extreme environments — characterized by temperatures of more than 1,500 degrees Celsius and speeds as high as Mach 25 — and their effect on vehicles is central to national security, space exploration, and the development of advanced thermal protection systems.

“CHEFSI will capitalize on MIT’s deep strengths in predictive modeling, high-performance computing, and STEM education to help ensure the United States remains at the forefront of scientific and technological innovation,” says Ian A. Waitz, MIT’s vice president for research. “The center’s particular relevance to national security and advanced technologies exemplifies MIT’s commitment to advancing research with broad societal benefit.”

CHEFSI is one of five new Predictive Simulation Centers announced by the NNSA as part of a program expected to provide up to $17.5 million to each center over five years.

CHEFSI’s research aims to couple detailed simulations of high-enthalpy gas flows with models of the chemical, thermal, and mechanical behavior of solid materials, capturing phenomena such as oxidation, nitridation, ablation, and fracture. Advanced computational models — validated by carefully designed experiments — can address the limitations of flight testing by providing critical insights into material performance and failure.

“By integrating high-fidelity physics models with artificial intelligence-based surrogate models, experimental validation, and state-of-the-art exascale computational tools, CHEFSI will help us understand and predict how thermal protection systems perform under some of the harshest conditions encountered in engineering systems,” says Raúl Radovitzky, the Jerome C. Hunsaker Professor of Aeronautics and Astronautics, associate director of the ISN, and director of CHEFSI. “This knowledge will help in the design of resilient systems for applications ranging from reusable spacecraft to hypersonic vehicles.”

Radovitzky will be joined on the center’s leadership team by Youssef Marzouk, the Breene M. Kerr (1951) Professor of Aeronautics and Astronautics, co-director of the MIT Center for Computational Science and Engineering (CCSE), and recently named the associate dean of the MIT Schwarzman College of Computing; and Nicolas Hadjiconstantinou, the Quentin Berg (1937) Professor of Mechanical Engineering and co-director of CCSE, who will serve as associate directors. The center co-principal investigators include MIT faculty members across the departments of Aeronautics and Astronautics, Electrical Engineering and Computer Science, Materials Science and Engineering, Mathematics, and Mechanical Engineering. Franklin Hadley will lead center operations, with administration and finance under the purview of Joshua Freedman. Hadley and Freedman are both members of the ISN headquarters team. 

CHEFSI expects to collaborate extensively with the DOE/NNSA national laboratories — Lawrence Livermore National Laboratory, Los Alamos National Laboratory, and Sandia National Laboratories — and, in doing so, offer graduate students and postdocs immersive research experiences and internships at these facilities.

Ten years later, LIGO is a black-hole hunting machine

Wed, 09/10/2025 - 11:00am

The following article is adapted from a press release issued by the Laser Interferometer Gravitational-wave Observatory (LIGO) Laboratory. LIGO is funded by the National Science Foundation and operated by Caltech and MIT, which conceived and built the project.

On Sept. 14, 2015, a signal arrived on Earth, carrying information about a pair of remote black holes that had spiraled together and merged. The signal had traveled about 1.3 billion years to reach us at the speed of light — but it was not made of light. It was a different kind of signal: a quivering of space-time called gravitational waves, first predicted by Albert Einstein 100 years prior. On that day 10 years ago, the twin detectors of the U.S. National Science Foundation Laser Interferometer Gravitational-wave Observatory (NSF LIGO) made the first-ever direct detection of gravitational waves, whispers in the cosmos that had gone unheard until that moment.

The historic discovery meant that researchers could now sense the universe through three different means. Light waves, such as X-rays, optical, radio, and other wavelengths of light, as well as high-energy particles called cosmic rays and neutrinos, had been captured before, but this was the first time anyone had witnessed a cosmic event through the gravitational warping of space-time. For this achievement, first dreamed up more than 40 years prior, three of the team’s founders won the 2017 Nobel Prize in Physics: MIT’s Rainer Weiss, professor emeritus of physics (who recently passed away at age 92); Caltech’s Barry Barish, the Ronald and Maxine Linde Professor of Physics, Emeritus; and Caltech’s Kip Thorne, the Richard P. Feynman Professor of Theoretical Physics, Emeritus.

Today, LIGO, which consists of detectors in both Hanford, Washington, and Livingston, Louisiana, routinely observes roughly one black hole merger every three days. LIGO now operates in coordination with two international partners, the Virgo gravitational-wave detector in Italy and KAGRA in Japan. Together, the gravitational-wave-hunting network, known as the LVK (LIGO, Virgo, KAGRA), has captured a total of about 300 black hole mergers, some of which are confirmed while others await further analysis. During the network’s current science run, the fourth since the first run in 2015, the LVK has discovered more than 200 candidate black hole mergers, more than double the number caught in the first three runs.

The dramatic rise in the number of LVK discoveries over the past decade is owed to several improvements to their detectors — some of which involve cutting-edge quantum precision engineering. The LVK detectors remain by far the most precise rulers for making measurements ever created by humans. The space-time distortions induced by gravitational waves are incredibly minuscule. For instance, LIGO detects changes in space-time smaller than 1/10,000 the width of a proton. That’s 1/700 trillionth the width of a human hair.

“Rai Weiss proposed the concept of LIGO in 1972, and I thought, ‘This doesn’t have much chance at all of working,’” recalls Thorne, an expert on the theory of black holes. “It took me three years of thinking about it on and off and discussing ideas with Rai and Vladimir Braginsky [a Russian physicist], to be convinced this had a significant possibility of success. The technical difficulty of reducing the unwanted noise that interferes with the desired signal was enormous. We had to invent a whole new technology. NSF was just superb at shepherding this project through technical reviews and hurdles.”

Nergis Mavalvala, the Curtis and Kathleen Marble Professor of Astrophysics at MIT and dean of the MIT School of Science, says that the challenges the team overcame to make the first discovery are still very much at play. “From the exquisite precision of the LIGO detectors to the astrophysical theories of gravitational-wave sources, to the complex data analyses, all these hurdles had to be overcome, and we continue to improve in all of these areas,” Mavalvala says. “As the detectors get better, we hunger for farther, fainter sources. LIGO continues to be a technological marvel.”

The clearest signal yet

LIGO’s improved sensitivity is exemplified in a recent discovery of a black hole merger referred to as GW250114. (The numbers denote the date the gravitational-wave signal arrived at Earth: January 14, 2025.) The event was not that different from LIGO’s first-ever detection (called GW150914) — both involve colliding black holes about 1.3 billion light-years away with masses between 30 to 40 times that of our sun. But thanks to 10 years of technological advances reducing instrumental noise, the GW250114 signal is dramatically clearer.

“We can hear it loud and clear, and that lets us test the fundamental laws of physics,” says LIGO team member Katerina Chatziioannou, Caltech assistant professor of physics and William H. Hurt Scholar, and one of the authors of a new study on GW250114 published in Physical Review Letters.

By analyzing the frequencies of gravitational waves emitted by the merger, the LVK team provided the best observational evidence captured to date for what is known as the black hole area theorem, an idea put forth by Stephen Hawking in 1971 that says the total surface areas of black holes cannot decrease. When black holes merge, their masses combine, increasing the surface area. But they also lose energy in the form of gravitational waves. Additionally, the merger can cause the combined black hole to increase its spin, which leads to it having a smaller area. The black hole area theorem states that despite these competing factors, the total surface area must grow in size.
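
For a rough sense of the numbers, here is an illustrative back-of-the-envelope estimate rather than the LVK analysis. The horizon area of a non-spinning black hole of mass M follows from its Schwarzschild radius, and a spinning (Kerr) black hole of the same mass with dimensionless spin a has a somewhat smaller area:

\[
A_{\text{Schwarzschild}} = 4\pi r_s^2 = \frac{16\pi G^2 M^2}{c^4}, \qquad
A_{\text{Kerr}} = \frac{8\pi G^2 M^2}{c^4}\left(1 + \sqrt{1 - a^2}\right).
\]

A non-spinning black hole of about 33 solar masses has r_s ≈ 98 kilometers and an area of roughly 120,000 square kilometers, so two such black holes together account for roughly the 240,000 square kilometers cited below; inserting the remnant’s larger mass and its spin into the Kerr expression yields the roughly 400,000 square kilometers of the merged hole, which is larger even though mass-energy was radiated away as gravitational waves.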

Later, Hawking and physicist Jacob Bekenstein concluded that a black hole’s area is proportional to its entropy, or degree of disorder. The findings paved the way for later groundbreaking work in the field of quantum gravity, which attempts to unite two pillars of modern physics: general relativity and quantum physics.

In essence, the LIGO detection allowed the team to “hear” two black holes growing as they merged into one, verifying Hawking’s theorem. (Virgo and KAGRA were offline during this particular observation.) The initial black holes had a total surface area of 240,000 square kilometers (roughly the size of Oregon), while the final area was about 400,000 square kilometers (roughly the size of California) — a clear increase. This is the second test of the black hole area theorem; an initial test was performed in 2021 using data from the first GW150914 signal, but because that data were not as clean, the results had a confidence level of 95 percent compared to 99.999 percent for the new data.

Thorne recalls Hawking phoning him to ask whether LIGO might be able to test his theorem immediately after he learned of the 2015 gravitational-wave detection. Hawking died in 2018 and sadly did not live to see his theory observationally verified. “If Hawking were alive, he would have reveled in seeing the area of the merged black holes increase,” Thorne says.

The trickiest part of this type of analysis had to do with determining the final surface area of the merged black hole. The surface areas of pre-merger black holes can be more readily gleaned as the pair spiral together, roiling space-time and producing gravitational waves. But after the black holes coalesce, the signal is not as clear-cut. During this so-called ringdown phase, the final black hole vibrates like a struck bell.

In the new study, the researchers precisely measured the details of the ringdown phase, which allowed them to calculate the mass and spin of the black hole and, subsequently, determine its surface area. More specifically, they were able, for the first time, to confidently pick out two distinct gravitational-wave modes in the ringdown phase. The modes are like characteristic sounds a bell would make when struck; they have somewhat similar frequencies but die out at different rates, which makes them hard to identify. The improved data for GW250114 meant that the team could extract the modes, demonstrating that the black hole’s ringdown occurred exactly as predicted by math models based on the Teukolsky formalism — devised in 1972 by Saul Teukolsky, now a professor at Caltech and Cornell University.
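
Schematically, in the standard textbook description rather than the paper’s full model, each ringdown mode is a damped sinusoid, so the post-merger signal is fit as a sum of tones, each with its own frequency f_n and decay time τ_n:

\[
h(t) \approx \sum_n A_n \, e^{-t/\tau_n} \cos\!\left(2\pi f_n t + \phi_n\right).
\]

Picking out two modes means resolving two (f_n, τ_n) pairs whose frequencies are similar and whose tones die away within milliseconds, which is why the unusually clean GW250114 data made the measurement possible.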

Another study from the LVK, submitted to Physical Review Letters today, places limits on a predicted third, higher-pitched tone in the GW250114 signal, and performs some of the most stringent tests yet of general relativity’s accuracy in describing merging black holes.

“A decade of improvements allowed us to make this exquisite measurement,” Chatziioannou says. “It took both of our detectors, in Washington and Louisiana, to do this. I don’t know what will happen in 10 more years, but in the first 10 years, we have made tremendous improvements to LIGO’s sensitivity. This not only means we are accelerating the rate at which we discover new black holes, but we are also capturing detailed data that expand the scope of what we know about the fundamental properties of black holes.”

Jenne Driggers, detection lead senior scientist at LIGO Hanford, adds, “It takes a global village to achieve our scientific goals. From our exquisite instruments, to calibrating the data very precisely, vetting and providing assurances about the fidelity of the data quality, searching the data for astrophysical signals, and packaging all that into something that telescopes can read and act upon quickly, there are a lot of specialized tasks that come together to make LIGO the great success that it is.”

Pushing the limits

LIGO and Virgo have also unveiled neutron stars over the past decade. Like black holes, neutron stars form from the explosive deaths of massive stars, but they weigh less and glow with light. Of note, in August 2017, LIGO and Virgo witnessed an epic collision between a pair of neutron stars that produced a kilonova — an explosive outburst that sent gold and other heavy elements flying into space — and drew the gaze of dozens of telescopes around the world, which captured light ranging from high-energy gamma rays to low-energy radio waves. The “multi-messenger” astronomy event marked the first time that both light and gravitational waves had been captured from a single cosmic event. Today, the LVK continues to alert the astronomical community to potential neutron star collisions, and astronomers then use telescopes to search the skies for signs of kilonovae.

“The LVK has made big strides in recent years to make sure we’re getting high-quality data and alerts out to the public in under a minute, so that astronomers can look for multi-messenger signatures from our gravitational-wave candidates,” Driggers says.

“The global LVK network is essential to gravitational-wave astronomy,” says Gianluca Gemme, Virgo spokesperson and director of research at the National Institute of Nuclear Physics in Italy. “With three or more detectors operating in unison, we can pinpoint cosmic events with greater accuracy, extract richer astrophysical information, and enable rapid alerts for multi-messenger follow-up. Virgo is proud to contribute to this worldwide scientific endeavor.”

Other LVK scientific discoveries include the first detection of collisions between one neutron star and one black hole; asymmetrical mergers, in which one black hole is significantly more massive than its partner black hole; the discovery of the lightest black holes known, challenging the idea that there is a “mass gap” between neutron stars and black holes; and the most massive black hole merger seen yet with a merged mass of 225 solar masses. For reference, the previous record holder for the most massive merger had a combined mass of 140 solar masses.

Even in the decades before LIGO began taking data, scientists were building foundations that made the field of gravitational-wave science possible. Breakthroughs in computer simulations of black hole mergers, for example, allow the team to extract and analyze the feeble gravitational-wave signals generated across the universe.

LIGO’s technological achievements, beginning as far back as the 1980s, include several far-reaching innovations, such as a new way to stabilize lasers using the so-called Pound–Drever–Hall technique. Invented in 1983 and named for contributing physicists Robert Vivian Pound, the late Ronald Drever of Caltech (a founder of LIGO), and John Lewis Hall, this technique is widely used today in other fields, such as the development of atomic clocks and quantum computers. Other innovations include cutting-edge mirror coatings that almost perfectly reflect laser light; “quantum squeezing” tools that enable LIGO to surpass sensitivity limits imposed by quantum physics; and new artificial intelligence methods that could further hush certain types of unwanted noise.

“What we are ultimately doing inside LIGO is protecting quantum information and making sure it doesn’t get destroyed by external factors,” Mavalvala says. “The techniques we are developing are pillars of quantum engineering and have applications across a broad range of devices, such as quantum computers and quantum sensors.”

In the coming years, the scientists and engineers of LVK hope to further fine-tune their machines, expanding their reach deeper and deeper into space. They also plan to use the knowledge they have gained to build another gravitational-wave detector, LIGO India. Having a third LIGO observatory would greatly improve the precision with which the LVK network can localize gravitational-wave sources.

Looking farther into the future, the team is working on a concept for an even larger detector, called Cosmic Explorer, which would have arms 40 kilometers long. (The twin LIGO observatories have 4-kilometer arms.) A European project, called Einstein Telescope, also has plans to build one or two huge underground interferometers with arms more than 10 kilometers long. Observatories on this scale would allow scientists to hear the earliest black hole mergers in the universe.

“Just 10 short years ago, LIGO opened our eyes for the first time to gravitational waves and changed the way humanity sees the cosmos,” says Aamir Ali, a program director in the NSF Division of Physics, which has supported LIGO since its inception. “There’s a whole universe to explore through this completely new lens and these latest discoveries show LIGO is just getting started.”

The LIGO-Virgo-KAGRA Collaboration

LIGO is funded by the U.S. National Science Foundation and operated by Caltech and MIT, which together conceived and built the project. Financial support for the Advanced LIGO project was led by NSF with Germany (Max Planck Society), the United Kingdom (Science and Technology Facilities Council), and Australia (Australian Research Council) making significant commitments and contributions to the project. More than 1,600 scientists from around the world participate in the effort through the LIGO Scientific Collaboration, which includes the GEO Collaboration. Additional partners are listed at my.ligo.org/census.php.

The Virgo Collaboration is currently composed of approximately 1,000 members from 175 institutions in 20 different (mainly European) countries. The European Gravitational Observatory (EGO) hosts the Virgo detector near Pisa, Italy, and is funded by the French National Center for Scientific Research, the National Institute of Nuclear Physics in Italy, the National Institute of Subatomic Physics in the Netherlands, The Research Foundation – Flanders, and the Belgian Fund for Scientific Research. A list of the Virgo Collaboration groups can be found on the project website.

KAGRA is a laser interferometer with 3-kilometer arms located in Kamioka, Gifu, Japan. The host institute is the Institute for Cosmic Ray Research of the University of Tokyo, and the project is co-hosted by the National Astronomical Observatory of Japan and the High Energy Accelerator Research Organization. The KAGRA collaboration is composed of more than 400 members from 128 institutes in 17 countries/regions. KAGRA’s information for general audiences is at the website gwcenter.icrr.u-tokyo.ac.jp/en/. Resources for researchers are accessible at gwwiki.icrr.u-tokyo.ac.jp/JGWwiki/KAGRA.

Study explains how a rare gene variant contributes to Alzheimer’s disease

Wed, 09/10/2025 - 11:00am

A new study from MIT neuroscientists reveals how rare variants of a gene called ABCA7 may contribute to the development of Alzheimer’s in some of the people who carry it.

Dysfunctional versions of the ABCA7 gene, which are found in a very small proportion of the population, contribute strongly to Alzheimer’s risk. In the new study, the researchers discovered that these mutations can disrupt the metabolism of lipids that play an important role in cell membranes.

This disruption makes neurons hyperexcitable and leads them into a stressed state that can damage DNA and other cellular components. These effects, the researchers found, could be reversed by treating neurons with choline, an important building block needed to make cell membranes.

“We found pretty strikingly that when we treated these cells with choline, a lot of the transcriptional defects were reversed. We also found that the hyperexcitability phenotype and elevated amyloid beta peptides that we observed in neurons that lost ABCA7 were reduced after treatment,” says Djuna von Maydell, an MIT graduate student and the lead author of the study.

Li-Huei Tsai, director of MIT’s Picower Institute for Learning and Memory and the Picower Professor in the MIT Department of Brain and Cognitive Sciences, is the senior author of the paper, which appears today in Nature.

Membrane dysfunction

Genomic studies of Alzheimer’s patients have found that people who carry variants of ABCA7 that generate reduced levels of functional ABCA7 protein have about double the odds of developing Alzheimer’s as people who don’t have those variants.

ABCA7 encodes a protein that transports lipids across cell membranes. Lipid metabolism is also the primary target of a more common Alzheimer’s risk factor known as APOE4. In previous work, Tsai’s lab has shown that APOE4, which is found in about half of all Alzheimer’s patients, disrupts brain cells’ ability to metabolize lipids and respond to stress.

To explore how ABCA7 variants might contribute to Alzheimer’s risk, the researchers obtained tissue samples from the Religious Orders Study/Memory and Aging Project (ROSMAP), a longitudinal study that has tracked memory, motor, and other age-related changes in older people since 1994. Of about 1,200 samples in the dataset that had genetic information available, the researchers obtained 12 from people who carried a rare variant of ABCA7.

The researchers performed single-cell RNA sequencing of neurons from these ABCA7 carriers, allowing them to determine which other genes are affected when ABCA7 is missing. They found that the most significantly affected genes fell into three clusters related to lipid metabolism, DNA damage, and oxidative phosphorylation (the metabolic process that cells use to capture energy as ATP).
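
The article does not describe the team’s analysis code, but the general shape of such a comparison is standard in single-cell work. A minimal sketch using the scanpy library, with a hypothetical input file and label names (neurons.h5ad, and a genotype column with "ABCA7_variant" and "control"):

```python
import scanpy as sc

# Generic sketch of a single-cell differential-expression comparison,
# not the study's actual pipeline. Assumes an AnnData object whose
# .obs["genotype"] labels each neuron as "ABCA7_variant" or "control".
adata = sc.read_h5ad("neurons.h5ad")           # hypothetical input file

sc.pp.normalize_total(adata, target_sum=1e4)   # library-size normalization
sc.pp.log1p(adata)                             # log-transform counts

# Rank genes by differential expression between carriers and controls.
sc.tl.rank_genes_groups(adata, groupby="genotype", method="wilcoxon")
deg = sc.get.rank_genes_groups_df(adata, group="ABCA7_variant")

# Downstream, the most affected genes could be grouped into functional
# clusters (e.g., lipid metabolism, DNA damage, oxidative phosphorylation)
# using pathway-enrichment tools.
print(deg.head(20))
```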

To investigate how those alterations could affect neuron function, the researchers introduced ABCA7 variants into neurons derived from induced pluripotent stem cells.

These cells showed many of the same gene expression changes as the cells from the patient samples, especially among genes linked to oxidative phosphorylation. Further experiments showed that the “safety valve” that normally lets mitochondria limit excess build-up of electrical charge was less active. This can lead to oxidative stress, a state that occurs when too many cell-damaging free radicals build up in tissues.

Using these engineered cells, the researchers also analyzed the effects of ABCA7 variants on lipid metabolism. Cells with the variants showed altered metabolism of a molecule called phosphatidylcholine, which could lead to membrane stiffness and may explain why the mitochondrial membranes of the cells were unable to function normally.

A boost in choline

Those findings raised the possibility that intervening in phosphatidylcholine metabolism might reverse some of the cellular effects of ABCA7 loss. To test that idea, the researchers treated neurons with ABCA7 mutations with a molecule called CDP-choline, a precursor of phosphatidylcholine.

As these cells began producing new phosphatidylcholine (both saturated and unsaturated forms), their mitochondrial membrane potentials also returned to normal, and their oxidative stress levels went down.

The researchers then used induced pluripotent stem cells to generate 3D tissue organoids made of neurons with the ABCA7 variant. These organoids developed higher levels of amyloid beta proteins, which form the plaques seen in the brains of Alzheimer’s patients. However, those levels returned to normal when the organoids were treated with CDP-choline. The treatment also reduced neurons’ hyperexcitability.

In a 2021 paper, Tsai’s lab found that CDP-choline treatment could also reverse many of the effects of another Alzheimer’s-linked gene variant, APOE4, in mice. She is now working with researchers at the University of Texas and MD Anderson Cancer Center on a clinical trial exploring how choline supplements affect people who carry the APOE4 gene.

Choline is naturally found in foods such as eggs, meat, fish, and some beans and nuts. Boosting choline intake with supplements may offer a way for many people to reduce their risk of Alzheimer’s disease, Tsai says.

“From APOE4 to ABCA7 loss of function, my lab demonstrates that disruption of lipid homeostasis leads to the development of Alzheimer’s-related pathology, and that restoring lipid homeostasis, such as through choline supplementation, can ameliorate these pathological phenotypes,” she says.

In addition to the rare variants of ABCA7 that the researchers studied in this paper, there is also a more common variant that is found at a frequency of about 18 percent in the population. This variant was thought to be harmless, but the MIT team showed that cells with this variant exhibited many of the same gene alterations in lipid metabolism that they found in cells with the rare ABCA7 variants.

“There’s more work to be done in this direction, but this suggests that ABCA7 dysfunction might play an important role in a much larger part of the population than just people who carry the rare variants,” von Maydell says.

The research was funded, in part, by the Cure Alzheimer’s Fund, the Freedom Together Foundation, the Carol and Gene Ludwig Family Foundation, James D. Cook, and the National Institutes of Health.

Lincoln Laboratory technologies win seven R&D 100 Awards for 2025

Tue, 09/09/2025 - 4:35pm

Seven technologies developed at MIT Lincoln Laboratory, either wholly or with collaborators, have earned 2025 R&D 100 Awards. This annual awards competition recognizes the year's most significant new technologies, products, and materials available on the marketplace or transitioned to use. An independent panel of technology experts and industry professionals selects the winners.

"Winning an R&D 100 Award is a recognition of the exceptional creativity and effort of our scientists and engineers. The awarded technologies reflect Lincoln Laboratory's mission to transform innovative ideas into real-world solutions for U.S. national security, industry, and society," says Melissa Choi, director of Lincoln Laboratory.

Lincoln Laboratory's winning technologies enhance national security in a range of ways, from securing satellite communication links and identifying nearby emitting devices to providing a layer of defense for U.S. Army vehicles and protecting service members from chemical threats. Other technologies are pushing frontiers in computing, enabling the 3D integration of chips and the close inspection of superconducting electronics. Industry is also benefiting from these developments — for example, by adopting an architecture that streamlines the development of laser communications terminals.

The online publication R&D World manages the awards program. Recipients span Fortune 500 companies, federally funded research institutions, academic and government labs, and small companies. Since 2010, Lincoln Laboratory has received 108 R&D 100 Awards.

Protecting lives 

Tactical Optical Spherical Sensor for Interrogating Threats (TOSSIT) is a throwable, baseball-sized sensor that remotely detects hazardous vapors and aerosols. It is designed to alert soldiers, first responders, and law enforcement to the presence of chemical threats, like nerve and blister agents, industrial chemical accidents, or fentanyl dust. Users can simply toss, drone-drop, or launch TOSSIT into an area of concern. To detect specific chemicals, the sensor samples the air with a built-in fan and uses an internal camera to observe color changes on a removable dye card. If chemicals are present, TOSSIT alerts users wirelessly on an app or via audible, light-up, or vibrational alarms in the sensor.

"TOSSIT fills an unmet need for a chemical-vapor point sensor, one that senses the immediate environment around it, that can be kinetically deployed ahead of service personnel. It provides a low-cost sensing option for vapors and solid aerosol threats — think toxic dust particles — that would otherwise not be detectable by small deployed sensor systems,” says principal investigator Richard Kingsborough. TOSSIT has been tested extensively in the field and is currently being transferred to the military. 

Wideband Selective Propagation Radar (WiSPR) is an advanced radar and communications system developed to protect U.S. Army armored vehicles. The system's active electronically scanned antenna array extends signal range at millimeter-wave frequencies, steering thousands of beams per second to detect incoming kinetic threats while enabling covert communications between vehicles. WiSPR is engineered to have a low probability of detection, helping U.S. Army units evade adversaries seeking to detect radio-frequency (RF) energy emitting from radars. The system is currently in production.

"Current global conflicts are highlighting the susceptibility of armored vehicles to adversary anti-tank weapons. By combining custom technologies and commercial off-the-shelf hardware, the Lincoln Laboratory team produced a WiSPR prototype as quickly and efficiently as possible," says program manager Christopher Serino, who oversaw WiSPR development with principal investigator David Conway.

Advancing computing

Bumpless Integration of Chiplets to AI-Optimized Fabric is an approach that enables the fabrication of next-generation 2D, 2.5D, and 3D integrated circuits. As data-processing demands increase, designers are exploring 3D stacked assemblies of small specialized chips (chiplets) to pack more power into devices. Tiny bumps of conductive material are used to electrically connect these stacks, but these microbumps cannot accommodate the extremely dense, massively interconnected components needed for future microcomputers. To address this issue, Lincoln Laboratory developed a technique that eliminates microbumps. Key to this technique is a lithographically produced fabric that allows electrical bonding between the layers of a chiplet stack. Researchers used an AI-driven decision-tree approach to optimize the design of this fabric. This bumpless approach can integrate hundreds of chiplets that perform like a single chip, improving data-processing speed and power efficiency, especially for high-performance AI applications.

"Our novel, bumpless, heterogeneous chiplet integration is a transformative approach addressing two semiconductor industry challenges: expanding chip yield and reducing cost and time to develop systems," says principal investigator Rabindra Das.

Quantum Diamond Magnetic Cryomicroscope is a breakthrough in magnetic field imaging for characterizing superconducting electronics, a promising frontier in high-performance computing. Unlike traditional techniques, this system delivers fast, wide-field, high-resolution imaging at the cryogenic temperatures required for superconducting devices. The instrument combines an optical microscopy system with a cryogenic sensor head containing a diamond engineered with nitrogen-vacancy centers — atomic-scale defects highly sensitive to magnetic fields. The cryomicroscope enables researchers to directly visualize trapped magnetic vortices that interfere with critical circuit components, helping to overcome a major obstacle to scaling superconducting electronics.

“The cryomicroscope gives us an unprecedented window into magnetic behavior in superconducting devices, accelerating progress toward next-generation computing technologies,” says Pauli Kehayias, joint principal investigator with Jennifer Schloss. The instrument is currently advancing superconducting electronics development at Lincoln Laboratory and is poised to impact materials science and quantum technology more broadly.

Enhancing communications 

Lincoln Laboratory Radio Frequency Situational Awareness Model (LL RF-SAM) utilizes advances in AI to enhance U.S. service members' vigilance over the electromagnetic spectrum. The modern spectrum can be described as a swamp of mixed signals originating from civilian, military, or enemy sources. In near-real time, LL RF-SAM inspects these signals to disentangle and identify nearby waveforms and their originating devices. For example, LL RF-SAM can help a user identify a particular packet of energy as a drone transmission protocol and then classify whether that drone is part of a corpus of friendly or enemy drones.

"This type of enhanced context helps military operators make data-driven decisions. The future adoption of this technology will have profound impact across communications, signals intelligence, spectrum management, and wireless infrastructure security," says principal investigator Joey Botero. 

Modular, Agile, Scalable Optical Terminal (MAScOT) is a laser communications (lasercom) terminal architecture that facilitates mission-enabling lasercom solutions adaptable to various space platforms and operating environments. Lasercom is rapidly becoming the go-to technology for space-to-space links in low Earth orbit because of its ability to support significantly higher data rates compared to radio frequency terminals. However, it has yet to be used operationally or commercially for longer-range space-to-ground links, as such systems often require custom designs for specific missions. MAScOT's modular, agile, and scalable design streamlines the process for building lasercom terminals suitable for a range of missions, from near Earth to deep space. MAScOT made its debut on the International Space Station in 2023 to demonstrate NASA's first two-way lasercom relay system, and is now being prepared to serve in an operational capacity on Artemis II, NASA's moon flyby mission scheduled for 2026. Two industry-built terminals have adopted the MAScOT architecture, and technology transfer to additional industry partners is ongoing.

"MAScOT is the latest lasercom terminal designed by Lincoln Laboratory engineers following decades of pioneering lasercom work with NASA, and it is poised to support lasercom for decades to come," says Bryan Robinson, who co-led MAScOT development with Tina Shih. 

Protected Anti-jam Tactical SATCOM (PATS) Key Management System (KMS) Prototype addresses the critical challenge of securely distributing cryptographic keys for military satellite communications (SATCOM) during terminal jamming, compromise, or disconnection. Realizing the U.S. Space Systems Command's vision for resilient, protected tactical SATCOM, the PATS KMS Prototype leverages innovative, bandwidth-efficient protocols and algorithms to enable real-time, scalable key distribution over wireless links, even under attack, so that warfighters can communicate securely in contested environments. PATS KMS is now being adopted as the core of the Department of Defense's next-generation SATCOM architecture.

"PATS KMS is not just a technology — it's a linchpin enabler of resilient, modern SATCOM, built for the realities of today's contested battlefield. We worked hand-in-hand with government stakeholders, operational users, and industry partners across a multiyear, multiphase journey to bring this capability to life," says Joseph Sobchuk, co-principal investigator with Nancy List. The R&D 100 Award is shared with the U.S. Space Force Space Systems Command, whose “visionary leadership has been instrumental in shaping the future of protected tactical SATCOM,” Sobchuk adds.

Study finds cell memory can be more like a dimmer dial than an on/off switch

Tue, 09/09/2025 - 11:00am

When cells are healthy, we don’t expect them to suddenly change cell types. A skin cell on your hand won’t naturally morph into a brain cell, and vice versa. That’s thanks to epigenetic memory, which enables the expression of various genes to “lock in” throughout a cell’s lifetime. Failure of this memory can lead to diseases, such as cancer.

Traditionally, scientists have thought that epigenetic memory locks genes either “on” or “off” — either fully activated or fully repressed, like a permanent Lite-Brite pattern. But MIT engineers have found that the picture has many more shades.

In a new study appearing today in Cell Genomics, the team reports that a cell’s memory is set not by on/off switching but through a more graded, dimmer-like dial of gene expression.

The researchers carried out experiments in which they set the expression of a single gene at different levels in different cells. While conventional wisdom would assume the gene should eventually switch on or off, the researchers found that the gene’s original expression persisted: Cells whose gene expression was set along a spectrum between on and off remained in this in-between state.

The results suggest that epigenetic memory — the process by which cells retain gene expression and “remember” their identity — is not binary but instead analog, which allows for a spectrum of gene expression and associated cell identities.

“Our finding opens the possibility that cells commit to their final identity by locking genes at specific levels of gene expression instead of just on and off,” says study author Domitilla Del Vecchio, professor of mechanical and biological engineering at MIT. “The consequence is that there may be many more cell types in our body than we know and recognize today, that may have important functions and could underlie healthy or diseased states.”

The study’s MIT lead authors are Sebastian Palacios and Simone Bruno, with additional co-authors.

Beyond binary

Every cell shares the same genome, which can be thought of as the starting ingredient for life. As a cell takes shape, it differentiates into one type or another, through the expression of genes in its genome. Some genes are activated, while others are repressed. The combination steers a cell toward one identity versus another.

A process called DNA methylation, in which methyl groups attach to a gene’s DNA, helps lock its expression in place. DNA methylation helps a cell “remember” its unique pattern of gene expression, which ultimately establishes the cell’s identity.

Del Vecchio’s group at MIT applies mathematics and genetic engineering to understand cellular molecular processes and to engineer cells with new capabilities. In previous work, her group was experimenting with DNA methylation and ways to lock the expression of certain genes in ovarian cells.

“The textbook understanding was that DNA methylation had a role to lock genes in either an on or off state,” Del Vecchio says. “We thought this was the dogma. But then we started seeing results that were not consistent with that.”

While many of the cells in their experiment exhibited an all-or-nothing expression of genes, a significant number of cells appeared to freeze genes in an in-between state — neither entirely on nor off.

“We found there was a spectrum of cells that expressed any level between on and off,” Palacios says. “And we thought, how is this possible?”

Shades of blue

In their new study, the team aimed to see whether the in-between gene expression they observed was a fluke or a more established property of cells that until now has gone unnoticed.

“It could be that scientists disregarded cells that don’t have a clear commitment, because they assumed this was a transient state,” Del Vecchio says. “But actually these in-between cell types may be permanent states that could have important functions.”

To test their idea, the researchers ran experiments with hamster ovarian cells — a line of cells commonly used in the laboratory. In each cell, an engineered gene was initially set to a different level of expression. The gene was turned fully on in some cells, completely off in others, and set somewhere in between on and off for the remaining cells.

The team paired the engineered gene with a fluorescent marker that lights up with a brightness corresponding to the gene’s level of expression. The researchers introduced, for a short time, an enzyme that triggers the gene’s DNA methylation, a natural gene-locking mechanism. They then monitored the cells over five months to see whether the modification would lock the genes in place at their in-between expression levels, or whether the genes would migrate toward fully on or off states before locking in.

“Our fluorescent marker is blue, and we see cells glow across the entire spectrum, from really shiny blue, to dimmer and dimmer, to no blue at all,” Del Vecchio says. “Every intensity level is maintained over time, which means gene expression is graded, or analog, and not binary. We were very surprised, because we thought after such a long time, the gene would veer off, to be either fully on or off, but it did not.”
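
One way to see the distinction the team is drawing is with a toy simulation contrasting switch-like and graded memory. The numbers below are synthetic and purely illustrative, not the experimental measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy contrast between "binary" and "analog" epigenetic memory.
initial = rng.uniform(0.0, 1.0, size=10_000)   # per-cell expression, 0=off, 1=on

def binary_memory(levels):
    """Switch-like memory: each cell eventually locks fully on or off."""
    return (levels > 0.5).astype(float)

def analog_memory(levels, noise=0.02):
    """Graded memory: each cell holds its original level (plus small drift)."""
    return np.clip(levels + rng.normal(0.0, noise, size=levels.shape), 0.0, 1.0)

after_binary = binary_memory(initial)
after_analog = analog_memory(initial)

# Under binary memory the distribution collapses to two peaks; under analog
# memory the full spectrum of intermediate levels persists, which is what
# the fluorescence measurements showed.
print("distinct levels, binary:", len(np.unique(after_binary)))
print("correlation with initial, analog:",
      np.corrcoef(initial, after_analog)[0, 1].round(3))
```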

The findings open new avenues for engineering more complex artificial tissues and organs by tuning the expression of certain genes in a cell’s genome, like a dial on a radio rather than a switch. The results also complicate the picture of how a cell’s epigenetic memory works to establish its identity, and they raise the possibility that cell states such as those seen in therapy-resistant tumors could be targeted more precisely.

“Del Vecchio and colleagues have beautifully shown how analog memory arises through chemical modifications to the DNA itself,” says Michael Elowitz, professor of biology and biological engineering at the California Institute of Technology, who was not involved in the study. “As a result, we can now imagine repurposing this natural analog memory mechanism, invented by evolution, in the field of synthetic biology, where it could help allow us to program permanent and precise multicellular behaviors.”

“One of the things that enables the complexity in humans is epigenetic memory,” Palacios says. “And we find that it is not what we thought. For me, that’s actually mind-blowing. And I think we’re going to find that this analog memory is relevant for many different processes across biology.”

This research was supported, in part, by the National Science Foundation, MODULUS, and a Vannevar Bush Faculty Fellowship through the U.S. Office of Naval Research.

“Bottlebrush” particles deliver big chemotherapy payloads directly to cancer cells

Tue, 09/09/2025 - 5:00am

Using tiny particles shaped like bottlebrushes, MIT chemists have found a way to deliver a large range of chemotherapy drugs directly to tumor cells.

To guide them to the right location, each particle contains an antibody that targets a specific tumor protein. This antibody is tethered to bottlebrush-shaped polymer chains carrying dozens or hundreds of drug molecules — a much larger payload than can be delivered by any existing antibody-drug conjugates.

In mouse models of breast and ovarian cancer, the researchers found that treatment with these conjugated particles could eliminate most tumors. In the future, the particles could be modified to target other types of cancer, by swapping in different antibodies.

“We are excited about the potential to open up a new landscape of payloads and payload combinations with this technology, that could ultimately provide more effective therapies for cancer patients,” says Jeremiah Johnson, the A. Thomas Guertin Professor of Chemistry at MIT, a member of the Koch Institute for Integrative Cancer Research, and the senior author of the new study.

MIT postdoc Bin Liu is the lead author of the paper, which appears today in Nature Biotechnology.

A bigger drug payload

Antibody-drug conjugates (ADCs) are a promising type of cancer treatment that consist of a cancer-targeting antibody attached to a chemotherapy drug. At least 15 ADCs have been approved by the FDA to treat several different types of cancer.

This approach allows specific targeting of a cancer drug to a tumor, which helps to prevent some of the side effects that occur when chemotherapy drugs are given intravenously. However, one drawback to currently approved ADCs is that only a handful of drug molecules can be attached to each antibody. That means they can only be used with very potent drugs — usually DNA-damaging agents or drugs that interfere with cell division.

To try to use a broader range of drugs, which are often less potent, Johnson and his colleagues decided to adapt bottlebrush particles that they had previously invented. These particles consist of a polymer backbone to which tens to hundreds of “prodrug” molecules — inactive drug molecules that are activated upon release within the body — are attached. This structure allows the particles to deliver a wide range of drug molecules, and the particles can be designed to carry multiple drugs in specific ratios.

Using a technique called click chemistry, the researchers showed that they could attach one, two, or three of their bottlebrush polymers to a single tumor-targeting antibody, creating an antibody-bottlebrush conjugate (ABC). This means that just one antibody can carry hundreds of prodrug molecules. The currently approved ADCs can carry a maximum of about eight drug molecules.
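
The payload advantage is easy to tally. A back-of-envelope sketch, in which the per-brush figure of 100 prodrug molecules is an assumed value within the "tens to hundreds" range described above:

```python
# Back-of-envelope payload comparison. The per-brush figure is an assumed
# illustrative value within the "tens to hundreds" range described above.
prodrugs_per_bottlebrush = 100        # assumed illustrative value
bottlebrushes_per_antibody = 3        # up to three, per the study
adc_payload = 8                       # rough maximum for approved ADCs

abc_payload = prodrugs_per_bottlebrush * bottlebrushes_per_antibody
print(abc_payload, "prodrug molecules per antibody, roughly",
      abc_payload // adc_payload, "times a conventional ADC payload")
```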

The huge number of payloads in the ABC particles allows the researchers to incorporate less potent cancer drugs such as doxorubicin or paclitaxel, which enhances the customizability of the particles and the variety of drug combinations that can be used.

“We can use antibody-bottlebrush conjugates to increase the drug loading, and in that case, we can use less potent drugs,” Liu says. “In the future, we can very easily copolymerize with multiple drugs together to achieve combination therapy.”

The prodrug molecules are attached to the polymer backbone by cleavable linkers. After the particles reach a tumor site, some of these linkers are broken right away, allowing the drugs to kill nearby cancer cells even if they don’t express the target protein. Other particles bind to cells that do display the target protein and are absorbed before releasing their toxic payload.

Effective treatment

For this study, the researchers created ABC particles carrying a few different types of drugs: microtubule inhibitors called MMAE and paclitaxel, and two DNA-damaging agents, doxorubicin and SN-38. They also designed ABC particles carrying an experimental type of drug known as PROTAC (proteolysis-targeting chimera), which can selectively degrade disease-causing proteins inside cells.

Each bottlebrush was tethered to an antibody targeting either HER2, a protein often overexpressed in breast cancer, or MUC1, which is commonly found in ovarian, lung, and other types of cancer.

The researchers tested each of the ABCs in mouse models of breast or ovarian cancer and found that in most cases, the ABC particles were able to eradicate the tumors. This treatment was significantly more effective than giving the same bottlebrush prodrugs by injection, without being conjugated to a targeting antibody.

“We used a very low dose, almost 100 times lower compared to the traditional small-molecule drug, and the ABC still can achieve much better efficacy compared to the small-molecule drug given on its own,” Liu says.

These ABCs also performed better than two FDA-approved ADCs, T-DXd and TDM-1, which both use HER2 to target cells. T-DXd carries deruxtecan, which interferes with DNA replication, and TDM-1 carries emtansine, a microtubule inhibitor.

In future work, the MIT team plans to try delivering combinations of drugs that work by different mechanisms, which could enhance their overall effectiveness. Among these could be immunotherapy drugs such as STING activators.

The researchers are also working on swapping in different antibodies, such as antibodies targeting EGFR, which is widely expressed in many tumors. More than 100 antibodies have been approved to treat cancer and other diseases, and in theory any of those could be conjugated to cancer drugs to create a targeted therapy.

The research was funded in part by the National Institutes of Health, the Ludwig Center at MIT, and the Koch Institute Frontier Research Program. 

Remembering David Baltimore, influential biologist and founding director of the Whitehead Institute

Mon, 09/08/2025 - 8:00pm

The Whitehead Institute for Biomedical Research fondly remembers its founding director, David Baltimore, a former MIT Institute Professor and Nobel laureate who died Sept. 6 at age 87.

With discovery after discovery, Baltimore brought to light key features of biology with direct implications for human health. His work at MIT earned him a share of the 1975 Nobel Prize in Physiology or Medicine (along with Howard Temin and Renato Dulbecco) for discovering reverse transcriptase and identifying retroviruses, which use RNA to synthesize viral DNA.

Following the award, Baltimore reoriented his laboratory’s focus to pursue a mix of immunology and virology. Among the lab’s most significant subsequent discoveries were the identification of a pair of proteins that play an essential role in enabling the immune system to create antibodies for so many different molecules, and investigations into how certain viruses can cause cell transformation and cancer. Work from Baltimore’s lab also helped lead to the development of the important cancer drug Gleevec — the first small molecule to target an oncoprotein inside of cells.

In 1982, Baltimore partnered with philanthropist Edwin C. “Jack” Whitehead to conceive and launch the Whitehead Institute and then served as its founding director until 1990. Within a decade of its founding, the Baltimore-led Whitehead Institute was named the world’s top research institution in molecular biology and genetics.

“More than 40 years later, Whitehead Institute is thriving, still guided by the strategic vision that David Baltimore and Jack Whitehead articulated,” says Phillip Sharp, MIT Institute Professor Emeritus, former Whitehead board member, and fellow Nobel laureate. “Of all David’s myriad and significant contributions to science, his role in building the first independent biomedical research institute associated with MIT and guiding it to extraordinary success may well prove to have had the broadest and longest-term impact.” 

Ruth Lehmann, director and president of the Whitehead Institute, and professor of biology at MIT, says: “I, like many others, owe my career to David Baltimore. He recruited me to Whitehead Institute and MIT in 1988 as a faculty member, taking a risk on an unproven, freshly-minted PhD graduate from Germany. As director, David was incredibly skilled at bringing together talented scientists at different stages of their careers and facilitating their collaboration so that the whole would be greater than the sum of its parts. This approach remains a core strength of Whitehead Institute.”

As part of the Whitehead Institute’s mission to cultivate the next generation of scientific leaders, Baltimore founded the Whitehead Fellows program, which provides extraordinarily talented recent PhD and MD graduates with the opportunity to launch their own labs rather than go into traditional postdoctoral positions. The program has been a huge success, with former fellows going on to excel as leaders in research, education, and industry.

David Page, MIT professor of biology, Whitehead Institute member, and former director who was the Whitehead's first fellow, recalls, “David was both an amazing scientist and a peerless leader of aspiring scientists. The launching of the Whitehead Fellows program reflected his recipe for institutional success: gather up the resources to allow young scientists to realize their dreams, recruit with an eye toward potential for outsized impact, and quietly mentor and support without taking credit for others’ successes — all while treating junior colleagues as equals. It is a beautiful strategy that David designed and executed magnificently.”

Sally Kornbluth, president of MIT and a member of the Whitehead Institute Board of Directors, says that “David was a scientific hero for so many. He was one of those remarkable individuals who could make stellar scientific breakthroughs and lead major institutions with extreme thoughtfulness and grace. He will be missed by the whole scientific community.”

“David was a wise giant. He was brilliant. He was an extraordinarily effective, ethical leader and institution builder who influenced and inspired generations of scientists and premier institutions,” says Susan Whitehead, member of the board of directors and daughter of Jack Whitehead.

Gerald R. Fink, the Margaret and Herman Sokol Professor Emeritus at MIT who was recruited by Baltimore from Cornell University as one of four founding members of the Whitehead Institute, and who succeeded him as director in 1990, observes: “David became my hero and friend. He upheld the highest scientific ideals and instilled trust and admiration in all around him.”

Video: “David Baltimore - Infinite History” (2010) | MIT

Baltimore was born in New York City in 1938. His scientific career began at Swarthmore College, where he earned a bachelor’s degree with high honors in chemistry in 1960. He then began doctoral studies in biophysics at MIT, but in 1961 shifted his focus to animal viruses and moved to what is now the Rockefeller University, where he did his thesis work in the lab of Richard Franklin.

After completing postdoctoral fellowships with James Darnell at MIT and Jerard Hurwitz at the Albert Einstein College of Medicine, Baltimore launched his own lab at the Salk Institute for Biological Studies, where he worked from 1965 to 1968. Then, in 1968, he returned to MIT as a member of its biology faculty, where he remained until 1990. (Whitehead Institute’s members hold parallel appointments as faculty in the MIT Department of Biology.)

In 1990, Baltimore left the Whitehead Institute and MIT to become the president of Rockefeller University. He returned to MIT from 1994 to 1997, serving as an Institute Professor, after which he was named president of Caltech. Baltimore held that position until 2006, when he was elected to a three-year term as president of the American Association for the Advancement of Science.

For decades, Baltimore was viewed not just as a brilliant scientist and talented academic leader, but also as a wise counsel to the scientific community. For example, he helped organize the 1975 Asilomar Conference on Recombinant DNA, which created stringent safety guidelines for the study and use of recombinant DNA technology. He played a leadership role in the development of policies on AIDS research and treatment, and on genomic editing. Serving as an advisor to both organizations and individual scientists, he helped to shape the strategic direction of dozens of institutions and to advance the careers of generations of researchers. As Founding Member Robert Weinberg summarized it, “He had no tolerance for nonsense and weak science.”

In 2023, the Whitehead Institute established the endowed David Baltimore Chair in Biomedical Research, honoring Baltimore’s six decades of scientific, academic, and policy leadership and his impact on advancing innovative basic biomedical research.

“David was a visionary leader in science and the institutions that sustain it. He devoted his career to advancing scientific knowledge and strengthening the communities that make discovery possible, and his leadership of Whitehead Institute exemplified this,” says Richard Young, MIT professor of biology and Whitehead Institute member. “David approached life with keen observation, boundless curiosity, and a gift for insight that made him both a brilliant scientist and a delightful companion. His commitment to mentoring and supporting young scientists left a lasting legacy, inspiring the next generation to pursue impactful contributions to biomedical research. Many of us found in him not only a mentor and role model, but also a steadfast friend whose presence enriched our lives and whose absence will be profoundly felt.”

Alzheimer’s erodes brain cells’ control of gene expression, undermining function, cognition

Mon, 09/08/2025 - 4:25pm

Most people recognize Alzheimer’s disease by its devastating symptoms, such as memory loss, while new drugs target pathological hallmarks of the disease, such as plaques of amyloid proteins. Now, a sweeping new open-access study in the Sept. 4 edition of Cell by MIT researchers shows the importance of understanding the disease as a battle over how well brain cells control the expression of their genes. The study paints a high-resolution picture of a desperate struggle to maintain healthy gene expression and gene regulation, where the consequences of failure or success are nothing less than the loss or preservation of cell function and cognition.

The study presents a first-of-its-kind, multimodal atlas of combined gene expression and gene regulation spanning 3.5 million cells from six brain regions, obtained by profiling 384 post-mortem brain samples across 111 donors. The researchers profiled both the “transcriptome,” showing which genes are expressed into RNA, and the “epigenome,” the set of chromosomal modifications that establish which DNA regions are accessible and thus utilized between different cell types.

The resulting atlas revealed many insights showing that the progression of Alzheimer’s is characterized by two major epigenomic trends. The first is that vulnerable cells in key brain regions suffer a breakdown of the rigorous nuclear “compartments” they normally maintain to ensure some parts of the genome are open for expression but others remain locked away. The second major finding is that susceptible cells experience a loss of “epigenomic information,” meaning they lose their grip on the unique pattern of gene regulation and expression that gives them their specific identity and enables their healthy function.

Accompanying the evidence of compromised compartmentalization and eroded epigenomic information are many specific findings pinpointing the molecular circuitry that breaks down, by cell type, by brain region, and by gene network. The researchers found, for instance, that when epigenomic conditions deteriorate, the door opens to expression of many disease-associated genes, whereas cells that keep their epigenomic house in order can keep those genes in check. Moreover, the researchers clearly saw that where epigenomic breakdowns occurred, people lost cognitive ability, but where epigenomic stability remained, so did cognition.

“To understand the circuitry, the logic responsible for gene expression changes in Alzheimer’s disease [AD], we needed to understand the regulation and upstream control of all the changes that are happening, and that’s where the epigenome comes in,” says senior author Manolis Kellis, a professor in the Computer Science and Artificial Intelligence Lab and head of MIT’s Computational Biology Group. “This is the first large-scale, single-cell, multi-region gene-regulatory atlas of AD, systematically dissecting the dynamics of epigenomic and transcriptomic programs across disease progression and resilience.”

By providing that detailed examination of the epigenomic mechanisms of Alzheimer’s progression, the study provides a blueprint for devising new Alzheimer’s treatments that can target factors underlying the broad erosion of epigenomic control or the specific manifestations that affect key cell types such as neurons and supporting glial cells.

“The key to developing new and more effective treatments for Alzheimer’s disease depends on deepening our understanding of the mechanisms that contribute to the breakdowns of cellular and network function in the brain,” says Picower Professor and co-corresponding author Li-Huei Tsai, director of The Picower Institute for Learning and Memory and a founding member of MIT’s Aging Brain Initiative, along with Kellis. “This new data advances our understanding of how epigenomic factors drive disease.”

Kellis Lab members Zunpeng Liu and Shanshan Zhang are the study’s co-lead authors.

Compromised compartments and eroded information

Among the post-mortem brain samples in the study, 57 came from donors to the Religious Orders Study or the Rush Memory and Aging Project (collectively known as “ROSMAP”) who did not have AD pathology or symptoms, while 33 came from donors with early-stage pathology and 21 came from donors at a late stage. The samples therefore provided rich information about the symptoms and pathology each donor was experiencing before death.

In the new study, Liu and Zhang combined analyses of single-cell RNA sequencing of the samples, which measures which genes are being expressed in each cell, and ATAC-seq, which measures whether chromosomal regions are accessible for gene expression. Considered together, these transcriptomic and epigenomic measures enabled the researchers to understand the molecular details of how gene expression is regulated across seven broad classes of brain cells (e.g., excitatory neurons, inhibitory neurons, and glial cell types) and 67 cell subtypes (e.g., 17 kinds of excitatory neurons or six kinds of inhibitory ones).

The researchers annotated more than 1 million gene-regulatory control regions that different cells mark epigenomically to establish their specific identities and functionality. Then, by comparing cells from Alzheimer’s brains to those without, and accounting for stage of pathology and cognitive symptoms, they could draw rigorous associations between the erosion of these epigenomic markings and, ultimately, loss of function.

For instance, they saw that among people who advanced to late-stage AD, normally repressive compartments opened up for more expression, and compartments that were normally more open during health became more repressed. Worryingly, when the normally repressive compartments of brain cells opened up, the cells became more afflicted by disease.

“For Alzheimer’s patients, repressive compartments opened up, and gene expression levels increased, which was associated with decreased cognitive function,” explains Liu.

But when cells managed to keep their compartments in order such that they expressed the genes they were supposed to, people remained cognitively intact.

Meanwhile, based on the cells’ expression of their regulatory elements, the researchers created an epigenomic information score for each cell. Generally, information declined as pathology progressed, but that was particularly notable among cells in the two brain regions affected earliest in Alzheimer’s: the entorhinal cortex and the hippocampus. The analyses also highlighted specific cell types that were especially vulnerable including microglia that play immune and other roles, oligodendrocytes that produce myelin insulation for neurons, and particular kinds of excitatory neurons.
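
The article does not spell out how the per-cell score is computed, so the snippet below is only a hypothetical illustration of the general idea: score each cell by how closely its chromatin-accessibility profile matches a reference profile for its annotated cell type. All names and numbers here are invented for the example.

```python
import numpy as np

def epigenomic_information_score(cell_accessibility, celltype_reference):
    """
    Hypothetical per-cell score: cosine similarity between a cell's
    chromatin-accessibility profile and the reference profile for its
    annotated cell type. Higher = the cell retains its identity; lower =
    its epigenomic information has eroded. This is an illustration only,
    not the metric used in the study.
    """
    a = np.asarray(cell_accessibility, dtype=float)
    b = np.asarray(celltype_reference, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Toy example: a healthy-looking cell vs. one whose profile has drifted.
reference = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=float)
healthy   = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=float)
eroded    = np.array([1, 1, 0, 1, 1, 0, 0, 1], dtype=float)

print(epigenomic_information_score(healthy, reference))  # ~1.0
print(epigenomic_information_score(eroded, reference))   # noticeably lower
```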

Risk genes and “chromatin guardians”

Detailed analyses in the paper highlighted how epigenomic regulation tracked with disease-related problems, Liu notes. The e4 variant of the APOE gene, for instance, is widely understood to be the single biggest genetic risk factor for Alzheimer’s. In APOE4 brains, microglia initially responded to the emerging disease pathology with an increase in their epigenomic information, suggesting that they were stepping up to their unique responsibility to fight off disease. But as the disease progressed, the cells exhibited a sharp drop off in information, a sign of deterioration and degeneration. This turnabout was strongest in people who had two copies of APOE4, rather than just one. The findings, Kellis said, suggest that APOE4 might destabilize the genome of microglia, causing them to burn out.

Another example is the fate of neurons expressing the gene RELN and its protein Reelin. Prior studies, including by Kellis and Tsai, have shown that RELN-expressing neurons in the entorhinal cortex and hippocampus are especially vulnerable in Alzheimer’s, but promote resilience if they survive. The new study sheds new light on their fate by demonstrating that they exhibit early and severe epigenomic information loss as disease advances, but that in people who remained cognitively resilient the neurons maintained epigenomic information.

In yet another example, the researchers tracked what they colloquially call “chromatin guardians” because their expression sustains and regulates cells’ epigenomic programs. For instance, cells with greater epigenomic erosion and advanced AD progression displayed increased chromatin accessibility in areas that were supposed to be locked down by Polycomb repression genes or other gene expression silencers. While resilient cells expressed genes promoting neural connectivity, epigenomically eroded cells expressed genes linked to inflammation and oxidative stress.

“The message is clear: Alzheimer’s is not only about plaques and tangles, but about the erosion of nuclear order itself,” Kellis says. “Cognitive decline emerges when chromatin guardians lose ground to the forces of erosion, switching from resilience to vulnerability at the most fundamental level of genome regulation.

“And when our brain cells lose their epigenomic memory marks and epigenomic information at the lowest level deep inside our neurons and microglia, it seems that Alzheimer’s patients also lose their memory and cognition at the highest level.”

Other authors of the paper are Benjamin T. James, Kyriaki Galani, Riley J. Mangan, Stuart Benjamin Fass, Chuqian Liang, Manoj M. Wagle, Carles A. Boix, Yosuke Tanigawa, Sukwon Yun, Yena Sung, Xushen Xiong, Na Sun, Lei Hou, Martin Wohlwend, Mufan Qiu, Xikun Han, Lei Xiong, Efthalia Preka, Lei Huang, William F. Li, Li-Lun Ho, Amy Grayson, Julio Mantero, Alexey Kozlenkov, Hansruedi Mathys, Tianlong Chen, Stella Dracheva, and David A. Bennett.

Funding for the research came from the National Institutes of Health, the National Science Foundation, the Cure Alzheimer’s Fund, the Freedom Together Foundation, the Robert A. and Renee E. Belfer Family Foundation, Eduardo Eurnekian, and Joseph P. DiSabato.

Physicists devise an idea for lasers that shoot beams of neutrinos

Mon, 09/08/2025 - 11:30am

At any given moment, trillions of particles called neutrinos are streaming through our bodies and every material in our surroundings, without noticeable effect. Far lighter than electrons, these ghostly entities are the most abundant particles with mass in the universe.

The exact mass of a neutrino is a big unknown. The particle is so small, and interacts so rarely with matter, that it is incredibly difficult to measure. Scientists attempt to do so by harnessing nuclear reactors and massive particle accelerators to generate unstable atoms, which then decay into various byproducts including neutrinos. In this way, physicists can manufacture beams of neutrinos that they can probe for properties including the particle’s mass.

Now MIT physicists propose a much more compact and efficient way to generate neutrinos that could be realized in a tabletop experiment.

In a paper appearing in Physical Review Letters, the physicists introduce the concept for a “neutrino laser” — a burst of neutrinos that could be produced by laser-cooling a gas of radioactive atoms down to temperatures colder than interstellar space. At such frigid temperatures, the team predicts the atoms should behave as one quantum entity, and radioactively decay in sync.

The decay of radioactive atoms naturally releases neutrinos, and the physicists say that in a coherent, quantum state this decay should accelerate, along with the production of neutrinos. This quantum effect should produce an amplified beam of neutrinos, broadly similar to how photons are amplified to produce conventional laser light.

“In our concept for a neutrino laser, the neutrinos would be emitted at a much faster rate than they normally would, sort of like a laser emits photons very fast,” says study co-author Ben Jones PhD ’15, an associate professor of physics at the University of Texas at Arlington.

As an example, the team calculated that such a neutrino laser could be realized by trapping 1 million atoms of rubidium-83. Normally, the radioactive atoms have a half-life of about 82 days, meaning that half the atoms decay, shedding an equivalent number of neutrinos, every 82 days. The physicists show that, by cooling rubidium-83 to a coherent, quantum state, the atoms should undergo radioactive decay in mere minutes.
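To see why decay “in mere minutes” would be so dramatic, here is a rough back-of-the-envelope check — a minimal sketch of our own, using only the figures quoted above and ordinary exponential-decay arithmetic, not the paper’s superradiance calculation:

```python
import math

# Ordinary (incoherent) decay of the sample described above:
# 1 million rubidium-83 atoms with a half-life of roughly 82 days.
half_life_days = 82.0
n_atoms = 1_000_000

decay_constant_per_day = math.log(2) / half_life_days   # lambda = ln(2) / t_half
decays_per_day = decay_constant_per_day * n_atoms        # expected decays (one neutrino each)
decays_per_minute = decays_per_day / (24 * 60)

print(f"~{decays_per_day:.0f} decays/day (~{decays_per_minute:.1f} neutrinos/minute)")
# Roughly 8,500 decays per day, or about 6 neutrinos per minute. For half the
# sample to decay within minutes, as predicted for the coherent state, the
# emission rate would have to be boosted by many orders of magnitude.
```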

“This is a novel way to accelerate radioactive decay and the production of neutrinos, which to my knowledge, has never been done,” says co-author Joseph Formaggio, professor of physics at MIT.

The team hopes to build a small tabletop demonstration to test their idea. If it works, they envision a neutrino laser could be used as a new form of communication, by which the particles could be sent directly through the Earth to underground stations and habitats. The neutrino laser could also be an efficient source of radioisotopes, which, along with neutrinos, are byproducts of radioactive decay. Such radioisotopes could be used to enhance medical imaging and cancer diagnostics.

Coherent condensate

For every atom in the universe, there are about a billion neutrinos. A large fraction of these invisible particles may have formed in the first moments following the Big Bang, and they persist in what physicists call the “cosmic neutrino background.” Neutrinos are also produced whenever atomic nuclei fuse together or break apart, such as in the fusion reactions in the sun’s core, and in the normal decay of radioactive materials.

Several years ago, Formaggio and Jones separately considered a novel possibility: What if a natural process of neutrino production could be enhanced through quantum coherence? Initial explorations revealed fundamental roadblocks to realizing this. Years later, while discussing the properties of ultracold tritium (an unstable isotope of hydrogen that undergoes radioactive decay), they asked: Could the production of neutrinos be enhanced if radioactive atoms such as tritium could be made so cold that they could be brought into a quantum state known as a Bose-Einstein condensate?

A Bose-Einstein condensate, or BEC, is a state of matter that forms when a gas of certain particles is cooled down to near absolute zero. At this point, the particles are brought down to their lowest energy level and stop moving as individuals. In this deep freeze, the particles can start to “feel” each other’s quantum effects, and can act as one coherent entity — a unique phase that can result in exotic physics.

BECs have been realized in a number of atomic species. (One of the first instances was with sodium atoms, by MIT’s Wolfgang Ketterle, who shared the 2001 Nobel Prize in Physics for the result.) However, no one has made a BEC from radioactive atoms. To do so would be exceptionally challenging, as most radioisotopes have short half-lives and would decay entirely before they could be sufficiently cooled to form a BEC.

Nevertheless, Formaggio wondered, if radioactive atoms could be made into a BEC, would this enhance the production of neutrinos in some way? In trying to work out the quantum mechanical calculations, he found initially that no such effect was likely.

“It turned out to be a red herring — we can’t accelerate the process of radioactive decay, and neutrino production, just by making a Bose-Einstein condensate,” Formaggio says.

In sync with optics

Several years later, Jones revisited the idea, with an added ingredient: superradiance — a phenomenon of quantum optics that occurs when a collection of light-emitting atoms is stimulated to behave in sync. In this coherent phase, it’s predicted that the atoms should emit a burst of photons that is “superradiant,” or more radiant than when the atoms are normally out of sync.

Jones proposed to Formaggio that perhaps a similar superradiant effect is possible in a radioactive Bose-Einstein condensate, which could then result in a similar burst of neutrinos. The physicists went to the drawing board to work out the equations of quantum mechanics governing how light-emitting atoms morph from a coherent starting state into a superradiant state. They used the same equations to work out what radioactive atoms in a coherent BEC state would do.

“The outcome is: You get a lot more photons more quickly, and when you apply the same rules to something that gives you neutrinos, it will give you a whole bunch more neutrinos more quickly,” Formaggio explains. “That’s when the pieces clicked together, that superradiance in a radioactive condensate could enable this accelerated, laser-like neutrino emission.”

To test their concept in theory, the team calculated how neutrinos would be produced from a cloud of 1 million super-cooled rubidium-83 atoms. They found that, in the coherent BEC state, the atoms radioactively decayed at an accelerating rate, releasing a laser-like beam of neutrinos within minutes.

Now that the physicists have shown in theory that a neutrino laser is possible, they plan to test the idea with a small tabletop setup.

“It should be enough to take this radioactive material, vaporize it, trap it with lasers, cool it down, and then turn it into a Bose-Einstein condensate,” Jones says. “Then it should start doing this superradiance spontaneously.”

The pair acknowledge that such an experiment will require a number of precautions and careful manipulation.

“If it turns out that we can show it in the lab, then people can think about: Can we use this as a neutrino detector? Or a new form of communication?” Formaggio says. “That’s when the fun really starts.”

Study finds exoplanet TRAPPIST-1e is unlikely to have a Venus- or Mars-like atmosphere

Mon, 09/08/2025 - 10:50am

In the search for habitable exoplanets, atmospheric conditions play a key role in determining if a planet can sustain liquid water. Suitable candidates often sit in the “Goldilocks zone,” a distance that is neither too close nor too far from their host star to allow liquid water. With the launch of the James Webb Space Telescope (JWST), astronomers are collecting improved observations of exoplanet atmospheres that will help determine which exoplanets are good candidates for further study.

In an open-access paper published today in The Astrophysical Journal Letters, astronomers used JWST to take a closer look at the atmosphere of the exoplanet TRAPPIST-1e, located in the TRAPPIST-1 system. While they haven’t found definitive proof of what it is made of — or if it even has an atmosphere — they were able to rule out several possibilities.

“The idea is: If we assume that the planet is not airless, can we constrain different atmospheric scenarios? Do those scenarios still allow for liquid water at the surface?” says Ana Glidden, a postdoc in the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) and the MIT Kavli Institute for Astrophysics and Space Research, and the first author on the paper. The answers they found were yes.

The new data rule out a hydrogen-dominated atmosphere and place tighter constraints on other atmospheric scenarios commonly created through secondary generation, such as volcanic eruptions and outgassing from the planet’s interior. The data remain consistent with the possibility of a surface ocean.

“TRAPPIST-1e remains one of our most compelling habitable-zone planets, and these new results take us a step closer to knowing what kind of world it is,” says Sara Seager, Class of 1941 Professor of Planetary Science at MIT and co-author on the study. “The evidence pointing away from Venus- and Mars-like atmospheres sharpens our focus on the scenarios still in play.”

The study’s co-authors also include collaborators from the University of Arizona, Johns Hopkins University, University of Michigan, the Space Telescope Science Institute, and members of the JWST-TST DREAMS Team.

Improved observations

Exoplanet atmospheres are studied using a technique called transmission spectroscopy. When a planet passes in front of its host star, the starlight is filtered through the planet’s atmosphere. Astronomers can determine which molecules are present in the atmosphere by seeing how the light changes at different wavelengths.

“Each molecule has a spectral fingerprint. You can compare your observations with those fingerprints to suss out which molecules may be present,” says Glidden.
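As a toy illustration of that fingerprint matching — our own sketch, not the study’s retrieval pipeline; the wavelength grid, transit depths, and CO2-like feature are invented — one can ask which candidate model spectrum best fits a set of measured transit depths:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical transmission spectrum: transit depth ~ (R_planet / R_star)^2
# measured at each wavelength (values invented for illustration).
wavelengths_um = np.linspace(0.6, 5.0, 50)
observed_depth = rng.normal(7.0e-3, 1.0e-4, size=50)

# Two candidate scenarios: a flat (airless) spectrum, and one with a
# CO2-like absorption feature near 4.3 microns that deepens the transit.
flat_model = np.full(50, 7.0e-3)
co2_model = flat_model + 3.0e-4 * np.exp(-0.5 * ((wavelengths_um - 4.3) / 0.1) ** 2)

def chi_square(model, data, sigma=1.0e-4):
    """Simple goodness-of-fit between a model spectrum and the data."""
    return float(np.sum(((data - model) / sigma) ** 2))

for name, model in [("airless / flat", flat_model), ("CO2-rich", co2_model)]:
    print(f"{name}: chi^2 = {chi_square(model, observed_depth):.1f}")
# The scenario with the lower chi-square fits better; real retrievals also
# model stellar contamination, clouds, and many more gases simultaneously.
```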

JWST has a larger wavelength coverage and higher spectral resolution than its predecessor, the Hubble Space Telescope, which makes it possible to observe molecules like carbon dioxide and methane that are more commonly found in our own solar system. However, the improved observations have also highlighted the problem of stellar contamination, where changes in the host star’s temperature due to things like sunspots and solar flares make it difficult to interpret data.

“Stellar activity strongly interferes with the planetary interpretation of the data because we can only observe a potential atmosphere through starlight,” says Glidden. “It is challenging to separate out which signals come from the star versus from the planet itself.”

Ruling out atmospheric conditions

The researchers used a novel approach to mitigate the effects of stellar activity, and, as a result, “any signal you can see varying visit-to-visit is most likely from the star, while anything that’s consistent between the visits is most likely the planet,” says Glidden.
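A minimal sketch of that visit-to-visit logic, under simplifying assumptions of our own (synthetic numbers, four hypothetical visits), is shown below:

```python
import numpy as np

rng = np.random.default_rng(1)

# spectra[v, w]: transit depth in wavelength bin w measured during visit v.
# Values are synthetic stand-ins for repeated JWST observations.
spectra = rng.normal(7.0e-3, 2.0e-4, size=(4, 50))

planet_like = spectra.mean(axis=0)   # consistent across visits -> attributed to the planet
star_like = spectra.std(axis=0)      # varies between visits -> attributed to stellar activity

print("consistent signal (first bins):", planet_like[:3])
print("visit-to-visit scatter (first bins):", star_like[:3])
```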

The researchers were then able to compare the results to several different possible atmospheric scenarios. They found that carbon dioxide-rich atmospheres, like those of Mars and Venus, are unlikely, while a warm, nitrogen-rich atmosphere similar to that of Saturn’s moon Titan remains possible. The evidence, however, is too weak to determine whether any atmosphere is present at all, let alone to identify a specific gas. Additional observations, already in the works, will help narrow down the possibilities.

“With our initial observations, we have showcased the gains made with JWST. Our follow-up program will help us to further refine our understanding of one of our best habitable-zone planets,” says Glidden.

AI and machine learning for engineering design

Sun, 09/07/2025 - 12:00am

Artificial intelligence optimization offers a host of benefits for mechanical engineers, including faster and more accurate designs and simulations, improved efficiency, reduced development costs through process automation, and enhanced predictive maintenance and quality control.

“When people think about mechanical engineering, they're thinking about basic mechanical tools like hammers and … hardware like cars, robots, cranes, but mechanical engineering is very broad,” says Faez Ahmed, the Doherty Chair in Ocean Utilization and associate professor of mechanical engineering at MIT. “Within mechanical engineering, machine learning, AI, and optimization are playing a big role.”

In Ahmed’s course, 2.155/156 (AI and Machine Learning for Engineering Design), students use tools and techniques from artificial intelligence and machine learning for mechanical engineering design, focusing on the creation of new products and addressing engineering design challenges.

“There’s a lot of reason for mechanical engineers to think about machine learning and AI to essentially expedite the design process,” says Lyle Regenwetter, a teaching assistant for the course and a PhD candidate in Ahmed’s Design Computation and Digital Engineering Lab (DeCoDE), where research focuses on developing new machine learning and optimization methods to study complex engineering design problems.

First offered in 2021, the class has quickly become one of the Department of Mechanical Engineering (MechE)’s most popular non-core offerings, attracting students from departments across the Institute, including mechanical and civil and environmental engineering, aeronautics and astronautics, the MIT Sloan School of Management, and nuclear and computer science, along with cross-registered students from Harvard University and other schools.

The course, which is open to both undergraduate and graduate students, focuses on the implementation of advanced machine learning and optimization strategies in the context of real-world mechanical design problems. From designing bike frames to city grids, students participate in contests related to AI for physical systems and tackle optimization challenges in a class environment fueled by friendly competition.

Students are given challenge problems and starter code that “gave a solution, but [not] the best solution …” explains Ilan Moyer, a graduate student in MechE. “Our task was to [determine], how can we do better?” Live leaderboards encourage students to continually refine their methods.
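As a purely hypothetical illustration of that format — the objective, baseline, and search loop below are invented for illustration, not actual course material — a leaderboard entry might start from the provided baseline and iterate toward something better:

```python
import random

def design_score(params):
    """Toy stand-in for a design objective (higher is better)."""
    x, y = params
    return -((x - 3.0) ** 2) - ((y + 1.0) ** 2)

def starter_solution():
    """The provided 'works, but not the best' baseline."""
    return (0.0, 0.0)

random.seed(0)
best = starter_solution()
best_score = design_score(best)

# Naive random search: the kind of first improvement a student might later
# replace with the machine learning and optimization methods taught in class.
for _ in range(10_000):
    candidate = (best[0] + random.gauss(0, 0.1), best[1] + random.gauss(0, 0.1))
    score = design_score(candidate)
    if score > best_score:
        best, best_score = candidate, score

print(f"baseline score {design_score(starter_solution()):.3f} -> improved {best_score:.3f}")
```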

Em Lauber, a system design and management graduate student, says the process gave students space to explore applications of what they were learning and to practice the skill of “literally how to code it.”

The curriculum incorporates discussions on research papers, and students also pursue hands-on exercises in machine learning tailored to specific engineering issues including robotics, aircraft, structures, and metamaterials. For their final project, students work together on a team project that employs AI techniques for design on a complex problem of their choice.

“It is wonderful to see the diverse breadth and high quality of class projects,” says Ahmed. “Student projects from this course often lead to research publications, and have even led to awards.” He cites the example of a recent paper, titled “GenCAD-Self-Repairing,” that went on to win the American Society of Mechanical Engineers Systems Engineering, Information and Knowledge Management 2025 Best Paper Award.

“The best part about the final project was that it gave every student the opportunity to apply what they’ve learned in the class to an area that interests them a lot,” says Malia Smith, a graduate student in MechE. For her project, she used “markered motion capture data” to predict ground force for runners, an effort she called “really gratifying” because it worked so much better than expected.

Lauber took the framework of a “cat tree” design with different modules of poles, platforms, and ramps to create customized solutions for individual cat households, while Moyer created software that designs a new type of 3D printer architecture.

“When you see machine learning in popular culture, it’s very abstracted, and you have the sense that there’s something very complicated going on,” says Moyer. “This class has opened the curtains.” 

A human-centered approach to data visualization

Fri, 09/05/2025 - 12:00am

The world is awash in data visualizations, from charts accompanying news stories on the economy to graphs tracking the weekly temperature to scatterplots showing relationships between baseball statistics.

At their core, data visualizations convey information, and everyone consumes that information differently. One person might scan the axes, while another may focus on an outlying data point or examine the magnitude of each colored bar.

But how do you consume that information if you can’t see it?

Making a data visualization accessible for blind and low-vision readers often involves writing a descriptive caption that captures some key points in a succinct paragraph.

“But that means blind and low-vision readers don’t get the ability to interpret the data for themselves. What if they had a different question about the data? Suddenly a simple caption doesn’t give them that. The core idea behind our group’s work in accessibility has been to maintain agency for blind and low-vision people,” says Arvind Satyanarayan, a newly tenured associate professor in the MIT Department of Electrical Engineering and Computer Science (EECS) and member of the Computer Science and Artificial Intelligence Laboratory (CSAIL).

Satyanarayan’s group has explored making data visualizations accessible for screen readers, which narrate content on a computer screen. His team created a hierarchical platform that allows screen reader users to explore various levels of detail in a visualization with their keyboard, drilling down from high-level information to individual data points.
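A minimal sketch of that hierarchical idea, with an invented chart and an assumed tree structure (not the group’s actual platform), might look like this:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    description: str                    # text a screen reader would announce
    children: List["Node"] = field(default_factory=list)

# The chart is exposed as a tree: overview at the root, axes and series
# below it, individual data points at the leaves.
chart = Node(
    "Line chart: average monthly temperature, 2024",
    [
        Node("X axis: month, January through December"),
        Node("Y axis: temperature, 0 to 30 degrees Celsius"),
        Node(
            "Series: 12 data points",
            [Node("January: 2 degrees"), Node("July: 27 degrees")],
        ),
    ],
)

def drill_down(node: Node, depth: int = 0) -> None:
    """Walk the hierarchy the way a keyboard user might step through it."""
    print("  " * depth + node.description)
    for child in node.children:
        drill_down(child, depth + 1)

drill_down(chart)
```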

Under the umbrella of human-computer interaction (HCI) research, Satyanarayan’s Visualization Group also develops programming languages and authoring tools for visualizations, studies the sociocultural elements of visualization design, and uses visualizations to analyze machine-learning models.

For Satyanarayan, HCI is about promoting human agency, whether that means enabling a blind reader to interpret data trends or ensuring designers still feel in control of AI-driven visualization systems.

“We really take a human-centered approach to data visualization,” he says.

An eye for technology

Satyanarayan found the field of data visualization almost by accident.

As a child growing up in India, Bahrain, and Abu Dhabi, his initial interest in science sprouted from his love for tinkering.

Satyanarayan recalls his father bringing home a laptop, which he loaded with simple games. The internet grew up along with him, and as a teenager he became heavily engaged in the popular blogging platform Movable Type.

A teacher at heart even as a teenager, Satyanarayan offered tutorials on how to use the platform and ran a contest for people to style their blog. Along the way, he taught himself the skills to develop plugins and extensions.

He enjoyed designing eye-catching and user-friendly blogs, laying the foundation for his studies in human-computer interaction.

When he arrived at the University of California at San Diego for college, he was interested enough in the HCI field to take an introductory class.

“I’d always been a student of history, and this intro class really appealed to me because it was more about the history of user interfaces, and tracing the provenance and development of the ideas behind them,” he says.

Almost as an afterthought, he spoke with the professor, Jim Hollan — a pioneer of the field. Even though he hadn’t thought much about research beforehand, Satyanarayan ended up spending the summer in Hollan’s lab, studying how people interact with wall-sized displays.

As he prepared to pursue graduate studies (Satyanarayan split his PhD between Stanford University and the University of Washington), he was unsure whether to focus on programming languages or HCI. When it came time to choose, the human-centered focus of HCI and the interdisciplinarity of data visualization drew him in.

“Data visualization is deeply technical, but it also draws from cognitive science, perceptual psychology, and visual arts and aesthetics, and then it also has a big stake in civic and social responsibility,” he says.

He saw how visualization plays a role in civic and social responsibility through his first project with his PhD advisor, Jeffrey Heer. Satyanarayan and his collaborators built a data visualization interface for journalists at newsrooms that couldn’t afford to hire data departments. That drag-and-drop tool allowed journalists to design the visualization and all the data storytelling they wanted to do around it.

That project seeded many elements that became his thesis, for which he studied new programming languages for visualization and developed interactive graphical systems on top of them.

After earning his PhD, Satyanarayan sought a faculty job and spent an exhausting interview season crisscrossing the country, participating in 15 interviews in only two months.

MIT was his very last stop.

“I remember being exhausted and on autopilot, thinking that this is not going well. But then, the first day of my interview at MIT was filled with some of the best conversations I had. People were so eager and interested in understanding my research and how it connected to theirs,” he says.

Charting a collaborative course

The collaborative nature of MIT remained important as he built his research group; one of the group’s first graduate students was pursuing a PhD in MIT’s program in History, Anthropology, and Science, Technology, and Society. They continue to work closely with faculty who study anthropology, topics in the humanities, and clinical machine learning.

With interdisciplinary collaborators, the Visualization Group has explored the sociotechnical implications of data visualizations. For instance, charts are frequently shared, disseminated, and discussed on social media, where they are stripped of their context.

“What happens as a result is they can become vectors for misinformation or misunderstanding. But that is not because they are poorly designed to begin with. We spent a lot of time unpacking those details,” Satyanarayan says.

His group is also studying tactile graphics, which are common in museums to help blind and low-vision individuals interact with exhibits. Often, making a tactile graphic boils down to 3D-printing a chart.

“But a chart was designed to be read with our eyes, and our eyes work very differently than our fingers. We are now drilling into what it means to design tactile-first visualizations,” he says.

Co-design is a driving principle behind all his group’s accessibility work. On many projects, they work closely with Daniel Hajas, a researcher at University College London who has been blind since the age of 16.

“That has been really important for us, to make sure as people who are not blind, that we are developing tools and platforms that are actually useful for blind and low-vision people,” he says.

His group is also studying the sociocultural implications of data visualization. For instance, during the height of the Covid-19 pandemic, data visualizations were often turned into memes and social artifacts that were used to support or contest data from experts.

“In reality, neither data nor visualizations are neutral. We’ve been thinking about the data you use to visualize, and the design choices behind specific visualizations, and what that is communicating besides insights about the data,” he says.

Visualizing a real-world impact

Interdisciplinarity is also a theme of Satyanarayan’s interactive data visualization class, which he co-teaches with faculty members Sarah Williams and Catherine D'Ignazio in the Department of Urban Studies and Planning; and Crystal Lee in Comparative Media Studies/Writing, with shared appointments in the School of Humanities, Arts, and Social Sciences and the MIT Schwarzman College of Computing.

In the popular course, students not only learn the technical skills to make data visualizations, but they also build final projects centered on an area of social importance. For the past two years, students have focused on the housing affordability crisis in the Boston area, in partnership with the Metropolitan Area Planning Council. The students enjoy the opportunity to make a real-world impact with their work, Satyanarayan says.

And he enjoys the course as much as they do.

“I love teaching. I really enjoy getting to interact with the students. Our students are so intellectually curious and committed. It reassures me that our future is in good hands,” he says.

One of Satyanarayan’s personal interests is running along the Charles River Esplanade in Boston, which he does almost every day. He also enjoys cooking, especially with ingredients he has never used before.

Satyanarayan and his wife, who met while they were graduate students at Stanford (her PhD is in microbiology), also delight in tending their plot in the Fenway Victory Gardens, which is overflowing with lilies, lavender, lilacs, peonies, and roses.

Their newest addition is a miniature poodle puppy named Fen, which they got when Satyanarayan earned tenure earlier this year.

Thinking toward the future of his research, Satyanarayan is keen to further explore how generative AI might effectively assist people in building visualizations, and its implications for human creativity.

“In the world of generative AI, this question of agency applies to all of us,” he says. “How do we make sure, for these AI-driven systems, that we haven’t lost the parts of the work we find most interesting?”

J-WAFS welcomes Daniela Giardina as new executive director

Thu, 09/04/2025 - 10:00pm

The Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) announced that Daniela Giardina has been named the new J-WAFS executive director. Giardina stepped into the role at the start of the fall semester, replacing founding executive director Renee J. Robins ’83, who is retiring after leading the program since its launch in 2014.

“Daniela brings a deep background in water and food security, along with excellent management and leadership skills,” says Robins. “Since I first met her nearly 10 years ago, I have been impressed with her commitment to working on global water and food challenges through research and innovation. I am so happy to know that I will be leaving J-WAFS in her experienced and capable hands.”

A decade of impact

J-WAFS fuels research, innovation, and collaboration to solve global water and food systems challenges. The mission of J-WAFS is to ensure safe and resilient supplies of water and food to meet the local and global needs of a dramatically growing population on a rapidly changing planet. J-WAFS funding opportunities are open to researchers in every MIT department, lab, and center, spanning all disciplines. Supported research projects include those involving engineering, science, technology, business, social science, economics, architecture, urban planning, and more. J-WAFS research and related activities include early-stage projects, sponsored research, commercialization efforts, student activities and mentorship, events that convene local and global experts, and international-scale collaborations.

The global water, food, and climate emergency makes J-WAFS’ work both timely and urgent. J-WAFS-funded researchers are achieving tangible, real-time solutions and results. Since its inception, J-WAFS has distributed nearly $26 million in grants, fellowships, and awards to the MIT community, supporting roughly 10 percent of MIT’s faculty and 300 students, postdocs, and research staff from 40 MIT departments, labs, and centers. J-WAFS grants have also helped researchers launch 13 startups and receive over $25 million in follow-on funding.

Giardina joins J-WAFS at an exciting time in the program’s history; in the spring, J-WAFS celebrated 10 years of supporting water and food research at MIT. The milestone was commemorated at a special event attended by MIT leadership, researchers, students, staff, donors, and others in the J-WAFS community. As J-WAFS enters its second decade, interest and opportunities for water and food research continue to grow. “I am truly honored to join J-WAFS at such a pivotal moment,” Giardina says.

Putting research into real-world practice

Giardina has nearly two decades of experience working with nongovernmental organizations and research institutions on humanitarian and development projects. Her work has taken her to Africa, Latin America, the Caribbean, and Central and Southeast Asia, where she has focused on water and food security projects. She has conducted technical trainings and assessments, and managed projects from design to implementation, including monitoring and evaluation.

Giardina comes to MIT from Oxfam America, where she directed disaster risk reduction and climate resilience initiatives, working on approaches to strengthen local leadership, community-based disaster risk reduction, and anticipatory action. Her role at Oxfam required her to oversee multimillion-dollar initiatives, supervising international teams, managing complex donor portfolios, and ensuring rigorous monitoring across programs. She connected hands-on research with community-oriented implementation, for example, by partnering with MIT’s D-Lab to launch an innovation lab in rural El Salvador. Her experience will help guide J-WAFS as it pursues impactful research that will make a difference on the ground.

Beyond program delivery, Giardina has played a strategic leadership role in shaping Oxfam’s global disaster risk reduction strategy and representing the organization at high-level U.N. and academic forums. She is multilingual and adept at building partnerships across cultures, having worked with governments, funders, and community-based organizations to strengthen resilience and advance equitable access to water and food.

Giardina holds a PhD in sustainable development from the University of Brescia in Italy. She also holds a master’s degree in environmental engineering from the Politecnico di Milano in Italy and has been a chartered engineer since 2005 (equivalent to holding a professional engineering license in the United States). She also serves as vice chair of the Boston Network for International Development, a nonprofit that connects and strengthens Boston’s global development community.

“I have seen first-hand how climate change, misuse of resources, and inequality are undermining water and food security around the globe,” says Giardina. “What particularly excites me about J-WAFS is its interdisciplinary approach in facilitating meaningful partnerships to solve many of these problems through research and innovation. I am eager to help expand J-WAFS’ impact by strengthening existing programs, developing new initiatives, and building strategic partnerships that translate MIT's groundbreaking research into real-world solutions,” she adds.

A legacy of leadership

Renee Robins will retire with over 23 years of service to MIT. Years before joining the staff, she graduated from MIT with dual bachelor’s degrees in biology and in humanities/anthropology. She then went on to earn a master’s degree in public policy from Carnegie Mellon University. In 1998, she came back to MIT to serve in various roles across campus, including with the Cambridge-MIT Institute, the MIT Portugal Program, the Mexico City Program, the Program on Emerging Technologies, and the Technology and Policy Program. She also worked at the Harvard Graduate School of Education, where she managed a $15 million research program as it scaled from implementation in one public school district to 59 schools in seven districts across North Carolina.

In late 2014, Robins joined J-WAFS as its founding executive director, playing a pivotal role in building it from the ground up and expanding the team to six full-time professionals. She worked closely with J-WAFS founding director Professor John H. Lienhard V to develop and implement funding initiatives, develop and shepherd corporate-sponsored research partnerships, and mentor students in the Water Club and the Food and Agriculture Club, as well as numerous other students. Throughout the years, Robins has inspired a diverse range of researchers to consider how their capabilities and expertise can be applied to water and food challenges. Perhaps most importantly, her leadership has helped cultivate a vibrant community, bringing together faculty, students, and research staff to be exposed to unfamiliar problems and new methodologies, to explore how their expertise might be applied, to learn from one another, and to collaborate.

At the J-WAFS 10th anniversary event in May, Robins noted, “it has been a true privilege to work alongside John Lienhard, our dedicated staff, and so many others. It’s been particularly rewarding to see the growth of an MIT network of water and food researchers that J-WAFS has nurtured, which grew out of those few individuals who saw themselves to be working in solitude on these critical challenges.”

Lienhard also spoke, thanking Robins by saying she “was my primary partner in building J-WAFS and [she is] a strong leader and strategic thinker.”

Not only is Robins a respected leader, she is also a dear friend to so many at MIT and beyond. In 2021, she was recognized for her outstanding leadership and commitment to J-WAFS and the Institute with an MIT Infinite Mile Award in the area of the Offices of the Provost and Vice President for Research.

Outside of MIT, Robins has served on the Board of Trustees for the International Honors Program — a comparative multi-site study abroad program, where she previously studied comparative culture and anthropology in seven countries around the world. Robins has also acted as an independent consultant, including work on program design and strategy around the launch of the Université Mohammed VI Polytechnique in Morocco.

Continuing the tradition of excellence

Giardina will report to J-WAFS director Rohit Karnik, the Abdul Latif Jameel Professor of Water and Food in the MIT Department of Mechanical Engineering. Karnik was named the director of J-WAFS in January, succeeding John Lienhard, who retired earlier this year.

As executive director, Giardina will be instrumental in driving J-WAFS’ mission and impact. She will work with Karnik to help shape J-WAFS’ programs, long-term strategy, and goals. She will also be responsible for supervising J-WAFS staff, managing grant administration, and overseeing and advising on financial decisions.

“I am very grateful to John and Renee, who have helped to establish J-WAFS as the Institute’s preeminent program for water and food research and significantly expanded MIT’s research efforts and impact in the water and food space,” says Karnik. “I am confident that with Daniela as executive director, J-WAFS will continue in the tradition of excellence that Renee and John put into place, as we move into the program’s second decade,” he notes.

Giardina adds, “I am inspired by the lab’s legacy of Renee Robins and Professor Lienhard, and I look forward to working with Professor Karnik and the J-WAFS staff.”

A comprehensive cellular-resolution map of brain activity

Thu, 09/04/2025 - 4:50pm

The first comprehensive map of mouse brain activity has been unveiled by a large international collaboration of neuroscientists. 

Researchers from the International Brain Laboratory (IBL), including MIT neuroscientist Ila Fiete, published their open-access findings today in two papers in Nature, revealing insights into how decision-making unfolds across the entire brain in mice at single-cell resolution. This brain-wide activity map challenges the traditional hierarchical view of information processing in the brain and shows that decision-making is distributed across many regions in a highly coordinated way.

“This is the first time anyone has produced a full, brain-wide map of the activity of single neurons during decision-making,” explains co-founder of IBL Alexandre Pouget. “The scale is unprecedented as we recorded from over half-a-million neurons across mice in 12 labs, covering 279 brain areas, which together represent 95 percent of the mouse brain volume. The decision-making activity, and particularly reward, lit up the brain like a Christmas tree,” adds Pouget, who is also a group leader at the University of Geneva in Switzerland.

Modeling decision-making

The brain map was made possible by a major international collaboration of neuroscientists from multiple universities, including MIT. Researchers across 12 labs used state-of-the-art silicon electrodes, called Neuropixels probes, to make simultaneous neural recordings and measure brain activity while mice carried out a decision-making task.

“Participating in the International Brain Laboratory has added new ways for our group to contribute to science,” says Fiete, who is also a professor of brain and cognitive sciences, an associate investigator at the McGovern Institute for Brain Research, and director of the K. Lisa Yang ICoN Center at MIT. “Our lab has helped standardize methods to analyze and generate robust conclusions from data. As computational neuroscientists interested in building models of how the brain works, access to brain-wide recordings is incredible: the traditional approach of recording from one or a few brain areas limited our ability to build and test theories, resulting in fragmented models. Now, we have the delightful but formidable task to make sense of how all parts of the brain coordinate to perform a behavior. Surprisingly, having a full view of the brain leads to simplifications in the models of decision-making,” says Fiete.

The labs collected data from mice performing a decision-making task with sensory, motor, and cognitive components. In the task, a mouse sits in front of a screen and a light appears on the left or right side. If the mouse then responds by moving a small wheel in the correct direction, it receives a reward.

In some trials, the light is so faint that the animal must guess which way to turn the wheel, for which it can use prior knowledge: the light tends to appear more frequently on one side for a number of trials, before the high-frequency side switches. Well-trained mice learn to use this information to help them make correct guesses. These challenging trials therefore allowed the researchers to study how prior expectations influence perception and decision-making.
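A schematic simulation of that block structure is sketched below; the block length, bias, and contrast levels are assumptions of ours for illustration, not the exact IBL task parameters:

```python
import random

def run_session(n_trials=400, block_length=60, bias=0.8, seed=0):
    """Generate a toy session: the stimulus favors one side per block, then switches."""
    random.seed(seed)
    p_left = bias
    trials = []
    for t in range(n_trials):
        if t > 0 and t % block_length == 0:
            p_left = 1.0 - p_left                          # the high-probability side switches
        side = "left" if random.random() < p_left else "right"
        contrast = random.choice([1.0, 0.25, 0.0625, 0.0])  # 0.0: invisible, must guess
        trials.append((side, contrast))
    return trials

session = run_session()
# On zero-contrast trials, a well-trained mouse can beat chance only by using
# the block statistics, i.e., its prior expectation about which side is likely.
print(session[:5])
```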

Brain-wide results

The first paper, “A brain-wide map of neural activity during complex behaviour,” showed that decision-making signals are surprisingly distributed across the brain, not localized to specific regions. This adds brain-wide evidence to a growing number of studies that challenge the traditional hierarchical model of brain function, and emphasizes that there is constant communication across brain areas during decision-making, movement onset, and even reward. This means that neuroscientists will need to take a more holistic, brain-wide approach when studying complex behaviors in the future.

“The unprecedented breadth of our recordings pulls back the curtain on how the entire brain performs the whole arc of sensory processing, cognitive decision-making, and movement generation,” says Fiete. “Structuring a collaboration that collects a large standardized dataset which single labs could not assemble is a revolutionary new direction for systems neuroscience, initiating the field into the hyper-collaborative mode that has contributed to leaps forward in particle physics and human genetics. Beyond our own conclusions, the dataset and associated technologies, which were released much earlier as part of the IBL mission, have already become a massively used resource for the entire neuroscience community.”

The second paper, “Brain-wide representations of prior information,” showed that prior expectations — our beliefs about what is likely to happen based on our recent experience — are encoded throughout the brain. Surprisingly, these expectations are found not only in cognitive areas, but also in brain areas that process sensory information and control actions. For example, expectations are even encoded in early sensory areas such as the thalamus, the brain’s first relay for visual input from the eye. This supports the view that the brain acts as a prediction machine, with expectations encoded across multiple brain structures playing a central role in guiding behavioral responses. These findings could have implications for understanding conditions such as schizophrenia and autism, which are thought to involve differences in the way expectations are updated in the brain.

“Much remains to be unpacked: If it is possible to find a signal in a brain area, does it mean that this area is generating the signal, or simply reflecting a signal generated somewhere else? How strongly is our perception of the world shaped by our expectations? Now we can generate some quantitative answers and begin the next phase experiments to learn about the origins of the expectation signals by intervening to modulate their activity,” says Fiete.

Looking ahead, the team at IBL plan to expand beyond their initial focus on decision-making to explore a broader range of neuroscience questions. With renewed funding in hand, IBL aims to expand its research scope and continue to support large-scale, standardized experiments.

New model of collaborative neuroscience

Officially launched in 2017, IBL introduced a new model of collaboration in neuroscience that uses a standardized set of tools and data processing pipelines shared across multiple labs, enabling the collection of massive datasets while ensuring data alignment and reproducibility. This approach to democratize and accelerate science draws inspiration from large-scale collaborations in physics and biology, such as CERN and the Human Genome Project.

All data from these studies, along with detailed specifications of the tools and protocols used for data collection, are openly accessible to the global scientific community for further analysis and research. Summaries of these resources can be viewed and downloaded on the IBL website under the sections: Data, Tools, Protocols.

This research was supported by grants from Wellcome, the Simons Foundation, the National Institutes of Health, the National Science Foundation, the Gatsby Charitable Foundation, and by the Max Planck Society and the Humboldt Foundation.
