Feed aggregator


MIT Latest News - Thu, 04/09/2026 - 2:00pm

What if the Trojan horse had been pulled to pieces, revealing the ruse and fending off the invasion, just as it entered the gates of Troy?

That’s an apt description of a newly characterized bacterial defense system that chops up foreign DNA.

Bacteria and the viruses that infect them, bacteriophages — phages for short — are ceaselessly at odds, with bacteria developing methods to protect themselves against phages that are constantly striving to overcome those safeguards.

New research from the Department of Biology at MIT, recently published in Nature, describes a defense system that is integrated into the protective membrane that encapsulates bacteria. SNIPE, which stands for surface-associated nuclease inhibiting phage entry, contains a nuclease domain that cleaves genetic material, chopping the invading phage genome into harmless fragments before it can appropriate the host’s molecular machinery to make more phages. 

Daniel Saxton, a postdoc in the Laub Lab and the paper’s first author, was initially drawn to studying this bacterial defense system in E. coli, in part because it is highly unusual to have a nuclease that localizes to the membrane, as most nucleases are free-floating in the cytoplasm, the gelatinous fluid that fills the space inside cells.

“The other thing that caught my attention is that this is something we call a direct defense system, meaning that when a phage infects a cell, that cell will actually survive the attack,” Saxton says. “It’s hard to fend off a phage directly in a cell and survive — but this defense system can do it.” 

Light it up

For Saxton, the project came into focus during a fluorescence-based experiment in which viral genetic material would light up if it successfully penetrated the bacteria. 

“SNIPE was obliterating the phage DNA so fast that we couldn’t even see a fluorescent spot,” Saxton recalls. “I don’t think I’ve ever seen such an effective defense system before — you can barrage the bacteria with hundreds of phage per cell, but SNIPE is like god-tier protection.”

When the nuclease domain of SNIPE was mutated so it couldn’t chop up DNA, fluorescent spots appeared as usual, and the bacteria succumbed to the phage infection. 

Bacteria maintain tight control over all their defense systems, lest they be turned against their host. Some systems remain dormant until they flare up, for example by halting all protein translation in the cell, while others can distinguish bacterial DNA from foreign, invading phage DNA. Before researchers uncovered SNIPE, only two mechanisms in the latter category had been characterized.

“Right now, the phage field is at a really interesting spot where people are discovering phage defense systems at a breakneck pace,” Saxton says. 

Problems at the periphery

Saxton says they had to approach the work in a somewhat roundabout way because there are currently no published structures depicting all the steps of phage genome injection. Studying processes at the membrane is challenging: Membranes are dense and chaotic, and phage genome injection is a highly transient process, lasting only a few minutes. 

SNIPE seems to discern viral DNA by interacting with proteins the phage uses to tunnel through the bacteria’s protective membrane. This “subcellular localization,” according to Saxton, may also prevent SNIPE from inadvertently chopping up the bacteria’s own genetic material.

The model outlined in the paper is that one region of SNIPE binds to a bacterial membrane protein called ManYZ, while another region likely binds to the tape measure protein from the phage. 

The tape measure protein got its name because it determines the length of the phage tail — the part of the phage between the small, leglike protrusions and the bulbous head, which contains the phage’s genetic material. The researchers revealed that the phage’s tape measure protein enters the cytoplasm during injection, a phenomenon that had not been physically demonstrated before. 

There may also be other proteins or interactions involved. 

“If you shunt the phage genome injection through an alternate pathway that isn’t ManYZ, suddenly SNIPE doesn’t defend against the phage nearly as well,” Saxton says. “It’s unclear exactly how these proteins interact, but we do know that these two proteins are involved in this genome injection process.” 

Future directions

Saxton hopes that future work will expand our understanding of what occurs during phage genome injection and uncover the structures of the proteins involved, especially the tunnel complex in the membrane through which phages insert their genome.

Members of the Laub Lab are already collaborating with another lab to determine the structure of SNIPE. In the meantime, Saxton has been working on a new defense system in which molecular mimicry — bacterial proteins imitating phage proteins — may play a role. 

Michael T. Laub, the Salvador E. Luria Professor of Biology and a Howard Hughes Medical Institute investigator, notes that one of the breakthrough experiments for demonstrating how SNIPE works came from a brainstorming session at a lab retreat.

“Daniel and I were kind of stuck with how to directly measure the effect of SNIPE during infection, but another postdoc in the lab, Ian Roney, who is a co-author on the paper, came up with a very clever idea that ultimately worked perfectly,” Laub recalls. “It’s a great example of how powerful internal collaborations can be in pushing our science forward.”

Comparison Shopping Is Not a (Computer) Crime

EFF: Updates - Thu, 04/09/2026 - 1:20pm

As long as people have had more than one purchasing option, they’ve been comparing those options and looking for bargains. Online shoppers are no exception; in fact, one of the potential benefits of the internet is that it expands our options for everything from car rentals to airline tickets to dish soap. New AI tools can make the process even easier. These tools could provide some welcome relief for consumers facing sky-high prices that many cannot afford.

Unfortunately, Amazon is trying to block these helpful new tools, which can steer shoppers towards competitors. Taking a page from Facebook and Ryanair, it is trying to use computer crime laws to do it.

Amazon’s target is Perplexity, which makes an AI-enabled web browser, called Comet, that allows users to browse the web as they normally would, but can also perform certain actions on the user’s behalf. For example, a user could ask Comet to find the best price on a 24-pack of toilet paper, and if satisfied with the results, have the browser order it. Amazon claims that Perplexity violated the Computer Fraud and Abuse Act (CFAA) by building a tool that helps users access information on Amazon and engage with the site.

Unfortunately, a federal district court agreed. The court’s fundamental mistake: relying on the Ninth Circuit’s misguided decision in Facebook v. Power Ventures, rather than that court’s much better and more applicable reasoning in hiQ Labs.

Perplexity has appealed to the Ninth Circuit. As we explain in an amicus brief filed in support, the district court’s mistake, if affirmed, could lead to myriad unintended consequences. Overbroad readings of the CFAA have undermined research, security, competition, and innovation. For years, we’ve worked to limit its scope to Congress’s original intention: actual hacking that bypasses computer security. It should have nothing to do with Amazon’s claims here, not least because most of Amazon’s website is publicly available.

The court’s approach would be especially dangerous for journalists and academic researchers. Researchers often create a variety of testing accounts. For example, if they’re researching how a service displays housing offers, they may create separate accounts associated with different race, gender, or language settings. These sorts of techniques may be adversarial to the company, but they shouldn’t be illegal. Yet under the court’s opinion, a company that disagrees with this sort of research doesn’t have to settle for banning the researchers from its site; it can render that research criminal simply by sending a letter notifying the researchers that they’re not authorized to use the service in this way.

A broad reading of the CFAA in this case would also undermine competition by letting companies shut down data scraping, effectively cutting off one of the main ways comparison tools gather prices and features from websites.

The Ninth Circuit should follow Van Buren’s lead and interpret the CFAA narrowly, as Congress intended. Website owners do not need new shields against independent accountability.

Related Cases: Facebook v. Power Ventures

EFF is Leaving X

EFF: Updates - Thu, 04/09/2026 - 12:25pm

After almost twenty years on the platform, EFF is logging off of X. This isn’t a decision we made lightly, but it might be overdue. The math hasn’t worked out for a while now.

The Numbers Aren’t Working Out

We posted to Twitter (now known as X) five to ten times a day in 2018. Those tweets garnered somewhere between 50 and 100 million impressions per month. By 2024, our 2,500 X posts generated around 2 million impressions each month. Last year, our 1,500 posts earned roughly 13 million impressions for the entire year. To put it bluntly, an X post today receives less than 3% of the views a single tweet delivered seven years ago. 
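
For the curious, here is the back-of-the-envelope arithmetic behind that figure, as a short Python sketch. It uses the midpoints of the numbers above; the posting rate and impression totals are this post’s own estimates, not official platform statistics.

# Rough check of the "less than 3%" figure, using midpoints of the stats above.
posts_per_month_2018 = 7.5 * 30        # 5-10 posts per day -> ~225 posts/month
impressions_per_month_2018 = 75e6      # midpoint of 50-100 million per month
per_post_2018 = impressions_per_month_2018 / posts_per_month_2018  # ~333,000

per_post_last_year = 13e6 / 1500       # ~8,700 impressions per post
print(f"{per_post_last_year / per_post_2018:.1%}")  # prints ~2.6%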

We Expected More

When Elon Musk acquired Twitter in October 2022, EFF was clear about what needed fixing.

We called for: 

  • Transparent content moderation: Publicly shared policies, clear appeals processes, and renewed commitment to the Santa Clara Principles
  • Real security improvements: Including genuine end-to-end encryption for direct messages
  • Greater user control: Giving users and third-party developers the means to control the user experience through filters and interoperability

Twitter was never a utopia. We've criticized the platform for about as long as it’s been around. Still, Twitter did deserve recognition from time to time for vociferously fighting for its users’ rights. That changed. Musk fired the entire human rights team and laid off staffers in countries where the company previously fought off censorship demands from repressive regimes. Many users left. Today we're joining them. 

"But You're Still on Facebook and TikTok?" 

Yes. And we understand why that looks contradictory. Let us explain. 

EFF exists to protect people’s digital rights. Not just the people who already value our work, have opted out of surveillance, or have already migrated to the fediverse. The people who need us most are often the ones most embedded in the walled gardens of the mainstream platforms and subjected to their corporate surveillance. 

Young people, people of color, queer folks, activists, and organizers use Instagram, TikTok, and Facebook every day. These platforms host mutual aid networks and serve as hubs for political organizing, cultural expression, and community care. Just deleting the apps isn't always a realistic or accessible option, and neither is pushing every user to the fediverse when there are circumstances like:

  • You own a small business that depends on Instagram for customers.
  • Your abortion fund uses TikTok to spread crucial information.
  • You're isolated and rely on online spaces to connect with your community.

Our presence on Facebook, Instagram, YouTube, and TikTok is not an endorsement. We've spent years exposing how these platforms suppress marginalized voices, enable invasive behavioral advertising, and flag posts about abortion as dangerous. We’ve also taken action in court, in legislatures, and through direct engagement with their staff to push them to change poor policies and practices.

We stay because the people on those platforms deserve access to information, too. We stay because some of our most-read posts are the ones criticizing the very platform we're posting on. We stay because the fewer steps between you and the resources you need to protect yourself, the better. 

We'll Keep Fighting. Just Not on X

When you go online, your rights should go with you. X is no longer where the fight is happening. The platform Musk took over was imperfect but impactful. What exists today is something else: diminished, and increasingly de minimis.

EFF takes on big fights, and we win. We do that by putting our time, skills, and our members’ support where they will effect the most change. Right now, that means Bluesky, Mastodon, LinkedIn, Instagram, TikTok, Facebook, YouTube, and eff.org. We hope you follow us there and keep supporting the work we do. Our work protecting digital rights is needed more than ever before, and we’re here to help you take back control.

A new type of electrically driven artificial muscle fiber

MIT Latest News - Thu, 04/09/2026 - 11:00am

Muscles are remarkably effective systems for generating controlled force, and engineers developing hardware for robots or prosthetics have long struggled to create analogs that can approach their unique combination of strength, rapid response, scalability, and control. But now, researchers at the MIT Media Lab and Politecnico di Bari in Italy have developed artificial muscle fibers that come closer to matching many of these qualities.

Like the fibers that bundle together to form biological muscles, these fibers can be arranged in different configurations to meet the demands of a given task. Unlike conventional robotic actuation systems, they are compliant enough to interface comfortably with the human body and operate silently without motors, external pumps, or other bulky supporting hardware.

The new electrofluidic fiber muscles — electrically driven actuators built in fiber format — are described in a recent paper published in Science Robotics. The work is led by Media Lab PhD candidate Ozgun Kilic Afsar; Vito Cacucciolo, a professor at the Politecnico di Bari; and four co-authors.

The new system brings together two technologies, Afsar explains. One is a fluidically driven artificial muscle known as a thin McKibben actuator, and the other is a miniaturized solid-state pump based on electrohydrodynamics (EHD), which can generate pressure inside a sealed fluid compartment without moving parts or an external fluid supply.

Until now, most fluid-driven soft actuators have relied on external “heavy, bulky, oftentimes noisy hydraulic infrastructure,” Afsar says, “which makes them difficult to integrate into systems where mobility or compact, lightweight design is important.” This has created a fundamental bottleneck in the practical use of fluidic actuators in real-world applications.

The key to breaking through that bottleneck was the use of integrated pumps based on electrohydrodynamic principles. These millimeter-scale, electrically driven pumps generate pressure and flow by injecting charge into a dielectric fluid, creating ions that drag the fluid along with them. Weighing just a few grams each and not much thicker than a toothpick, they can be fabricated continuously and scaled easily. “We integrated these fiber pumps into a closed fluidic circuit with the thin McKibben actuators,” Afsar says, noting that this was not a simple task given the different dynamics of the two components.

A key design strategy was to pair these fibers in what are known as antagonistic configurations. Cacucciolo explains that this is where “one muscle contracts while another elongates,” as when you bend your arm and your biceps contract while your triceps stretch. In their system, a millimeter-scale fiber pump sits between two similarly scaled McKibben actuators, driving fluid into one actuator to contract it while simultaneously relaxing the other.

“This is very much reminiscent of how biological muscles are configured and organized,” Afsar says. “We didn’t choose this configuration simply for the sake of biomimicry, but because we needed a way to store the fluid within the muscle design.” The need for an external reservoir open to the atmosphere has been one of the main factors limiting the practical use of EHD pumps in robotic systems outside the lab. By pairing two McKibben fibers in line, with a fiber pump between them to form a closed circuit, the team eliminated that need entirely.
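
To make the closed-circuit idea concrete, here is a toy volume-conservation sketch in Python. The linear volume-to-contraction map and every constant in it are illustrative placeholders, not measurements or models from the paper; it only encodes the qualitative point that fluid pushed into one fiber must come out of the other.

# Toy model of the closed antagonistic circuit: the two McKibben fibers and
# the pump form a sealed loop, so total fluid volume is conserved. Driving
# volume into fiber A necessarily drains fiber B. Inflating a McKibben fiber
# contracts it axially; draining it lets it extend.
TOTAL_VOLUME_ML = 2.0  # fixed total fluid in the sealed circuit (illustrative)

def pair_contraction(volume_a_ml: float, gain: float = 0.25):
    """Contraction (+) or extension (-) of each fiber vs. its rest volume."""
    volume_b_ml = TOTAL_VOLUME_ML - volume_a_ml   # volume conservation
    rest_ml = TOTAL_VOLUME_ML / 2
    return gain * (volume_a_ml - rest_ml), gain * (volume_b_ml - rest_ml)

# Pumping 0.4 mL toward fiber A contracts A and extends B by equal amounts.
print(pair_contraction(1.4))  # -> approx (0.1, -0.1)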

Another key finding was that the muscle fibers needed to be pre-pressurized, rather than simply filled. “There is a minimum internal system pressure that the system can tolerate,” Afsar says, “below which the pump can degrade or temporarily stop working.” This happens because of cavitation, in which vapor bubbles form when the pressure at the pump inlet drops below the vapor pressure of the liquid, eventually leading to dielectric breakdown.

To prevent cavitation, they applied a “bias” pressure from the outset so that the pressure at the fiber pump inlet never falls below the liquid’s vapor pressure. The magnitude of this bias pressure can be adjusted depending on the application. “To achieve the maximum contraction the muscle can generate, we found there is a specific bias pressure range that is optimal,” she says. “If you want to configure the system for faster response, you might increase that bias pressure, though with some reduction in maximum contraction.”
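
A minimal sketch of that constraint, with hypothetical numbers (the linear pressure model and all values below are illustrative assumptions, not figures from the paper):

# The sealed circuit is pre-pressurized so the pump inlet never drops below
# the working fluid's vapor pressure. Illustrative linear model: the running
# pump pulls the inlet down from the bias pressure by a fixed "draw".
def min_bias_pressure_kpa(p_vapor_kpa: float, pump_draw_kpa: float,
                          margin_kpa: float = 5.0) -> float:
    """Smallest bias so that p_inlet = p_bias - draw >= p_vapor + margin."""
    return p_vapor_kpa + pump_draw_kpa + margin_kpa

# A fluid with a 2 kPa vapor pressure and a pump that pulls the inlet down
# by 30 kPa at full drive would need roughly a 37 kPa bias.
print(min_bias_pressure_kpa(p_vapor_kpa=2.0, pump_draw_kpa=30.0))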

Cacucciolo adds that most of today’s robotic limbs and hands are built around electric servo motors, whose configuration differs fundamentally from that of natural muscles. Servo motors generate rotational motion on a shaft that must be converted into linear movement, whereas muscle fibers naturally contract and extend linearly, as do these electrofluidic fibers. 

“Most robotic arms and humanoid robots are designed around the servo motors that drive them,” he says. “That creates integration constraints, because servo motors are hard to package densely and tend to concentrate mass near the joints they drive. By contrast, artificial muscles in fiber form can be packed tightly inside a robot or exoskeleton and distributed throughout the structure, rather than concentrated near a joint.”

These electrofluidic muscles may be especially useful for wearable applications, such as exoskeletons that help a person lift heavier loads or assistive devices that restore or augment dexterity. But the underlying principles could also apply more broadly. “Our findings extend to fluid-driven robotic systems in general,” Cacucciolo says. “Wherever fluidic actuators are used, or where engineers want to replace external pumps with internal ones, these design principles could apply across a wide range of fluid-driven robotic systems.”

This work “presents a major advancement in fiber-format soft actuation,” which “addresses several long-standing hurdles in the field, particularly regarding portability and power density,” says Herbert Shea, a professor in the Soft Transducers Laboratory at École Polytechnique Fédérale de Lausanne in Switzerland, who was not associated with this research. “The lack of moving parts in the pump makes these muscles silent, a major advantage for prosthetic devices and assistive clothing,” he says.

Shea adds that “this high-quality and rigorous work bridges the gap between fundamental fluid dynamics and practical robotic applications. The authors provide a complete system-level solution — characterizing the individual components, developing a predictive physical model, and validating it through a range of demonstrators.”

In addition to Afsar and Cacucciolo, the team also included Gabriele Pupillo and Gennaro Vitucci at Politecnico di Bari and Wedyan Babatain and Professor Hiroshi Ishii at the MIT Media Lab. The work was supported by the European Research Council and the Media Lab’s multi-sponsored consortium.

Bridging space research and policy

MIT Latest News - Thu, 04/09/2026 - 11:00am

While earning her dual master’s degrees in aeronautics and astronautics and public policy, Carissma McGee SM ’25 learned to navigate between two seemingly distinct worlds, bridging rigorous technical analysis and policy decisions.

As an undergraduate congressional intern and researcher, she saw a persistent gap in space policymaking. Policymakers often lacked technical expertise, while researchers were rarely involved in increasingly complex questions surrounding intellectual property and international collaboration in space.

Her work on intellectual property frameworks for space collaborations directly addresses that gap, combining expertise in gravitational microlensing and space telescope operations with policy analysis to tackle emerging governance challenges.

“I want to bring an expert level of science into the rooms where policy decisions are made,” says McGee, now a doctoral student in aeronautics and astronautics. “That perspective is critical for shaping the future of research and exploration.”

Likewise, she wants to bring her expertise in public policy into the lab.

“I enjoy being able to ask questions about intellectual property, territorial claims, knowledge transfer, or allocation of resources early on in a research project,” adds McGee.

McGee’s fascination with space started during her high school years in Delaware, when she first volunteered at a local observatory and then interned at the NASA Goddard Space Flight Center in Maryland.

Following high school, McGee attended Howard University. She was selected to participate in the Karsh STEM Scholars Program, a full-ride scholarship track for students committed to working continuously toward earning doctoral degrees. Howard, which holds an R1 research classification from the Carnegie Foundation, is in close proximity to the Goddard Space Flight Center, as well as the American Astronomical Society and the D.C. Space Grant Consortium.

In 2020, after her first year at Howard, the Covid-19 pandemic sent McGee back to her hometown in Delaware. As it turned out, that gave her an opportunity to work with her local congresswoman, Lisa Blunt Rochester, then a U.S. representative. In addition to supporting the congresswoman’s constituents, she drafted dozens of letters related to STEM education and energy reform.

Working in government gave McGee an opportunity to use her voice to “advocate for astronomy and astrophysics with the American Astronomical Society, advocate for space sciences, and for science representation.”

As an undergraduate, McGee also conducted research linking computational physics and astronomy, working with both NASA’s Jet Propulsion Laboratory and Yale University’s Department of Astronomy. She also continued research begun in 2021 with the Harvard and Smithsonian Center for Astrophysics’ Black Hole Initiative, contributing to work associated with the Event Horizon Telescope.

When she visited MIT in 2023, McGee was struck by the Institute’s openness to interdisciplinary work and support of her interest in combining aeronautics and astronautics with policy.

Once at MIT, she started working in the Space, Telecommunications, Astronomy, and Radiation Laboratory (STAR Lab) with advisor Kerri Cahoy, professor of aeronautics and astronautics. McGee says she experienced a great deal of freedom to craft her own program.

“I was drawn to the lab’s work on satellite missions and CubeSats, and excited to discover that I could pursue exoplanet astrophysics research within this framework and that submitting a dual thesis or focusing on astrophysics applications was possible,” says McGee. “When I expressed interest in participating in the Technology [and] Policy Program for a dual thesis in a framework for space policy, my advisors encouraged me to explore how we could integrate these diverse interests into a path forward.”

In 2024, McGee was awarded a MathWorks Fellowship to pursue research associated with the Nancy Grace Roman Space Telescope and join a NASA mission.

“It was just amazing to join the exoplanet group at NASA,” she says. “I had a front-row seat to see how real researchers and workers navigate complex problems.”

McGee credits MathWorks with helping fellows to “be at the forefront of knowledge and shaping innovation.”

One of her proudest academic accomplishments is PyLIMASS, a software system she developed with collaborators at Louisiana State University, the Ohio State University, and NASA’s Goddard Space Flight Center. The tool enables more accurate mass and distance estimates in gravitational microlensing events, helping the Roman Space Telescope project meet its precision goals for studying exoplanets.

“To build software that didn’t previously exist — and to know it will be used for the Roman mission — is incredibly exciting,” McGee says.

In May 2025, McGee graduated with dual master’s degrees in aeronautics and astronautics and technology and policy. That same month, she presented her research at the American Astronomical Society meeting in Anchorage, Alaska, and at the Technology Management and Policy Conference in Portugal.

McGee remained at MIT to pursue her doctoral degree. Last fall, as an MIT BAMIT Community Advancement Program and Fund Fellow, she hosted a daylong conference for STEM students focused on how intellectual property frameworks shape technical fields.

McGee’s accomplishments and contributions have been celebrated with a number of honors recently. In 2026, she was named Miss Black Massachusetts United States, was recognized among MIT’s Graduate Students of Excellence, and received the MIT MLK Leadership Award in recognition of her service, integrity, and community impact.

Beyond her academic work, McGee is active across campus. She teaches Pilates with MIT Recreation, participates in the Graduate Women in Aerospace Engineering group, and serves as a graduate resident assistant in an undergraduate dorm on East Campus.

She credits the AeroAstro graduate community with keeping her momentum going.

“Even if we’re tired, there’s this powerful camaraderie among AeroAstro graduate students working together. Seeing my peers push through similar research milestones and solve daunting problems motivates you to advance beyond the finish line toward further developments in the field.”

New technique makes AI models leaner and faster while they’re still learning

MIT Latest News - Thu, 04/09/2026 - 9:00am

Training a large artificial intelligence model is expensive, not just in dollars, but in time, energy, and computational resources. Traditionally, obtaining a smaller, faster model either requires training a massive one first and then trimming it down, or training a small one from scratch and accepting weaker performance. 

Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), the Max Planck Institute for Intelligent Systems, the European Laboratory for Learning and Intelligent Systems (ELLIS), ETH Zurich, and Liquid AI have now developed a new method that sidesteps this trade-off entirely, compressing models during training rather than after.

The technique, called CompreSSM, targets a family of AI architectures known as state-space models, which power applications ranging from language processing to audio generation and robotics. By borrowing mathematical tools from control theory, the researchers can identify which parts of a model are pulling their weight and which are dead weight, before surgically removing the unnecessary components early in the training process.

"It's essentially a technique to make models grow smaller and faster as they are training," says Makram Chahine, a PhD student in electrical engineering and computer science, CSAIL affiliate, and lead author of the paper. "During learning, they're also getting rid of parts that are not useful to their development."

The key insight is that the relative importance of different components within these models stabilizes surprisingly early during training. Using a mathematical quantity called Hankel singular values, which measure how much each internal state contributes to the model's overall behavior, the team showed they can reliably rank which dimensions matter and which don't after only about 10 percent of the training process. Once those rankings are established, the less-important components can be safely discarded, and the remaining 90 percent of training proceeds at the speed of a much smaller model.
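
To make that ranking quantity concrete, here is a minimal numerical sketch of Hankel singular values and state truncation for a small discrete-time linear system, using the standard square-root balanced-truncation recipe from control theory. It is a generic textbook illustration, not the authors' CompreSSM training algorithm, and every name and dimension in it is invented for the example.

import numpy as np
from scipy.linalg import solve_discrete_lyapunov, cholesky, svd

def hankel_truncate(A, B, C, r):
    # Gramians of the stable system x[t+1] = A x[t] + B u[t], y[t] = C x[t]
    P = solve_discrete_lyapunov(A, B @ B.T)    # controllability: A P A^T - P + B B^T = 0
    Q = solve_discrete_lyapunov(A.T, C.T @ C)  # observability:   A^T Q A - Q + C^T C = 0
    # Hankel singular values are the singular values of Lq^T Lp, where
    # P = Lp Lp^T and Q = Lq Lq^T; they measure how much each internal
    # direction contributes to input-output behavior.
    Lp = cholesky(P, lower=True)
    Lq = cholesky(Q, lower=True)
    U, hsv, Vt = svd(Lq.T @ Lp)
    # Keep only the r dominant states via the balancing transform.
    S = np.diag(hsv[:r] ** -0.5)
    T = Lp @ Vt[:r].T @ S          # maps reduced state -> full state
    Ti = S @ U[:, :r].T @ Lq.T     # maps full state -> reduced state
    return Ti @ A @ T, Ti @ B, C @ T, hsv

# Toy usage: shrink a random stable 16-state system down to 4 states.
rng = np.random.default_rng(0)
A = rng.standard_normal((16, 16))
A *= 0.9 / max(abs(np.linalg.eigvals(A)))      # enforce stability
B, C = rng.standard_normal((16, 3)), rng.standard_normal((3, 16))
A_r, B_r, C_r, hsv = hankel_truncate(A, B, C, r=4)
print("fraction of Hankel 'energy' kept:", hsv[:4].sum() / hsv.sum())

In the setting described above, the analogous ranking would be computed from a partially trained model, with the low-ranked dimensions discarded before the remaining 90 percent of training.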

"What's exciting about this work is that it turns compression from an afterthought into part of the learning process itself,” says senior author Daniela Rus, MIT professor and director of CSAIL. “Instead of training a large model and then figuring out how to make it smaller, CompreSSM lets the model discover its own efficient structure as it learns. That's a fundamentally different way to think about building AI systems.”

The results are striking. On image classification benchmarks, compressed models maintained nearly the same accuracy as their full-sized counterparts while training up to 1.5 times faster. A compressed model reduced to roughly a quarter of its original state dimension achieved 85.7 percent accuracy on the CIFAR-10 benchmark, compared to just 81.8 percent for a model trained at that smaller size from scratch. On Mamba, one of the most widely used state-space architectures, the method achieved approximately 4x training speedups, compressing a 128-dimensional model down to around 12 dimensions while maintaining competitive performance.

"You get the performance of the larger model, because you capture most of the complex dynamics during the warm-up phase, then only keep the most-useful states," Chahine says. "The model is still able to perform at a higher level than training a small model from the start."

What makes CompreSSM distinct from existing approaches is its theoretical grounding. Conventional pruning methods train a full model and then strip away parameters after the fact, meaning you still pay the full computational cost of training the big model. Knowledge distillation, another popular technique, requires training a large "teacher" model to completion and then training a second, smaller "student" model on top of it, essentially doubling the training effort. CompreSSM avoids both of these costs by making informed compression decisions mid-stream.

The team benchmarked CompreSSM head-to-head against both alternatives. Compared to Hankel nuclear norm regularization, a recently proposed spectral technique for encouraging compact state-space models, CompreSSM was more than 40 times faster, while also achieving higher accuracy. The regularization approach slowed training by roughly 16 times because it required expensive eigenvalue computations at every single gradient step, and even then, the resulting models underperformed. Against knowledge distillation on CIFAR-10, CompreSSM held a clear advantage for heavily compressed models: At smaller state dimensions, distilled models saw significant accuracy drops, while CompreSSM-compressed models maintained near-full performance. And because distillation requires a forward pass through both the teacher and student at every training step, even its smaller student models trained slower than the full-sized baseline.

The researchers proved mathematically that the importance of individual model states changes smoothly during training, thanks to an application of Weyl's theorem, and showed empirically that the relative rankings of those states remain stable. Together, these findings give practitioners confidence that dimensions identified as negligible early on won't suddenly become critical later.
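
The bound at work here is Weyl's inequality for singular values, stated below in generic notation (a textbook statement applied to the matrix whose singular values give the Hankel values, not a formula quoted from the paper):

\[ \lvert \sigma_i(M + E) - \sigma_i(M) \rvert \;\le\; \lVert E \rVert_2 \quad \text{for all } i, \]

so a small parameter update E can shift each singular value by at most its spectral norm, which is why the importance values drift smoothly rather than jumping between training steps.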

The method also comes with a pragmatic safety net. If a compression step causes an unexpected performance drop, practitioners can revert to a previously saved checkpoint. "It gives people control over how much they're willing to pay in terms of performance, rather than having to define a less-intuitive energy threshold," Chahine explains.

There are some practical boundaries to the technique. CompreSSM works best on models that exhibit a strong correlation between the internal state dimension and overall performance, a property that varies across tasks and architectures. The method is particularly effective on multi-input, multi-output (MIMO) models, where the relationship between state size and expressivity is strongest. For per-channel, single-input, single-output architectures, the gains are more modest, since those models are less sensitive to state dimension changes in the first place.

The theory applies most cleanly to linear time-invariant systems, although the team has developed extensions for the increasingly popular input-dependent, time-varying architectures. And because the family of state-space models extends to architectures like linear attention, a growing area of interest as an alternative to traditional transformers, the potential scope of application is broad.

Chahine and his collaborators see the work as a stepping stone. The team has already demonstrated an extension to linear time-varying systems like Mamba, and future directions include pushing CompreSSM further into matrix-valued dynamical systems used in linear attention mechanisms, which would bring the technique closer to the transformer architectures that underpin most of today's largest AI systems.

"This had to be the first step, because this is where the theory is neat and the approach can stay principled," Chahine says. "It's the stepping stone to then extend to other architectures that people are using in industry today."

"The work of Chahine and his colleagues provides an intriguing, theoretically grounded perspective on compression for modern state-space models (SSMs)," says Antonio Orvieto, ELLIS Institute Tübingen principal investigator and MPI for Intelligent Systems independent group leader, who wasn't involved in the research. "The method provides evidence that the state dimension of these models can be effectively reduced during training and that a control-theoretic perspective can successfully guide this procedure. The work opens new avenues for future research, and the proposed algorithm has the potential to become a standard approach when pre-training large SSM-based models."

The work, which was accepted as a conference paper at the International Conference on Learning Representations 2026, will be presented later this month. It was supported, in part, by the Max Planck ETH Center for Learning Systems, the Hector Foundation, Boeing, and the U.S. Office of Naval Research.

On Microsoft’s Lousy Cloud Security

Schneier on Security - Thu, 04/09/2026 - 6:51am

ProPublica has a scoop:

In late 2024, the federal government’s cybersecurity evaluators rendered a troubling verdict on one of Microsoft’s biggest cloud computing offerings.

The tech giant’s “lack of proper detailed security documentation” left reviewers with a “lack of confidence in assessing the system’s overall security posture,” according to an internal government report reviewed by ProPublica.

Or, as one member of the team put it: “The package is a pile of shit.”

For years, reviewers said, Microsoft had tried and failed to fully explain how it protects sensitive information in the cloud as it hops from server to server across the digital terrain. Given that and other unknowns, government experts couldn’t vouch for the technology’s security...

At climate contrarian gathering, allies urge Trump to keep Zeldin at EPA

ClimateWire News - Thu, 04/09/2026 - 6:27am
At a Heartland Institute conference, climate contrarians celebrated the rollback of regulations under EPA chief Lee Zeldin while urging President Donald Trump not to elevate him to attorney general, fearing it would stall their agenda.

Alaskan tribes, enviros sue over endangerment finding repeal

ClimateWire News - Thu, 04/09/2026 - 6:26am
The administration's refusal to allow EPA to regulate climate pollution is "akin to a fire department refusing to fight fires," one group said.

Montana drag ban defeat fuels youth fight against Trump energy orders

ClimateWire News - Thu, 04/09/2026 - 6:25am
Climate activists say there is a path for a federal court to hand them even a partial victory against the government's promotion of fossil fuels.

Maine takes half-step toward climate superfund

ClimateWire News - Thu, 04/09/2026 - 6:24am
The statehouse is expected to pass legislation that would assess how much climate impacts are costing Maine. It comes as Vermont and New York defend their own climate superfund laws in court.

Trump admin to renew Biden heat safety program

ClimateWire News - Thu, 04/09/2026 - 6:23am
The move comes as Democratic lawmakers urged OSHA to extend the initiative that led to Biden-era heat-related inspections at workplaces.

March smashes US record as most abnormally hot month, NOAA says

ClimateWire News - Thu, 04/09/2026 - 6:22am
Not only was it the hottest March on record for the U.S., but its departure from normal was larger than that of any other month on record for the Lower 48 states.

Wildfire report ups pressure on California to overhaul insurance, utilities

ClimateWire News - Thu, 04/09/2026 - 6:21am
The state-commissioned study lays out options from liability overhaul to state-backed insurance.

Emissions reduction bill draws opposition from California homebuilders

ClimateWire News - Thu, 04/09/2026 - 6:21am
SB 1075 would ban local land-use decisions that contribute to poor air quality in disadvantaged communities.

Energy giant Drax pulls out of UK climate plan

ClimateWire News - Thu, 04/09/2026 - 6:20am
Ministers were told to lock the biomass giant into an agreement on carbon capture. The government ignored them — and now Drax has walked away.

The flawed fundamentals of failing banks

MIT Latest News - Thu, 04/09/2026 - 12:00am

Bank runs are dramatic: Picture Depression-era footage of customers lined up, trying to get their deposits back. Or recall Lehman Brothers emptying out in 2008 or Silicon Valley Bank collapsing in 2023.

But what causes these runs in the first place? One viewpoint is that something of a self-fulfilling prophecy is involved. Panic spreads, and suddenly many customers are seeking their money back, until an otherwise solid institution is run into the ground.

That is not exactly Emil Verner’s position, however. Verner, an MIT economist, has been studying bank failures empirically for years and now has a different perspective. Verner and his collaborators have produced extensive evidence suggesting that when banks fail, it is usually because they are in a fundamentally shaky position. A bank run generally finishes off an already flawed business rather than upending a viable one.

“What we essentially find is that banks that fail are almost always very weak, and are in trouble,” says Verner, who is the Jerome and Dorothy Lemelson Professor of Management and Financial Economics at the MIT Sloan School of Management. “Most banks that have been subject to runs have been pretty insolvent. Runs are more the final spasm that brings down weak banks, rather than the causes of indiscriminate failures.”

This conclusion has plenty of policy relevance for the banking sector and follows a lengthy analysis of historical data. In one forthcoming paper, in the Quarterly Journal of Economics, Verner and two colleagues reviewed U.S. bank data from 1863 to 2024, concluding that “the primary cause of bank failures and banking crises is almost always and everywhere a deterioration of bank fundamentals.” In a 2021 paper in the same journal, Verner and two other colleagues studied banking data from 46 countries covering 1870-2016, and found that declining bank fundamentals usually preceded runs. And currently, Verner is working to make more historical U.S. bank data publicly available to scholars.

Seen in this light, sure, bank runs are damaging, but bank failures likely have more to do with bad portfolios, poor risk management, and minimal assets in reserve than with sentiment-driven client behavior.

“From the idea that bank crises are really about sudden runs on bank debt, we’re moving toward thinking that runs are one symptom of a crisis that runs deeper,” Verner says. “For most people, we’re saying something reasonable, refining our knowledge, and just shifting the emphasis.”

For his research and teaching, Verner received tenure at MIT last year.

Landing in a “great place”

Verner is a native of Denmark who also lived in the U.S. for several years while growing up. Around the time he was finishing school, the U.S. housing market imploded, taking some financial institutions with it.

“Everything came crashing down,” Verner says. “I got obsessed with understanding it.”

As an undergraduate, he studied economics at the University of Copenhagen. After three years, Verner was unconvinced the discipline had fully explained financial crises. He decided to keep studying economics in graduate school, and was accepted into the PhD program at Princeton University.

Along the way, Verner became a historically minded economist, digging into data and cases from past decades to shed light on larger patterns about crises and bank insolvency.

“I’ve always thought history was extremely fascinating in itself,” Verner says. And while history may not repeat, he notes, it is “a really valuable tool. It helps you think through what could happen, what are similar scenarios, and how agents acted when facing similar constraints and incentives in the past.”

For studying financial crises in particular, he adds, history helps in multiple ways. Crises are rare, so historical cases add data. Changes over time, like more financial regulations and more complex investment tools, provide different settings to examine the same cause-and-effect issues. “History is a useful laboratory to study these questions,” Verner says.

After earning his PhD from Princeton, Verner went on the job market and landed his faculty position at MIT Sloan. Many aspects of Institute life — the classroom experience, the collegiality, the campus — have strongly resonated with him.

“MIT is a great place,” Verner says simply. “Great colleagues, great students.”

Focused on fundamentals

Over the last decade, Verner has published papers on numerous topics in addition to banking crises. As an outgrowth of his doctoral work, for instance, he published innovative papers examining the dampening effect that household debt has on economic growth in many countries. He also co-authored the lead paper in an issue of the American Economic Review last year examining the way German hyperinflation after World War I reallocated wealth to large businesses with substantial debt, leading them to grow faster.

Still, the main focus of Verner’s work right now is on banking crises and bank failures — including their causes. In a 2024 paper looking at private lending in 117 countries since 1940, Verner and economist Karsten Müller showed that financial crises are often preceded by credit booms in what scholars call the “non-tradeable” sector of the economy. That includes industries such as retail or construction, which do not produce easily tradeable goods. Firms in the non-tradeable sector tend to rely more heavily on loans secured by real estate; during real estate booms, such firms use high valuations to borrow more, and they become more vulnerable to crashes — which helps explain why bank portfolios, in turn, can crater as well.

In recent years, in the process of studying these topics, Verner has helped expand the domain of known U.S. historical data in the field. Working with economists Sergio Correia and Stephan Luck, Verner has helped apply large language models to historical newspaper collections, unearthing information about 3,421 runs on individual banks from 1863 to 1934; they are making that data freely available to other scholars.

This topic has important policy implications. If runs are a contagion bringing down worthy banks, then one solution is to provide banks with more liquidity to get through the crisis — something that has indeed been tried in the U.S. However, if bank failures are more based in fundamentals about risk and not keeping enough capital on hand, more systemic policy options about best practices might be logical. At a minimum, substantive new research can help alter the contents of those discussions.

“When banks fail, it’s usually because these banks have taken a lot of risk and have big losses,” Verner says. “It’s rarely unjustified. So that means these types of liquidity interventions alone are not enough to stop a crisis.”

The expansive research Verner has helped conduct includes a number of specific indicators that fundamentals are a big factor in failure. For instance, examining how infrequently failed banks recover all their assets shows how shaky their foundations were.

“The recovery rate on assets is informative about how solvent a bank was,” Verner says. “This is where I think we’ve contributed something new.” Some economists in the past have cited particular examples of struggling banks making depositors whole, but those are exceptions, not the rule. “Sometimes people argue this or that bank was actually solvent because depositors ended up getting all their money back, and that might be true of one bank, but on aggregate it’s not the case,” Verner says.

Overall, Verner intends to keep following the facts, digging up more evidence, and seeing where it leads.

“While there is this notion that liquidity problems can arise pretty much out of nowhere, I think we are changing that emphasis by showing that financial crises happen basically because banks become insolvent,” Verner underscores. “And then the bank run is that final dramatic spasm — which slightly shifts how we teach and talk about it, and perhaps think about the policy response.”

Banning New Foreign Routers Mistargets Products to Fix Real Problem

EFF: Updates - Wed, 04/08/2026 - 3:24pm

On March 23, the FCC issued an update to its Covered List, a list of equipment banned from obtaining the regulatory approval necessary for U.S. sale (and thus effectively banned from sale as new devices), to include all new routers produced in foreign countries unless they are specifically granted an exception by the Department of Defense (DoD) or the Department of Homeland Security (DHS). The Commission cited “security gaps in foreign-made routers” leading to widespread cyberattacks as justification for the ban, mentioning the high-profile attacks by the Chinese advanced persistent threat actors Volt, Flax, and Salt Typhoon. Although the stated intention is to stem the very real threat of domestic residential routers being commandeered to launch attacks and act as residential proxies, this sweeping move is a blunt instrument that will impact many harmless products. In addition to being far too broad, it won’t even reach many of the vulnerable devices most active in these types of attacks: IoT and connected smart home devices.

Previously, the FCC had used the Covered List to ban hardware from specific vendors, such as telecom equipment produced by Huawei and Hytera in 2021. This new blanket ban, in contrast, affects the importation and sale of almost all new consumer routers. It does not affect consumer routers produced in the United States, like Starlink routers made in Texas. While some of the affected routers will be vulnerable to compromises that hijack the devices and use them for cybercrime and attacks, the ban does not distinguish between companies with a track record of producing vulnerable products and those without. As a result, instead of incentivizing security-minded production, it will only narrow consumers’ options to U.S.-based manufacturers unaffected by the ban, including those that lack stellar security reputations themselves.

While the sale of vulnerable routers in the U.S. will not stop, the announcement quoted an Executive Branch determination that foreign-produced routers introduce “a supply chain vulnerability that could disrupt the U.S. economy, critical infrastructure, and national defense.” Yet this move does nothing to address the growing number of connected devices involved in the attacks the ban aims to prevent. As we have previously pointed out, supply chain attacks have resulted in no-name Android TV boxes preloaded with malware, sold by retail giants like Amazon, fueling the massive Kimwolf and BADBOX 2 fraud and residential proxy botnets. The priority should be banning the specific models and manufacturers known to produce dangerous devices that put purchasers at risk, rather than issuing blanket bans that punish reputable brands that do better.

With the FCC’s top commissioner appointed by the president, this ban comes as other parts of the administration impose tariffs and issue dozens of trade-related executive orders aimed at foreign goods. A few larger companies with pockets deep enough to invest in manufacturing plants within the U.S. may see this as an opportune moment, while others less well positioned to begin U.S. operations may attempt to curry enough favor to be added to the DoD or DHS exception lists. At best, this will result in the immediate effect of an ill-targeted policy that does little to improve the country’s cybersecurity posture. At worst, it entrenches existing players and deepens problematic quid pro quo arrangements.

American consumers deserve better. They deserve assurance that the devices they use, whether routers or other connected smart home devices, are built to withstand attacks that put themselves and others at risk, no matter where those devices are manufactured. That requires nuanced, careful evaluation of individual products, like the approach behind the FCC’s 2023 U.S. Cyber Trust Mark proposal, rather than blanket bans.

Another Court Rules Copyright Can’t Stop People From Reading and Speaking the Law

EFF: Updates - Wed, 04/08/2026 - 2:13pm

Another court has ruled that copyright can’t be used to keep our laws behind a paywall. The U.S. Court of Appeals for the Third Circuit upheld a lower court’s ruling that it is fair use to copy and disseminate building codes that have been incorporated into federal and state law, even though those codes are developed by private parties who claim copyright in them. The court followed the suggestions EFF and others presented in an amicus brief, and joined a growing list of courts that have placed public access to the law over private copyright holders’ desire for control.

UpCodes created a database of building codes—like the National Electrical Code—that includes codes incorporated by reference into law. ASTM, a private organization that coordinated the development of some of those codes, insists that it retains copyright in them even after they have been adopted into law, and therefore has the right to control how the public accesses and shares them. Fortunately, neither the Constitution nor the Copyright Act support that theory. Faced with similar claims, some courts, including the Fifth Circuit Court of Appeals, have held that the codes lose copyright protection when they are incorporated into law. Others, like the D.C. Circuit Court of Appeals in a case EFF defended on behalf of Public.Resource.Org, have held that, whether or not the legal status of the standards changes once they are incorporated into law, making them fully accessible and usable online is a lawful fair use.

In this case, the Third Circuit found that UpCodes’s copying of the codes was a fair use, in a decision closely following the D.C. Circuit’s reasoning. Fair use turns on four factors listed in the Copyright Act, and the court found that all four favored UpCodes to some degree.

On the first factor, the purpose and character of the use, the court found that UpCodes’s use was “transformative” because it had a separate and distinct purpose from ASTM—informing people about the law, rather than just best practices in the building industry. No matter that UpCodes was copying and disseminating entire safety codes verbatim—using the codes for a different purpose was enough. And UpCodes being a commercial venture didn’t change the outcome either, because UpCodes wasn’t charging for access to the codes.

On the second factor, the nature of the copyrighted work, the Third Circuit joined other appeals courts in finding that laws are facts, and stand at “the periphery of copyright’s core protection.” And this included codes that were “indirectly” incorporated—meaning that they were incorporated into other codes that were themselves incorporated into law.

The third factor looks at the amount and substantiality of the material used. The court said that UpCodes could not have accomplished its purpose—providing access to the current binding laws governing building construction—without copying entire codes, so the copying was justified. Importantly, the court noted that UpCodes was justified in copying optional parts of the codes as well as “mandatory” sections because both help people understand what the law is.

Finally, the fourth factor looks at potential harm to the market for the original work, balanced against the public interest in allowing the challenged use. The court rejected an argument frequently raised by copyright holders—that harm can be assumed any time materials are posted to the internet for all to access. Instead, the court held that when a use is transformative, a rightsholder has to bring evidence of harm, and that harm will be balanced against the public benefit. Because “enhanced public access to the law is a clear and significant public benefit,” and ASTM hadn’t shown significant evidence that UpCodes had meaningfully reduced ASTM’s revenues, the fourth factor was at least neutral. It didn’t matter to the court that ASTM offered to provide copies of legally binding standards to the public on request, because “the mere possibility of obtaining a free technical standard does not nullify the public benefits associated with enhanced access to law.”

This is a good result that will expand the public’s access to the laws that bind us—something that’s more important than ever given recent assaults on the rule of law. In the future, we hope that courts will recognize that codes and standards lose copyright when they are incorporated into law, so that people don’t have to spend years and legal fees litigating fair use just to exercise their rights.

Desirée Plata appointed associate dean of engineering

MIT Latest News - Wed, 04/08/2026 - 12:45pm

Desirée Plata, the School of Engineering Distinguished Climate and Energy Professor in the MIT Department of Civil and Environmental Engineering, has been named associate dean of engineering, effective July 1.

In her new role, Plata will focus on fostering early-stage research initiatives across the school’s faculty and on strengthening entrepreneurial and innovation efforts. She will also support the school’s Technical Leadership and Communication (TLC) Programs, including the Gordon Engineering Leadership Program, the Daniel J. Riccio Graduate Engineering Leadership Program, the School of Engineering Communication Lab, and the Undergraduate Practice Opportunities Program.

Plata will join Associate Dean Hamsa Balakrishnan, who continues to lead faculty searches, fellowships, and outreach programs. Together, the two associate deans will serve on key leadership groups including Engineering Council and the Dean’s Advisory Council to shape the school’s strategic priorities.

“Desirée’s leadership, scholarship, and commitment to excellence have already had a meaningful impact on the MIT community, and I look forward to the perspective and energy she will bring to this role,” says Paula T. Hammond, dean of the School of Engineering and Institute Professor in the Department of Chemical Engineering.

Plata’s research centers on the sustainable design of industrial processes and materials through environmental chemistry, with an emphasis on clean energy technologies. She develops ways to make industrial processes more environmentally sustainable, incorporating environmental objectives into the design phase of processes and materials. Her work spans nanomaterials and carbon-based materials for pollution reduction, as well as advanced methods for environmental cleanup and energy conversion. Plata directs MIT’s Parsons Laboratory, which conducts interdisciplinary research on natural systems and human adaptation to environmental change.

Plata is a leader on campus and beyond in climate and sustainability initiatives. She serves as director of the MIT Climate and Sustainability Consortium (MCSC), an industry–academia collaboration launched to accelerate solutions for global climate challenges. She founded and directs the MIT Methane Network, a multi-institution effort to cut global methane emissions within this decade. Plata also co-directs the National Institute of Environmental Health Sciences MIT Superfund Research Program, which focuses on strategies to protect communities concerned about hazardous chemicals, pollutants, and other contaminants in their environment.

Beyond academia, Plata has co-founded two climate and energy startups, Nth Cycle and Moxair. Nth Cycle is redefining metal refining and the domestic battery supply chain. Earlier this month, the company signed a $1.1 billion off-take agreement to help establish a secure and circular technology for battery minerals.

Her company Moxair specializes in advanced approaches for low-level methane monitoring and destruction. In 2026, with support from the U.S. Department of Energy and collaboration with MIT, Moxair will build and demonstrate a first-of-a-kind dilute methane oxidation technology to tackle methane emissions using transition metal catalysts.

As an educator, Plata has helped develop programs that enhance research experience for students and postdocs. She played a pivotal role in the founding of the MIT Postdoctoral Fellowship Program for Engineering Excellence, serving on its faculty steering committee, overseeing admissions, and leading both the academic track and entrepreneurship track. She also helped design the MCSC Climate and Sustainability Scholars Program, a yearlong program open to juniors and seniors across MIT.

Plata earned a BS in chemistry from Union College in 2003 and a PhD in the joint MIT-Woods Hole Oceanographic Institution program in oceanography and applied ocean science in 2009. After completing her doctorate, she held faculty positions at Mount Holyoke College, Duke University, and Yale University. While at Yale, she served as associate director of research at the university’s Center for Green Chemistry and Green Engineering. In 2018, Plata joined MIT’s faculty in the Department of Civil and Environmental Engineering.

Her work as a scholar and educator has earned numerous awards and honors. She received MIT’s Harold E. Edgerton Faculty Achievement Award in 2020, recognizing her excellence in research, teaching, and service. She has also been honored with an NSF CAREER Award and the Odebrecht Award for Sustainable Development. Plata is a fellow of the American Chemical Society and was a Young Investigator Sustainability Fellow at Caltech.

Plata is a two-time National Academy of Engineering Frontiers of Engineering Fellow and a two-time National Academy of Sciences Kavli Frontiers of Science Fellow. Her dedication to mentoring was recognized with MIT’s Junior Bose Award for Excellence in Teaching and the Frank Perkins Graduate Advising Award.
