Feed aggregator

EFFecting Change: This Title Was Written by a Human

EFF: Updates - Wed, 11/12/2025 - 3:58pm

Generative AI is like a Rorschach test for anxieties about technology–be they privacy, replacement of workers, bias and discrimination, surveillance, or intellectual property. Our panelists discuss how to address complex questions and risks in AI while protecting civil liberties and human rights online.

Join EFF Director of Policy and Advocacy Katharine Trendacosta, EFF Staff Attorney Tori NobleBerkeley Center for Law & Technology Co-Director Pam Samuelson, and Icarus Salon Artist Şerife Wong for a live discussion with Q&A. 

EFFecting Change Livestream Series:
This Title Was Written by a Human
Thursday, November 13th (New Date!)
10:00 AM - 11:00 AM Pacific
This event is LIVE and FREE!




Accessibility

This event will be live-captioned and recorded. EFF is committed to improving accessibility for our events. If you have any accessibility questions regarding the event, please contact events@eff.org.

Event Expectations

EFF is dedicated to a harassment-free experience for everyone, and all participants are encouraged to view our full Event Expectations.

Upcoming Events

Want to make sure you don’t miss our next livestream? Here’s a link to sign up for updates about this series: eff.org/ECUpdates. If you have a friend or colleague that might be interested, please join the fight for your digital rights by forwarding this link: eff.org/EFFectingChange. Thank you for helping EFF spread the word about privacy and free expression online. 

Recording

We hope you and your friends can join us live! If you can't make it, we’ll post the recording afterward on YouTube and the Internet Archive!

New lightweight polymer film can prevent corrosion

MIT Latest News - Wed, 11/12/2025 - 11:00am

MIT researchers have developed a lightweight polymer film that is nearly impenetrable to gas molecules, raising the possibility that it could be used as a protective coating to prevent solar cells and other infrastructure from corrosion, and to slow the aging of packaged food and medicines.

The polymer, which can be applied as a film mere nanometers thick, completely repels nitrogen and other gases, as far as can be detected by laboratory equipment, the researchers found. That degree of impermeability has never been seen before in any polymer, and rivals the impermeability of molecularly-thin crystalline materials such as graphene.

“Our polymer is quite unusual. It’s obviously produced from a solution-phase polymerization reaction, but the product behaves like graphene, which is gas-impermeable because it’s a perfect crystal. However, when you examine this material, one would never confuse it with a perfect crystal,” says Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT.

The polymer film, which the researchers describe today in Nature, is made using a process that can be scaled up to large quantities and applied to surfaces much more easily than graphene.

Strano and Scott Bunch, an associate professor of mechanical engineering at Boston University, are the senior authors of the new study. The paper’s lead authors are Cody Ritt, a former MIT postdoc who is now an assistant professor at the University of Colorado at Boulder; Michelle Quien, an MIT graduate student; and Zitang Wei, an MIT research scientist.

Bubbles that don’t collapse

Strano’s lab first reported the novel material — a two-dimensional polymer called a 2D polyaramid that self-assembles into molecular sheets using hydrogen bonds — in 2022. To create such 2D polymer sheets, which had never been done before, the researchers used a building block called melamine, which contains a ring of carbon and nitrogen atoms. Under the right conditions, these monomers can expand in two dimensions, forming nanometer-sized disks. These disks stack on top of each other, held together by hydrogen bonds between the layers, which make the structure very stable and strong.

That polymer, which the researchers call 2DPA-1, is stronger than steel but has only one-sixth the density of steel.

In their 2022 study, the researchers focused on testing the material’s strength, but they also did some preliminary studies of its gas permeability. For those studies, they created “bubbles” out of the films and filled them with gas. With most polymers, such as plastics, gas that is trapped inside will seep out through the material, causing the bubble to deflate quickly.

However, the researchers found that bubbles made of 2DPA-1 did not collapse — in fact, bubbles that they made in 2021 are still inflated. “I was quite surprised initially,” Ritt says. “The behavior of the bubbles didn’t follow what you’d expect for a typical, permeable polymer. This required us to rethink how to properly study and understand molecular transport across this new material.”  

“We set up a series of careful experiments to first prove that the material is molecularly impermeable to nitrogen,” Strano says. “It could be considered tedious work. We had to make micro-bubbles of the polymer and fill them with a pure gas like nitrogen, and then wait. We had to repeatedly check over an exceedingly long period of time that they weren’t collapsed, in order to report the record impermeability value.”

Traditional polymers allow gases through because they consist of a tangle of spaghetti-like molecules that are loosely joined together. This leaves tiny gaps between the strands. Gas molecules can seep through these gaps, which is why polymers always have at least some degree of gas permeability.

However, the new 2D polymer is essentially impermeable because of the way that the layers of disks stick to each other.

“The fact that they can pack flat means there’s no volume between the two-dimensional disks, and that’s unusual. With other polymers, there’s still space between the one-dimensional chains, so most polymer films allow at least a little bit of gas to get through,” Strano says.

George Schatz, a professor of chemistry and chemical and biological engineering at Northwestern University, described the results as “remarkable.”

“Normally polymers are reasonably permeable to gases, but the polyaramids reported in this paper are orders of magnitude less permeable to most gases under conditions with industrial relevance,” says Schatz, who was not involved in the study.

A protective coating

In addition to nitrogen, the researchers also exposed the polymer to helium, argon, oxygen, methane, and sulfur hexafluoride. They found that 2DPA-1’s permeability to those gases was at least 1/10,000 that of any other existing polymer. That makes it nearly as impermeable as graphene, which is completely impermeable to gases because of its defect-free crystalline structure.

Scientists have been working on developing graphene coatings as a barrier to prevent corrosion in solar cells and other devices. However, scaling up the creation of graphene films is difficult, in large part because they can’t be simply painted onto surfaces.

“We can only make crystal graphene in very small patches,” Strano says. “A little patch of graphene is molecularly impermeable, but it doesn’t scale. People have tried to paint it on, but graphene does not stick to itself but slides when sheared. Graphene sheets moving past each other are considered almost frictionless.”

On the other hand, the 2DPA-1 polymer sticks easily because of the strong hydrogen bonds between the layered disks. In this paper, the researchers showed that a layer just 60 nanometers thick could extend the lifetime of a perovskite crystal by weeks. Perovskites are materials that hold promise as cheap and lightweight solar cells, but they tend to break down much faster than the silicon solar panels that are now widely used.

A 60-nanometer coating extended the perovskite’s lifetime to about three weeks, but a thicker coating would offer longer protection, the researchers say. The films could also be applied to a variety of other structures.

“Using an impermeable coating such as this one, you could protect infrastructure such as bridges, buildings, rail lines — basically anything outside exposed to the elements. Automotive vehicles, aircraft and ocean vessels could also benefit. Anything that needs to be sheltered from corrosion. The shelf life of food and medications can also be extended using such materials,” Strano says.

The other application demonstrated in this paper is a nanoscale resonator — essentially a tiny drum that vibrates at a particular frequency. Larger resonators, with sizes around 1 millimeter or less, are found in cell phones, where they allow the phone to pick up the frequency bands it uses to transmit and receive signals.

“In this paper, we made the first polymer 2D resonator, which you can do with our material because it’s impermeable and quite strong, like graphene,” Strano says. “Right now, the resonators in your phone and other communications devices are large, but there’s an effort to shrink them using nanotechnology. To make them less than a micron in size would be revolutionary. Cell phones and other devices could be smaller and reduce the power expenditures needed for signal processing.”

Resonators can also be used as sensors to detect very tiny molecules, including gas molecules. 

The research was funded, in part, by the Center for Enhanced Nanofluidic Transport-Phase 2, an Energy Frontier Research Center funded by the U.S. Department of Energy Office of Science, as well as the National Science Foundation.

This research was carried out, in part, using MIT.nano’s facilities.

On Hacking Back

Schneier on Security - Wed, 11/12/2025 - 7:01am

Former DoJ attorney John Carlin writes about hackback, which he defines thus: “A hack back is a type of cyber response that incorporates a counterattack designed to proactively engage with, disable, or collect evidence about an attacker. Although hack backs can take on various forms, they are—­by definition­—not passive defensive measures.”

His conclusion:

As the law currently stands, specific forms of purely defense measures are authorized so long as they affect only the victim’s system or data.

At the other end of the spectrum, offensive measures that involve accessing or otherwise causing damage or loss to the hacker’s systems are likely prohibited, absent government oversight or authorization. And even then parties should proceed with caution in light of the heightened risks of misattribution, collateral damage, and retaliation...

Meet the Republicans who killed solar subsidies — after using them

ClimateWire News - Wed, 11/12/2025 - 6:39am
POLITICO’s E&E News examined satellite imagery of more than 100 homes owned by Republican lawmakers to see if they have solar panels. Seven had rooftop arrays.

Lots of studies show warming affected Hurricane Melissa. Is that confusing?

ClimateWire News - Wed, 11/12/2025 - 6:38am
Scientists say "many lines of evidence" convey the dangers of extreme weather. But too much information risks muddling the public's perception about the effects of climate change, some researchers say.

Protesters and UN security clash at climate summit in Brazil

ClimateWire News - Wed, 11/12/2025 - 6:37am
The demonstrators waved yellow flags protesting oil drilling in the Amazon.

‘We’re at peak influence’: Gavin Newsom struts at UN climate summit

ClimateWire News - Wed, 11/12/2025 - 6:36am
If the world wants an American climate leader, the California governor is happy to play the part, even if his country isn’t quite ready to follow.

Camp Mystic asked FEMA to change flood maps years before tragedy

ClimateWire News - Wed, 11/12/2025 - 6:36am
The owners of the central Texas girls camp are being accused in two lawsuits of trying to save money on insurance.

IEA: China’s control of critical minerals threatens energy transition

ClimateWire News - Wed, 11/12/2025 - 6:35am
The International Energy Agency warns that the world will exceed the 1.5-degree warming threshold in all scenarios.

Report warns about EU using climate credits to meet emission goals

ClimateWire News - Wed, 11/12/2025 - 6:33am
The climate will suffer under proposal to let nations avoid some emissions cuts by instead funding climate projects elsewhere, experts say.

Açaí berry dishes surprise visitors to Brazil climate summit

ClimateWire News - Wed, 11/12/2025 - 6:32am
This traditional preparation has been a tough sell for visitors accustomed to the frozen and sweetened açaí cream sold in other countries.

UN shipping regulator advocates for emissions fee at COP30

ClimateWire News - Wed, 11/12/2025 - 6:32am
The move comes despite the United States and Saudi Arabia blocking new rules last month.

Governments are flying blind on climate costs, study says

ClimateWire News - Wed, 11/12/2025 - 6:31am
The study found that nine in 10 countries don’t know their climate spending, while seven in 10 lack adequate medium- and long-term strategies to deal with climate impacts.

Melissa shows how climate change is outstripping defenses

ClimateWire News - Wed, 11/12/2025 - 6:24am
The hurricane's Caribbean rampage spotlights a contentious issue of how much industrialized nations should pay to help developing countries adapt to climate change.

Teaching large language models how to absorb new knowledge

MIT Latest News - Wed, 11/12/2025 - 12:00am

In an MIT classroom, a professor lectures while students diligently write down notes they will reread later to study and internalize key information ahead of an exam.

Humans know how to learn new information, but large language models can’t do this in the same way. Once a fully trained LLM has been deployed, its “brain” is static and can’t permanently adapt itself to new knowledge.

This means that if a user tells an LLM something important today, it won’t remember that information the next time this person starts a new conversation with the chatbot.

Now, a new approach developed by MIT researchers enables LLMs to update themselves in a way that permanently internalizes new information. Just like a student, the LLM generates its own study sheets from a user’s input, which it uses to memorize the information by updating its inner workings.

The model generates multiple self-edits to learn from one input, then applies each one to see which improves its performance the most. This trial-and-error process teaches the model the best way to train itself.

The researchers found this approach improved the accuracy of LLMs at question-answering and pattern-recognition tasks, and it enabled a small model to outperform much larger LLMs.

While there are still limitations that must be overcome, the technique could someday help artificial intelligence agents consistently adapt to new tasks and achieve changing goals in evolving environments.   

“Just like humans, complex AI systems can’t remain static for their entire lifetimes. These LLMs are not deployed in static environments. They are constantly facing new inputs from users. We want to make a model that is a bit more human-like — one that can keep improving itself,” says Jyothish Pari, an MIT graduate student and co-lead author of a paper on this technique.

He is joined on the paper by co-lead author Adam Zweiger, an MIT undergraduate; graduate students Han Guo and Ekin Akyürek; and senior authors Yoon Kim, an associate professor in the Department of Electrical Engineering and Computer Science (EECS) and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL), and Pulkit Agrawal, an associate professor in EECS and member of CSAIL. The research will be presented at the Conference on Neural Information Processing Systems.

Teaching the model to learn

LLMs are neural network models that have billions of parameters, called weights, that contain the model’s knowledge and process inputs to make predictions. During training, the model adapts these weights to learn new information contained in its training data.

But once it is deployed, the weights are static and can’t be permanently updated anymore.

However, LLMs are very good at a process called in-context learning, in which a trained model learns a new task by seeing a few examples. These examples guide the model’s responses, but the knowledge disappears before the next conversation.

The MIT researchers wanted to leverage a model’s powerful in-context learning capabilities to teach it how to permanently update its weights when it encounters new knowledge.

The framework they developed, called SEAL for “self-adapting LLMs,” enables an LLM to generate new synthetic data based on an input, and then determine the best way to adapt itself and learn from that synthetic data. Each piece of synthetic data is a self-edit the model can apply.

In the case of language, the LLM creates synthetic data by rewriting the information, and its implications, in an input passage. This is similar to how students make study sheets by rewriting and summarizing original lecture content.

The LLM does this multiple times, then quizzes itself on each self-edit to see which led to the biggest boost in performance on a downstream task like question answering. It uses a trial-and-error method known as reinforcement learning, where it receives a reward for the greatest performance boost.

Then the model memorizes the best study sheet by updating its weights to internalize the information in that self-edit.

“Our hope is that the model will learn to make the best kind of study sheet — one that is the right length and has the proper diversity of information — such that updating the model based on it leads to a better model,” Zweiger explains.

Choosing the best method

Their framework also allows the model to choose the way it wants to learn the information. For instance, the model can select the synthetic data it wants to use, the rate at which it learns, and how many iterations it wants to train on.

In this case, not only does the model generate its own training data, but it also configures the optimization that applies that self-edit to its weights.

“As humans, we know how we learn best. We want to grant that same ability to large language models. By providing the model with the ability to control how it digests this information, it can figure out the best way to parse all the data that are coming in,” Pari says.

SEAL outperformed several baseline methods across a range of tasks, including learning a new skill from a few examples and incorporating knowledge from a text passage. On question answering, SEAL improved model accuracy by nearly 15 percent and on some skill-learning tasks, it boosted the success rate by more than 50 percent.

But one limitation of this approach is a problem called catastrophic forgetting: As the model repeatedly adapts to new information, its performance on earlier tasks slowly declines.

The researchers plan to mitigate catastrophic forgetting in future work. They also want to apply this technique in a multi-agent setting where several LLMs train each other.

“One of the key barriers to LLMs that can do meaningful scientific research is their inability to update themselves based on their interactions with new information. Though fully deployed self-adapting models are still far off, we hope systems able to learn this way could eventually overcome this and help advance science,” Zweiger says.

This work is supported, in part, by the U.S. Army Research Office, the U.S. Air Force AI Accelerator, the Stevens Fund for MIT UROP, and the MIT-IBM Watson AI Lab. 

Artificial light reduces ecosystem carbon sinks

Nature Climate Change - Wed, 11/12/2025 - 12:00am

Nature Climate Change, Published online: 12 November 2025; doi:10.1038/s41558-025-02499-4

As artificial light encroaches upon cities and countryside, natural darkness recedes and circadian rhythms shift in regions worldwide. Now, a study reveals that bright nights are negatively impacting the carbon sinks of ecosystems.

Widespread influence of artificial light at night on ecosystem metabolism

Nature Climate Change - Wed, 11/12/2025 - 12:00am

Nature Climate Change, Published online: 12 November 2025; doi:10.1038/s41558-025-02481-0

The authors combine light intensity data with eddy covariance observations from 86 sites to show that artificial light at night increases ecosystem respiration and alters carbon exchange, with impacts shaped by diel cycles and seasonal dynamics.

Prompt Injection in AI Browsers

Schneier on Security - Tue, 11/11/2025 - 7:08am

This is why AIs are not ready to be personal assistants:

A new attack called ‘CometJacking’ exploits URL parameters to pass to Perplexity’s Comet AI browser hidden instructions that allow access to sensitive data from connected services, like email and calendar.

In a realistic scenario, no credentials or user interaction are required and a threat actor can leverage the attack by simply exposing a maliciously crafted URL to targeted users.

[…]

CometJacking is a prompt-injection attack where the query string processed by the Comet AI browser contains malicious instructions added using the ‘collection’ parameter of the URL...

Retreat or recast? Democrats debate future of climate politics.

ClimateWire News - Tue, 11/11/2025 - 6:21am
Democratic election wins last week reignited arguments on how — or if — candidates should discuss climate change on the campaign trail.

Colorado seeks to extend life of major coal plant

ClimateWire News - Tue, 11/11/2025 - 6:20am
The move comes amid speculation that DOE is preparing to issue emergency orders directing some retiring coal plants to stay open.

Pages