Feed aggregator
Symposium examines the neural circuits that keep us alive and well
Taking an audience of hundreds on a tour around the body, seven speakers at The Picower Institute for Learning and Memory’s symposium “Circuits of Survival and Homeostasis” on Oct. 21 shared new research into some of the nervous system’s most evolutionarily ancient functions.
Introducing the symposium that she arranged with a picture of a man at a campfire on a frigid day, Sara Prescott, assistant professor in the Picower Institute and MIT’s departments of Biology and Brain and Cognitive Sciences, pointed out that the brain and the body cooperate constantly just to keep us going, and that when the systems they maintain fail, the consequence is disease.
“[This man] is tightly regulating his blood pressure, glucose levels, his energy expenditure, inflammation and breathing rate, and he’s doing this in the face of a fluctuating external environment,” Prescott said. “Behind each of these processes there are networks of neurons that are working quietly in the background to maintain internal stability. And this is, of course, the brain’s oldest job.”
Indeed, although the discoveries they shared about the underlying neuroscience were new, the speakers each described experiences that are as timeless as they are familiar: the beating of the heart, the transition from hunger to satiety, and the healing of cuts on our skin.
Feeling warm and full
Li Ye, a scientist at Scripps Research, picked right up on the example of coping with the cold. Mammals need to maintain a consistent internal body temperature, so they increase metabolism in the cold and then, as energy supplies dwindle, seek out more food. His lab’s 2023 study identified the circuit, centered in the xiphoid nucleus of the brain’s thalamus, that regulates this behavior by sensing prolonged cold exposure and energy consumption. Ye also described other feeding mechanisms his lab is studying, including the circuitry that regulates how long an animal will feed at a time. For instance, if you’re worried about predators finding you, it’s a bad idea to linger over a leisurely lunch.
Physiologist Zachary Knight of the University of California at San Francisco also studies feeding and drinking behaviors. In particular, his lab asks how the brain knows when it’s time to stop. The conventional wisdom is that all that’s needed is a feeling of fullness coming from the gut, but his research shows there is more to the story. A 2023 study from his lab found a population of neurons in the caudal nucleus of the solitary tract in the brain stem that receive signals about ingestion and taste from the mouth, and that send that “stop eating” signal. They also found a separate neural population in the brain stem that indeed receives fullness signals from the gut, and teaches the brain over time how much food leads to satisfaction. Both neuron types work together to regulate the pace of eating. His lab has continued to study how brain stem circuits regulate feeding using these multiple inputs.
Energy balance depends not only on how many calories come in, but also on how much energy is spent. When food is truly scarce, many animals will engage in a state of radically lowered metabolism called torpor (like hibernation), where body temperature plummets. The brain circuits that exert control over body temperature are another area of active research. In his talk, Harvard University neurologist Clifford Saper described years of research in which his lab found neurons in the median preoptic nucleus that dictate this metabolic state. Recently, his lab demonstrated that the same neurons that regulate torpor also regulate fever during sickness. When the neurons are active, body temperature drops. When they are inhibited, fever ensues. Thus, the same neurons act as a two-way switch for body temperature in response to different threatening conditions.
Sickness, injury, and stress
As the idea of fever suggests, the body also has evolved circuits (that scientists are only now dissecting) to deal with sickness and injury.
Washington University neuroscientist Qin Liu described her research into the circuits governing coughing and sneezing, which, on one hand, can clear the upper airways of pathogens and obstructions but, on the other hand, can spread those pathogens to others in the community. She described her lab’s 2024 study in which her team pinpointed a population of neurons in the nasal passages that mediate sneezing and a different population of sensory neurons in the trachea that produce coughing. Identifying the specific cells and their unique characteristics makes them potentially viable drug targets.
While Liu tackled sickness, Harvard stem cell biologist Ya-Chieh Hsu discussed how neurons can reshape the body’s tissues, specifically hair and skin, during stress and injury. While it is common lore that stress can make your hair gray and fall out, Hsu’s lab has uncovered the physiological mechanisms that make it so. In 2020, her team showed that bursts of noradrenaline from hyperactivation of sympathetic nerves kill the melanocyte stem cells that give hair its color. She described newer research indicating a similar mechanism may also make hair fall out by killing off cells at the base of hair follicles, releasing cellular debris and triggering autoimmunity. Her lab has also looked at how the nervous system influences skin healing after injury. For instance, while our skin may appear to heal after a cut because it closes up, many skin cell types don’t actually rebound (unless you’re still an embryo). By comparing embryos and post-birth mice, Hsu’s lab has traced the neural mechanisms that prevent fuller healing, identifying roles for both the nervous system and cells called fibroblasts.
Continuing on the theme of stress, Caltech biologist Yuki Oka discussed a broad-scale project in his lab to develop a molecular and cellular atlas of the sympathetic nervous system, which innervates much of the body and famously produces its “fight or flight” responses. In work partly published last year, their journey touched on cells and circuits involved in functions ranging from salivation to secreting bile. Oka and co-authors made the case for the need to study the system more in a review paper earlier this year.
A new model to study human biology
In their search for the best ways to understand the circuits that govern survival and homeostasis, researchers often use rodents because they are genetically tractable, easy to house, and reproduce quickly, but Stanford University biochemist Mark Krasnow has worked to develop a new model with many of those same traits but a closer genetic relationship to humans: the mouse lemur. In his talk, he described that work (which includes extensive field research in Madagascar) and focused on insights the mouse lemurs have helped him gain into heart arrhythmias. After studying the genes and health of hundreds of mouse lemurs, his lab identified a family with “sick sinus syndrome,” an arrhythmia also seen in humans. In a preprint study, his lab describes the specific molecular pathways at fault in disrupting the heart’s natural pacemaking.
By sharing some of the latest research into how the brain and body work to stay healthy, the symposium’s speakers highlighted the most current thinking about the nervous system’s most primal purposes.
Quantum modeling for breakthroughs in materials science and sustainable energy
Ernest Opoku knew he wanted to become a scientist when he was a little boy. But his school in Dadease, a small town in Ghana, offered no elective science courses — so Opoku created one for himself.
Even though they had neither a dedicated science classroom nor a lab, Opoku persuaded his principal to bring in someone to teach him and five friends he had convinced to join him. With just a chalkboard and some imagination, they learned about chemical interactions through the formulas and diagrams they drew together.
“I grew up in a town where it was difficult to find a scientist,” he says.
Today, Opoku has become one himself, recently earning a PhD in quantum chemistry from Auburn University. This year, he joins MIT as part of the School of Science Dean’s Postdoctoral Fellowship program. Working with the Van Voorhis group in the Department of Chemistry, Opoku aims to advance computational methods for studying how electrons behave — fundamental research that underlies applications ranging from materials science to drug discovery.
“As a boy who wanted to satisfy my own curiosities at a young age, in addition to the fact that my parents had minimal formal education,” Opoku says, “I knew that the only way I would be able to accomplish my goal was to work hard.”
In pursuit of knowledge
When Opoku was 8 years old, he began independently learning English at school. He would come back with homework, but his parents were unable to help him, as neither of them could read or write in English. Frustrated, his mother asked an older student to help tutor her son.
Every day, the boys would meet at 6 o’clock. With no electricity at either of their homes, they practiced new vocabulary and pronunciations together by a kerosene lamp.
As he entered junior high school, Opoku’s fascination with nature grew.
“I realized that chemistry was the central science that really offered the insight that I wanted to really understand Creation from the smallest level,” he says.
He studied diligently and was able to get into one of Ghana’s top high schools — but his parents couldn’t afford the tuition. He therefore enrolled in Dadease Agric Senior High School in his hometown. By growing tomatoes and maize, he saved up enough money to support his education.
In 2012, he got into Kwame Nkrumah University of Science and Technology (KNUST), one of the top-ranked universities in Ghana and West Africa. There, he was introduced to computational chemistry. Unlike many other branches of science, the field required only a laptop and an internet connection to study chemical reactions.
“Anything that comes to mind, anytime I can grab my computer and I’ll start exploring my curiosity. I don’t have to wait to go to the laboratory in order to interrogate nature,” he says.
Opoku worked from early morning to late night. None of it felt like work, though, thanks to his supervisor, the late quantum chemist Richard Tia, who was an associate professor of chemistry at KNUST.
“Every single day was a fun day,” he recalls of his time working with Tia. “I was being asked to do the things that I myself wanted to know, to satisfy my own curiosity, and by doing that I’ll be given a degree.”
In 2020, Opoku’s curiosity brought him even further, this time overseas to Auburn University in Alabama for his PhD. Under the guidance of his advisor, Professor J. V. Ortiz, Opoku contributed to the development of new computational methods to simulate how electrons bind to or detach from molecules, a process known as electron propagation.
What is new about Opoku’s approach is that it does not rely on any adjustable or empirical parameters. Unlike some earlier computational methods that require tuning to match experimental results, his technique uses advanced mathematical formulations to account for electron interactions directly from first principles. This makes the method more accurate — closely matching results from lab experiments — while using less computational power.
By streamlining the calculations and eliminating guesswork, Opoku’s work marks a major step toward faster, more trustworthy quantum simulations across a wide range of molecules, including those never studied before — laying the groundwork for breakthroughs in many areas such as materials science and sustainable energy.
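To give a flavor of the kind of calculation involved: the sketch below uses the open-source PySCF package to compute the crudest standard estimate of an ionization energy, via Koopmans’ theorem. Electron propagator methods of the sort Opoku develops systematically improve on this frozen-orbital baseline; this example illustrates the starting point, not his method.

```python
# A minimal baseline, assuming PySCF is installed (pip install pyscf):
# estimate the ionization energy of water via Koopmans' theorem.
# Propagator methods improve on this frozen-orbital estimate by
# including orbital relaxation and electron correlation.
from pyscf import gto, scf

# Water molecule, coordinates in angstroms
mol = gto.M(atom="O 0 0 0; H 0 0 0.96; H 0.93 0 -0.25", basis="cc-pvdz")

mf = scf.RHF(mol)   # restricted Hartree-Fock mean-field reference
mf.kernel()         # solve for orbitals and total energy

homo = mol.nelectron // 2 - 1        # index of the highest occupied orbital
ip_koopmans = -mf.mo_energy[homo]    # Koopmans: IP ~ -E(HOMO), in hartrees

print(f"Koopmans ionization energy: {ip_koopmans * 27.2114:.2f} eV")
```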
For his postdoctoral research at MIT, Opoku aims to advance electron propagator methods to address larger and more complex molecules and materials by integrating quantum computing, machine learning, and bootstrap embedding — a technique that simplifies quantum chemistry calculations by dividing large molecules into smaller, overlapping fragments. He is collaborating with Troy Van Voorhis, the Haslam and Dewey Professor of Chemistry, whose expertise in these areas can help make Opoku’s advanced simulations more computationally efficient and scalable.
“His approach is different from any of the ways that we've pursued in the group in the past,” Van Voorhis says.
Passing along the opportunity to learn
Opoku thanks previous mentors who helped him overcome the “intellectual overhead required to make contributions to the field,” and believes Van Voorhis will offer the same kind of support.
In 2021, Opoku joined the National Organization for the Professional Advancement of Black Chemists and Chemical Engineers (NOBCChE) to gain mentorship, networking, and career development opportunities within a supportive community. He later led the Auburn University chapter as president, helping coordinate K-12 outreach to inspire the next generation of scientists, engineers, and innovators.
“Opoku’s mentorship goes above and beyond what would be typical at his career stage,” says Van Voorhis. “One reason is his ability to communicate science to people, and not just the concepts of science, but also the process of science."
Back home, Opoku founded the Nesvard Institute of Molecular Sciences to support African students to develop not only skills for graduate school and professional careers, but also a sense of confidence and cultural identity. Through the nonprofit, he has mentored 29 students so far, passing along the opportunity for them to follow their curiosity and help others do the same.
“There are many areas of science and engineering to which Africans have made significant contributions, but these contributions are often not recognized, celebrated, or documented,” Opoku says.
He adds: “We have a duty to change the narrative.”
The Patent Office Is About To Make Bad Patents Untouchable
The U.S. Patent and Trademark Office (USPTO) has proposed new rules that would effectively end the public’s ability to challenge improperly granted patents at their source—the Patent Office itself. If these rules take effect, they will hand patent trolls exactly what they’ve been chasing for years: a way to keep bad patents alive and out of reach. People targeted with troll lawsuits will be left with almost no realistic or affordable way to defend themselves.
We need EFF supporters to file public comments opposing these rules right away. The deadline for public comments is December 2. The USPTO is moving quickly, and staying silent will only help those who profit from abusive patents.
Tell USPTO: The public has a right to challenge bad patents
We’re asking supporters who care about a fair patent system to file comments using the federal government’s public comment system. Your comments don’t need to be long, or use legal or technical vocabulary. The important thing is that everyday users and creators of technology have the chance to speak up, and be counted.
Below is a short, simple comment you can copy and paste. Your comment will carry more weight if you add a personal sentence or two of your own. Please note that comments should be submitted under your real name and will become part of the public record.
Sample comment:
I oppose the USPTO’s proposed rule changes for inter partes review (IPR), Docket No. PTO-P-2025-0025. The IPR process must remain open and fair. Patent challenges should be decided on their merits, not shut out because of legal activity elsewhere. These rules would make it nearly impossible for the public to challenge bad patents, and that will harm innovation and everyday technology users.
Why This Rule Change Matters
Inter partes review (IPR) isn’t perfect. It hasn’t eliminated patent trolling, and it’s not available in every case. But it is one of the few practical ways for ordinary developers, small companies, nonprofits, and creators to challenge a bad patent without spending millions of dollars in federal court. That’s why patent trolls hate it—and why the USPTO’s new rules are so dangerous.
IPR isn’t easy or cheap, but compared to years of litigation, it’s a lifeline. When the system works, it removes bogus patents from the table for everyone, not just the target of a single lawsuit.
IPR petitions are decided by the Patent Trial and Appeal Board (PTAB), a panel of specialized administrative judges inside the USPTO. Congress designed IPR to provide a fresh, expert look at whether a patent should have been granted in the first place—especially when strong prior art surfaces. Unlike full federal trials, PTAB review is faster, more technical, and actually accessible to small companies, developers, and public-interest groups.
Here are three real examples of how IPR protected the public:
- The “Podcasting Patent” (Personal Audio)
Personal Audio claimed it had “invented” podcasting and demanded royalties from audio creators using its so-called podcasting patent. EFF crowdsourced prior art, filed an IPR, and ultimately knocked out the patent—benefiting the entire podcasting world.
Under the new rules, this kind of public-interest challenge could easily be blocked on procedural grounds like timing, before the PTAB even examines the patent.
- SportBrain’s “upload your fitness data” patent
SportBrain sued more than 80 companies over a patent that claimed to cover basic gathering of user data and sending it over a network. A panel of PTAB judges canceled every claim.
Under the new rules, this patent could have survived long enough to force dozens more companies to pay up.
- Shipping & Transit: a troll that sued hundreds of businesses
For more than a decade, Shipping & Transit sued companies over extremely broad “delivery notifications” patents. After repeated losses at PTAB and in court (including fee awards), the company finally collapsed.
Under the new rules, a troll like this could keep its patents alive and continue carpet-bombing small businesses with lawsuits.
IPR hasn’t ended patent trolling. But when a troll waves a bogus patent at hundreds or thousands of people, IPR is one of the only tools that can actually fix the underlying problem: the patent itself. It dismantles abusive patent monopolies that never should have existed, saving entire industries from predatory litigation. That’s exactly why patent trolls and their allies have fought so hard to shut it down. They’ve failed to dismantle IPR in court or in Congress—and now they’re counting on the USPTO’s own leadership to do it for them.
What the USPTO Plans To Do
First, they want you to give up your defenses in court. Under this proposal, a defendant can’t file an IPR unless they promise never to challenge the patent’s validity in court.
For someone actually being sued or threatened with patent infringement, that’s simply not a realistic promise to make. The choice would be: use IPR and lose your defenses—or keep your defenses and lose IPR.
Second, the rules allow patents to become “unchallengeable” after one prior fight. That’s right: if a patent survives any earlier validity fight, anywhere, these rules would block everyone else from bringing an IPR, even years later and even if new prior art surfaces. One early decision—even one that was poorly argued or didn’t have all the evidence—would close the door on the entire public.
Third, the rules will block IPR entirely if a district court case is projected to move faster than PTAB.
So if a troll sues you with one of the outrageous patents we’ve seen over the years, like patents on watching an ad, showing picture menus, or clocking in to work, the USPTO won’t even look at it. It’ll be back to the bad old days, where you have exactly one way to beat the troll (who chose the court to sue in)—spend millions on experts and lawyers, then take your chances in front of a federal jury.
The USPTO claims this is fine because defendants can still challenge patents in district court. That’s misleading. A real district-court validity fight costs millions of dollars and takes years. For most people and small companies, that’s no opportunity at all.
IPR was created by Congress in the America Invents Act of 2011, after extensive debate. It was meant to give the public a fast, affordable way to correct the Patent Office’s own mistakes. Only Congress—not agency rulemaking—can rewrite that system.
The USPTO shouldn’t be allowed to quietly undermine IPR with procedural traps that block legitimate challenges.
Bad patents still slip through every year. The Patent Office issues hundreds of thousands of new patents annually. IPR is one of the only tools the public has to push back.
These new rules rely on the absurd presumption that it’s the defendants—the people and companies threatened by questionable patents—who are abusing the system with multiple IPR petitions, and that they should be limited to one bite at the apple.
That’s utterly upside-down. It’s patent trolls like Shipping & Transit and Personal Audio that have sued, or threatened, entire communities of developers and small businesses.
When people have evidence that an overbroad patent was improperly granted, that evidence should be heard. That’s what Congress intended. These rules twist that intent beyond recognition.
In 2023, more than a thousand EFF supporters spoke out and stopped an earlier version of this proposal—your comments made the difference then, and they can again.
Our principle is simple: the public has a right to challenge bad patents. These rules would take that right away. That’s why it’s vital to speak up now.
Sample comment:
I oppose the USPTO’s proposed rule changes for inter partes review (IPR), Docket No. PTO-P-2025-0025. The IPR process must remain open and fair. Patent challenges should be decided on their merits, not shut out because of legal activity elsewhere. These rules would make it nearly impossible for the public to challenge bad patents, and that will harm innovation and everyday technology users.
Strengthen Colorado’s AI Act
Powerful institutions are using automated decision-making against us. Landlords use it to decide who gets a home. Insurance companies use it to decide who gets health care. ICE uses it to decide who must submit to location tracking by electronic monitoring. Bosses use it to decide who gets fired, and to predict who is organizing a union or planning to quit. Bosses even use AI to assess the body language and voice tone of job candidates. And these systems often discriminate based on gender, race, and other protected statuses.
Fortunately, workers, patients, and renters are resisting.
In 2024, Colorado enacted a limited but crucial step forward against automated abuse: the AI Act (S.B. 24-205). We commend the labor, digital rights, and other advocates who have worked to enact and protect it. Colorado recently delayed the Act’s effective date to June 30, 2026.
EFF looks forward to enforcement of the Colorado AI Act, opposes weakening or further delaying it, and supports strengthening it.
What the Colorado AI Act Does
The Colorado AI Act is a good step in the right direction. It regulates “high-risk AI systems,” meaning machine-based technologies that are a “substantial factor” in deciding whether a person will have access to education, employment, loans, government services, healthcare, housing, insurance, or legal services. An AI system is a “substantial factor” in those decisions if it assisted in the decision and could alter its outcome. The Act’s protections include transparency, due process, and impact assessments.
Transparency. The Act requires “developers” (who create high-risk AI systems) and “deployers” (who use them) to provide information to the general public and affected individuals about these systems, including their purposes, the types and sources of inputs, and efforts to mitigate known harms. Developers and deployers also must notify people if they are being subjected to these systems. Transparency protections like these can be a baseline in a comprehensive regulatory program that facilitates enforcement of other protections.
Due process. The Act empowers people subjected to high-risk AI systems to exercise some self-help to seek a fair decision about them. A deployer must notify them of the reasons for the decision, the degree the system contributed to the decision, and the types and sources of inputs. The deployer also must provide them an opportunity to correct any incorrect inputs. And the deployer must provide them an opportunity to appeal, including with human review.
Impact assessments. The Act requires a developer, before providing a high-risk AI system to a deployer, to disclose known or reasonably foreseeable discriminatory harms by the system, and the intended use of the AI. In turn, the Act requires a deployer to complete an annual impact assessment for each of its high-risk AI systems, including a review of whether they cause algorithmic discrimination. A deployer also must implement a risk management program that is proportionate to the nature and scope of the AI, the sensitivity of the data it processes, and more. Deployers must regularly review their risk management programs to identify and mitigate any known or reasonably foreseeable risks of algorithmic discrimination. Impact assessment regulations like these can helpfully place a proactive duty on developers and deployers to find and solve problems, as opposed to doing nothing until an individual subjected to a high-risk system comes forward to exercise their rights.
How the Colorado AI Act Should Be Strengthened
The Act is a solid foundation. Still, EFF urges Colorado to strengthen it, especially in its enforcement mechanisms.
Private right of action. The Colorado AI Act grants exclusive enforcement to the state attorney general. But no regulatory agency will ever have enough resources to investigate and enforce all violations of a law, and many government agencies get “captured” by the industries they are supposed to regulate. So Colorado should amend its Act to empower ordinary people to sue the companies that violate their legal protections from high-risk AI systems. This is often called a “private right of action,” and it is the best way to ensure robust enforcement. For example, the people of Illinois and Texas have similar rights to biometric privacy on paper, but in practice the people of Illinois get far more benefit from this right because they can sue violators.
Civil rights enforcement. One of the biggest problems with high-risk AI systems is that they routinely have an unfair disparate impact on vulnerable groups, so one of the biggest solutions will be vigorous enforcement of civil rights laws. Unfortunately, the Colorado AI Act contains a confusing “rebuttable presumption” – that is, an evidentiary thumb on the scale – that may impede such enforcement. Specifically, if a deployer or developer complies with the Act, then they get a rebuttable presumption that they complied with the Act’s requirement of “reasonable care” to protect people from algorithmic discrimination. In practice, this may make it harder for a person subjected to a high-risk AI system to prove their discrimination claim. Other civil rights laws generally do not have this kind of provision. Colorado should amend its Act to remove it.
Next Steps
Colorado is off to an important start. Now it should strengthen its AI Act, and should not weaken or further delay it. Other states must enact their own laws. All manner of automated decision-making systems are unfairly depriving people of jobs, health care, and more.
EFF has long been fighting against such practices. We believe technology should improve everyone’s lives, not subject them to abuse and discrimination. We hope you will join us.
Legal Restrictions on Vulnerability Disclosure
Kendra Albert gave an excellent talk at USENIX Security this year, pointing out that the legal agreements surrounding vulnerability disclosure muzzle researchers while allowing companies to not fix the vulnerabilities—exactly the opposite of what the responsible disclosure movement of the early 2000s was supposed to prevent. This is the talk.
Thirty years ago, a debate raged over whether vulnerability disclosure was good for computer security. On one side, full disclosure advocates argued that software bugs weren’t getting fixed and wouldn’t get fixed unless the companies that made insecure software were called out publicly. On the other side, companies argued that full disclosure led to exploitation of unpatched vulnerabilities, especially ones that were hard to fix. After blog posts, public debates, and countless mailing list flame wars, a compromise solution emerged: coordinated vulnerability disclosure, where vulnerabilities are disclosed after a period of confidentiality during which vendors can attempt to fix things. Although full disclosure fell out of fashion, disclosure won and security through obscurity lost. We’ve lived happily ever after since...
The strange and totally real plot to blot out the sun and halt global warming
Trump admin backs cruise industry bid to sink Hawaii climate tax
Massachusetts Dems scrap plans to neuter 2030 climate target
82 countries at COP30 urge doubling down on push to abandon fossil fuels
9th Circuit halts California climate disclosure law
California environmental justice adviser quits to protest state agency
BLM delays enforcement of methane waste rule
Artificial intelligence sparks debate at COP30 climate talks in Brazil
Rich nations must hit net zero and pay up on climate, India says
UK overtaken by Denmark as world’s most ambitious country on climate
Methane pollution rises, but UN foresees near-future reductions
New AI agent learns to use CAD to create 3D objects from sketches
Computer-aided design (CAD) is the go-to method for designing most of today’s physical products. Engineers use CAD to turn 2D sketches into 3D models that they can then test and refine before sending a final version to a production line. But the software is notoriously complicated to learn, with thousands of commands to choose from. Becoming truly proficient takes a huge amount of time and practice.
MIT engineers are looking to ease CAD’s learning curve with an AI model that uses CAD software much like a human would. Given a 2D sketch of an object, the model quickly creates a 3D version by clicking buttons and file options, similar to how an engineer would use the software.
The MIT team has created a new dataset called VideoCAD, which contains more than 41,000 examples of how 3D models are built in CAD software. By learning from these videos, which illustrate how different shapes and objects are constructed step-by-step, the new AI system can now operate CAD software much like a human user.
With VideoCAD, the team is building toward an AI-enabled “CAD co-pilot.” They envision that such a tool could not only create 3D versions of a design, but also work with a human user to suggest next steps, or automatically carry out build sequences that would otherwise be tedious and time-consuming to manually click through.
“There’s an opportunity for AI to increase engineers’ productivity as well as make CAD more accessible to more people,” says Ghadi Nehme, a graduate student in MIT’s Department of Mechanical Engineering.
“This is significant because it lowers the barrier to entry for design, helping people without years of CAD training to create 3D models more easily and tap into their creativity,” adds Faez Ahmed, associate professor of mechanical engineering at MIT.
Ahmed and Nehme, along with graduate student Brandon Man and postdoc Ferdous Alam, will present their work at the Conference on Neural Information Processing Systems (NeurIPS) in December.
Click by click
The team’s new work expands on recent developments in AI-driven user interface (UI) agents — tools that are trained to use software programs to carry out tasks, such as automatically gathering information online and organizing it in an Excel spreadsheet. Ahmed’s group wondered whether such UI agents could be designed to use CAD, which encompasses many more features and functions, and involves far more complicated tasks than the average UI agent can handle.
In their new work, the team aimed to design an AI-driven UI agent that takes the reins of the CAD program to create a 3D version of a 2D sketch, click by click. To do so, the team first looked to an existing dataset of objects that were designed in CAD by humans. Each object in the dataset includes the sequence of high-level design commands, such as “sketch line,” “circle,” and “extrude,” that were used to build the final object.
However, the team realized that these high-level commands alone were not enough to train an AI agent to actually use CAD software. A real agent must also understand the details behind each action. For instance: Which sketch region should it select? When should it zoom in? And what part of a sketch should it extrude? To bridge this gap, the researchers developed a system to translate high-level commands into user-interface interactions.
“For example, let’s say we drew a sketch by drawing a line from point 1 to point 2,” Nehme says. “We translated those high-level actions to user-interface actions, meaning we say, go from this pixel location, click, and then move to a second pixel location, and click, while having the ‘line’ operation selected.”
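To make the idea concrete, here is a minimal sketch of such a translation layer. All class names, coordinates, and the pixel-mapping function are hypothetical, invented for illustration; the team’s actual system is more elaborate.

```python
# Hypothetical sketch: expanding one high-level CAD command
# ("draw a line from p1 to p2") into the click-level UI actions
# a human user would perform. Names and coordinates are invented.
from dataclasses import dataclass

@dataclass
class UIAction:
    kind: str    # "select_tool", "move", or "click"
    x: int = 0   # screen-pixel coordinates
    y: int = 0
    tool: str = ""

def line_command_to_ui_actions(p1, p2, to_pixels):
    """Translate a high-level 'line' command into UI interactions."""
    x1, y1 = to_pixels(p1)
    x2, y2 = to_pixels(p2)
    return [
        UIAction("select_tool", tool="line"),  # pick the Line tool
        UIAction("move", x1, y1),
        UIAction("click", x1, y1),             # first endpoint
        UIAction("move", x2, y2),
        UIAction("click", x2, y2),             # second endpoint
    ]

# Example: map sketch coordinates onto a hypothetical screen canvas
to_px = lambda p: (400 + int(100 * p[0]), 300 + int(100 * p[1]))
for action in line_command_to_ui_actions((1, 1), (5, 3), to_px):
    print(action)
```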
In the end, the team generated over 41,000 videos of human-designed CAD objects, each of which is described in real-time in terms of the specific clicks, mouse-drags, and other keyboard actions that the human originally carried out. They then fed all this data into a model they developed to learn connections between UI actions and CAD object generation.
Once trained on this dataset, which they dub VideoCAD, the new AI model could take a 2D sketch as input and directly control the CAD software, clicking, dragging, and selecting tools to construct the full 3D shape. The objects ranged in complexity from simple brackets to more complicated house designs. The team is training the model on more complex shapes and envisions that both the model and the dataset could one day enable CAD co-pilots for designers in a wide range of fields.
“VideoCAD is a valuable first step toward AI assistants that help onboard new users and automate the repetitive modeling work that follows familiar patterns,” says Mehdi Ataei, who was not involved in the study, and is a senior research scientist at Autodesk Research, which develops new design software tools. “This is an early foundation, and I would be excited to see successors that span multiple CAD systems, richer operations like assemblies and constraints, and more realistic, messy human workflows.”
A new take on carbon capture
If there was one thing Cameron Halliday SM ’19, MBA ’22, PhD ’22 was exceptional at during the early days of his PhD at MIT, it was producing the same graph over and over again. Unfortunately for Halliday, the graph measured various materials’ ability to absorb CO2 at high temperatures over time — and it always pointed down and to the right. That meant the materials lost their ability to capture the molecules responsible for warming our climate.
At least Halliday wasn’t alone: For many years, researchers have tried and mostly failed to find materials that could reliably absorb CO2 at the super-high temperatures of industrial furnaces, kilns, and boilers. Halliday’s goal was to find something that lasted a little longer.
Then in 2019, he put a type of molten salt called lithium-sodium ortho-borate through his tests. The salts absorbed more than 95 percent of the CO2. And for the first time, the graph showed almost no degradation over 50 cycles. The same was true after 100 cycles. Then 1,000.
“I honestly don’t know if we ever expected to completely solve the problem,” Halliday says. “We just expected to improve the system. It took another two months to figure out why it worked.”
The researchers discovered the salts behave like a liquid at high temperatures, which avoids the brittle cracking responsible for the degradation of many solid materials.
“I remember walking home over the Mass Ave bridge at 5 a.m. with all the morning runners going by me,” Halliday recalls. “That was the moment when I realized what this meant. Since then, it’s been about proving it works at larger scales. We’ve just been building the next scaled-up version, proving it still works, building a bigger version, proving that out, until we reach the ultimate goal of deploying this everywhere.”
Today, Halliday is the co-founder and CEO of Mantel, a company building systems to capture carbon dioxide at large industrial sites of all types. Although a lot of people think the carbon capture industry is a dead end, Halliday doesn’t give up so easily, and he’s got a growing corpus of performance data to keep him encouraged.
Mantel’s system can be added on to the machines of power stations and factories making cement, steel, paper and pulp, oil and gas, and more, reducing their carbon emissions by around 95 percent. Instead of being released into the atmosphere, the emitted CO2 is channeled into Mantel’s system, where the company’s salts are sprayed out from something that looks like a shower head. The CO2 diffuses through the molten salts in a reaction that can be reversed through further temperature increases, so the salts boil off pure CO2 that can be transported for use or stored underground.
A key difference from other carbon capture methods that have struggled to be profitable is that Mantel uses the heat from its process to generate steam for customers by combining it with water in another part of its system. Mantel says delivering steam, which is used to drive many common industrial processes, lets its system work with just 3 percent of the net energy that state-of-the-art carbon capture systems require.
“We’re still consuming energy, but we get most of it back as steam, whereas the incumbent technology only consumes steam,” says Halliday, who co-founded Mantel with Sean Robertson PhD ’22 and Danielle Rapson. “That steam is a useful revenue stream, so we can turn carbon capture from a waste management process into a value creation process for our customer’s core business — whether that’s a power station using steam to make electricity, or oil and gas refineries. It completely changes the economics of carbon capture.”
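As a rough illustration of why recovering steam changes the economics, consider a toy energy balance. The numbers below are invented for illustration only; the article supports just the qualitative picture (most input heat returned as usable steam) and the roughly 3 percent net-energy figure.

```python
# Toy energy balance (all figures hypothetical, for illustration only).
heat_in = 100.0          # heat supplied to release CO2 from the molten salts
steam_recovered = 97.0   # heat returned to the customer as usable steam

net_recovering = heat_in - steam_recovered  # energy the process actually consumes
net_incumbent = heat_in                     # incumbent approach consumes steam outright

print(f"Net energy, steam-recovering process: {net_recovering:.0f} units")
print(f"Net energy, incumbent process:        {net_incumbent:.0f} units")
print(f"Ratio: {net_recovering / net_incumbent:.0%}")
```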
From science to startup
Halliday’s first exposure to MIT came in 2016 when he cold emailed Alan Hatton, MIT’s Ralph Landau Professor of Chemical Engineering Practice, asking if he could come to his lab for the summer and work on research into carbon capture.
“He invited me, but he didn’t put me on that project,” Halliday recalls. “At the end of the summer he said, ‘You should consider coming back and doing a PhD.’”
Halliday enrolled in a joint PhD-MBA program the following year.
“I really wanted to work on something that had an impact,” Halliday says. “The dual PhD-MBA program has some deep technical academic elements to it, but you also work with a company for two months, so you use a lot of what you learn in the real world.”
Early on, Halliday worked on three different research projects in Hatton’s lab, all of which eventually turned into companies. The one that he stuck with explored ways to make carbon capture more energy-efficient by working at the high temperatures common at emissions-heavy industrial sites.
Halliday ran into the same problems as past researchers with materials degrading at such extreme conditions.
“It was the big limiter for the technology,” Halliday recalls.
Then Halliday ran his successful experiment with molten borate salts in 2019. The MBA portion of his program began soon after, and Halliday decided to use that time to commercialize the technology. Part of that occurred in Course 15.366 (Climate and Energy Ventures), where Halliday met his co-founders. As it happens, alumni of the class have started more than 150 companies over the years.
“MIT tries to pull these great ideas out of academia and get them into the world so they can be valued and used,” Halliday says. “For the Climate and Energy Ventures class, outside speakers showed us every stage of company-building. The technology roadmap for our system is shoebox-sized, shipping container, one-bedroom house, and then the size of a building. It was really valuable to see other companies and say, ‘That’s what we could look like in three years, or six years.’”
From startup to scale up
When Mantel was officially founded in 2022, the founders had their shoebox-sized system. After raising early funding, the team built its shipping-container-sized system at The Engine, an MIT-affiliated startup incubator. That system has been operational for almost two years.
Last year, Mantel announced a partnership with Kruger Inc. to build the next version of its system at a factory in Quebec, which will be operational next year. The plant will run in a two-year test phase before scaling across Kruger’s other plants if successful.
“The Quebec project is proving the capture efficiency and proving the step-change improvement in energy use of our system,” Halliday says. “It’s a derisking of the technology that will unlock a lot more opportunities.”
Halliday says Mantel is in conversations with close to 100 industrial partners around the world, including the owners of refineries, data centers, cement and steel plants, and oil and gas companies. Because it’s a standalone addition, Halliday says Mantel’s system doesn’t have to change much to be used in different industries.
Mantel doesn’t handle CO2 conversion or sequestration, but Halliday says capture makes up the bulk of the costs in the CO2 value chain. It also generates high-quality CO2 that can be transported in pipelines and used in industries including the food and beverage industry — like the CO2 that makes your soda bubbly.
“This is the solution our customers are dreaming of,” Halliday says. “It means they don’t have to shut down their billion-dollar asset and reimagine their business to address an issue that they all appreciate is existential. There are questions about the timeline, but most industries recognize this is a problem they’ll have to grapple with eventually. This is a pragmatic solution that’s not trying to reshape the world as we dream of it. It’s looking at the problem at hand today and fixing it.”
An improved way to detach cells from culture surfaces
Anchorage-dependent cells are cells that require physical attachment to a solid surface, such as a culture dish, to survive, grow, and reproduce. In the biomedical industry, and others, having the ability to culture these cells is crucial, but current techniques used to separate cells from surfaces can induce stresses and reduce cell viability.
“In the pharmaceutical and biotechnology industries, cells are typically detached from culture surfaces using enzymes — a process fraught with challenges,” says Kripa Varanasi, MIT professor of mechanical engineering. “Enzymatic treatments can damage delicate cell membranes and surface proteins, particularly in primary cells, and often require multiple steps that make the workflow slow and labor-intensive.”
Existing approaches also rely on large volumes of consumables, generating an estimated 300 million liters of cell culture waste each year. Moreover, because these enzymes are often animal-derived, they can introduce compatibility concerns for cells intended for human therapies, limiting scalability and high-throughput applications in modern biomanufacturing.
Varanasi is corresponding author on a new paper in the journal ACS Nano, in which researchers from the MIT Department of Mechanical Engineering and the Cancer Program at the Broad Institute of Harvard and MIT present a novel enzyme-free strategy for detaching cells from culture surfaces. The method works by harnessing alternating electrochemical current on a conductive biocompatible polymer nanocomposite surface.
“By applying low-frequency alternating voltage, our platform disrupts adhesion within minutes while maintaining over 90 percent cell viability — overcoming the limitations of enzymatic and mechanical methods that can damage cells or generate excess waste,” says Varanasi.
Beyond simplifying routine cell culture, the approach could transform large-scale biomanufacturing by enabling automated and contamination-conscious workflows for cell therapies, tissue engineering, and regenerative medicine. The platform also provides a pathway for safely expanding and harvesting sensitive immune cells for applications such as CAR-T therapies.
“Because our electrically tunable interface can dynamically shape the ionic microenvironment around cells, it also offers powerful opportunities to control ion channels, study signaling pathways, and integrate with bioelectronic systems for high-throughput drug screening, regenerative medicine, and personalized therapies,” Varanasi explains.
“Our work shows how electrochemistry can be harnessed not just for scientific discovery, but also for scalable, real-world applications,” says Wang Hee (Wren) Lee, MIT postdoc and co-first author. “By translating electrochemical control into biomanufacturing, we’re laying the foundation for technologies that can accelerate automation, reduce waste, and ultimately enable new industries built on sustainable and precise processing.”
Bert Vandereydt, co-first author and mechanical engineering researcher at MIT, emphasizes the potential for industrial scalability. “Because this method can be applied uniformly across large areas, it’s ideal for high-throughput and large-scale applications like cell therapy manufacturing. We envision it enabling fully automated, closed-loop cell culture systems in the near future.”
Yuen-Yi (Moony) Tseng, principal investigator at the Broad Institute and collaborator on the project, underscores the biomedical significance. “This platform opens new doors for culturing and harvesting delicate primary or cancer cells. It could streamline workflows across research and clinical biomanufacturing, reducing variability and preserving cell functionality for therapeutic use.”
Industrial applications of adherent cells include uses in the biomedical, pharmaceutical, and cosmetic sectors. For this study, the team tested the new method on human cancer cells, including osteosarcoma and ovarian cancer cells. Once the team identified an optimal frequency, detachment efficiency for both cell types increased from 1 percent to 95 percent, with cell viability exceeding 90 percent.
The paper, “Alternating Electrochemical Redox-Cycling on Nanocomposite Biointerface for High-Efficiency Enzyme-Free Cell Detachment,” is available from the American Chemical Society journal ACS Nano.
Lawsuit Challenges San Jose’s Warrantless ALPR Mass Surveillance
Contact: Josh Richman, EFF, jrichman@eff.org; Carmen King, ACLU of Northern California, cking@aclunc.org
SAN JOSE, Calif. – San Jose and its police department routinely violate the California Constitution by conducting warrantless searches of the stored records of millions of drivers’ private habits, movements, and associations, the Electronic Frontier Foundation (EFF) and American Civil Liberties Union of Northern California (ACLU-NC) argue in a lawsuit filed Tuesday.
The lawsuit, filed in Santa Clara County Superior Court on behalf of the Services, Immigrant Rights and Education Network (SIREN) and the Council on American-Islamic Relations – California (CAIR-CA), challenges San Jose police officers’ practice of searching for location information collected by automated license plate readers (ALPRs) without first getting a warrant.
ALPRs are an invasive mass-surveillance technology: high-speed, computer-controlled cameras that automatically capture images of the license plates of every driver that passes by, without any suspicion that the driver has broken the law.
“A person who regularly drives through an area subject to ALPR surveillance can have their location information captured multiple times per day,” the lawsuit says. “This information can reveal travel patterns and provide an intimate window into a person’s life as they travel from home to work, drop off their children at school, or park at a house of worship, a doctor’s office, or a protest. It could also reveal whether a person crossed state lines to seek health care in California.”
The San Jose Police Department has blanketed the city’s roadways with nearly 500 ALPRs – indiscriminately collecting millions of records per month about people’s movements – and keeps this data for an entire year. Then the department permits its officers and other law enforcement officials from across the state to search this ALPR database to instantly reconstruct people’s locations over time – without first getting a warrant. This is an unchecked police power to scrutinize the movements of San Jose’s residents and visitors as they lawfully travel to work, to the doctor, or to a protest.
San Jose’s ALPR surveillance program is especially pervasive: Few California law enforcement agencies retain ALPR data for an entire year, and few have deployed nearly 500 cameras.
The lawsuit, which names the city, its Police Chief Paul Joseph, and its Mayor Matt Mahan as defendants, asks the court to stop the city and its police from searching ALPR data without first obtaining a warrant. Location information reflecting people’s physical movements, even in public spaces, is protected under the Fourth Amendment according to U.S. Supreme Court case law. The California Constitution is even more protective of location privacy, at both Article I, Section 13 (the ban on unreasonable searches) and Article I, Section 1 (the guarantee of privacy).

“The SJPD’s widespread collection and searches of ALPR information pose serious threats to communities’ privacy and freedom of movement.”
“This is not just about data or technology — it’s about power, accountability, and our right to move freely without being watched,” said CAIR-San Francisco Bay Area Executive Director Zahra Billoo. “For Muslim communities, and for anyone who has experienced profiling, the knowledge that police can track your every move without cause is chilling. San Jose’s mass surveillance program violates the California Constitution and undermines the privacy rights of every person who drives through the city. We’re going to court to make sure those protections still mean something."
"The right to privacy is one of the strongest protections that our immigrant communities have in the face of these acts of violence and terrorism from the federal government," said SIREN Executive Director Huy Tran. "This case does not raise the question of whether these cameras should be used. What we need to guard against is a surveillance state, particularly when we have seen other cities or counties violate laws that prohibit collaborating with ICE. We can protect the privacy rights of our residents with one simple rule: Access to the data should only happen once approved under a judicial warrant.”
For the complaint: https://www.eff.org/files/2025/11/18/siren_v._san_jose_-_filed_complaint.pdf
For more about ALPRs: https://sls.eff.org/technologies/automated-license-plate-readers-alprs