Feed aggregator
MIT affiliates named 2025 Schmidt Sciences AI2050 Fellows
Two current MIT affiliates and seven additional alumni are among those named to the 2025 cohort of AI2050 Fellows.
Zongyi Li, a postdoc in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), and Tess Smidt ’12, an associate professor of electrical engineering and computer science (EECS), were both named AI2050 Early Career Fellows.
Seven additional MIT alumni were also honored. AI2050 Early Career Fellows include Brian Hie SM ’19, PhD ’21; Natasha Mary Jaques PhD ’20; Martin Anton Schrimpf PhD ’22; Lindsey Raymond SM ’19, PhD ’24, who will join the MIT faculty in EECS, the Department of Economics, and the MIT Schwarzman College of Computing in 2026; and Ellen Dee Zhong PhD ’22. AI2050 Senior Fellows include Surya Ganguli ’98, MNG ’98; and Luke Zettlemoyer SM ’03, PhD ’09.
AI2050 Fellows are announced annually by Schmidt Sciences, a nonprofit organization founded in 2024 by Eric and Wendy Schmidt that works to accelerate scientific knowledge and breakthroughs with the most promising, advanced tools to support a thriving planet. The organization prioritizes research in areas poised for impact including AI and advanced computing, astrophysics, biosciences, climate, and space — as well as supporting researchers in a variety of disciplines through its science systems program.
Li is a postdoc in CSAIL working with Kaiming He, an associate professor in EECS. Li's research focuses on developing neural operator methods to accelerate scientific computing. He received his PhD in computing and mathematical sciences from Caltech, where he was advised by Anima Anandkumar and Andrew Stuart. He holds undergraduate degrees in computer science and mathematics from Washington University in St. Louis.
Li's work has been supported by a Kortschak Scholarship, PIMCO Fellowship, Amazon AI4Science Fellowship, Nvidia Fellowship, and MIT-Novo Nordisk AI Fellowship. He has also completed three summer internships at Nvidia. Li will join the NYU Courant Institute of Mathematical Sciences as an assistant professor of mathematics and data science in fall 2026.
Smidt, an associate professor of EECS, is the principal investigator of the Atomic Architects group at the Research Laboratory of Electronics (RLE). Working at the intersection of physics, geometry, and machine learning, she designs algorithms that aid in the understanding of physical systems under physical and geometric constraints, with applications to the design of both new materials and new molecules. She has a particular focus on symmetries present in 3D physical systems, such as rotation, translation, and reflection.
Smidt earned her BS in physics from MIT in 2012 and her PhD in physics from the University of California at Berkeley in 2018. Prior to joining the MIT EECS faculty in 2021, she was the 2018 Alvarez Postdoctoral Fellow in Computing Sciences at Lawrence Berkeley National Laboratory, and a software engineering intern on the Google Accelerated Sciences team, where she developed Euclidean symmetry equivariant neural networks that naturally handle 3D geometry and geometric tensor data. Besides the AI2050 fellowship, she has received an Air Force Office of Scientific Research Young Investigator Program award, the EECS Outstanding Educator Award, and a Transformative Research Fund award.
Conceived and co-chaired by Eric Schmidt and James Manyika, AI2050 is a philanthropic initiative aimed at helping to solve hard problems in AI. Within their research, each fellow will contend with the central motivating question of AI2050: “It’s 2050. AI has turned out to be hugely beneficial to society. What happened? What are the most important problems we solved and the opportunities and possibilities we realized to ensure this outcome?”
Prognostic tool could help clinicians identify high-risk cancer patients
Aggressive T-cell lymphoma is a rare and devastating form of blood cancer with a very low five-year survival rate. Patients often relapse after receiving initial therapy, making it especially challenging for clinicians to keep this destructive disease in check.
In a new study, researchers from MIT, in collaboration with researchers involved in the PETAL consortium at Massachusetts General Hospital, identified a practical and powerful prognostic marker that could help clinicians identify high-risk patients early, and potentially tailor treatment strategies to improve survival.
The team found that, when patients relapse within 12 months of initial therapy, their chances of survival decline dramatically. For these patients, targeted therapies might improve their chances for survival, compared to traditional chemotherapy, the researchers say.
According to their analysis, which used data collected from thousands of patients all over the world, the finding holds true across patient subgroups, regardless of the patient’s initial therapy or their score in a commonly used prognostic index.
A causal inference framework called Synthetic Survival Controls (SSC), developed as part of MIT graduate student Jessy (Xinyi) Han’s thesis, was central to this analysis. This versatile framework helps to answer “when-if” questions — to estimate how the timing of outcomes would shift under different interventions — while overcoming the limitations of inconsistent and biased data.
The identification of novel risk groups could guide clinicians as they select therapies to improve overall survival. For instance, a clinician might prioritize early-phase clinical trials over canonical therapies for this cohort of patients. The results could inform inclusion criteria for some clinical trials, according to the researchers.
The causal inference framework for survival analysis can also be applied more broadly. For instance, the MIT researchers have used it in areas like criminal justice to study how structural factors drive recidivism.
“Often we don’t only care about what will happen, but when the target event will happen. These when-if problems have remained under the radar for a long time, but they are common in a lot of domains. We’ve shown here that, to answer these questions with data, you need domain experts to provide insight and good causal inference methods to close the loop,” says Devavrat Shah, the Andrew and Erna Viterbi Professor in Electrical Engineering and Computer Science at MIT, a member of the Institute for Data, Systems, and Society (IDSS) and of the Laboratory for Information and Decision Systems (LIDS), and co-author of the study.
Shah is joined on the paper by many co-authors, including Han, who is co-advised by Shah and Fotini Christia, the Ford International Professor of the Social Sciences in the Department of Political Science and director of IDSS; and corresponding authors Mark N. Sorial, a clinical pharmacist and investigator at the Dana-Farber Cancer Institute, and Salvia Jain, a clinician-investigator at the Massachusetts General Hospital Cancer Center, founder of the global PETAL consortium, and an assistant professor of medicine at Harvard Medical School. The research appears today in the journal Blood.
Estimating outcomes
The MIT researchers have spent the past few years developing the Synthetic Survival Controls causal inference framework, which enables them to answer complex “when-if” questions even when the available data are statistically challenging to work with. Their approach estimates when a target event would happen if a certain intervention were used.
In this paper, the researchers investigated an aggressive cancer called nodal mature T-cell lymphoma, and whether a certain prognostic marker led to worse outcomes. The marker, TTR12, signifies that a patient relapsed within 12 months of initial therapy.
They applied their framework to estimate when a patient would die if they have TTR12, and how their survival trajectory would differ if they did not have this prognostic marker.
“No experiment can answer that question because we are asking about two outcomes for the same patient. We have to borrow information from other patients to estimate, counterfactually, what a patient’s survival outcome would have been,” Han explains.
Answering these types of questions is notoriously difficult due to biases in the available observational data. Plus, patient data gathered from an international cohort bring their own unique challenges. For instance, a clinical dataset often contains some historical data about a patient, but at some point the patient may stop treatment, leading to incomplete records.
In addition, if a patient receives a specific treatment, that might impact how long they will survive, adding to the complexity of the data. Plus, for each patient, the researchers only observe one outcome on how long the patient survives — limiting the amount of data available.
Such issues lead to suboptimal performance of many classical methods.
The Synthetic Survival Control framework can overcome these challenges. Even though the researchers don’t know all the details for each patient, their method stitches information from multiple other patients together in such a way that it can estimate survival outcomes.
Importantly, their method is robust to specific modeling assumptions, making it broadly applicable in practice.
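To make the idea concrete, here is a minimal sketch in Python of how a synthetic-control-style counterfactual survival curve could be assembled by borrowing information from “donor” patients. This is only an illustration of the general approach described above, not the authors' actual Synthetic Survival Controls implementation; the function name, the non-negative least-squares weighting, the covariates, and the toy data are all hypothetical.

```python
# Illustrative sketch only: a synthetic-control-style counterfactual survival
# estimate. This is NOT the published Synthetic Survival Controls (SSC) method,
# just the general idea of stitching together information from donor patients.
import numpy as np
from scipy.optimize import nnls

def synthetic_survival_curve(target_covariates, donor_covariates, donor_survival_curves):
    """Estimate a counterfactual survival curve for one patient.

    target_covariates:      (p,) covariate vector of the patient of interest
    donor_covariates:       (n_donors, p) covariates of comparison patients
    donor_survival_curves:  (n_donors, T) survival probabilities per donor over T time points
    """
    # Fit non-negative weights so a mix of donors resembles the target patient.
    weights, _ = nnls(donor_covariates.T, target_covariates)
    if weights.sum() > 0:
        weights = weights / weights.sum()  # normalize to a convex combination
    # The counterfactual curve is the weighted average of donor survival curves.
    return weights @ donor_survival_curves

# Toy usage with made-up numbers
rng = np.random.default_rng(0)
donors_X = rng.normal(size=(50, 5))                       # 50 donors, 5 covariates
donors_S = np.sort(rng.uniform(size=(50, 24)))[:, ::-1]   # decreasing "survival" over 24 months
target_X = rng.normal(size=5)
counterfactual = synthetic_survival_curve(target_X, donors_X, donors_S)
print(counterfactual[:6])   # estimated survival probabilities, months 1-6
```

A real analysis would also have to handle the censoring and treatment-related biases described above, which is part of what the published framework is designed to address.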
The power of prognostication
The researchers’ analysis revealed that TTR12 patients consistently had a much greater risk of death within five years of initial therapy than patients without the marker. This was true no matter which initial therapy the patients received or which subgroup they fell into.
“This tells us that early relapse is a very important prognosis. This acts as a signal to clinicians so they can think about tailored therapies for these patients that can overcome resistance in second-line or third-line,” Han says.
Moving forward, the researchers are looking to expand this analysis to include high-dimensional genomics data. This information could be used to develop bespoke treatments that can avoid relapse within 12 months.
“Based on our work, there is already a risk calculation tool being used by clinicians. With more information, we can make it a richer tool that can provide more prognostic details,” Shah says.
They are also applying the framework to other domains.
For instance, in a paper recently presented at the Conference on Neural Information Processing Systems, the researchers identified a dramatic difference in the recidivism rate among prisoners of different races that begins about seven months after release. A possible explanation is the different access to long-term support by different racial groups. They are also investigating individuals’ decisions to leave insurance companies, while exploring other domains where the framework could generate actionable insights.
“Partnering with domain experts is crucial because we want to demonstrate that our methods are of value in the real world. We hope these tools can be used to positively impact individuals across society,” Han says.
This work was funded, in part, by Daiichi Sankyo, Secure Bio, Inc., Acrotech Biopharma, Kyowa Kirin, the Center for Lymphoma Research, the National Cancer Institute, Massachusetts General Hospital, the Reid Fund for Lymphoma Research, the American Cancer Society, and the Scarlet Foundation.
NIH Director Jay Bhattacharya visits MIT
National Institutes of Health (NIH) Director Jay Bhattacharya visited MIT on Friday, engaging in a wide-ranging discussion about policy issues and research aims at an event also featuring Rep. Jake Auchincloss MBA ’16 of Massachusetts.
The forum consisted of a dialogue between Auchincloss and Bhattacharya, followed by a question-and-answer session with an audience that included researchers from the greater Boston area. The event was part of a daylong series of stops Bhattacharya and Auchincloss made around Boston, a world-leading hub of biomedical research.
“I was joking with Dr. Bhattacharya that when the NIH director comes to Massachusetts, he gets treated like a celebrity, because we do science, and we take science very seriously here,” Auchincloss quipped at the outset.
Bhattacharya said he was “delighted” to be visiting, and credited the thousands of scientists who participate in peer review for the NIH. “The reason why the NIH succeeds is the willingness and engagement of the scientific community,” he said.
In response to an audience question, Bhattacharya also outlined his overall vision of the NIH’s portfolio of projects.
“You both need investments in ideas that are not tested, just to see if something works. You don’t know in advance,” he said. “And at the same time, you need an ecosystem that tests those ideas rigorously and winnows those ideas to the ones that actually work, that are replicable. A successful portfolio will have both elements in it.”
MIT President Sally A. Kornbluth gave opening remarks at the event, welcoming Bhattacharya and Auchincloss to campus and noting that the Institute’s earliest known NIH grant on record dates to 1948. In recent decades, biomedical research at MIT has boomed, expanding across a wide range of frontier fields.
Indeed, Kornbluth noted, MIT’s federally funded research projects during U.S. President Trump’s first term include a method for making anesthesia safer, especially for children and the elderly; a new type of expanding heart valve for children that eliminates the need for repeated surgeries; and a noninvasive Alzheimer’s treatment using sound and light stimulation, which is currently in clinical trials.
“Today, researchers across our campus pursue pioneering science on behalf of the American people, with profoundly important results,” Kornbluth said.
“The hospitals, universities, startups, investors, and companies represented here today have made greater Boston an extraordinary magnet for talent,” Kornbluth added. “Both as a force for progress in human health and an engine of economic growth, this community of talent is a precious national asset. We look forward to working with Dr. Bhattacharya to build on its strengths.”
The discussion occurred amid uncertainty about future science funding levels and pending changes in the NIH’s grant-review processes. The NIH has announced a “unified strategy” for reviewing grant applications that may lead to more direct involvement in grant decisions by directors of the 27 NIH institutes and centers, along with other changes that could shift the types of awards being made.
Auchincloss asked multiple questions about the ongoing NIH changes; about 10 audience members from a variety of institutions also posed a range of questions to Bhattacharya, often about the new grant-review process and the aims of the changes.
“The unified funding strategy is a way to allow institute directors to look at the full range of scoring, including scores on innovation, and pick projects that look like they are promising,” Bhattacharya said in response to one of Auchincloss’ queries.
One audience member also emphasized concerns about the long-term effects of funding uncertainties on younger scientists in the U.S.
“The future success of the American biomedical enterprise depends on us training the next generation of scientists,” Bhattacharya acknowledged.
Bhattacharya is the 18th director of the NIH, having been confirmed by the U.S. Senate in March. He has served as a faculty member at Stanford University, where he received his BA, MA, MD, and PhD, and is currently a professor emeritus. During his career, Bhattacharya’s work has often examined the economics of health care, though his research has ranged broadly across topics, in over 170 published papers. He has also served as director of the Center on the Demography and Economics of Health and Aging at Stanford University.
Auchincloss is in his third term representing Massachusetts’ 4th Congressional District in the U.S. House, having first been elected in 2020. He is also a major in the Marine Corps Reserve, and received his MBA from the MIT Sloan School of Management.
Ian Waitz, MIT’s vice president for research, concluded the session with a note of thanks to Auchincloss and Bhattacharya for their “visit to the greater Boston ecosystem which has done so much for so many and contributed obviously to the NIH mission that you articulated.” He added: “We have such a marvelous history in this region in making such great gains for health and longevity, and we’re here to do more to partner with you.”
10 (Not So) Hidden Dangers of Age Verification
It’s nearly the end of 2025, and about half of U.S. states, along with the UK, now require you to upload your ID or scan your face to watch “sexual content.” A handful of states and Australia now have various requirements to verify your age before you can create a social media account.
Age-verification laws may sound straightforward to some: protect young people online by making everyone prove their age. But in reality, these mandates force users into one of two flawed systems—mandatory ID checks or biometric scans—and both are deeply discriminatory. These proposals burden everyone’s right to speak and access information online, and structurally exclude the very people who rely on the internet most. In short, although these laws are often passed with the intention of protecting children from harm, the reality is that these laws harm both adults and children.
Here’s who gets hurt, and how:
1. Adults Without IDs Get Locked Out
Document-based verification assumes everyone has the right ID, in the right name, at the right address. About 15 million adult U.S. citizens don’t have a driver’s license, and 2.6 million lack any government-issued photo ID at all. Another 34.5 million adults don't have a driver's license or state ID with their current name and address.
- 18% of Black adults don't have a driver's license at all.
- Black and Hispanic Americans are disproportionately less likely to have current licenses.
- Undocumented immigrants often cannot obtain state IDs or driver's licenses.
- People with disabilities are less likely to have current identification.
- Lower-income Americans face greater barriers to maintaining valid IDs.
Some laws allow platforms to ask for financial documents like credit cards or mortgage records instead. But they still overlook the fact that nearly 35% of U.S. adults also don't own homes, and close to 20% of households don't have credit cards. Immigrants, regardless of legal status, may also be unable to obtain credit cards or other financial documentation.
2. Communities of Color Face Higher Error Rates
Platforms that rely on AI-based age-estimation systems often use a webcam selfie to guess users’ ages. But these algorithms don’t work equally well for everyone. Research has consistently shown that they are less accurate for people with Black, Asian, Indigenous, and Southeast Asian backgrounds; that they often misclassify those adults as being under 18; and that they sometimes take longer to process, creating unequal access to online spaces. This mirrors the well-documented racial bias in facial recognition technologies. The result is that technology’s inherent biases can block people from speaking online or accessing others’ speech.
3. People with Disabilities Face More Barriers
Age-verification mandates most harshly affect people with disabilities. Facial recognition systems routinely fail to recognize faces with physical differences, affecting an estimated 100 million people worldwide who live with facial differences, and “liveness detection” can exclude folks with limited mobility. As these technologies become gatekeepers to online spaces, people with disabilities find themselves increasingly blocked from essential services and platforms with no specified appeals processes that account for disability.
Document-based systems also don't solve this problem—as mentioned earlier, people with disabilities are also less likely to possess current driver's licenses, so document-based age-gating technologies are equally exclusionary.
4. Transgender and Non-Binary People Are Put At Risk
Age-estimation technologies perform worse on transgender individuals and cannot classify non-binary genders at all. For the 43% of transgender Americans who lack identity documents that correctly reflect their name or gender, age verification creates an impossible choice: provide documents with dead names and incorrect gender markers, potentially outing themselves in the process, or lose access to online platforms entirely—a risk that no one should be forced to take just to use social media or access legal content.
5. Anonymity Becomes a Casualty
Age-verification systems are, at their core, surveillance systems. By requiring identity verification to access basic online services, we risk creating an internet where anonymity is a thing of the past. For people who rely on anonymity for safety, this is a serious issue. Domestic abuse survivors need to stay anonymous to hide from abusers who could track them through their online activities. Journalists, activists, and whistleblowers regularly use anonymity to protect sources and organize without facing retaliation or government surveillance. And in countries under authoritarian rule, anonymity is often the only way to access banned resources or share information without being silenced. Age-verification systems that demand government IDs or biometric data would strip away these protections, leaving the most vulnerable exposed.
6. Young People Lose Access to Essential Information
Because state-imposed age-verification rules either block young people from social media or require them to get parental permission before logging on, they can deprive minors of access to important information about their health, sexuality, and gender. Many U.S. states mandate “abstinence only” sexual health education, making the internet a key resource for education and self-discovery. But age-verification laws can end up blocking young people from accessing that critical information. And this isn't just about porn; it’s about sex education, mental health resources, and even important literature. Some states and countries may start going after content they deem “harmful to minors,” which could include anything from books on sexual health to art, history, and even award-winning novels. And let’s be clear: these laws often get used to target anything that challenges certain political or cultural narratives, from diverse educational materials to media that simply includes themes of sexuality or gender diversity. What begins as a “protection” for kids could easily turn into a full-on censorship movement, blocking content that’s actually vital for minors’ development, education, and well-being.
This is also especially harmful to homeschoolers, who rely on the internet for research, online courses, and exams. For many, the internet is central to their education and social lives. The internet is also crucial for homeschoolers' mental health, as many already struggle with isolation. Age-verification laws would restrict access to resources that are essential for their education and well-being.
7. LGBTQ+ Youth Are Denied Vital Lifelines
For many LGBTQ+ young people, especially those with unsupportive or abusive families, the internet can be a lifeline. For young people facing family rejection or violence due to their sexuality or gender identity, social media platforms often provide crucial access to support networks, mental health resources, and communities that affirm their identities. Age verification systems that require parental consent threaten to cut them off from these crucial supports.
When parents must consent to or monitor their children's social media accounts, LGBTQ+ youth who lack family support lose these vital connections. LGBTQ+ youth are also disproportionately likely to be unhoused and lack access to identification or parental consent, further marginalizing them.
8. Youth in Foster Care Systems Are Completely Left Out
Age verification bills that require parental consent fail to account for young people in foster care, particularly those in group homes without legal guardians who can provide consent, or with temporary foster parents who cannot prove guardianship. These systems effectively exclude some of the most vulnerable young people from accessing online platforms and resources they may desperately need.
9. All of Our Personal Data is Put at Risk
An age-verification system also creates acute privacy risks for adults and young people. Requiring users to upload sensitive personal information (like government-issued IDs or biometric data) to verify their age creates serious privacy and security risks. Under these laws, users would not just momentarily display their ID like one does when accessing a liquor store, for example. Instead, they’d submit their ID to third-party companies, raising major concerns over who receives, stores, and controls that data. Once uploaded, this personal information could be exposed, mishandled, or even breached, as we've seen with past data hacks. Age-verification systems are no strangers to being compromised—companies like AU10TIX and platforms like Discord have faced high-profile data breaches, exposing users’ most sensitive information for months or even years.
The more places personal data passes through, the higher the chances of it being misused or stolen. Users are left with little control over their own privacy once they hand over these immutable details, making this approach to age verification a serious risk for identity theft, blackmail, and other privacy violations. Children are already a major target for identity theft, and these mandates perversely increase the risk that they will be harmed.
10. All of Our Free Speech Rights Are Trampled
The internet is today’s public square—the main place where people come together to share ideas, organize, learn, and build community. Even the Supreme Court has recognized that social media platforms are among the most powerful tools ordinary people have to be heard.
Age-verification systems inevitably block some adults from accessing lawful speech and allow some users under 18 to slip through anyway. Because the systems are both over-inclusive (blocking adults) and under-inclusive (failing to block people under 18), they restrict lawful speech in ways that violate the First Amendment.
The Bottom Line
Age-verification mandates create barriers along lines of race, disability, gender identity, sexual orientation, immigration status, and socioeconomic class. While these requirements threaten everyone’s privacy and free-speech rights, they fall heaviest on communities already facing systemic obstacles.
The internet is essential to how people speak, learn, and participate in public life. When access depends on flawed technology or hard-to-obtain documents, we don’t just inconvenience users, we deepen existing inequalities and silence the people who most need these platforms. As outlined, every available method—facial age estimation, document checks, financial records, or parental consent—systematically excludes or harms marginalized people. The real question isn’t whether these systems discriminate, but how extensively.
Substitution Cipher Based on The Voynich Manuscript
Here’s a fun paper: “The Naibbe cipher: a substitution cipher that encrypts Latin and Italian as Voynich Manuscript-like ciphertext”:
Abstract: In this article, I investigate the hypothesis that the Voynich Manuscript (MS 408, Yale University Beinecke Library) is compatible with being a ciphertext by attempting to develop a historically plausible cipher that can replicate the manuscript’s unusual properties. The resulting cipher, a verbose homophonic substitution cipher I call the Naibbe cipher, can be done entirely by hand with 15th-century materials, and when it encrypts a wide range of Latin and Italian plaintexts, the resulting ciphertexts remain fully decipherable and also reliably reproduce many key statistical properties of the Voynich Manuscript at once. My results suggest that the so-called “ciphertext hypothesis” for the Voynich Manuscript remains viable, while also placing constraints on plausible substitution cipher structures...
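For readers unfamiliar with the terminology, the Python snippet below is a toy homophonic substitution cipher: each plaintext letter can be replaced by any of several multi-character glyph groups, which is the general class of cipher the paper builds on. The homophone table, and the encrypt/decrypt helpers, are invented purely for illustration and have nothing to do with the actual Naibbe cipher tables.

```python
# Toy illustration of a (verbose) homophonic substitution cipher -- the general
# class of cipher the paper investigates. This is NOT the Naibbe cipher itself;
# the homophone table below is made up for demonstration.
import random

# Each plaintext letter maps to several multi-character "glyph groups"
# (verbose: one letter becomes more than one ciphertext character).
HOMOPHONES = {
    "a": ["qo", "ol", "dar"],
    "e": ["chy", "ain", "or"],
    "t": ["dy", "okal"],
    # ... a real table would cover the whole alphabet
}

def encrypt(plaintext, table, rng=random):
    out = []
    for ch in plaintext.lower():
        if ch in table:
            out.append(rng.choice(table[ch]))  # pick one homophone at random
        # characters without an entry are silently dropped in this toy version
    return " ".join(out)

def decrypt(ciphertext, table):
    # Build the reverse lookup: each glyph group points back to one letter.
    reverse = {glyph: letter for letter, glyphs in table.items() for glyph in glyphs}
    return "".join(reverse[token] for token in ciphertext.split())

msg = "ate"
ct = encrypt(msg, HOMOPHONES)
print(ct)                       # e.g. "dar okal chy"
print(decrypt(ct, HOMOPHONES))  # "ate"
```

Because each letter has several possible replacements, simple frequency analysis is blunted, while the ciphertext stays fully decipherable with the table in hand, which is the property the abstract highlights.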
Old coal unit at giant Colorado plant evades closure
Mamdani won on affordability. Can he add climate to that pursuit?
DeSantis calls for restricting data centers
Weak hurricane season strengthens property insurers
Tom Steyer’s climate pivot signals new playbook for Dems
Experts disagree with Trump that weaker CAFE rules mean cheaper cars
Greens’ dilemma: Salvage EU climate laws or break with von der Leyen's party
India weighs power plan shift to expand coal capacity through 2047
Texas residents, Camp Mystic staffer plead for help in 911 audio
When companies “go green,” air quality impacts can vary dramatically
Many organizations are taking actions to shrink their carbon footprint, such as purchasing electricity from renewable sources or reducing air travel.
Both actions would cut greenhouse gas emissions, but which offers greater societal benefits?
In a first step toward answering that question, MIT researchers found that even if each activity reduces the same amount of carbon dioxide emissions, the broader air quality impacts can be quite different.
They used a multifaceted modeling approach to quantify the air quality impacts of each activity, using data from three organizations. Their results indicate that air travel causes about three times more damage to air quality than comparable electricity purchases.
Exposure to major air pollutants, including ground-level ozone and fine particulate matter, can lead to cardiovascular and respiratory disease, and even premature death.
In addition, air quality impacts can vary dramatically across different regions. The study shows that air quality effects differ sharply across space because each decarbonization action influences pollution at a different scale. For example, for organizations in the northeast U.S., the air quality impacts of energy use affect the region, but the impacts of air travel are felt globally. This is because associated pollutants are emitted at higher altitudes.
Ultimately, the researchers hope this work highlights how organizations can prioritize climate actions to provide the greatest near-term benefits to people’s health.
“If we are trying to get to net zero emissions, that trajectory could have very different implications for a lot of other things we care about, like air quality and health impacts. Here we’ve shown that, for the same net zero goal, you can have even more societal benefits if you figure out a smart way to structure your reductions,” says Noelle Selin, a professor in the MIT Institute for Data, Systems, and Society (IDSS) and the Department of Earth, Atmospheric and Planetary Sciences (EAPS); director of the Center for Sustainability Science and Strategy; and senior author of the study.
Selin is joined on the paper by lead author Yuang (Albert) Chen, an MIT graduate student; Florian Allroggen, a research scientist in the MIT Department of Aeronautics and Astronautics; Sebastian D. Eastham, an associate professor in the Department of Aeronautics at Imperial College London; Evan Gibney, an MIT graduate student; and William Clark, the Harvey Brooks Research Professor of International Science at Harvard University. The research was published Friday in Environmental Research Letters.
A quantification quandary
Climate scientists often focus on the air quality benefits of national or regional policies because the aggregate impacts are more straightforward to model.
Organizations’ efforts to “go green” are much harder to quantify because they exist within larger societal systems and are impacted by these national policies.
To tackle this challenging problem, the MIT researchers used data from two universities and one company in the greater Boston area. They studied whether organizational actions that cut the same amount of CO2 emissions would deliver an equivalent air quality benefit.
“From a climate standpoint, CO2 has a global impact because it mixes through the atmosphere, no matter where it is emitted. But air quality impacts are driven by co-pollutants that act locally, so where those emissions occur really matters,” Chen says.
For instance, burning fossil fuels leads to emissions of nitrogen oxides and sulfur dioxide along with CO2. These co-pollutants react with chemicals in the atmosphere to form fine particulate matter and ground-level ozone, which is a primary component of smog.
Different fossil fuels cause varying amounts of co-pollutant emissions. In addition, local factors like weather and existing emissions affect the formation of smog and fine particulate matter. The impacts of these pollutants also depend on the local population distribution and overall health.
“You can’t just assume that all CO2-reduction strategies will have equivalent near-term impacts on sustainability. You have to consider all the other emissions that go along with that CO2,” Selin says.
The researchers used a systems-level approach that involved connecting multiple models. They fed the organizational energy consumption and flight data into this systems-level model to examine local and regional air quality impacts.
Their approach incorporated many interconnected elements, such as power plant emissions data, statistical linkages between air quality and mortality outcomes, and aviation emissions associated with specific flight routes. They fed those data into an atmospheric chemistry transport model to calculate air quality and climate impacts for each activity.
The sheer breadth of the system created many challenges.
“We had to do multiple sensitivity analyses to make sure the overall pipeline was working,” Chen says.
Analyzing air quality
In the end, the researchers monetized the air quality impacts to compare them with the climate impacts in a consistent way. Based on prior literature, the monetized climate impact of CO2 emissions is about $170 per ton (expressed in 2015 dollars), representing the financial cost of damages caused by climate change.
Using the same monetization approach, the researchers calculated that the air quality damages associated with electricity purchases add another $88 per ton of CO2, while the damages from air travel add another $265 per ton.
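As a quick sanity check, the quoted per-ton figures can be compared directly; the short calculation below simply reuses the numbers in the article and is not part of the study's actual model.

```python
# Back-of-the-envelope comparison using the per-ton figures quoted in the
# article (2015 dollars); purely illustrative, not the study's methodology.
climate_damage_per_ton = 170    # monetized climate impact of CO2, $/ton
air_quality_electricity = 88    # additional air-quality damage, $/ton CO2
air_quality_aviation = 265      # additional air-quality damage, $/ton CO2

print(air_quality_aviation / air_quality_electricity)        # ~3.0: aviation vs. electricity
print((climate_damage_per_ton + air_quality_aviation)
      / (climate_damage_per_ton + air_quality_electricity))  # ~1.7 once climate costs are included
```

The first ratio is consistent with the article's statement that air travel causes about three times more air quality damage than comparable electricity purchases.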
This highlights how the air quality impacts of a ton of emitted CO2 depend strongly on where and how the emissions are produced.
“A real surprise was how much aviation impacted places that were really far from these organizations. Not only were flights more damaging, but the pattern of damage, in terms of who is harmed by air pollution from that activity, is very different than who is harmed by energy systems,” Selin says.
Most airplane emissions occur at high altitudes, where differences in atmospheric chemistry and transport can amplify their air quality impacts. These emissions are also carried across continents by atmospheric winds, affecting people thousands of miles from their source.
Nations like India and China face outsized air quality impacts from such emissions due to the higher level of existing ground-level emissions, which exacerbates the formation of fine particulate matter and smog.
The researchers also conducted a deeper analysis of short-haul flights. Their results showed that regional flights have a relatively larger impact on local air quality than longer domestic flights.
“If an organization is thinking about how to benefit the neighborhoods in their backyard, then reducing short-haul flights could be a strategy with real benefits,” Selin says.
Even in electricity purchases, the researchers found that location matters.
For instance, the fine particulate matter from power plant emissions attributable to one university falls over a densely populated region, while the emissions attributable to the corporation fall over less populated areas.
Due to these population differences, the university’s emissions resulted in 16 percent more estimated premature deaths than those of the corporation, even though the climate impacts are identical.
“These results show that, if organizations want to achieve net zero emissions while promoting sustainability, which unit of CO2 gets removed first really matters a lot,” Chen says.
In the future, the researchers want to quantify the air quality and climate impacts of train travel, to see whether replacing short-haul flights with train trips could provide benefits.
They also want to explore the air quality impacts of other energy sources in the U.S., such as data centers.
This research was funded, in part, by Biogen, Inc., the Italian Ministry for Environment, Land, and Sea, and the MIT Center for Sustainability Science and Strategy.
Friday Squid Blogging: Vampire Squid Genome
The vampire squid (Vampyroteuthis infernalis) has the largest cephalopod genome ever sequenced: more than 11 billion base pairs. That’s more than twice as large as the biggest squid genomes.
It’s technically not a squid: “The vampire squid is a fascinating twig tenaciously hanging onto the cephalopod family tree. It’s neither a squid nor an octopus (nor a vampire), but rather the last, lone remnant of an ancient lineage whose other members have long since vanished.”
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered...
Paula Hammond named dean of the School of Engineering
Paula Hammond ’84, PhD ’93, an Institute Professor and MIT’s executive vice provost, has been named dean of MIT’s School of Engineering, effective Jan. 16. She will succeed Anantha Chandrakasan, the Vannevar Bush Professor of Electrical Engineering and Computer Science, who was appointed MIT’s provost in July.
Hammond, who was head of the Department of Chemical Engineering from 2015 to 2023, has also served as MIT’s vice provost for faculty. She will be the first woman to hold the role of dean of MIT’s School of Engineering.
“From the rigor and creativity of her scientific work to her outstanding record of service to the Institute, Paula Hammond represents the very best of MIT,” says MIT President Sally Kornbluth. “Wise, thoughtful, down-to-earth, deeply curious, and steeped in MIT’s culture and values, Paula will be a highly effective leader for the School of Engineering. I’m delighted she accepted this new challenge.”
Hammond, who is also a member of MIT’s Koch Institute for Integrative Cancer Research, has earned many accolades for her work developing polymers and nanomaterials that can be used for applications including drug delivery, regenerative medicine, noninvasive imaging, and battery technology.
Chandrakasan announced Hammond’s appointment today in an email to the MIT community, writing, “Ever since enrolling at MIT as an undergraduate, Paula has built a remarkable record of accomplishment in scholarship, teaching, and service. Faculty, staff, and students across the Institute praise her wisdom, selflessness, and kindness, especially when it comes to enabling others’ professional growth and success.”
“Paula is a scholar of extraordinary distinction. It is hard to overstate the value of the broad contributions she has made in her field, which have significantly expanded the frontiers of knowledge,” Chandrakasan told MIT News. “Any one of her many achievements could stand as the cornerstone of an outstanding academic career. In addition, her investment in mentoring the next generation of scholars and building community is unparalleled.”
Chandrakasan also thanked Professor Maria Yang, who has served as the school’s interim dean in recent months. “In a testament to her own longstanding contributions to the School of Engineering, Maria took on the deanship even while maintaining leadership roles with the Ideation Lab, D-Lab, and Morningside Academy for Design. For her excellent service and leadership, Maria deserves our deep appreciation,” he wrote to the community.
Building a sense of community
Throughout her career at MIT, Hammond has helped to create a supportive environment in which faculty and students can do their best work. As vice provost for faculty, a role Hammond assumed in 2023, she developed and oversaw new efforts to improve faculty recruitment and retention, mentoring, and professional development. Earlier this year, she took on additional responsibilities as executive vice provost, providing guidance and oversight for a number of Institute-wide initiatives.
As head of the Department of Chemical Engineering, Hammond worked to strengthen the department’s sense of community and initiated a strategic planning process that led to more collaborative research between faculty members. Under her leadership, the department also launched a major review of its undergraduate curriculum and introduced more flexibility into the requirements for a chemical engineering degree.
Another major priority was ensuring that faculty had the support they needed to pursue new research goals. To help achieve that, she established and raised funds for a series of Faculty Research Innovation Fund grants for mid-career faculty who wanted to explore fresh directions.
“I really enjoyed enabling faculty to explore new areas, finding ways to resource them, making sure that they had the right mentoring early in their career and the ‘wind beneath their wings’ that they needed to get where they wanted to go,” she says. “That, to me, was extremely fulfilling.”
Before taking on her official administrative roles, Hammond served the Institute through her work chairing committees that contributed landmark reports on gender and race at MIT: the Initiative for Faculty Race and Diversity and the Academic and Organizational Relationships Working Group.
In her new role as dean, Hammond plans to begin by consulting with faculty across the School of Engineering to learn more about their needs.
“I like to start with conversations,” she says. “I’m very excited about the idea of visiting each of the departments, finding out what’s on the minds of the faculty, and figuring out how we can meaningfully address their needs and continue to build and grow an excellent engineering program.”
One of her goals is to promote greater cross-disciplinarity in MIT’s curriculum, in part by encouraging and providing resources for faculty to develop more courses that bridge multiple departments.
“There are some barriers that exist between departments, because we all need to teach our core requirements,” she says. “I am very interested in collaborating with departments to think about how we can lower barriers to allow faculty to co-teach, or to perhaps look at different course structures that allow us to teach a core component and then have it branch to a more specialized component.”
She also hopes to guide MIT’s engineering departments in finding ways to incorporate artificial intelligence into their curriculum, and to give students greater opportunity for relevant hands-on experiences in engineering.
“I am particularly excited to build from the strong cross-disciplinary efforts and the key strategic initiatives that Anantha launched during his time as dean,” Hammond says. “I believe we have incredible opportunities to build off these critical areas at the interfaces of science, engineering, the humanities, arts, design, and policy, and to create new emergent fields. MIT should be the leader in providing educational foundations that prepare our students for a highly interdisciplinary and AI-enabled world, and a setting that enables our researchers and scholars to solve the most difficult and urgent problems of the world.”
A pioneer in nanotechnology
Hammond grew up in Detroit, where her father was a PhD biochemist who ran the health laboratories for the city of Detroit. Her mother founded a nursing school at Wayne County Community College, and both parents encouraged her interest in science. As an undergraduate at MIT, she majored in chemical engineering with a focus on polymer chemistry.
After graduating in 1984, Hammond spent two years working as a process engineer at Motorola, then earned a master’s degree in chemical engineering from Georgia Tech. She realized that she wanted to pursue a career in academia, and returned to MIT to earn a PhD in polymer science and technology. After finishing her degree in 1993, she spent a year and a half as a postdoc at Harvard University before joining the MIT faculty in 1995.
She became a full professor in 2006, and in 2021, she was named an Institute Professor, the highest honor bestowed by MIT. In 2010, Hammond joined MIT’s Koch Institute for Integrative Cancer Research, where she leads a lab that is developing novel nanomaterials for a variety of applications, with a primary focus on treatments and diagnostics for ovarian cancer.
Early in her career, Hammond developed a technique for generating functional thin-film materials by stacking layers of charged polymeric materials. This approach can be used to build polymers with highly controlled architectures by alternately exposing a surface to positively and negatively charged particles.
She has used this layer-by-layer assembly technique to build ultrathin batteries, fuel cell electrodes, and drug delivery nanoparticles that can be specifically targeted to cancer cells. These particles can be tailored to carry chemotherapy drugs such as cisplatin, immunotherapy agents, or nucleic acids such as messenger RNA.
In recognition of her pioneering research, Hammond was awarded the 2024 National Medal of Technology and Innovation. She was also the 2023-24 recipient of MIT’s Killian Award, which honors extraordinary professional achievements by an MIT faculty member. Her many other awards include the Benjamin Franklin Medal in Chemistry in 2024, the ACS Award in Polymer Science in 2018, the American Institute of Chemical Engineers Charles M. A. Stine Award in Materials Engineering and Science in 2013, and the Ovarian Cancer Research Program Teal Innovator Award in 2013.
Hammond has also been honored for her dedication to teaching and mentoring. As a reflection of her excellence in those areas, she was awarded the Irwin Sizer Award for Significant Improvements to MIT Education, the Henry Hill Lecturer Award in 2002, and the Junior Bose Faculty Award in 2000. She also co-chaired the recent Ad Hoc Committee on Faculty Advising and Mentoring, and has been selected as a “Committed to Caring” honoree for her work mentoring students and postdocs in her research group.
Hammond has served on the President’s Council of Advisors on Science and Technology, as well as the U.S. Secretary of Energy Scientific Advisory Board, the NIH Center for Scientific Review Advisory Council, and the Board of Directors of the American Institute of Chemical Engineers. Additionally, she is one of a small group of scientists who have been elected to the National Academies of Engineering, Sciences, and Medicine.
MADMEC winners develop spray-on coating to protect power lines from ice
A spray-on coating to keep power lines standing through an ice storm may not be the obvious fix for winter outages — but it’s exactly the kind of innovation that happens when MIT students tackle a sustainability challenge.
“The big threat to the power line network is winter icing that causes huge amounts of downed lines every year,” says Trevor Bormann, a graduate student in MIT’s Department of Materials Science and Engineering (DMSE) and member of MITten, the winning team in the 2025 MADMEC innovation contest. Fixing those outages is hugely carbon-intensive, requiring diesel-powered equipment, replacement materials, and added energy use. And as households switch to electric heat pumps, the stakes of a prolonged outage rise.
To address the challenge, the team developed a specialized polymer coating that repels water and can be sprayed onto aluminum power lines. The coating contains nanofillers — particles hundreds of times smaller than a human hair — that give the surface a texture that makes water bead and drip off.
The effect is known as “superhydrophobicity,” says Shaan Jagani, a graduate student in the Department of Aeronautics and Astronautics. “And what that really means is water does not stay on the surface, and therefore water will not have the opportunity to nucleate down into ice.”
MITten — pronounced “mitten” — won the $10,000 first prize in the contest, hosted by DMSE on Nov. 10 at MIT, where audience presentations and poster sessions capped months of design and experimentation. Since 2007, MADMEC (the MIT and Dow Materials Engineering Contest), funded by Dow and Saint-Gobain, has given students a chance to tackle real-world sustainability challenges, with each team receiving $1,000 to build and test their projects. Judges evaluated the teams’ work from conception to prototype.
MADMEC winners have gone on to succeed in major innovation competitions such as MassChallenge, and at least six startups — including personal cooling wristband maker Embr and vehicle-motion-control company ClearMotion — trace their roots to the contest.
Cold inspiration
The idea for the MITten project came in part from Bormann’s experience growing up in South Dakota, where winter outages were common. His home was heated by natural gas, but if grid-reliant heat pumps had warmed it through below-zero winter months, a days-long outage would have been “really rough.”
“I love the part of sustainability that is focused on developing all these new technologies for electricity generation and usage, but also the distribution side of it shouldn’t be neglected, either,” Bormann says. “It’s important for all those to be growing synergistically, and to be paying attention to all aspects of it.”
And there’s an opportunity to make distribution infrastructure more durable: An estimated 50,000 miles of new power lines are planned over the next decade in the northern United States, where icing is a serious risk.
To test their coating, the team built an icing chamber to simulate rain and freezing conditions, comparing coated versus uncoated aluminum samples at –10 degrees Celsius (14 degrees Fahrenheit). They also dipped samples in liquid nitrogen to evaluate performance in extreme cold and simulated real-world stresses such as lines swaying in windstorms.
“We basically coated aluminum substrates and then bent them to demonstrate that the coating itself could accommodate very long strains,” Jagani says.
The team ran simulations to estimate that a typical outage affecting 20 percent of a region could cost about $7 million to repair. “But if you fully coat, say, 1,000 kilometers of line, you actually can save $1 million in just material costs,” says DMSE grad student Matthew Michalek. The team hopes to further refine the coating with more advanced materials and test them in a professional icing chamber.
Amber Velez, a graduate student in the Department of Mechanical Engineering, stressed the parameters of the contest — working within a $1,000 budget.
“I feel we did quite good work with quite a lot of legitimacy, but I think moving on, there is a lot of space that we could have more play in,” she says. “We’ve definitely not hit the ceiling yet, and I think there’s a lot of room to keep growing.”
Compostable electrodes, microwavable ceramics
The second-place, $6,000 prize went to Electrodiligent, which is designing a biodegradable, compostable alternative to electrodes used for heart monitoring. Their prototype uses a cellulose paper backing and a conductive gel made from gelatin, glycerin, and sodium chloride to carry the electric signal.
Comparing electrocardiogram (ECG) results, the team found their electrodes performed similarly to the 3M Red Dot standard. “We’re very optimistic about this result,” says Ethan Frey, a DMSE graduate student.
The invention aims to cut into the 3.6 tons of medical waste produced each day, but judges noted that adhesive electrodes are almost always incinerated for health and safety reasons, making the intended application a tough fit.
“But there’s a whole host of other directions the team could go in,” says Mike Tarkanian, senior lecturer in DMSE and coordinator of MADMEC.
The $4,000 third prize went to Cerawave, a team made up of mostly undergraduates and a member the team jokingly called a “token grad student,” working to make ceramics in an ordinary kitchen microwave. Traditional ceramic manufacturing requires high-temperature kilns, a major source of energy use and carbon emissions. Cerawave added silicon carbide to their ceramic mix to help it absorb microwave energy and fuse into a durable final product.
“We threw it on the ground a few times, and it didn’t break,” says Merrill Chiang, a junior in DMSE, drawing laughs from the audience. The team now plans to refine their recipe and overall ceramic-making process so that hobbyists — and even users in environments like the International Space Station — could create ceramic parts “without buying really expensive furnaces.”
The power of student innovation
Although it didn’t earn a prize, the contest’s most futuristic project was ReForm Designs, which aims to make reusable children’s furniture — expensive and quickly outgrown — from modular blocks made of mycelium, the root-like, growth-driving part of a mushroom. The team showed they could successfully produce mycelium blocks, but slow growth and sensitivity to moisture and temperature meant they didn’t yet have full furniture pieces to show judges.
The project still impressed DMSE senior David Miller, who calls the blocks “really intriguing,” with potential applications beyond furniture in manufacturing, construction, and consumer products.
“They adapt to the way we consume products, where a lot of us use products for one, two, three years before we throw them out,” Miller says. “Their capacity to be fully biodegradable and molded into any shape fills the need for certain kinds of additive manufacturing that requires certain shapes, while also being extremely sustainable.”
While the contest has produced successful startups, Tarkanian says MADMEC’s original goal — giving students a chance to get their hands dirty and pursue their own ideas — is thriving 18 years on, especially at a time when research budgets are being cut and science is under scrutiny.
“It gives students an opportunity to make things that are real and impactful to society,” he says. “So when you can build a prototype and say, ‘This is going to save X millions of dollars or X million pounds of waste,’ that value is obvious to everyone.”
Attendee Jinsung Kim, a postdoc in mechanical engineering, echoed Tarkanian’s comments, emphasizing the space set aside for innovative thinking.
“MADMEC creates the rare environment where students can experiment boldly, validate ideas quickly, and translate core scientific principles into solutions with real societal impact. To move society forward, we have to keep pushing the boundaries of technology and fundamental science,” he says.
MIT researchers “speak objects into existence” using AI and robotics
Generative AI and robotics are moving us ever closer to the day when we can ask for an object and have it created within a few minutes. In fact, MIT researchers have developed a speech-to-reality system, an AI-driven workflow that allows them to provide input to a robotic arm and “speak objects into existence,” creating things like furniture in as little as five minutes.
With the speech-to-reality system, a robotic arm mounted on a table receives spoken input from a human, such as “I want a simple stool,” and then constructs the object out of modular components. To date, the researchers have used the system to create stools, shelves, chairs, a small table, and even decorative items such as a dog statue.
“We’re connecting natural language processing, 3D generative AI, and robotic assembly,” says Alexander Htet Kyaw, an MIT graduate student and Morningside Academy for Design (MAD) fellow. “These are rapidly advancing areas of research that haven’t been brought together before in a way that you can actually make physical objects just from a simple speech prompt.”
The idea started when Kyaw — a graduate student in the departments of Architecture and Electrical Engineering and Computer Science — took Professor Neil Gershenfeld’s course, “How to Make Almost Anything.” In that class, he built the speech-to-reality system. He continued working on the project at the MIT Center for Bits and Atoms (CBA), directed by Gershenfeld, collaborating with graduate students Se Hwan Jeon of the Department of Mechanical Engineering and Miana Smith of CBA.
The speech-to-reality system begins with speech recognition, after which a large language model interprets the user’s request. Next, 3D generative AI creates a digital mesh representation of the object, and a voxelization algorithm breaks the mesh down into assembly components.
After that, geometric processing modifies the AI-generated assembly to account for real-world fabrication and physical constraints, such as the number of components, overhangs, and the connectivity of the geometry. The system then generates a feasible assembly sequence and automated path plans for the robotic arm, which builds the physical object from the user’s prompt.
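To make that stage structure concrete, here is a minimal, hypothetical Python sketch of how such a pipeline might be wired together. None of the names below come from the researchers’ code; the speech-recognition, language-model, 3D-generation, and path-planning stages are replaced with trivial placeholders so the example runs on its own.

# Hypothetical sketch of a speech-to-fabrication pipeline in the style described above.
# All names are placeholders, not the authors' implementation; a real system would call
# a speech-recognition model, a large language model, and a 3D generative model.

from dataclasses import dataclass

@dataclass
class Voxel:
    x: int
    y: int
    z: int

def interpret_request(transcript: str) -> str:
    # Placeholder for the language-model step: map free-form speech to an object name.
    for keyword in ("stool", "chair", "shelf", "table"):
        if keyword in transcript.lower():
            return keyword
    return "stool"

def generate_voxels(object_name: str, size: int = 3) -> list[Voxel]:
    # Placeholder for 3D generation + voxelization: object_name is ignored in this toy
    # version, which just emits a solid size x size x size block of cubes.
    return [Voxel(x, y, z) for x in range(size) for y in range(size) for z in range(size)]

def plan_assembly(voxels: list[Voxel]) -> list[Voxel]:
    # Simple bottom-up ordering: lower layers first, so each cube rests on the table or
    # on an already-placed cube (a crude stand-in for feasibility and connectivity checks).
    return sorted(voxels, key=lambda v: (v.z, v.y, v.x))

def assemble(order: list[Voxel]) -> None:
    # Placeholder for robot path planning and pick-and-place execution.
    for i, v in enumerate(order, start=1):
        print(f"step {i}: place cube at ({v.x}, {v.y}, {v.z})")

if __name__ == "__main__":
    request = "I want a simple stool"  # stands in for the speech-recognition output
    shape = interpret_request(request)
    assemble(plan_assembly(generate_voxels(shape)))

Running the sketch prints a bottom-up placement order for a small block of cubes, echoing the idea that an assembly sequence must be physically feasible before the robotic arm executes it.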
By leveraging natural language, the system makes design and manufacturing more accessible to people without expertise in 3D modeling or robotic programming. And, unlike 3D printing, which can take hours or days, this system builds within minutes.
“This project is an interface between humans, AI, and robots to co-create the world around us,” Kyaw says. “Imagine a scenario where you say ‘I want a chair,’ and within five minutes a physical chair materializes in front of you.”
The team has immediate plans to improve the weight-bearing capability of the furniture by replacing the magnets that currently connect the cubes with more robust connectors.
“We’ve also developed pipelines for converting voxel structures into feasible assembly sequences for small, distributed mobile robots, which could help translate this work to structures at any size scale,” Smith says.
The purpose of using modular components is to reduce the waste that goes into making physical objects: an assembly can be taken apart and its components reassembled into something different, for instance turning a sofa into a bed when the sofa is no longer needed.
Because Kyaw also has experience using gesture recognition and augmented reality to interact with robots in the fabrication process, he is currently working on incorporating both speech and gestural control into the speech-to-reality system.
Leaning into his memories of the replicator in the “Star Trek” franchise and the robots in the animated film “Big Hero 6,” Kyaw explains his vision.
“I want to increase access for people to make physical objects in a fast, accessible, and sustainable manner,” he says. “I’m working toward a future where the very essence of matter is truly in your control. One where reality can be generated on demand.”
The team presented their paper “Speech to Reality: On-Demand Production using Natural Language, 3D Generative AI, and Discrete Robotic Assembly” at the Association for Computing Machinery (ACM) Symposium on Computational Fabrication (SCF ’25) held at MIT on Nov. 21.
Cultivating confidence and craft across disciplines
Both Rohit Karnik and Nathan Wilmers personify the type of mentorship that any student would be fortunate to receive — one rooted in intellectual rigor and grounded in humility, empathy, and personal support. They show that transformative academic guidance is not only about solving research problems, but about lifting up the people working on them.
Whether it’s Karnik’s quiet integrity and commitment to scientific ethics, or Wilmers’ steadfast encouragement of his students in the face of challenges, both professors cultivate spaces where students are not only empowered to grow as researchers, but affirmed as individuals. Their mentees describe feeling genuinely seen and supported; mentored not just in theory or technique, but in resilience. It’s this attention to the human element that leaves a lasting impact.
Professors Karnik and Wilmers are two members of the 2023–25 Committed to Caring cohort, cultivating confidence and craft across disciplines. The Committed to Caring program recognizes faculty who go above and beyond in mentoring MIT graduate students.
Rohit Karnik: Rooted in rigor, guided by care
Rohit Karnik is the Abdul Latif Jameel Professor in the Department of Mechanical Engineering at MIT, where he leads the Microfluidics and Nanofluidics Research Group and serves as director of the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS). His research explores the physics of micro- and nanofluidic flows and systems. Applications of his work include the development of water filters, portable diagnostic tools, and sensors for environmental monitoring.
Karnik is genuinely excited about his students’ ideas, and open to their various academic backgrounds. He validates students by respecting their research, encouraging them to pursue their interests, and showing enthusiasm for their exploration within mechanical engineering and beyond.
One student reflected on the manner in which Karnik helped them feel more confident in their academic journey. When a student from a non-engineering field joined the mechanical engineering graduate program, Karnik never viewed their background as a barrier to success. The student wrote, “from the start, he was enthusiastic about my interdisciplinarity and the perspective I could bring to the lab.”
He allowed the student to take remedial undergraduate classes to learn engineering basics, provided guidance on leveraging their previous academic background, and encouraged them to write grants and apply for fellowships that would support their interdisciplinary work. In addition to these concrete supports, Karnik also provided the student with the freedom to develop their own ideas, offering constructive, realistic feedback on what was attainable.
“This transition took time, and Karnik honored that, prioritizing my growth in a completely new field over getting quick results,” the nominator reflected. Ultimately, Karnik’s mentorship, patience, and thoughtful encouragement led the student to excel in the engineering field.
Karnik’s holistic approach extends beyond academics and into his view of his students as whole individuals. One student wrote that he treats them as complete humans, with ambitions, aspirations, and passions worthy of his respect and consideration, and remains truly selfless in his commitment to their growth and success.
Karnik emphasizes that “it’s important to have dreams,” regularly encouraging his mentees to take advantage of opportunities that align with their goals and values. This sentiment is felt deeply by his students, with one nominator sharing that Karnik “encourag[ed] me to think broadly and holistically about my life, which has helped me structure and prioritize my time at MIT.”
Nathan Wilmers: Cultivating confidence, craft, and care
Nathan Wilmers is the Sarofim Family Career Development Associate Professor of Work and Organizations at the MIT Sloan School of Management. His research spans wage and earnings inequality, economic sociology, and the sociology of labor, bringing insights from economic sociology to the study of labor markets and the wage structure. He is also affiliated with the Institute for Work and Employment Research and the Economic Sociology program at Sloan.
A remarkable mentor, Wilmers is known for guiding his students through different projects while also teaching them more broadly about the system of academia. As one nominator illustrates, “he … helped me learn the ‘tacit’ knowledge to understand how to write a paper,” while also emphasizing the learning process of the PhD as a whole and never reprimanding them for mistakes along the way.
Students say that Wilmers “reassures us that making mistakes is a natural part of the learning process and encourages us to continuously check, identify, and rectify them.” He welcomes all questions without judgment, and generously invests his time and patience in teaching students.
Wilmers is a strong advocate for his students, both academically and personally. He emphasizes the importance of learning, growth, and practical experience, rather than solely focusing on scholarly achievements and goals. Students feel this care, describing “an environment that maximizes learning opportunities and fosters the development of skills,” allowing them to truly collaborate rather than simply aim for the “right” answers.
In addition to his role in the classroom and lab, Wilmers also provides informal guidance to advisees, imparting valuable knowledge about the academic system, emphasizing the significance of networking, and sharing insider information.
“Nate’s down-to-earth nature is evident in his accessibility to students,” expressed one nominator, who wrote that “sometimes we can freely approach his office without an appointment and receive valuable advice on both work-related and personal matters.” Moreover, Wilmers prioritizes his advisees’ career advancement, dedicating a substantial amount of time to providing feedback on thesis projects, and even encouraging students to take a lead in publishing research.
True mentorship often lies in the patient, careful transmission of craft — the behind-the-scenes work that forms the backbone of rigorous research. “I care about the details,” says Wilmers, reflecting a philosophy shaped by his own graduate advisors. Wilmers’ mentors instilled in him a deep respect for the less-glamorous but essential elements of scholarly work: data cleaning, thoughtful analysis, and careful interpretation. These technical and analytical skills are where real learning happens, he believes.
By modeling this approach with his own students, Wilmers creates a culture where precision and discipline are valued just as much as innovation. His mentorship is grounded in the belief that becoming a good researcher requires not just vision, but also an intimate understanding of process — of how ideas are sharpened through methodical practice, and how impact comes from doing the small things well. His thoughtful, detail-oriented mentorship leaves a lasting impression on his students.
One nominator wrote, “Nate’s strong enthusiasm for my research, coupled with his expressed confidence and affirmation of its value, served as a significant source of motivation for me to persistently pursue my ideas.”
