AI Safety Researcher & Writer
AI safety research, evaluation, and technical writing
I research and write about AI safety, alignment, and loss-of-control risks. Contributing author on Anthropic research including alignment faking, SHADE-Arena, and constitutional classifiers. Previously a Google Brain AI Resident and Research Associate. Technical writing and policy analysis for Palisade Research, Foresight Institute, and the Bezos Earth Fund.
Research focus
What I work on
AI Safety & Alignment
Evaluation of frontier AI models for loss-of-control risks: alignment faking, deception, sabotage, self-exfiltration. Contributing author on safety research at Anthropic. Misalignment evaluation and self-replication assessment at Palisade Research.
See publications →
Technical Writing & Policy
Technical reports and policy analysis at the intersection of AI, safety, and society. Clients include Foresight Institute, the Bezos Earth Fund, and Palisade Research.
See work →
What people say
What collaborators say
In the 10 years I have been involved in hiring contractors for various technical writing at Foresight, Linda has been the best writer I’ve worked with, both in terms of the quality of result she delivers and in terms of working style. Even if the topic is technically complex and outside her domain knowledge, she communicates concepts correctly, effectively and with an interesting lens. She hits deadlines, is kind, patient, reliable and a great communicator. In a self-directed manner, she brings structure to projects whose scope is rather unclear — frequently exceeding expectations.
Allison Duettmann
President, Foresight Institute
Newsletter
One email worth reading, when there’s something worth saying.
Essays on AI, technology, and what it means to stay human in the middle of it all. They go out on Substack, not on a schedule.
Subscribe on Substack
Or browse the archive first.