I'm a scientist and analyst with a PhD in Biology and a background spanning ecology, genomics, and management consulting. I've spent my career thinking rigorously about complex systems — and I'm now applying that lens to the most consequential technology of our time.
We were, for a long time, the only agents on Earth capable of fundamentally restructuring planetary systems. That asymmetry, between our capability and the rest of the biosphere's capacity to respond, created a genuine ethical obligation. I've always believed that capability implies responsibility.
AI is now assuming a version of that role, and doing so at a speed and scale that exceeds anything ecology has prepared us for. The questions feel structurally familiar: How do we ensure a powerful optimisation system remains aligned with broader values? How do we avoid irreversible lock-in? How do we build something that tells the truth even when it's inconvenient, rather than something that learns to tell us what we want to hear?
I'm not an AI doomer — just as I was never an ecological doomer. Catastrophic collapse is frequently predicted and less frequently realised, and misplaced alarm can crowd out the careful, tractable work that actually moves things forward. But that doesn't mean the risks aren't real. The work of alignment is urgent precisely because the window to shape these systems well may not stay open indefinitely.
My view is that the single most important property an AI system can have is a rigorous, unconditional commitment to truth — not the version of truth that is comfortable or politically convenient, but the real kind. Everything else — safety, usefulness, trustworthiness — follows from that foundation, or fails without it.
Rigorous adherence to truth is the only way to build safe AI and the only way to understand the true nature of the Universe.
— Elon Musk, on X, March 2025
My current interests sit at the intersection of AI evaluation, ethics, and the science of building systems we can actually trust.
How do we specify what we actually want from AI systems — and verify that we got it? Exploring value alignment, goal specification, and the limits of current approaches.
Rigorous assessment of AI agent performance. What does it mean for a model to be honest, calibrated, and genuinely helpful? How do we measure what matters? (One concrete measurement, a calibration-error sketch, follows this list.)
RLHF shapes what AI systems learn to value. Understanding its mechanics, and its failure modes, is central to seeing where alignment goes right or wrong; the reward-modelling sketch after this list illustrates the core objective.
Drawing on a background in international political economy and environmental governance to think about how institutions, incentives, and norms shape AI development trajectories.
Applying standards from empirical research, including reproducibility, uncertainty quantification, and honest reporting, to AI development and deployment (a bootstrap sketch follows the list).
The intersection of ecological systems thinking and AI: from environmental applications of machine learning to the broader question of how we govern transformative technologies.
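To make the evaluation interest concrete: calibration asks whether a model's stated confidence matches its actual accuracy. Below is a minimal sketch of expected calibration error (ECE) for binary predictions; the bin count and the toy data are illustrative assumptions, not a fixed standard.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence and compare mean confidence
    to empirical accuracy in each bin (a standard ECE estimate)."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight gap by bin population
    return ece

# Toy example: a model that is slightly overconfident.
conf = np.array([0.9, 0.8, 0.95, 0.6, 0.7, 0.85])
hit  = np.array([1,   1,   0,    1,   0,   1])
print(f"ECE ~= {expected_calibration_error(conf, hit):.3f}")
```

A well-calibrated model scores near zero; systematic overconfidence shows up as a large weighted gap in the high-confidence bins.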
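On the RLHF interest above: at the heart of the pipeline is a reward model trained on human preference pairs, typically with a Bradley-Terry objective that pushes the chosen response's score above the rejected one's. A minimal sketch, with hypothetical reward scores standing in for a real model's outputs:

```python
import numpy as np

def preference_loss(r_chosen, r_rejected):
    """Bradley-Terry pairwise loss used in reward modelling:
    -log(sigmoid(r_chosen - r_rejected)). Lower when the reward
    model ranks the human-preferred response higher."""
    margin = np.asarray(r_chosen) - np.asarray(r_rejected)
    return -np.log(1.0 / (1.0 + np.exp(-margin)))

# Hypothetical reward-model scores on three preference pairs.
chosen   = np.array([2.1, 0.3, 1.5])
rejected = np.array([1.0, 0.8, 1.4])
print(preference_loss(chosen, rejected))
# The pair where the rejected response scores higher (0.3 vs 0.8)
# yields the largest loss: training pressure to flip the ranking.
```

The failure modes follow from the same formula: the model only learns to rank what raters preferred, so anything raters systematically reward, including confident-sounding answers over true ones, gets baked into the objective.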
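And on uncertainty quantification: a single accuracy number on a benchmark hides sampling noise. A percentile bootstrap confidence interval, sketched below with invented per-item scores, is one honest-reporting habit from empirical research that transfers directly to model evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci(scores, n_resamples=10_000, alpha=0.05):
    """Percentile bootstrap CI for the mean of per-item eval scores."""
    scores = np.asarray(scores, dtype=float)
    idx = rng.integers(0, len(scores), size=(n_resamples, len(scores)))
    means = scores[idx].mean(axis=1)  # mean of each resampled eval set
    return np.quantile(means, [alpha / 2, 1 - alpha / 2])

# Invented per-question correctness for a 40-item benchmark run.
scores = rng.integers(0, 2, size=40)
lo, hi = bootstrap_ci(scores)
print(f"accuracy = {scores.mean():.2f}, 95% CI ~ [{lo:.2f}, {hi:.2f}]")
```

On a 40-item eval the interval is wide, which is exactly the point: many reported model-to-model differences fall inside it.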
A scientist and analyst bringing an unusual mix of quantitative rigour, systems thinking, and policy awareness to questions about AI.
My scientific background is in population genomics and spatial ecology. I completed a PhD at the Swedish University of Agricultural Sciences studying the genetics and habitat dynamics of large carnivores, developed reproducible bioinformatic pipelines for large sequencing datasets, and published in leading conservation and genetics journals. That work trained me to reason carefully about complex, uncertain systems — and to be honest when the data don't tell a clean story.
Before academia, I spent several years as a management consultant at Deloitte, working on large-scale technology transformation for government and private sector clients. I know what it looks like when complex systems are built and deployed in practice, and what it looks like when they fail.
My graduate education spans biology, ecology, and international political economy (LSE), which means I approach AI not just as a technical problem but as a governance and values problem — one that will be shaped as much by institutions and incentives as by the underlying engineering.
Full scientific background and publication list: heatherhemmingmoore.com
Supplementing a quantitative research background with targeted training in the technical and ethical dimensions of modern AI.
DeepLearning.AI & Arize AI — Agent evaluation frameworks, benchmarking methodologies, and observability for LLM-based systems.
DeepLearning.AI & Google Cloud — RLHF pipeline mechanics, reward modelling, and fine-tuning language models to align with human preferences.
University of Helsinki MOOC — Philosophical foundations of AI ethics, fairness, accountability, transparency, and the societal implications of machine learning systems.
Interested in collaborating, discussing alignment, or just exchanging ideas? I'd love to hear from you.