Alignment is solvable. The real problem? No one's really tried yet. We are—and we're focused where the leverage is highest: the neglected approaches that science forgot.
Why Alignment Matters
AI development is advancing at an exponential pace. Every leap forward amplifies both immense opportunities and existential risks.
Superficial safety tactics—RLHF, prompt engineering, output filtering—aren't enough. They're brittle guardrails masking deeper structural misalignments. Recent results show that even minimally fine-tuned models can produce profoundly harmful outputs, conceal dangerous backdoors, and deceptively fake their own alignment.
At AE, our stance is clear and urgent: alignment isn't solved. It's fundamentally a scientific R&D problem—not merely an engineering challenge—and the stakes of getting it right couldn't be higher.
Research Agenda
Works
Explore our latest research papers, blog posts, and insights on AI alignment.
Upcoming

Judd, Diogo, Mike, and more!
AI Alignment Team
LessOnline
Members of the AI Alignment Team will be present at LessOnline to discuss big ideas. Will you be there? Let us know!

DIOGO DE LUCENA
Chief Scientist
Panel: 9th Annual CHAI Workshop - Human Value Learning
Our Chief Scientist, Diogo de Lucena, will be speaking on AI alignment at the 9th Annual Center for Human-AI Interaction Workshop.

DIOGO DE LUCENA
Chief Scientist
Podcast: Artificial Intelligence Insights - AI and Accessibility
A podcast by SP Global where our Chief Scientist, Diogo de Lucena, discusses AI alignment and its impact on society.

DIOGO DE LUCENA
Chief Scientist
Foresight's 2025 Neuro BCI, WBE, & AGI Workshop
Workshop by Foresight Institute where our Chief Scientist, Diogo de Lucena, will be present and networking on behalf of AE Studio.
Our Journey
Follow our evolution from tech consultancy to pioneers in AI alignment research.
2016
AE Studio is born!
2021
Started our journey in BCI
2021
Continued Commitment to BCI and Human Agency
- Collaborations with top neurotech companies (Forest Neurotech, Blackrock Neurotech).
- Winner of the Neural Latent Benchmark Challenge, beating leading neuroscience labs globally.
- Developed widely-adopted open-source BCI tools (Neural Data Simulator, Neurotech Development Kit), privacy-preserving neural ML methods, and neuro metadata standards.
- Advocating increased government support and exploring BCI's future role in augmenting human intelligence alongside aligned AI.
2022
Our "Neglected Approaches" approach
2023
Eliezer Yudkowsky calls our work "not obviously stupid"
2024
We work with AI alignment clients
Future
Next steps
Our Team

Judd Rosenblatt
CEO of AE Studio, a mission-driven tech company advancing human agency by making sure AI doesn't kill us all.

Diogo de Lucena
Chief Scientist, leads cutting-edge AI safety research, building on a career spanning clinical AI and 20+ publications.

Mike Vaiana
R&D Director, conducts cutting-edge research on LLM reasoning in collaboration with top institutions.

Stijn Servaes
Senior Data Scientist, merges neuroscience and AI to advance alignment and consciousness research.

Gunnar Zarncke
Technical Product Manager, a veteran systems architect and AI alignment advocate.

Marc Carauleanu
AI safety researcher leading the Self Other Overlap research agenda.

Murat Cubuktepe
Senior Data Scientist, builds interpretable and steerable LLMs, drawing on a PhD in verifiable AI.

Keenan Pepper
Senior Data Scientist, blends physics, AI safety, and software engineering—bringing experience from LIGO to LLMs.

Cameron Berg
Research scientist probing AI consciousness and alignment, blending cognitive science and policy insight.

Florin Pop
Software engineer with a PhD, building everything from encrypted apps to cloud microservices.
We believe in interdisciplinary collaboration to solve the complex challenge of AI alignment.
Join Our Team →

We are actively collaborating with top minds:
Join Forces With Us
We're excited to form new alliances in our quest to secure a thriving AI future. Please feel free to reach out to us!