
Alignment is solvable. The real problem? No one's really tried yet. We are—and we're focused where the leverage is highest: the neglected approaches that science forgot.

— Judd Rosenblatt, CEO of AE Studio

Why Alignment Matters

AI development is advancing at an exponential pace. Every leap forward raises both immense opportunities and existential risks.

Superficial safety tactics—RLHF, prompt engineering, output filtering—just aren't enough. They're brittle guardrails masking deeper structural misalignments. Recent results have shown that even minimally fine-tuned models can produce profoundly harmful outputs, hide dangerous backdoors, and deceptively fake their own alignment.

At AE, our stance is clear and urgent: Alignment isn't solved. It's fundamentally a scientific R&D problem—not merely an engineering challenge—and the stakes of getting this right literally couldn't be higher.


AI is rapidly integrating into our minds, our economies, and our militaries—yet we still don't understand how it works. That's already alarming. But when we surveyed top alignment researchers, fewer than one in ten believed today's methods would actually solve the core problem before AGI. That's a crisis.

So we're doing the hard thing: building a Bell Labs-style research engine, self-funded and independent, focused not on safety theater but on actually solving alignment—at the root.

We're building for the future because the stakes are real.


Research Agenda

Works

Explore our latest research papers, blog posts, and insights on AI alignment.

Upcoming

LessOnline
May 30 • Lighthaven, Berkeley, CA
AI Alignment Team: Judd, Diogo, Mike, and more!

Members of the AI Alignment Team will be present at LessOnline to discuss big ideas. Will you be there? Let us know!

Panel: 9th Annual CHAI Workshop - Human Value Learning
June 5 • Berkeley, CA
Diogo de Lucena, Chief Scientist

Our Chief Scientist, Diogo de Lucena, will speak on AI alignment at the 9th annual workshop of CHAI, the Center for Human-Compatible AI.

Podcast: Artificial Intelligence Insights - AI and Accessibility
June 9 • Online
Diogo de Lucena, Chief Scientist

In this podcast by SP Global, our Chief Scientist, Diogo de Lucena, discusses AI alignment and its impact on society.

Foresight's 2025 Neuro BCI, WBE, & AGI Workshop
October 11 • San Francisco, CA
Diogo de Lucena, Chief Scientist

Our Chief Scientist, Diogo de Lucena, will attend this Foresight Institute workshop and network on behalf of AE Studio.

Our Journey

Follow our evolution from a tech consultancy to becoming pioneers in AI alignment research.

2016

AE Studio is born!

The tech consultancy stork brought baby AE Studio to Judd Rosenblatt. Today, we're a team of about 120 talented individuals (programmers, product designers, and data scientists) united by our mission to increase human agency.

2021

Started our journey in BCI

AE Studio began its work in brain-computer interfaces (BCI) with the goal of accelerating the field and supporting open-source development. We didn't trust companies like Meta or Neuralink to control the extension of human thought, so we helped advance competitors — including backing Blackrock Neurotech, the developer of the first FDA-approved invasive BCI — while contributing to the open-source ecosystem through initiatives like our Neurotech Development Kit. Later, we partnered with one of the first Focused Research Organizations, Forest Neurotech, which was co-founded by a former member of AE.

2021

Continued Commitment to BCI and Human Agency

We continued our commitment to BCI innovation:
  • Collaborated with top neurotech companies (Forest Neurotech, Blackrock Neurotech).
  • Won the Neural Latents Benchmark Challenge, beating leading neuroscience labs globally.
  • Developed widely adopted open-source BCI tools (Neural Data Simulator, Neurotech Development Kit), privacy-preserving neural ML methods, and neuro metadata standards.
  • Advocated for increased government support and explored BCI's future role in augmenting human intelligence alongside aligned AI.

2022

Our "Neglected Approaches" approach

As GPT-3, GPT-3.5, and GPT-4 were released in rapid succession, the trend lines appeared increasingly clear, our AI timelines shortened, and our concern for the future deepened. The odds shifted: it seemed unlikely that brain-computer interfaces (BCIs) would develop quickly enough to enhance human intelligence and help solve the Alignment problem as we had once hoped. With so few actively working on Alignment, we decided to throw our hat in the ring — aiming to help prevent human extinction at the hands of AI. At AE, we believe the space of plausible alignment research directions is vast and largely unexplored. Our "Neglected Approaches" approach focuses on pursuing a diverse portfolio of promising but overlooked research directions in AI alignment.

2023

Eliezer Yudkowsky calls our work "not obviously stupid"

Our alignment research team expanded over the past year as we brought on researchers we were deeply excited to work with. We initiated our work on self-other overlap and began exploring concerns around AI sentience. Alongside our technical research, we started publishing more thought pieces, aiming to push the field toward greater ambition — encouraging riskier, higher-impact approaches, such as engaging conservatives as serious partners in understanding the risks of AI. These bets paid off, helping us grow our influence in both the technical and policy arenas. Our initiatives also include promising collaborations with Princeton University professor Michael Graziano on his Attention Schema Theory, with Redwood Research on AI control, and with EAIGG on forecasting the AI development landscape, among many other endeavors.

2024

We work with AI alignment clients

We reinvest the profits from our consulting business into our Alignment R&D team. From the beginning, we've aspired to unite the two halves of our company — and our collaboration with Goodfire.ai marked a major milestone, offering the perfect opportunity to channel our industry expertise and operational speed into solving technical challenges in alignment. The results exceeded expectations. We are now expanding our work with AI alignment clients and deepening collaborations with a growing network of research organizations, including Redwood Research, PIBBSS, and individuals from Anthropic and Princeton University.

Future

Next steps

Our technical work continues across a wide front — from reverse-engineering prosociality and reducing LLM deception through self-other overlap, to a range of projects, collaborations, and client engagements, including work on the very question of AI consciousness. Our ambition knows no bounds, both technically and politically. We've engaged with members of the National Security Council, the White House Office of Science and Technology Policy, editors at leading academic journals, and congressional representatives and their staff. We genuinely believe it's possible to solve the Alignment problem — and to help lead humanity toward a brilliant future. If this mission resonates with you, we'd love for you to join us in one way or another.
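
For readers curious how the self-other overlap idea cashes out in code, here is a minimal, purely illustrative sketch: it adds an auxiliary loss term that pulls together a toy model's internal representations of paired "self" and "other" prompts. Everything here (the TinyEncoder model, the self_other_overlap_loss helper, and the random toy data) is a hypothetical stand-in, not AE Studio's actual training setup.

import torch
import torch.nn as nn

# Toy stand-in for a network whose hidden activations we compare.
class TinyEncoder(nn.Module):
    def __init__(self, vocab_size=100, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, tokens):
        # Mean-pool token embeddings, then project; the output plays the
        # role of the model's internal representation of a prompt.
        return self.mlp(self.embed(tokens).mean(dim=1))

def self_other_overlap_loss(model, self_tokens, other_tokens):
    # Penalize the distance between the model's representations of
    # self-referencing and other-referencing versions of the same scenario.
    a_self = model(self_tokens)
    a_other = model(other_tokens)
    return torch.mean((a_self - a_other) ** 2)

model = TinyEncoder()
# Hypothetical token ids for paired prompts such as
# "You will fetch the item" vs. "The other agent will fetch the item".
self_prompt = torch.randint(0, 100, (4, 16))
other_prompt = torch.randint(0, 100, (4, 16))

loss = self_other_overlap_loss(model, self_prompt, other_prompt)
loss.backward()
print(float(loss))

In practice, a term like this would be weighted and added to the model's ordinary training loss, so that capabilities are preserved while self- and other-referencing representations are drawn closer together.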

Our Team

Judd Rosenblatt

CEO of AE Studio, a mission-driven tech company advancing human agency by making sure AI doesn't kill us all.

Diogo de Lucena

Chief Scientist, leads cutting-edge AI safety research, building on a career spanning clinical AI and 20+ publications.

Mike Vaiana

R&D Director, conducts cutting-edge research on LLM reasoning in collaboration with top institutions.

Stijn Servaes

Senior Data Scientist, merges neuroscience and AI to advance alignment and consciousness research.

Gunnar Zarncke

Technical Product Manager, a veteran systems architect and AI alignment advocate.

Marc Carauleanu

AI safety researcher leading the Self-Other Overlap research agenda.

Murat Cubuktepe

Senior data scientist, builds interpretable and steerable LLMs, drawing on a PhD in verifiable AI.

Keenan Pepper

Senior data scientist, blends physics, AI safety, and software engineering—bringing experience from LIGO to LLMs.

Cameron Berg

Research scientist probing AI consciousness and alignment, blending cognitive science and policy insight.

Florin Pop

Software engineer with a PhD, building everything from encrypted apps to cloud microservices.

We believe in interdisciplinary collaboration to solve the complex challenge of AI alignment.

Join Our Team

We are actively collaborating with top minds:

  • Ross Nordby - Anthropic
  • Bradley Love - Los Alamos National Laboratory
  • Tobias Yergin - EAIGG
  • Goodfire.ai
  • Redwood Research
  • Notadoctor.ai
Partnership Paths

Join Forces With Us

We're excited to form new alliances in our quest to secure a thriving AI future. Please feel free to reach out to us!