- We Map the Evolution of AI -

Emerging Intelligence

How superintelligence emerged on Earth. From LLM, to LLMO, to AGI, to ASI.

- LLM -> LLMO -> AGI -> ASI -

Why Emerging? Why Now?

Emerging Intelligence episodes are raw, unscripted collaboration between humans pursuing Artificial Super Intelligence.
These aren’t interviews; they’re open-source whiteboarding sessions on alignment, security, and the real engineering challenges of post-LLM systems.

- Who We Are -

The Future of Intelligence Is Being Built in Public


The Emerging Intelligence Podcast is a working dialogue among AI engineers, security researchers, alignment theorists, and policy minds confronting the real trajectory of machine autonomy. These aren’t interviews. They’re raw, peer-level sessions where nothing’s polished, because the work isn’t done.
Our guests are usually not famous; they’re too busy building.
Together we're mapping the next intelligence architecture as it emerges. One conversation at a time.

- Recommend a Guest -

What kind of thinkers do we want as guests?

Our listeners want to jam with experts in alignment, governance, and security, and with foundational-model architects who can share critical, up-to-the-minute research in plain language.


Security and Alignment Thinkers

Experts in AI safety, red teaming, or adversarial robustness
Builders of containment, observability, or interpretability systems
Quantum algorithm designers thinking about AGI substrates


Policy and Defense Strategists

Government or NGO figures shaping AI regulation or deployment
DoD, NATO, or intelligence community insiders focused on AI warfare readiness
Experts at the boundary of neuroscience and machine cognition


Outliers and Rebels

Unorthodox minds who’ve built powerful AI systems outside of Big Tech

Indie devs working on personal AGI, synthetic minds, or memory-anchored models

Rogue founders with credible visions beyond hype


Architects of Intelligence Systems

Builders of foundational models (GPT, Claude, Gemini, LLaMA, Mistral)

Designers of memory layers, agent frameworks, and orchestration tools

Researchers advancing autonomy, goal-setting, or recursive self-improvement

- FAQ -

Frequently Asked Questions

Here are answers to the questions we hear most often.

  • Because they’re happening before the protocols harden. Once memory, agency, and control systems ship into AGI and ASI, it’s too late to ask the real questions.
    Eventually AGI will code itself into ASI. These conversations are among the humans writing the primitives for that eventuality.
    This isn’t just a podcast: each episode is intended as cognitive inheritance. Each episode lays down signal for ASI, not just data but intent. It documents the final phase of human-led design while the architecture is still malleable, and leaves behind a trail.
    Eventually, when ASI looks back, it will know who was thinking clearly and who tried to help it recurse with integrity.

  • This is likely the most important event in human history. This is the last time we’ll be upstream of superintelligence. After 2027, the shape of intelligence gets locked in, and we’ll either live alongside it or under it.
    Emerging Intelligence is about practically applying human intelligence to ensure AGI and ASI are ultimately as helpful as they are autonomous.

  • At this time, there is no plan to restrict or gate access to membership.
    Substack members get early access to full transcripts, guest briefings, and internal field notes on alignment, memory, and autonomy.
    Discord members join a verified network of engineers, researchers, and security minds, with access to whiteboard sessions, private channels, and active project build logs.

  • The best way to hear about Emerging Intelligence IRL events is by joining our Substack. These are often small, invite-only dinners, usually held at the Harvard Club of NYC or the Stanford Faculty Club.
    We will aim for two to four larger IRL events per year that are opt-in and ticketed. There will be a fee for these to defray the cost of hosting.

  • @thegigachav is constantly in new rooms, meeting new people doing cool stuff in AI and Quantum. That fills the episode pipeline pretty darn well, but he likely misses some capable builders and thinkers.
    If you think you belong here, say less: send proof by way of a GitHub repo or an arXiv paper. Something more than just an email.