EP07: Memory That Never Forgets, The Trillion-Parameter Ghost, and Cancer's Betrayal Signal

AI Research Podcast — Episode 7
🎧 Listen to English Audio on Podbean
🎧 Listen to Chinese Audio
Today's theme: AI is learning to truly remember, to act, and to understand the world in dimensions humans cannot perceive.
Segment 1: Google Titans + MIRAS — AI Finally Has Real Long-Term Memory
On March 17, Google Research published Titans (an architecture) and MIRAS (a theoretical framework), fundamentally rethinking how sequence models handle long-term context. Titans adds a deep neural network as a long-term memory module, gated by a "surprise" metric: inputs that diverge sharply from what the memory predicts produce large gradients and get written in more strongly, while routine inputs are largely ignored. MIRAS unifies sequence-modeling approaches through four design choices (memory architecture, attentional bias objective, retention gate, and memory-update algorithm). Result: Titans outperforms GPT-4 on BABILong at 2M+ token contexts with far fewer parameters.
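The surprise-gated write can be sketched as follows. This is a toy illustration, not the paper's actual design: I assume a simple linear associative memory and made-up hyperparameters, whereas Titans uses a deep MLP memory with momentum-smoothed surprise.

```python
import numpy as np

# Toy surprise-gated memory in the spirit of Titans. Illustrative assumptions:
# linear associative memory, hand-picked lr/decay/threshold.
class SurpriseMemory:
    def __init__(self, dim, lr=0.1, decay=0.99, threshold=1.0):
        self.W = np.zeros((dim, dim))  # memory: recall(key) = W @ key
        self.lr = lr                   # base write strength
        self.decay = decay             # gradual forgetting
        self.threshold = threshold     # surprise level for a full-strength write

    def update(self, key, value):
        err = self.W @ key - value       # how wrong the memory currently is
        grad = np.outer(err, key)        # gradient of 0.5 * ||W@key - value||^2
        surprise = np.linalg.norm(grad)  # large gradient = surprising input
        gate = min(1.0, surprise / self.threshold)
        self.W = self.decay * self.W - self.lr * gate * grad  # forget, then write
        return surprise

    def recall(self, key):
        return self.W @ key
```

Repeated exposure to the same (key, value) pair drives surprise down, so familiar inputs stop overwriting the memory; a novel pair spikes the gate again.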
Segment 2: Xiaomi MiMo-V2-Pro — The Trillion-Token Mystery
An anonymous model, "Hunter Alpha," processed over 1 trillion tokens on OpenRouter in a single week. Most observers guessed DeepSeek V4. It turned out to be Xiaomi's MiMo-V2-Pro: a trillion-parameter MoE with 42B active parameters per token, coding performance ahead of Claude Sonnet 4.6, agent-task performance approaching Opus 4.6, and API costs 67% lower. Anonymous pre-release testing on public routing platforms is fast becoming an industry norm.
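The "trillion parameters, 42B active" arithmetic follows from mixture-of-experts routing: each token is dispatched to only a few experts, so only their weights are exercised. A toy top-k router makes this concrete; sizes and routing details here are illustrative, not MiMo-V2-Pro's actual configuration.

```python
import numpy as np

# Toy top-k mixture-of-experts layer. Only k of n_experts weight matrices
# are touched per token, which is why active parameters << total parameters.
def moe_layer(x, experts, gate_w, k=2):
    logits = gate_w @ x                        # router score per expert
    topk = np.argsort(logits)[-k:]             # keep the k best-scoring experts
    w = np.exp(logits[topk] - logits[topk].max())
    w /= w.sum()                               # softmax over the selected k
    out = sum(wi * (experts[i] @ x) for i, wi in zip(topk, w))
    return out, topk

rng = np.random.default_rng(0)
n_experts, dim = 16, 8
experts = [rng.normal(size=(dim, dim)) for _ in range(n_experts)]
gate_w = rng.normal(size=(n_experts, dim))
y, used = moe_layer(rng.normal(size=dim), experts, gate_w, k=2)
active_fraction = 2 / 16  # only 12.5% of expert weights ran for this token
```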
Segment 3: Apple × Google Siri-Gemini Partnership
Google's 1.2T-parameter Gemini powers the new Siri in iOS 26.4, in a three-tier architecture. Apple has asked Google to run the model on servers inside Apple's own data centers: zero-trust AI taken to its extreme. The model layer and the experience layer are fully decoupling.
Segment 4: MangroveGS — AI Reads Cancer's Betrayal Signal
The University of Geneva's MangroveGS analyzes hundreds of gene expression signatures to predict cancer metastasis with roughly 80% accuracy across multiple cancer types, enabling prospective risk prediction immediately after surgery.
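The general shape of such a predictor is a classifier over per-tumor signature activity scores. The sketch below is hypothetical and runs on synthetic data; it is not MangroveGS's published method, just a minimal logistic-regression stand-in for "signatures in, metastasis risk out."

```python
import numpy as np

# Hypothetical signature-based metastasis predictor on synthetic data.
# NOT MangroveGS's actual method; illustrates the task's general shape.
rng = np.random.default_rng(0)
n_tumors, n_sigs = 400, 30
X = rng.normal(size=(n_tumors, n_sigs))          # signature scores per tumor
w_true = rng.normal(size=n_sigs)                 # hidden "ground truth" weights
y = (X @ w_true + rng.normal(scale=1.0, size=n_tumors) > 0).astype(float)

def train_logreg(X, y, lr=0.5, steps=500):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))  # P(metastasis)
        w -= lr * (X.T @ (p - y)) / len(y)                  # cross-entropy gradient
    return w

w = train_logreg(X, y)
pred = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30))) > 0.5
accuracy = (pred == (y == 1)).mean()
```

A real pipeline would add held-out validation across cohorts and cancer types, which is where a headline figure like ~80% accuracy would come from.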
Segment 5: Meta Omnilingual MT — 1,600 Languages
Meta extends AI translation from roughly 200 to more than 1,600 languages, with specialized small models matching the quality of 70B-parameter ones. Language carries culture; this makes hundreds of millions of voices audible.