Keynote – Surya Ganguli
Summary
Ganguli presented a unified science of intelligence that bridges biology and machine learning. He described recent work on neural scaling laws, giving a theoretical account of why AI performance improves only slowly (as a power law) with more data, and showing that selecting non‑redundant training examples can accelerate learning from power‑law to exponential scaling.
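The claimed speed‑up is easiest to see numerically. Below is a minimal sketch (not from the talk; the exponent `alpha` and rate `c` are illustrative assumptions) contrasting power‑law error decay with the exponential decay that non‑redundant data selection aims for:

```python
import math

# Illustrative comparison of power-law vs. exponential test-error scaling
# with dataset size N. The exponent alpha and rate c are assumed values
# chosen for readability; they are not figures from the talk.
alpha, c = 0.5, 1e-4

for n in (1e4, 1e5, 1e6):
    power_law = n ** -alpha         # error ~ N^-alpha: each 10x more data cuts error ~3x
    exponential = math.exp(-c * n)  # error ~ exp(-cN): error collapses once cN >> 1
    print(f"N={n:>9.0f}  power-law={power_law:.2e}  exponential={exponential:.2e}")
```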
He highlighted energy efficiency as a fundamental difference between brains (roughly 20 W) and modern AI systems (megawatt‑scale), and argued that by co‑designing algorithms with the physics of the underlying hardware, AI can begin to approach the brain's efficiency.
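The raw power gap follows directly from those two figures; a back‑of‑envelope sketch (the megawatt value is the order of magnitude mentioned, not an exact number):

```python
# Back-of-envelope power gap between a brain and a MW-scale AI system,
# using the order-of-magnitude figures from the talk.
brain_power_w = 20.0   # human brain: ~20 W
ai_power_w = 1e6       # modern large-scale AI: ~1 MW (order of magnitude)

print(f"raw power gap: ~{ai_power_w / brain_power_w:,.0f}x")  # ~50,000x
```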
Ganguli also introduced ideas from quantum‑neuromorphic computing, in which chemically based sensory models inspire ultra‑low‑power AI sensors. He further discussed digital twins of biological systems (e.g., the retina, seizure models) that enable rapid, safe experimentation.
The talk concluded with a call for open, interdisciplinary research to build explainable, energy‑efficient AI that complements human cognition.
Key Takeaways
- Neural scaling laws now have a theoretical basis; pruning to non‑redundant data can turn power‑law scaling into exponential scaling.
- Energy gap: brains are orders of magnitude more energy‑efficient than current AI hardware; physics‑aware algorithm design is needed to close it.
- Quantum‑neuromorphic computing offers a path to ultra‑low‑power AI sensors.
- Digital twins allow safe, rapid testing of AI‑driven neuroscience interventions.
- Open interdisciplinary collaboration is essential for next‑generation AI.