
AI Signals From Tomorrow
AI Signals from Tomorrow is a podcast channel designed for curious minds eager to explore the frontiers of artificial intelligence. The format is a conversation between Voyager and Zaura discussing a specific scientific paper, or a set of them, sometimes in a short format and sometimes as a deep dive.
Each episode delivers clear, thought-provoking insights into how AI is shaping our world—without the jargon. From everyday impacts to philosophical dilemmas and future possibilities, AI Signals from Tomorrow bridges the gap between cutting-edge research and real-world understanding.
Whether you're a tech enthusiast, a concerned citizen, or simply fascinated by the future, this podcast offers accessible deep dives into topics like machine learning, ethics, automation, creativity, and the evolving role of humans in an AI-driven age.
Join Voyager and Zaura as they decode the AI signals pointing toward tomorrow—and what they mean for us today.
CoALA: From LLMs to Agents
Explore the cutting edge of artificial intelligence with the paper "Cognitive Architectures for Language Agents" (CoALA): https://arxiv.org/pdf/2309.02427
Language agents are an emerging class of AI systems that leverage large language models (LLMs) to interact with the world. While LLMs alone have limitations in knowledge and reasoning, language agents connect them to internal memory and external environments, helping to ground them in existing knowledge or external observations.
Drawing on the rich history of cognitive science and symbolic artificial intelligence, the CoALA framework provides a way to organize existing language agents and plan future developments.
This episode delves into the core concepts of CoALA, exploring how language agents are structured:
- Memory: Language agents organize information into modules, including working memory for current circumstances and long-term memories like episodic (past experiences), semantic (world facts), and procedural (rules/skills). This allows them to persist information across interactions, unlike stateless LLMs.
- Action Space: Agents interact with the world through a structured action space. This includes external grounding actions that touch physical, digital, or human environments, and internal actions like retrieval (reading from long-term memory), reasoning (processing working memory to generate new information), and learning (modifying long-term memory or LLM parameters).
- Decision-Making: A generalized decision procedure structures how agents choose which actions to take, often involving planning stages that propose and evaluate candidate actions before execution (a minimal code sketch of these pieces follows this list).
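The sketch below is illustrative rather than code from the paper: the Memory and LanguageAgent classes, the retrieve/reason/learn/ground/decide_and_act method names, the substring-match retrieval, the echo-style stand-in LLM, and the "shortest candidate wins" evaluation are all simplifying assumptions. In the CoALA framework these pieces would be backed by a real LLM, richer memory stores, and a learned or prompted evaluator; the point here is only to show how working memory, long-term memory, internal and grounding actions, and a decision loop fit together.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# --- Memory modules (working + long-term), following the CoALA decomposition ---

@dataclass
class Memory:
    working: Dict[str, str] = field(default_factory=dict)    # current circumstances
    episodic: List[str] = field(default_factory=list)        # past experiences
    semantic: List[str] = field(default_factory=list)        # facts about the world
    procedural: List[str] = field(default_factory=list)      # rules / skills / prompts

# --- A CoALA-style agent with internal and external (grounding) actions ---

class LanguageAgent:
    def __init__(self, llm: Callable[[str], str], memory: Memory):
        self.llm = llm        # any text-in/text-out callable stands in for the LLM
        self.memory = memory

    # Internal action: retrieval -- read long-term memory into working memory.
    def retrieve(self, query: str) -> None:
        hits = [m for m in self.memory.semantic + self.memory.episodic if query in m]
        self.memory.working["retrieved"] = "; ".join(hits)

    # Internal action: reasoning -- process working memory with the LLM.
    def reason(self, prompt: str) -> str:
        thought = self.llm(f"{prompt}\nContext: {self.memory.working}")
        self.memory.working["last_thought"] = thought
        return thought

    # Internal action: learning -- write new experience back to long-term memory.
    def learn(self, experience: str) -> None:
        self.memory.episodic.append(experience)

    # External (grounding) action: act on the environment and store the observation.
    def ground(self, action: str, environment: Callable[[str], str]) -> str:
        observation = environment(action)
        self.memory.working["observation"] = observation
        return observation

    # Decision procedure: propose candidate actions, evaluate, then execute the best.
    def decide_and_act(self, goal: str, environment: Callable[[str], str]) -> str:
        self.retrieve(goal)
        candidates = [self.reason(f"Propose an action for: {goal}") for _ in range(2)]
        # Evaluation is stubbed as "pick the shortest proposal"; a real agent would
        # score candidates with the LLM or a value function before committing.
        best = min(candidates, key=len)
        observation = self.ground(best, environment)
        self.learn(f"goal={goal} action={best} result={observation}")
        return observation

# Toy usage: an echo "LLM" and an environment that simply acknowledges actions.
agent = LanguageAgent(
    llm=lambda prompt: f"plan({prompt[:20]})",
    memory=Memory(semantic=["Paris is the capital of France"]),
)
print(agent.decide_and_act("name the capital of France", lambda a: f"executed {a}"))
```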
Join us as we uncover how CoALA provides a blueprint for building more capable agents by defining these interacting modules and processes. Discover how this framework reveals similarities and differences among prominent agents and identifies paths towards language-based general intelligence. Tune in to understand how combining the power of LLMs with structured architectures from cognitive science is shaping the future of AI.