Summary
In this episode of the AI Engineering Podcast, Vasilije Markovich talks about enhancing Large Language Models (LLMs) with memory to improve their accuracy. He discusses the concept of memory in LLMs, which involves managing context windows to enhance reasoning without the high costs of traditional training methods. He explains the challenges of forgetting in LLMs due to context window limitations and introduces the idea of hierarchical memory, where immediate retrieval and long-term information storage are balanced to improve application performance. Vasilije also shares his work on Cognee, a tool he's developing to manage semantic memory in AI systems, and discusses its potential applications beyond its core use case. He emphasizes the importance of combining cognitive science principles with data engineering to push the boundaries of AI capabilities, and shares his vision for the future of AI systems, highlighting the role of personalization and the ongoing development of Cognee to support evolving AI architectures.
Announcements
Interview
Introduction
How did you get involved in machine learning?
Can you describe what "memory" is in the context of LLM systems?
What are the symptoms of "forgetting" that manifest when interacting with LLMs?
How do these issues manifest between single-turn vs. multi-turn interactions?
How does the lack of hierarchical and evolving memory limit the capabilities of LLM systems?
What are the technical/architectural requirements to add memory to an LLM system/application?
How does Cognee help to address the shortcomings of current LLM/RAG architectures?
Can you describe how Cognee is implemented?
Recognizing that it has only existed for a short time, how have the design and scope of Cognee evolved since you first started working on it?
What are the data structures that are most useful for managing the memory structures?
For someone who wants to incorporate Cognee into their LLM architecture, what is involved in integrating it into their applications?
How does it change the way that you think about the overall requirements for an LLM application?
For systems that interact with multiple LLMs, how does Cognee manage context across those systems? (e.g. different agents for different use cases)
There are other systems that are being built to manage user personalization in LLM applications. How do the goals of Cognee relate to those use cases? (e.g. Mem0 - https://github.com/mem0ai/mem0)
What are the unknowns that you are still navigating with Cognee?
What are the most interesting, innovative, or unexpected ways that you have seen Cognee used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Cognee?
When is Cognee the wrong choice?
What do you have planned for the future of Cognee?
Contact Info
Parting Question
Closing Announcements
Links
The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0