

LARGE LANGUAGE MODEL PLAYGROUND MULTIAGENTS (PT 2)
Silas Liu - Oct. 12, 2024
Large Language Models, Graphs
Based on my LLM Playground, I recently made an important leap forward by implementing a multi-agent system, with dynamic workflows and easy integration for future agents.

One of the biggest challenges in developing LLM systems is integrating different functionalities while managing complex workflows. As more tools and agents become necessary to meet various needs, maintaining clarity and structure becomes increasingly difficult. Directed graphs offer a powerful solution to this problem: they make processes, decision loops, and branching actions easy to visualize.
​
To address these challenges, I used LangGraph, a framework that made it possible to refactor my codebase into a well-organized workflow of actions. It simplifies the integration of custom tools, making it easy to expand the system with new agents while keeping the entire flow adaptable and maintainable.
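The directed-graph idea can be sketched in a few lines of plain Python. This is only a conceptual stand-in for what LangGraph's `StateGraph` provides (nodes that transform a shared state, plus routers that decide which node runs next); the node names here are illustrative, not the ones used in the Playground.

```python
class Workflow:
    """A tiny directed graph: nodes transform a shared state dict,
    and a router function picks the next node (or None to stop)."""

    def __init__(self):
        self.nodes = {}  # name -> function(state) -> new state
        self.edges = {}  # name -> router(state) -> next node name or None

    def add_node(self, name, fn):
        self.nodes[name] = fn

    def add_edge(self, src, router):
        self.edges[src] = router

    def run(self, entry, state):
        current = entry
        while current is not None:
            state = self.nodes[current](state)
            # Nodes without an outgoing edge terminate the run.
            current = self.edges.get(current, lambda s: None)(state)
        return state


# Toy example: a two-step flow that normalizes a query, then answers it.
wf = Workflow()
wf.add_node("normalize", lambda s: {**s, "query": s["query"].strip().lower()})
wf.add_node("answer", lambda s: {**s, "answer": f"You asked: {s['query']}"})
wf.add_edge("normalize", lambda s: "answer")

result = wf.run("normalize", {"query": "  Hello  "})
```

The payoff of this structure is that adding a new agent is just another `add_node` call plus a routing rule, which is what keeps the system easy to extend.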
​
In the context of LLMs, agents are autonomous modules designed to perform specific tasks or manage particular processes within the system. They operate independently but communicate and collaborate with other agents through structured flows. This modular approach makes it easier to scale the system and integrate new functionalities.
​
To organize the logic more effectively, I divided the process into distinct workflows governed by specialized agents. These agents collaborate through structured loops, ensuring that inputs are processed correctly and the right tools are selected for each task. The current system features one main agent and one tool agent.

The Main Agent (represented in blue) is responsible for managing communication between the user and the system. It detects and translates user input, enabling interaction in any language; selects the appropriate tool based on the user's query; and translates the response back to the user's language. This agent also incorporates reasoning and reflection loops to ensure coherence between questions and answers, manage chat memory, and make sure the available tools are used efficiently throughout the interaction.
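To illustrate the reflection loop idea, here is a minimal sketch: a drafting step and a critic step alternate until the critic accepts the answer or a retry budget runs out. The `draft` and `critique` functions below are toy stand-ins for the actual LLM calls, not the Playground's real implementation.

```python
def answer_with_reflection(question, draft, critique, max_retries=2):
    """Draft an answer, let a critic check it against the question,
    and retry with the critic's feedback until it passes or the
    retry budget is exhausted."""
    feedback = None
    for attempt in range(max_retries + 1):
        answer = draft(question, feedback)
        ok, feedback = critique(question, answer)
        if ok:
            return answer, attempt
    return answer, max_retries


# Toy stand-ins: the first draft is too terse, so the critic asks for more.
def draft(question, feedback):
    return "42." if feedback is None else "The answer is 42 because ..."


def critique(question, answer):
    if len(answer) < 10:
        return False, "Answer is too short; explain the reasoning."
    return True, None


answer, attempts = answer_with_reflection("What is the answer?", draft, critique)
```

Bounding the loop with a retry budget is important in practice: without it, a critic that never accepts would keep the agents cycling (and burning tokens) indefinitely.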
​
The Tool Agent shown here (represented in red) is a specialized retrieval tool that extracts metadata from the user's query and performs retrieval-augmented generation (RAG) on the graph database. This ensures precise and efficient responses, even for complex queries that span multiple PDFs or require navigating lengthy documents.
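As a simplified illustration of the metadata-extraction step, the sketch below pulls filter fields out of a query with regular expressions before retrieval. A real implementation would presumably use the LLM itself with structured output; the field names (`page`, `source`) are assumptions for the example.

```python
import re


def extract_metadata(query):
    """Extract simple retrieval filters (a page number, a mentioned
    PDF file) from a natural-language query."""
    metadata = {}
    page = re.search(r"page\s+(\d+)", query, re.IGNORECASE)
    if page:
        metadata["page"] = int(page.group(1))
    pdf = re.search(r"(\w[\w-]*\.pdf)", query, re.IGNORECASE)
    if pdf:
        metadata["source"] = pdf.group(1)
    return metadata


filters = extract_metadata("Summarize page 12 of report-2024.pdf")
```

Narrowing the search space with filters like these before running RAG is what keeps retrieval precise when the corpus spans many long documents.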
​
In addition to these agents, I also implemented several useful monitoring features to improve both development and system usage. Every step of the process is now logged, providing detailed records that can be used for debugging and analysis. I also included a token usage counter, which tracks input and output tokens, along with the associated costs. This feature helps maintain operational efficiency and optimize performance over time.
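A token usage counter of this kind can be implemented as a small accumulator that every LLM call reports into. A minimal sketch, where the per-1k-token prices are placeholder figures, not actual provider rates:

```python
from dataclasses import dataclass


@dataclass
class TokenUsage:
    """Accumulates input/output token counts and derives a running cost."""
    input_price_per_1k: float = 0.0005   # placeholder USD rate per 1k input tokens
    output_price_per_1k: float = 0.0015  # placeholder USD rate per 1k output tokens
    input_tokens: int = 0
    output_tokens: int = 0

    def record(self, input_tokens, output_tokens):
        """Call once per LLM request with the tokens it consumed."""
        self.input_tokens += input_tokens
        self.output_tokens += output_tokens

    @property
    def cost(self):
        return (self.input_tokens / 1000 * self.input_price_per_1k
                + self.output_tokens / 1000 * self.output_price_per_1k)


usage = TokenUsage()
usage.record(1200, 300)
usage.record(800, 500)
```

After the two recorded calls, `usage` holds 2000 input and 800 output tokens, and `usage.cost` reflects both at their respective rates.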

To demonstrate the system's ability to generate more complex responses, below is an example of an elaborate answer created by the LLM Playground. It seamlessly combines explanations, code snippets and even links to external sources, showcasing the depth and versatility of this multi-agent system. The external source referenced is:

These improvements have transformed the LLM Playground into a more robust and adaptable platform. With multi-agents in place, I can now implement more innovative functionalities, and I am very excited to see the next steps and what I will be building next.