This backend service powers a multi-agent system designed to assist with medical queries by leveraging various research tools and language models. It consists of a Researcher Agent and a Research Manager Agent that work together to gather, compile, and respond to user queries with accurate and relevant medical information.
Researcher Agent:
- Receives the user's query and searches for relevant information across various sources, including PubMed, Arxiv, Tavily, and Google Scholar.
Research Manager Agent:
- Compiles the information retrieved by the Researcher Agent, draws conclusions, and frames the response in the language of a medical professional.
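As a rough illustration of how these two agents could be wired together with CrewAI, here is a minimal sketch; the role, goal, and task strings, and the Gemini wiring via langchain_google_genai, are assumptions for illustration rather than the project's exact configuration.

```python
# Illustrative sketch only: agent roles, goals, and task wording are assumptions.
from crewai import Agent, Crew, Task
from langchain_google_genai import ChatGoogleGenerativeAI

# Gemini-1.5-Flash as the shared LLM (expects GOOGLE_API_KEY in the environment).
# Older CrewAI versions accept a LangChain chat model directly; newer ones wrap it.
llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")

researcher = Agent(
    role="Medical Researcher",
    goal="Find information relevant to the user's query across PubMed, Arxiv, Tavily, and Google Scholar.",
    backstory="An expert at locating and summarizing medical literature.",
    llm=llm,
    # tools=[...]  # LangChain search tools; see the tools sketch below.
)

manager = Agent(
    role="Research Manager",
    goal="Compile the researcher's findings, draw conclusions, and phrase the answer as a medical professional would.",
    backstory="A senior clinician who reviews and synthesizes research.",
    llm=llm,
)

research_task = Task(
    description="Research the medical question: {question}",
    expected_output="Relevant findings with sources.",
    agent=researcher,
)
answer_task = Task(
    description="Compile the findings into a professionally worded answer.",
    expected_output="A clinician-style response to the user's question.",
    agent=manager,
)

crew = Crew(agents=[researcher, manager], tasks=[research_task, answer_task])
result = crew.kickoff(inputs={"question": "..."})
```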
- LangChain: A framework for building applications that connect large language models (LLMs) to external tools and data sources.
- CrewAI: A library that facilitates the use of multi-agent Retrieval-Augmented Generation (RAG) systems.
- Flask: A Python-based web framework used to host the chatbot backend.
- Gemini-1.5-Flash: The LLM used for generating responses within the chatbot.
- LangChain Tools:
- DuckDuckGo Search
- Tavily Search
- Semantic Scholar
- Google Scholar
- Arxiv
- Serper Dev Tool
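As an illustration only, these tools are typically instantiated from langchain_community as sketched below; exact import paths shift between versions, and several tools expect API keys (e.g. TAVILY_API_KEY, SERP_API_KEY, SERPER_API_KEY) in the environment.

```python
# Sketch of instantiating the listed search tools; import paths may vary by version.
from langchain_community.tools import DuckDuckGoSearchRun
from langchain_community.tools.tavily_search import TavilySearchResults        # TAVILY_API_KEY
from langchain_community.tools.arxiv.tool import ArxivQueryRun
from langchain_community.tools.pubmed.tool import PubmedQueryRun
from langchain_community.tools.semanticscholar.tool import SemanticScholarQueryRun
from langchain_community.tools.google_scholar import GoogleScholarQueryRun     # SERP_API_KEY
from langchain_community.utilities.google_scholar import GoogleScholarAPIWrapper
from langchain_community.utilities import GoogleSerperAPIWrapper                # SERPER_API_KEY

tools = [
    DuckDuckGoSearchRun(),
    TavilySearchResults(max_results=5),
    ArxivQueryRun(),
    PubmedQueryRun(),
    SemanticScholarQueryRun(),
    GoogleScholarQueryRun(api_wrapper=GoogleScholarAPIWrapper()),
]
serper = GoogleSerperAPIWrapper()  # used via serper.run("query")
```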
- Endpoint:
/api/crew
- Method:
POST
This endpoint starts the multi-agent system with the given query. The process runs asynchronously, and a job_id is returned to track its progress.
- Request Body:
{ "question": "user query related to their medical condition..." }
- Response:
{ "job_id": <The ID of the subprocess handling the query.> }
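A hedged sketch of what this handler might look like in Flask follows; the in-memory job store and the run_crew helper are assumptions for illustration, not the project's actual implementation (a production setup would typically hand off to a task queue).

```python
# Illustrative Flask handler for POST /api/crew; names and job bookkeeping are assumptions.
import threading
import uuid

from flask import Flask, jsonify, request

app = Flask(__name__)
jobs = {}  # job_id -> {"status": ..., "result": ...}

def run_crew(job_id: str, question: str) -> None:
    # Placeholder for the multi-agent pipeline (e.g. crew.kickoff).
    result = f"Answer to: {question}"
    jobs[job_id] = {"status": "completed", "result": result}

@app.route("/api/crew", methods=["POST"])
def start_crew():
    question = request.get_json().get("question")
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"status": "processing", "result": None}
    # Run the crew in the background so the request returns immediately.
    threading.Thread(target=run_crew, args=(job_id, question), daemon=True).start()
    return jsonify({"job_id": job_id}), 202
```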
- Endpoint:
/api/crew/<job_id>
- Method:
GET
This endpoint polls the status of an initiated query. If the process has completed, the response from the AI system is returned.
- Path Parameter:
job_id: the ID of the subprocess you want to check.
- Response:
- The status of the query (e.g., processing, completed).
- If completed, the final response generated by the multi-agent system.
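Putting both endpoints together, a minimal client-side polling loop might look like the following; the base URL and response field names are assumptions based on the shapes documented above.

```python
# Client sketch: submit a question, then poll until the job completes.
import time

import requests

BASE = "http://localhost:5000"  # assumed host/port

resp = requests.post(f"{BASE}/api/crew", json={"question": "What are common migraine triggers?"})
job_id = resp.json()["job_id"]

while True:
    status = requests.get(f"{BASE}/api/crew/{job_id}").json()
    if status.get("status") == "completed":
        print(status.get("result"))
        break
    time.sleep(2)
```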