HackSmiths-SIH/Med-Python-Server

Medical Chat Assistant Backend

This backend service powers a multi-agent system designed to assist with medical queries by leveraging various research tools and language models. It consists of a Researcher Agent and a Research Manager Agent that work together to gather, compile, and respond to user queries with accurate and relevant medical information.

Overview

Multi-Agentic System

  • Researcher Agent: receives the user's query and searches for relevant information across various sources, including PubMed, Arxiv, Tavily, and Google Scholar.
  • Research Manager Agent: compiles the information retrieved by the Researcher Agent, draws conclusions, and frames the response in the language of a medical professional.
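The division of labor between the two agents can be sketched as two composed functions. This is a framework-free illustration, not the actual implementation: the function names (`researcher`, `research_manager`) are hypothetical, and the real backend wires these roles up as CrewAI agents with live search tools.

```python
# Minimal sketch of the two-role pipeline: the researcher gathers raw
# findings, and the manager compiles them into a single framed response.
# Names and the stubbed source lookups are illustrative assumptions.

def researcher(query: str) -> list[str]:
    """Gather raw findings for the query from several sources."""
    # Stand-in for the real PubMed/Arxiv/Tavily/Google Scholar lookups.
    sources = ["PubMed", "Arxiv", "Tavily", "Google Scholar"]
    return [f"[{source}] findings for: {query}" for source in sources]

def research_manager(findings: list[str]) -> str:
    """Compile findings into a single, professionally framed response."""
    body = "\n".join(f"- {finding}" for finding in findings)
    return f"Summary of the evidence reviewed:\n{body}"

answer = research_manager(researcher("persistent migraine triggers"))
```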

Tech Stack

  • LangChain: A framework for building LLM applications and connecting large language models (LLMs) to external tools and data sources.
  • CrewAI: A library that facilitates the use of multi-agent Retrieval-Augmented Generation (RAG) systems.
  • Flask: A Python-based web framework used to host the chatbot backend.
  • Gemini-1.5-Flash: The LLM used for generating responses within the chatbot.
  • LangChain Tools:
    • DuckDuckGo Search
    • Tavily Search
    • Semantic Scholar
    • Google Scholar
    • Arxiv
    • Serper Dev Tool

API Endpoints

1. Initiate a Research Query

  • Endpoint: /api/crew
  • Method: POST
  • Request Body:
    {
      "question": "user query related to their medical condition..."
    }
    This endpoint starts the multi-agent system with the given query. The process runs asynchronously, and a jobid is returned for tracking progress.
  • Response:
    {
      "jobid": "<the ID of the subprocess handling the query>"
    }
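A call to this endpoint might look like the following standard-library client. The base URL assumes a Flask server on the default `http://localhost:5000`; adjust for your deployment.

```python
# Hypothetical client for POST /api/crew. The host/port is an assumption
# (default Flask settings); only the standard library is used.
import json
import urllib.request

BASE_URL = "http://localhost:5000"  # assumption: default Flask host/port

def build_payload(question: str) -> bytes:
    """Encode the JSON request body expected by /api/crew."""
    return json.dumps({"question": question}).encode("utf-8")

def start_research(question: str) -> str:
    """Submit a query and return the jobid for later polling."""
    req = urllib.request.Request(
        f"{BASE_URL}/api/crew",
        data=build_payload(question),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["jobid"]

if __name__ == "__main__":
    print(start_research("What are common migraine triggers?"))
```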

2. Check the Status of a Research Query

  • Endpoint: /api/crew/<job_id>
  • Method: GET
  • Path Parameter:
    • job_id: The ID of the subprocess you want to check, as returned by the POST endpoint.

This endpoint is used to poll the status of the initiated query. If the process is completed, the response from the AI system will be returned.

  • Response:
    • The status of the query (e.g., processing, completed).
    • If completed, the final response generated by the multi-agent system.
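Clients would typically poll this endpoint in a loop until the job finishes. The sketch below makes two assumptions beyond what the README states: the server runs on `http://localhost:5000`, and the status is reported in a `"status"` field whose completed value is `"completed"`.

```python
# Hypothetical polling loop for GET /api/crew/<job_id>. The base URL and
# the "status"/"completed" field names are assumptions, not the confirmed
# response schema.
import json
import time
import urllib.request

BASE_URL = "http://localhost:5000"

def job_url(job_id: str) -> str:
    """Build the status URL for a given job."""
    return f"{BASE_URL}/api/crew/{job_id}"

def wait_for_result(job_id: str, interval: float = 2.0, attempts: int = 30) -> dict:
    """Poll until the job completes or the attempt budget is exhausted."""
    for _ in range(attempts):
        with urllib.request.urlopen(job_url(job_id)) as resp:
            payload = json.loads(resp.read())
        if payload.get("status") == "completed":
            return payload
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} did not complete in time")
```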
