Oversight AI | An OpenAI-powered artificial intelligence system for creating comprehensive reports in a structured 3-section format with markdown export.
Oversight AI is a 3-step artificial intelligence system powered by OpenAI's GPT models, designed to analyze topics comprehensively and generate detailed, informative reports. It combines OpenAI API integration with a systematic approach to information processing, categorization, and professional report generation in markdown format.
- Purpose: Validate and prepare user-provided topics for analysis
- Process: Input validation, topic normalization, and preparation for processing
- Output: Validated and formatted topic ready for research
- Purpose: Conduct in-depth AI-powered research on the provided topic
- Process: Multi-angle research approach using OpenAI GPT models covering:
- Definitions and comprehensive overviews
- Key concepts and fundamental principles
- Real-world applications and use cases
- Benefits, advantages, and opportunities
- Challenges, limitations, and risks
- Current trends and market developments
- Future outlook and predictions
- Technical specifications and implementation details
- Output: AI-generated comprehensive research data with timing metrics
- Purpose: Systematically categorize information by importance
- Process: Information architect analyzes content using:
- Keyword-based importance scoring
- Content angle assessment
- Length and comprehensiveness evaluation
- Hybrid scoring algorithm
- Output: Categorized information with importance levels:
- High Priority (Critical Information)
- Medium Priority (Important Information)
- Low Priority (Supporting Information)
- Supplementary (Background Information)
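The hybrid scoring step can be sketched roughly as follows; the keyword sets, weights, and thresholds below are invented for illustration and are not the project's actual values:

```python
# Illustrative sketch of a hybrid importance scorer combining keyword
# matches with a length/comprehensiveness signal. Keyword lists, weights,
# and thresholds are assumptions, not the project's real configuration.
HIGH_KEYWORDS = {"critical", "risk", "security", "core"}
MEDIUM_KEYWORDS = {"trend", "benefit", "application"}

def score_chunk(text: str) -> str:
    words = text.lower().split()
    keyword_score = sum(w in HIGH_KEYWORDS for w in words) * 2 \
                  + sum(w in MEDIUM_KEYWORDS for w in words)
    length_score = min(len(words) / 100, 1.0)  # fuller chunks score higher
    total = keyword_score + length_score
    if total >= 2:
        return "high"
    if total >= 1:
        return "medium"
    if total >= 0.5:
        return "low"
    return "supplementary"
```

A real scorer would also weigh the content angle (e.g. "risks" vs. "background"), as the list above indicates.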
- Purpose: Generate comprehensive informative reports in structured markdown format
- Process: Professional report creation with a 3-section structure and multiple format options
- Output: Professional reports with structured sections:
Section 1: Sources Used
- OpenAI GPT model information and confidence levels
- Research methodology and data quality indicators
- Processing metrics and reliability scores
Section 2: Speed & Performance Metrics
- Processing speed and loading time/ETA
- Content generation rate (words per second)
- Quality assurance and coverage completeness
Section 3: Document Content (varies by type)
- Executive summaries with strategic insights
- Detailed analysis with comprehensive categorization
- Technical specifications with implementation details
- Quick summaries with key highlights
- Categorized insights by priority levels
- Actionable recommendations and conclusions
- Executive Summary: High-level strategic insights and recommendations
- Detailed Report: Comprehensive analysis with all categorized information
- Technical Report: In-depth technical analysis with methodology details
- Quick Summary: Concise overview with key highlights
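Putting these pieces together, a generated report might follow a skeleton like the one below; the headings mirror the section names above, while the topic and metric values are purely illustrative:

```markdown
# Oversight AI Report: <Topic>

## Section 1: Sources Used
- Model: gpt-3.5-turbo (confidence: high)
- Research methodology: multi-angle AI research

## Section 2: Speed & Performance Metrics
- Total processing time: 12.4 s
- Content generation rate: 80 words/s

## Section 3: Document Content
### Executive Summary
...
### Categorized Insights
- High Priority: ...
- Medium Priority: ...
### Recommendations
...
```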
- OpenAI Integration: Powered by GPT-3.5-turbo, GPT-4, and other OpenAI models
- Multi-angle Research: AI-driven systematic exploration from various perspectives
- Intelligent Categorization: Advanced importance assessment and priority categorization
- Structured 3-Section Format: Consistent Sources, Performance, and Content sections
- Markdown Export: Professional markdown (.md) file generation and download
- Flexible Reporting: Multiple report formats for different audiences and use cases
- Performance Metrics: Real-time processing speed, timing, and quality indicators
- Quality Assurance: Confidence scoring, reliability assessment, and validation
- Session Management: Track and manage multiple analysis sessions with history
- Dual Export Options: Download reports in both markdown (.md) and text (.txt) formats
All Platforms:
- Python 3.8 or higher
- pip package manager
- OpenAI API key (required for AI functionality)
macOS:
- Xcode Command Line Tools (for some dependencies):
xcode-select --install
- Homebrew (recommended)
Windows:
- Microsoft Visual C++ 14.0 or greater (usually included with Visual Studio)
- Windows 10/11 or Windows Server 2016+
Linux (Ubuntu/Debian):
- python3-dev and python3-venv packages:
sudo apt-get update
sudo apt-get install python3-dev python3-venv python3-pip
Linux (CentOS/RHEL/Fedora):
- Development tools:
sudo yum groupinstall "Development Tools"  # CentOS/RHEL
sudo dnf groupinstall "Development Tools"  # Fedora
rm -rf Oversight  # remove any previous copy
git clone https://github.com/RipScriptos/Oversight.git
cd Oversight
macOS/Linux:
python3 -m venv oversight_env
source oversight_env/bin/activate
Windows (Command Prompt):
python -m venv oversight_env
oversight_env\Scripts\activate
Windows (PowerShell):
python -m venv oversight_env
oversight_env\Scripts\Activate.ps1
All platforms:
pip install -r requirements.txt
Alternative for macOS (if you encounter issues):
pip3 install -r requirements.txt
macOS/Linux:
cp .env.example .env
nano .env # or use your preferred editor
Windows:
copy .env.example .env
notepad .env
Required Configuration:
Add your OpenAI API key to the .env file:
OPENAI_API_KEY=your_openai_api_key_here
OPENAI_MODEL=gpt-3.5-turbo
OPENAI_MAX_TOKENS=2000
OPENAI_TEMPERATURE=0.7
The project requires the following Python packages:
- Flask (2.3.3) - Web framework for the API and web interface
- Flask-CORS (4.0.0) - Cross-Origin Resource Sharing support for embedding
- python-dotenv (1.0.0) - Environment variable management
- openai (>=1.0.0) - OpenAI API client for GPT model integration
- requests (2.31.0) - HTTP library for API calls and web requests
- beautifulsoup4 (4.12.2) - HTML/XML parsing for web content extraction (legacy support)
- lxml (4.9.3) - XML and HTML parser (optional, for better performance)
- urllib3 (2.0.4) - HTTP client library (optional, for enhanced session handling)
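These pins correspond to a requirements.txt along the following lines (the exact file in the repository may differ):

```
Flask==2.3.3
Flask-CORS==4.0.0
python-dotenv==1.0.0
openai>=1.0.0
requests==2.31.0
beautifulsoup4==4.12.2
lxml==4.9.3
urllib3==2.0.4
```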
Run the interactive demo to test the system:
macOS/Linux:
python3 run_demo.py
Windows:
python run_demo.py
Start the web application:
macOS/Linux:
python3 app.py
Windows:
python app.py
For macOS users, you can install dependencies individually using these Terminal commands:
pip3 install flask
pip3 install flask_cors
pip3 install python-dotenv
pip3 install beautifulsoup4
pip3 install openai
pip3 install requests
Note: It's recommended to use pip install -r requirements.txt instead for automatic dependency management.
Access the web interface at: http://localhost:12001
from src.oversight_ai import OversightAI

# Initialize the system
oversight_ai = OversightAI()

# Process a topic
result = oversight_ai.process_topic("Artificial Intelligence", "detailed")

if result['success']:
    print(f"Report generated: {result['session_id']}")
    print(f"Processing time: {result['processing_time']:.2f} seconds")

    # Access markdown report
    print("Markdown Report:")
    print(result['markdown_report'])

    # Or access text report
    print("Text Report:")
    print(result['text_report'])
else:
    print(f"Error: {result['error']}")
- POST /api/analyze - Analyze a topic with OpenAI integration
- GET /api/status/<session_id> - Get processing status and progress
- GET /api/results/<session_id> - Get session results with performance metrics
- GET /api/download/<session_id> - Download report as markdown (.md) by default
- GET /api/download/<session_id>/markdown - Download report as markdown (.md)
- GET /api/download/<session_id>/text - Download report as text (.txt)
- GET /api/history - Get processing history with timing data
- GET /api/statistics - Get system statistics and performance metrics
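As a sketch, a minimal API client using only the standard library might look like this; the JSON field names ("topic", "report_type", "session_id") are assumptions inferred from the Python usage example, not verified against the route handlers:

```python
import json
import urllib.request

BASE = "http://localhost:12001"

def analyze(topic: str, report_type: str = "detailed") -> str:
    """Submit a topic and fetch the markdown report.
    Field names ("topic", "report_type", "session_id") are assumed,
    not confirmed against the actual Flask routes."""
    req = urllib.request.Request(
        f"{BASE}/api/analyze",
        data=json.dumps({"topic": topic, "report_type": report_type}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        session_id = json.load(resp)["session_id"]
    # A real client would poll /api/status/<session_id> until processing finishes.
    with urllib.request.urlopen(f"{BASE}/api/download/{session_id}/markdown") as resp:
        return resp.read().decode()
```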
- Conducts AI-powered multi-angle research using OpenAI GPT models
- Generates comprehensive information from 8+ different research perspectives
- Provides research summaries with performance metrics and timing data
- Includes fallback mechanisms and error handling for API reliability
- Analyzes content importance using hybrid scoring
- Categorizes information into priority levels
- Provides confidence metrics and reasoning
- Creates professional reports in structured 3-section format
- Generates both markdown (.md) and text (.txt) export formats
- Structures information hierarchically with proper markdown formatting
- Includes performance metrics, timing data, and quality indicators
- Generates actionable insights and recommendations with source attribution
- Orchestrates the 3-step process with OpenAI integration
- Manages sessions with comprehensive processing history and timing data
- Provides system statistics, performance monitoring, and quality metrics
- Handles both markdown and text report generation and export
- Flask-based web interface with OpenAI integration
- RESTful API for programmatic access with enhanced endpoints
- Real-time processing status updates with performance metrics
- Dual format report download functionality (markdown and text)
- Enhanced error handling and API key validation
Run component tests to verify core functionality:
python test_simple.py
This will test:
- Configuration structure and validation
- Markdown export functionality
- Report formatting and structure
- Core system components
Run integration tests with mocked OpenAI calls:
python test_integration.py
This will test:
- OpenAI integration with mocked responses
- Complete 3-step processing pipeline
- Markdown and text report generation
- Performance metrics and timing data
- Error handling and fallback mechanisms
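The mocked-OpenAI approach can be sketched as follows; the fake objects mimic the shape of an openai>=1.0 chat-completion response, and the patch target named in the comment is hypothetical:

```python
from unittest import mock

# Stand-ins that let tests run offline. The attribute shape
# (choices[0].message.content) mirrors the openai>=1.0 response;
# the patch target in the comment below is hypothetical.
class FakeMessage:
    content = "Mocked research findings about the topic."

class FakeChoice:
    message = FakeMessage()

class FakeResponse:
    choices = [FakeChoice()]

def fake_create(**kwargs):
    return FakeResponse()

# In a real test you would patch the client used by the research engine, e.g.:
# with mock.patch.object(client.chat.completions, "create", fake_create): ...
result = fake_create(model="gpt-3.5-turbo", messages=[])
text = result.choices[0].message.content
```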
- OPENAI_API_KEY: Your OpenAI API key (required for AI functionality)
- OPENAI_MODEL: OpenAI model to use (default: gpt-3.5-turbo)
- OPENAI_MAX_TOKENS: Maximum tokens per API request (default: 2000)
- OPENAI_TEMPERATURE: Response creativity level (default: 0.7)
- SECRET_KEY: Flask application secret key
- DEBUG: Enable/disable debug mode (default: True)
- PORT: Server port (default: 12001)
- HOST: Server host (default: 0.0.0.0)
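Loading these settings might look like the sketch below; the project itself loads .env via python-dotenv first, and the helper name here is illustrative. Defaults mirror the documented ones:

```python
import os

# Illustrative config loader; the project uses python-dotenv to populate
# os.environ from .env before reading. Function name is hypothetical.
def load_config() -> dict:
    return {
        "OPENAI_API_KEY": os.getenv("OPENAI_API_KEY"),  # required
        "OPENAI_MODEL": os.getenv("OPENAI_MODEL", "gpt-3.5-turbo"),
        "OPENAI_MAX_TOKENS": int(os.getenv("OPENAI_MAX_TOKENS", "2000")),
        "OPENAI_TEMPERATURE": float(os.getenv("OPENAI_TEMPERATURE", "0.7")),
        "DEBUG": os.getenv("DEBUG", "True") == "True",
        "PORT": int(os.getenv("PORT", "12001")),
        "HOST": os.getenv("HOST", "0.0.0.0"),
    }

cfg = load_config()
```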
The system supports four report types, each optimized for different use cases:
- Executive: Strategic decision-makers
- Detailed: Comprehensive analysis needs
- Technical: Technical teams and developers
- Summary: Quick overview requirements
The system tracks comprehensive performance metrics:
- Processing Speed: Total time for complete analysis with OpenAI API calls
- Loading Time/ETA: Individual research angle processing times
- Content Generation Rate: Words generated per second by AI models
- API Response Times: OpenAI API call durations and success rates
- Quality Scores: Confidence levels and reliability indicators
- Success/Failure Rates: System reliability and error tracking
- Topic Diversity Analysis: Coverage and categorization effectiveness
- System Usage Statistics: Session history and performance trends
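For instance, the content-generation-rate metric reduces to a simple words-over-time calculation; this helper is an illustrative sketch, not the project's actual implementation:

```python
# Sketch of the words-per-second metric described above; the helper
# name and guard behavior are assumptions for illustration.
def generation_rate(text: str, elapsed_seconds: float) -> float:
    """Words generated per second; 0.0 when no time has elapsed."""
    words = len(text.split())
    return words / elapsed_seconds if elapsed_seconds > 0 else 0.0

rate = generation_rate("word " * 200, 2.5)  # 200 words in 2.5 s -> 80.0
```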
- Overall categorization confidence
- Distribution balance assessment
- Content quality indicators
- Processing reliability metrics
- Input topic validation
- Report type verification
- Content categorization accuracy
- Output format consistency
- Market research analysis
- Competitive intelligence
- Strategic planning support
- Industry trend analysis
- Literature review assistance
- Topic exploration
- Research methodology guidance
- Knowledge synthesis
- Technology assessment
- Implementation planning
- Risk analysis
- Best practices compilation
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests for new functionality
- Submit a pull request
This project is released under the Unlicense - see the LICENSE file for details.
For questions, issues, or feature requests, please:
- Check the existing documentation
- Search for existing issues
- Create a new issue with detailed information
- Provide system information and error logs
Error: OPENAI_API_KEY environment variable is required
Solution: Add your OpenAI API key to the .env file:
OPENAI_API_KEY=your_actual_api_key_here
Error: Rate limit exceeded for requests
Solution: Wait and retry, or upgrade your OpenAI plan for higher rate limits.
Error: The model 'gpt-4' does not exist
Solution: Check your OpenAI plan and use an available model such as gpt-3.5-turbo.
macOS/Linux:
- Try using python3 instead of python
- Install Python 3.8+ from python.org or use Homebrew:
brew install python3
Windows:
- Install Python from python.org and make sure to check "Add Python to PATH"
- Try using py instead of python
macOS/Linux:
# If python3-venv is not installed (Ubuntu/Debian)
sudo apt-get install python3-venv
# Alternative virtual environment creation
python3 -m virtualenv oversight_env
Windows PowerShell Execution Policy:
# If you get execution policy errors
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
macOS/Linux:
# If you get permission errors with pip
pip install --user -r requirements.txt
Windows:
# Run Command Prompt as Administrator if needed
If port 12001 is already in use, you can change it in your .env file:
PORT=8080
- Check the Issues page for known problems
- Create a new issue if you encounter a bug or need help
- Multi-Model Support: Integration with Claude, Gemini, and other AI models
- Advanced Caching: Intelligent caching for improved performance and cost efficiency
- Batch Processing: Multiple topic analysis in single requests
- Custom Templates: User-defined report templates and formatting options
- Real-time Collaboration: Multi-user analysis and report sharing
- Enhanced Analytics: Advanced performance metrics and usage insights
- API Integrations: External data source connections and enrichment
- Intelligent Caching: Redis-based caching for API responses and processed data
- Parallel Processing: Concurrent API calls and multi-threaded analysis
- Database Integration: PostgreSQL/MongoDB for persistent storage and analytics
- Advanced Analytics: Machine learning-based performance optimization
- Stream Processing: Real-time analysis and progressive report generation
Oversight AI - Transforming information into actionable intelligence through OpenAI-powered systematic analysis, intelligent categorization, and professional markdown reporting.