
Conversation

@mikejmorgan-ai
Member

@mikejmorgan-ai mikejmorgan-ai commented Dec 10, 2025

Summary

Implements Model Context Protocol (MCP) server for Cortex Linux, allowing any MCP-compatible AI assistant to manage packages.

What's Included

  • AGENTS.md - AI coding agent guidelines (AAIF standard)
  • MCP Server - Full implementation with 6 tools
  • Documentation - Setup guides for Claude, Cursor, VS Code

Tools Exposed

| Tool | Description |
| --- | --- |
| `install_package` | Install packages via natural language |
| `search_packages` | Search the package database |
| `get_history` | View installation history |
| `rollback` | Roll back installations |
| `detect_hardware` | Detect GPU/CPU |
| `system_status` | Report system disk and package status |

Safety Features

  • ✅ Dry-run by default
  • ✅ Explicit execution required
  • ✅ Firejail sandboxing
  • ✅ Audit logging
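The first two safety points amount to a default that callers must explicitly override. A minimal sketch of that pattern (the helper name is hypothetical; the real wiring lives in the server's tool handlers):

```python
def build_install_args(request: str, dry_run: bool = True) -> list[str]:
    """Build CLI arguments for an install request; dry-run unless explicitly disabled."""
    args = ["install", request]
    if dry_run:
        args.append("--dry-run")
    return args

# Explicit execution requires opting out of the default:
build_install_args("nginx")                  # ['install', 'nginx', '--dry-run']
build_install_args("nginx", dry_run=False)   # ['install', 'nginx']
```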

Why This Matters

Linux Foundation announced AAIF today (Dec 9, 2025) consolidating MCP, goose, and AGENTS.md. This positions Cortex as the first MCP-native package manager.

Testing

pip install "mcp[cli]"
python mcp/cortex_mcp_server.py
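For Claude Desktop, registering a stdio MCP server uses an `mcpServers` entry in `claude_desktop_config.json`; the server name and script path below are illustrative (see README_MCP.md for the exact values):

```json
{
  "mcpServers": {
    "cortex": {
      "command": "python",
      "args": ["mcp/cortex_mcp_server.py"]
    }
  }
}
```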

Ready for review.

Summary by CodeRabbit

  • Documentation

    • Added comprehensive agent guidelines documentation for developers
    • Added MCP Server setup and configuration guide
  • New Features

    • Introduced MCP Server integration enabling AI-powered package management with six available tools: install packages, search packages, view history, rollback changes, detect hardware, and system status


- Add AGENTS.md for AI coding agent guidance
- Implement MCP server exposing Cortex capabilities
- Tools: install, search, history, rollback, detect_hardware
- Dry-run default for safety
- Configuration examples for Claude, Cursor, VS Code

Positions Cortex as first MCP-native package manager.
Aligns with AAIF (Linux Foundation) standards.

Closes #XXX
@mikejmorgan-ai added the `enhancement` (New feature or request) and `high-priority` labels on Dec 10, 2025
@coderabbitai
Contributor

coderabbitai bot commented Dec 10, 2025

Walkthrough

Introduces a new MCP server implementation that exposes Cortex Linux tools to Claude Desktop and other MCP-compatible clients. Adds documentation for agent guidelines and MCP server setup, along with package initialization to expose the server at the package level.

Changes

| Cohort / File(s) | Change Summary |
| --- | --- |
| **Documentation**<br>`AGENTS.md`, `README_MCP.md` | New documentation files: AGENTS.md provides guidelines for Cortex Linux agent development, testing, and code standards; README_MCP.md documents MCP server installation, configuration, available tools, and safety practices. |
| **Package Initialization**<br>`mcp/__init__.py` | Adds imports and an `__all__` export for `CortexMCPServer` and `main` from the `cortex_mcp_server` module to expose a package-level API. |
| **MCP Server Implementation**<br>`mcp/cortex_mcp_server.py` | New module implementing the `CortexMCPServer` class with MCP tool endpoints `list_tools` and `call_tool`. Includes six async tool handlers (`install_package`, `search_packages`, `get_history`, `rollback`, `detect_hardware`, `system_status`) that dispatch to the cortex CLI or gather system information; supports dry-run modes, error handling, and stdio-based server integration. |

Sequence Diagram

sequenceDiagram
    actor User as MCP Client<br/>(Claude Desktop)
    participant Server as CortexMCPServer
    participant CLI as Cortex CLI
    participant System as System Resources<br/>(Package DB, Hardware, FS)
    
    User->>Server: call_tool("install_package", {request, dry_run})
    activate Server
    Server->>CLI: cortex install [args] --dry-run
    activate CLI
    CLI->>System: Query package manager, execute
    System-->>CLI: stdout, stderr
    CLI-->>Server: exit code, output
    deactivate CLI
    Server->>Server: Parse result, format as JSON
    Server-->>User: TextContent(JSON result)
    deactivate Server
    
    User->>Server: call_tool("detect_hardware")
    activate Server
    Server->>System: Read /proc/cpuinfo
    System-->>Server: CPU info
    Server->>CLI: nvidia-smi --query-gpu=name
    activate CLI
    CLI->>System: GPU query
    System-->>CLI: GPU name or error
    CLI-->>Server: output/empty
    deactivate CLI
    Server->>Server: Aggregate hardware dict
    Server-->>User: TextContent({cpu, gpu})
    deactivate Server

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~30 minutes

Areas requiring extra attention:

  • Async subprocess handling in _run_cortex(): ensure proper stream capture, error propagation, and missing cortex CLI fallback
  • Hardware detection logic: parsing /proc/cpuinfo and nvidia-smi output for robustness across different system configurations
  • Tool input validation and schema adherence: verify all tool parameters match declared schemas and handle edge cases
  • Error handling and safe JSON serialization: confirm exceptions are caught and converted to proper error payloads
  • Dry-run flag propagation: validate that dry-run modes are correctly applied across install and rollback handlers

Poem

🐰 A server blooms in MCP's way,
With tools to install, detect, and survey,
Claude's new companion, swift and keen,
Makes Cortex Linux's power seen! ✨🛠️

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Description check | ⚠️ Warning | PR description is well-structured with summary, tools list, safety features, and testing instructions, but is missing required fields from the template: related issue number and checklist. | Add a 'Closes #XXX' field for the related issue and include the required checklist with checkboxes for tests and PR title format validation. |
| Docstring coverage | ⚠️ Warning | Docstring coverage is 0.00%, below the required threshold of 80.00%. | Run `@coderabbitai generate docstrings` to improve docstring coverage. |
✅ Passed checks (1 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | Title 'Add MCP server for AI assistant integration' clearly describes the main feature being added and relates directly to the changeset. |


Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (7)
AGENTS.md (1)

7-8: Optionally address markdownlint’s “bare URL” warnings

markdownlint is flagging the bare URLs and email; if you care about a clean lint pass, you can wrap them as Markdown links (e.g., [repo](https://github.com/...), [Discord](https://discord.gg/...), [email](mailto:[email protected])), otherwise this is safe to ignore.

Also applies to: 80-81

mcp/cortex_mcp_server.py (6)

17-27: Avoid exiting at import time and configuring logging in library code

Calling sys.exit(1) and logging.basicConfig(...) at import time can be unfriendly for library consumers (e.g., importing mcp just to introspect types). Consider restricting both to the CLI entrypoint instead:

-try:
-    from mcp.server import Server
-    from mcp.server.stdio import stdio_server
-    from mcp.types import Tool, TextContent, CallToolResult, ListToolsResult
-except ImportError:
-    print("MCP SDK not installed. Run: pip install mcp[cli]")
-    import sys
-    sys.exit(1)
-
-logging.basicConfig(level=logging.INFO)
+try:
+    from mcp.server import Server
+    from mcp.server.stdio import stdio_server
+    from mcp.types import Tool, TextContent, CallToolResult, ListToolsResult
+except ImportError as exc:
+    # Defer user-friendly messaging to the CLI entrypoint.
+    raise
+
+logging.basicConfig(level=logging.INFO)

and then in main() emit the user-facing hint if import fails, instead of exiting at import time.


39-42: Prefer shutil.which over spawning which via subprocess

You can avoid an extra process, fix Ruff S607, and simplify _find_cortex by using shutil.which:

+import shutil
@@
-    def _find_cortex(self) -> str:
-        result = subprocess.run(["which", "cortex"], capture_output=True, text=True)
-        return result.stdout.strip() if result.returncode == 0 else "cortex"
+    def _find_cortex(self) -> str:
+        path = shutil.which("cortex")
+        return path or "cortex"

104-140: Tighten call_tool error handling and validation, and log unexpected failures

The central dispatch is straightforward, but two small changes would improve robustness and debuggability:

  • Validate that required arguments (e.g., request, query, installation_id) are present/non-empty before shelling out, returning a structured error instead of calling the CLI with "".
  • In the except Exception block, log the exception via logger.exception(...) so unexpected failures are visible in logs, while still returning an MCP-formatted error.

For example:

        @self.server.call_tool()
        async def call_tool(name: str, arguments: dict) -> CallToolResult:
            try:
+                # Basic validation against tool schemas before dispatch.
+                if name == "install_package" and not arguments.get("request"):
+                    return CallToolResult(
+                        content=[TextContent(type="text", text=json.dumps(
+                            {"error": "Missing required argument: request"}
+                        ))],
+                        isError=True,
+                    )
@@
                 else:
-                    result = {"error": f"Unknown tool: {name}"}
+                    result = {"error": f"Unknown tool: {name}"}
@@
-            except Exception as e:
-                return CallToolResult(
-                    content=[TextContent(type="text", text=json.dumps({"error": str(e)}))],
-                    isError=True
-                )
+            except Exception as e:
+                logger.exception("Unhandled exception in call_tool(%s)", name)
+                return CallToolResult(
+                    content=[TextContent(type="text", text=json.dumps({"error": str(e)}))],
+                    isError=True,
+                )

141-155: Minor subprocess helper improvements (_run_cortex)

A couple of small tweaks here would address Ruff feedback and make failures easier to reason about:

  • Use [self._cortex_path, *args] instead of list concatenation.
  • Optionally include the raw returncode in the result dict so callers can distinguish different failure modes.
  • Consider adding a timeout (via asyncio.wait_for) so a hung cortex process doesn’t block the MCP server indefinitely.

Example:

-        cmd = [self._cortex_path] + args
+        cmd = [self._cortex_path, *args]
@@
-            return {
-                "success": process.returncode == 0,
-                "stdout": stdout.decode("utf-8"),
-                "stderr": stderr.decode("utf-8")
-            }
+            return {
+                "success": process.returncode == 0,
+                "returncode": process.returncode,
+                "stdout": stdout.decode("utf-8"),
+                "stderr": stderr.decode("utf-8"),
+            }
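The timeout suggestion above has no snippet in the diff; a sketch of one way to bound a hung subprocess with `asyncio.wait_for` (timeout value and helper name are illustrative):

```python
import asyncio
import sys

async def run_with_timeout(cmd: list[str], timeout: float = 60.0) -> dict:
    """Run a subprocess, returning a structured error if it hangs past the timeout."""
    process = await asyncio.create_subprocess_exec(
        *cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
    )
    try:
        stdout, stderr = await asyncio.wait_for(process.communicate(), timeout=timeout)
    except asyncio.TimeoutError:
        process.kill()
        await process.wait()
        return {"success": False, "error": f"timed out after {timeout}s"}
    return {
        "success": process.returncode == 0,
        "returncode": process.returncode,
        "stdout": stdout.decode("utf-8"),
        "stderr": stderr.decode("utf-8"),
    }

# Quick check using the current Python interpreter as the subprocess:
result = asyncio.run(run_with_timeout([sys.executable, "-c", "print('ok')"]))
```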

163-173: Harden apt-cache search helper and address Ruff’s l variable warning

Two practical issues here:

  • An empty query will cause apt-cache search "" to traverse the entire index, which can be slow and noisy; rejecting empty/too-short queries is safer.
  • Ruff rightfully flags the ambiguous variable name l.

Consider:

-    async def _search_packages(self, query: str, limit: int = 10) -> dict:
-        process = await asyncio.create_subprocess_exec(
-            "apt-cache", "search", query,
+    async def _search_packages(self, query: str, limit: int = 10) -> dict:
+        if not query.strip():
+            return {"query": query, "count": 0, "packages": [], "error": "query must be non-empty"}
+
+        process = await asyncio.create_subprocess_exec(
+            "apt-cache", "search", query,
             stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
         )
-        stdout, _ = await process.communicate()
-        lines = stdout.decode("utf-8").strip().split("\n")[:limit]
-        packages = [{"name": l.split(" - ")[0], "description": l.split(" - ")[1]} 
-                   for l in lines if " - " in l]
+        stdout, stderr = await process.communicate()
+        lines = stdout.decode("utf-8").strip().split("\n")[:limit]
+        packages = [
+            {"name": line.split(" - ", 1)[0], "description": line.split(" - ", 1)[1]}
+            for line in lines
+            if " - " in line
+        ]
         return {"query": query, "count": len(packages), "packages": packages}

209-219: Consider defensive checks around the `df -h` invocation and output parsing

The happy path is fine, but a couple of small guards would make _system_status more robust if df output is unexpected:

  • Wrap the create_subprocess_exec + communicate in a try/except FileNotFoundError and return a structured error instead of bubbling up.
  • Check len(parts) >= 4 before indexing parts[1:4] to avoid IndexError on odd platforms or locales.

Not strictly required, but would keep the MCP server resilient if the underlying environment is slightly different.
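A sketch of the guarded parsing, using an inline sample instead of actually invoking `df -h` (the helper name is hypothetical):

```python
def parse_df_line(output: str) -> dict:
    """Parse the first data row of `df -h`-style output, defensively."""
    lines = output.strip().splitlines()
    if len(lines) < 2:
        return {"error": "unexpected df output"}
    parts = lines[1].split()
    if len(parts) < 4:  # guard before indexing parts[1:4]
        return {"error": "unexpected df output"}
    total, used, available = parts[1:4]
    return {"total": total, "used": used, "available": available}

sample = "Filesystem      Size  Used Avail Use% Mounted on\n/dev/sda1        50G   20G   28G  42% /"
parse_df_line(sample)  # {'total': '50G', 'used': '20G', 'available': '28G'}
```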

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d2eb10c and f44a517.

📒 Files selected for processing (4)
  • AGENTS.md (1 hunks)
  • README_MCP.md (1 hunks)
  • mcp/__init__.py (1 hunks)
  • mcp/cortex_mcp_server.py (1 hunks)
🧰 Additional context used
🪛 GitHub Check: SonarCloud Code Analysis
mcp/cortex_mcp_server.py

[failure] 204-204: Specify an exception class to catch or reraise the exception

See more on https://sonarcloud.io/project/issues?id=cortexlinux_cortex&issues=AZsGLbGTzTotR7JyVaCG&open=AZsGLbGTzTotR7JyVaCG&pullRequest=286


[failure] 188-188: Use an asynchronous file API instead of synchronous open() in this async function.

See more on https://sonarcloud.io/project/issues?id=cortexlinux_cortex&issues=AZsGLbGTzTotR7JyVaCF&open=AZsGLbGTzTotR7JyVaCF&pullRequest=286


[failure] 193-193: Specify an exception class to catch or reraise the exception

See more on https://sonarcloud.io/project/issues?id=cortexlinux_cortex&issues=AZsGLbGTzTotR7JyVaCE&open=AZsGLbGTzTotR7JyVaCE&pullRequest=286

🪛 markdownlint-cli2 (0.18.1)
AGENTS.md

7-7: Bare URL used

(MD034, no-bare-urls)


80-80: Bare URL used

(MD034, no-bare-urls)


81-81: Bare URL used

(MD034, no-bare-urls)

🪛 Ruff (0.14.8)
mcp/cortex_mcp_server.py

40-40: Starting a process with a partial executable path

(S607)


135-135: Do not catch blind exception: Exception

(BLE001)


142-142: Consider [self._cortex_path, *args] instead of concatenation

Replace with [self._cortex_path, *args]

(RUF005)


171-171: Ambiguous variable name: l

(E741)


193-193: Do not use bare except

(E722)


204-204: Do not use bare except

(E722)

🔇 Additional comments (7)
AGENTS.md (1)

5-56: Quick start and dev/test flow look consistent with the project goals

Cloning, venv creation, editable install, API key setup, and pytest invocation all read clean and match the rest of the repo’s tooling; I don’t see any blocking issues here. You may just want to double‑check that cortex-detect-hardware is the correct CLI entrypoint name vs. a cortex detect-hardware subcommand to keep the doc perfectly aligned with the actual binary.

mcp/cortex_mcp_server.py (3)

43-103: Tool metadata and schemas align well with the documented MCP surface

The list_tools handler cleanly exposes the six tools with sensible defaults (notably dry_run=True for destructive operations) and minimal schemas; this lines up with the README and PR description and looks good as an initial public surface.


174-183: History and rollback wrappers are consistent with the CLI surface

These thin wrappers around cortex history and cortex rollback keep all the real logic/safety in the underlying CLI and just shape the JSON; the argument handling and default dry_run=True behavior look correct to me.


221-231: Entry point wiring with stdio_server looks appropriate

CortexMCPServer.run() using stdio_server() and server.create_initialization_options() matches the expected MCP stdio pattern, and the main() wrapper via asyncio.run gives you a clean CLI entrypoint.

mcp/__init__.py (1)

1-4: Package-level re-exports are clear and minimal

Re-exporting CortexMCPServer and main via __all__ is a clean way to give consumers a stable import path (from mcp import CortexMCPServer, main); no issues here.

README_MCP.md (2)

1-42: README content matches the MCP server surface and safety model

Tool list, safety notes (dry‑run by default, audit logging, sandboxing), and the Claude Desktop JSON snippet all line up well with the server implementation and overall PR goals. This is a solid, succinct intro to the MCP server.


7-23: Verify package name and CLI entrypoint for cortex-mcp-server

The README assumes:

  • pip install cortex-mcp-server
  • A cortex-mcp-server CLI on $PATH that runs main().

Please double‑check that your packaging/entrypoint config (e.g., pyproject.toml or setup.cfg) actually defines a console_scripts entry for cortex-mcp-server; otherwise users following this README will hit a “command not found”.

Comment on lines +185 to +207
    async def _detect_hardware(self) -> dict:
        hardware = {}
        try:
            with open("/proc/cpuinfo") as f:
                for line in f:
                    if line.startswith("model name"):
                        hardware["cpu"] = line.split(":")[1].strip()
                        break
        except:
            hardware["cpu"] = "Unknown"

        try:
            process = await asyncio.create_subprocess_exec(
                "nvidia-smi", "--query-gpu=name", "--format=csv,noheader",
                stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
            )
            stdout, _ = await process.communicate()
            if process.returncode == 0:
                hardware["gpu"] = stdout.decode("utf-8").strip()
        except:
            hardware["gpu"] = None

        return hardware
⚠️ Potential issue | 🔴 Critical



Fix Sonar failures: avoid blocking I/O in async context and replace bare except blocks

The code has two issues flagged by Sonar:

  1. Synchronous open() on /proc/cpuinfo inside an async def blocks the event loop
  2. Two bare except blocks that suppress all exceptions indiscriminately

Move the blocking file read into a thread using asyncio.to_thread() and catch specific exceptions:

-    async def _detect_hardware(self) -> dict:
-        hardware = {}
-        try:
-            with open("/proc/cpuinfo") as f:
-                for line in f:
-                    if line.startswith("model name"):
-                        hardware["cpu"] = line.split(":")[1].strip()
-                        break
-        except:
-            hardware["cpu"] = "Unknown"
-        
-        try:
-            process = await asyncio.create_subprocess_exec(
-                "nvidia-smi", "--query-gpu=name", "--format=csv,noheader",
-                stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
-            )
-            stdout, _ = await process.communicate()
-            if process.returncode == 0:
-                hardware["gpu"] = stdout.decode("utf-8").strip()
-        except:
-            hardware["gpu"] = None
-        
-        return hardware
+    async def _detect_hardware(self) -> dict:
+        hardware: dict[str, Optional[str]] = {}
+
+        def _read_cpu_model() -> Optional[str]:
+            try:
+                with open("/proc/cpuinfo") as f:
+                    for line in f:
+                        if line.startswith("model name"):
+                            return line.split(":", 1)[1].strip()
+            except OSError:
+                return None
+            return None
+
+        hardware["cpu"] = await asyncio.to_thread(_read_cpu_model) or "Unknown"
+
+        try:
+            process = await asyncio.create_subprocess_exec(
+                "nvidia-smi",
+                "--query-gpu=name",
+                "--format=csv,noheader",
+                stdout=asyncio.subprocess.PIPE,
+                stderr=asyncio.subprocess.PIPE,
+            )
+            stdout, _ = await process.communicate()
+            if process.returncode == 0:
+                hardware["gpu"] = stdout.decode("utf-8").strip()
+            else:
+                hardware["gpu"] = None
+        except FileNotFoundError:
+            hardware["gpu"] = None
+
+        return hardware

This preserves the original behavior (CPU defaults to "Unknown", GPU defaults to None on failure) while satisfying Sonar's requirements. The project targets Python 3.10+, so asyncio.to_thread() is available.


🤖 Prompt for AI Agents
In mcp/cortex_mcp_server.py around lines 185 to 207, avoid blocking I/O and bare
excepts: move reading /proc/cpuinfo into a thread via asyncio.to_thread() and
parse the content there, catching FileNotFoundError and OSError to set
hardware["cpu"]="Unknown" on failure; for the nvidia-smi call, catch
FileNotFoundError and OSError around asyncio.create_subprocess_exec (and keep
the existing returncode check) and set hardware["gpu"]=None on failure — do not
use bare excepts and preserve the original defaults.

@mikejmorgan-ai
Member Author

Hey @shashankxrm - This PR adds MCP server integration. Linux Foundation announced AAIF today - this positions Cortex as first MCP-native package manager. Please review. 🚀

@sonarqubecloud

❌ Quality Gate failed

Failed conditions
Reliability Rating on New Code is C (required ≥ A)

See analysis details on SonarQube Cloud


@mikejmorgan-ai mikejmorgan-ai merged commit 5a1755c into main Dec 11, 2025
12 of 14 checks passed
@mikejmorgan-ai mikejmorgan-ai deleted the feature/mcp-server branch December 11, 2025 12:03

Labels

`enhancement` (New feature or request), `high-priority`
