[feature] Add MCP server for AI assistant integration #286
Conversation
- Add AGENTS.md for AI coding agent guidance
- Implement MCP server exposing Cortex capabilities
- Tools: install, search, history, rollback, detect_hardware
- Dry-run default for safety
- Configuration examples for Claude, Cursor, VS Code

Positions Cortex as the first MCP-native package manager. Aligns with AAIF (Linux Foundation) standards.

Closes #XXX
Walkthrough

Introduces a new MCP server implementation that exposes Cortex Linux tools to Claude Desktop and other MCP-compatible clients. Adds documentation for agent guidelines and MCP server setup, along with package initialization to expose the server at the package level.
Sequence Diagram

```mermaid
sequenceDiagram
    actor User as MCP Client<br/>(Claude Desktop)
    participant Server as CortexMCPServer
    participant CLI as Cortex CLI
    participant System as System Resources<br/>(Package DB, Hardware, FS)

    User->>Server: call_tool("install_package", {request, dry_run})
    activate Server
    Server->>CLI: cortex install [args] --dry-run
    activate CLI
    CLI->>System: Query package manager, execute
    System-->>CLI: stdout, stderr
    CLI-->>Server: exit code, output
    deactivate CLI
    Server->>Server: Parse result, format as JSON
    Server-->>User: TextContent(JSON result)
    deactivate Server

    User->>Server: call_tool("detect_hardware")
    activate Server
    Server->>System: Read /proc/cpuinfo
    System-->>Server: CPU info
    Server->>CLI: nvidia-smi --query-gpu=name
    activate CLI
    CLI->>System: GPU query
    System-->>CLI: GPU name or error
    CLI-->>Server: output/empty
    deactivate CLI
    Server->>Server: Aggregate hardware dict
    Server-->>User: TextContent({cpu, gpu})
    deactivate Server
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~30 minutes
Actionable comments posted: 1
🧹 Nitpick comments (7)
AGENTS.md (1)
**7-8**: Optionally address markdownlint's "bare URL" warnings

markdownlint is flagging the bare URLs and email; if you care about a clean lint pass, you can wrap them as Markdown links (e.g., `[repo](https://github.com/...)`, `[Discord](https://discord.gg/...)`, `[email](mailto:[email protected])`); otherwise this is safe to ignore.

Also applies to: 80-81
mcp/cortex_mcp_server.py (6)
**17-27**: Avoid exiting at import time and configuring logging in library code

Calling `sys.exit(1)` and `logging.basicConfig(...)` at import time can be unfriendly for library consumers (e.g., importing `mcp` just to introspect types). Consider restricting both to the CLI entrypoint instead:

```diff
-try:
-    from mcp.server import Server
-    from mcp.server.stdio import stdio_server
-    from mcp.types import Tool, TextContent, CallToolResult, ListToolsResult
-except ImportError:
-    print("MCP SDK not installed. Run: pip install mcp[cli]")
-    import sys
-    sys.exit(1)
-
-logging.basicConfig(level=logging.INFO)
+try:
+    from mcp.server import Server
+    from mcp.server.stdio import stdio_server
+    from mcp.types import Tool, TextContent, CallToolResult, ListToolsResult
+except ImportError as exc:
+    # Defer user-friendly messaging to the CLI entrypoint.
+    raise
+
+logging.basicConfig(level=logging.INFO)
```

and then in `main()` emit the user-facing hint if import fails, instead of exiting at import time.
**39-42**: Prefer `shutil.which` over spawning `which` via `subprocess`

You can avoid an extra process, fix Ruff S607, and simplify `_find_cortex` by using `shutil.which`:

```diff
+import shutil
 @@
-    def _find_cortex(self) -> str:
-        result = subprocess.run(["which", "cortex"], capture_output=True, text=True)
-        return result.stdout.strip() if result.returncode == 0 else "cortex"
+    def _find_cortex(self) -> str:
+        path = shutil.which("cortex")
+        return path or "cortex"
```
**104-140**: Tighten `call_tool` error handling and validation, and log unexpected failures

The central dispatch is straightforward, but two small changes would improve robustness and debuggability:

- Validate that required arguments (e.g., `request`, `query`, `installation_id`) are present/non-empty before shelling out, returning a structured error instead of calling the CLI with `""`.
- In the `except Exception` block, log the exception via `logger.exception(...)` so unexpected failures are visible in logs, while still returning an MCP-formatted error.

For example:

```diff
 @self.server.call_tool()
 async def call_tool(name: str, arguments: dict) -> CallToolResult:
     try:
+        # Basic validation against tool schemas before dispatch.
+        if name == "install_package" and not arguments.get("request"):
+            return CallToolResult(
+                content=[TextContent(type="text", text=json.dumps(
+                    {"error": "Missing required argument: request"}
+                ))],
+                isError=True,
+            )
 @@
         else:
-            result = {"error": f"Unknown tool: {name}"}
+            result = {"error": f"Unknown tool: {name}"}
 @@
-    except Exception as e:
-        return CallToolResult(
-            content=[TextContent(type="text", text=json.dumps({"error": str(e)}))],
-            isError=True
-        )
+    except Exception as e:
+        logger.exception("Unhandled exception in call_tool(%s)", name)
+        return CallToolResult(
+            content=[TextContent(type="text", text=json.dumps({"error": str(e)}))],
+            isError=True,
+        )
```
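A generic version of that pre-dispatch validation could be factored out like this (a sketch; `REQUIRED_ARGS` and `missing_args` are hypothetical names, not part of the PR):

```python
# Hypothetical map of tool name -> required argument names, mirroring
# the per-tool required fields described in the review (an assumption).
REQUIRED_ARGS = {
    "install_package": ["request"],
    "search_packages": ["query"],
    "rollback": ["installation_id"],
}

def missing_args(tool: str, arguments: dict) -> list[str]:
    """Return required arguments that are absent or empty for a given tool."""
    return [a for a in REQUIRED_ARGS.get(tool, []) if not arguments.get(a)]

print(missing_args("install_package", {}))                            # → ['request']
print(missing_args("install_package", {"request": "install nginx"}))  # → []
```

This keeps the dispatch body free of repeated `if name == ... and not arguments.get(...)` branches.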
**141-155**: Minor subprocess helper improvements (`_run_cortex`)

A couple of small tweaks here would address Ruff feedback and make failures easier to reason about:

- Use `[self._cortex_path, *args]` instead of list concatenation.
- Optionally include the raw `returncode` in the result dict so callers can distinguish different failure modes.
- Consider adding a timeout (via `asyncio.wait_for`) so a hung `cortex` process doesn't block the MCP server indefinitely.

Example:

```diff
-        cmd = [self._cortex_path] + args
+        cmd = [self._cortex_path, *args]
 @@
-        return {
-            "success": process.returncode == 0,
-            "stdout": stdout.decode("utf-8"),
-            "stderr": stderr.decode("utf-8")
-        }
+        return {
+            "success": process.returncode == 0,
+            "returncode": process.returncode,
+            "stdout": stdout.decode("utf-8"),
+            "stderr": stderr.decode("utf-8"),
+        }
```
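The timeout suggestion could look roughly like this as a standalone helper (a sketch under assumptions; `run_with_timeout` is a hypothetical name, and the kill-then-wait cleanup is one reasonable policy, not the only one):

```python
import asyncio

async def run_with_timeout(cmd: list[str], timeout: float = 60.0) -> dict:
    """Run a command, killing it if it exceeds `timeout` seconds."""
    process = await asyncio.create_subprocess_exec(
        *cmd,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    try:
        stdout, stderr = await asyncio.wait_for(process.communicate(), timeout)
    except asyncio.TimeoutError:
        process.kill()
        await process.wait()  # reap the child so it doesn't linger as a zombie
        return {"success": False, "returncode": None, "error": f"timed out after {timeout}s"}
    return {
        "success": process.returncode == 0,
        "returncode": process.returncode,
        "stdout": stdout.decode("utf-8"),
        "stderr": stderr.decode("utf-8"),
    }

result = asyncio.run(run_with_timeout(["echo", "hello"], timeout=5.0))
print(result["success"], result["stdout"].strip())
```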
**163-173**: Harden `apt-cache search` helper and address Ruff's `l` variable warning

Two practical issues here:

- An empty `query` will cause `apt-cache search ""` to traverse the entire index, which can be slow and noisy; rejecting empty/too-short queries is safer.
- Ruff rightfully flags the ambiguous variable name `l`.

Consider:

```diff
-    async def _search_packages(self, query: str, limit: int = 10) -> dict:
-        process = await asyncio.create_subprocess_exec(
-            "apt-cache", "search", query,
+    async def _search_packages(self, query: str, limit: int = 10) -> dict:
+        if not query.strip():
+            return {"query": query, "count": 0, "packages": [], "error": "query must be non-empty"}
+
+        process = await asyncio.create_subprocess_exec(
+            "apt-cache", "search", query,
             stdout=asyncio.subprocess.PIPE,
             stderr=asyncio.subprocess.PIPE
         )
-        stdout, _ = await process.communicate()
-        lines = stdout.decode("utf-8").strip().split("\n")[:limit]
-        packages = [{"name": l.split(" - ")[0], "description": l.split(" - ")[1]}
-                    for l in lines if " - " in l]
+        stdout, stderr = await process.communicate()
+        lines = stdout.decode("utf-8").strip().split("\n")[:limit]
+        packages = [
+            {"name": line.split(" - ", 1)[0], "description": line.split(" - ", 1)[1]}
+            for line in lines
+            if " - " in line
+        ]
         return {"query": query, "count": len(packages), "packages": packages}
```
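The `maxsplit=1` detail matters because some package descriptions themselves contain " - ". The parsing can be sketched and exercised standalone (hypothetical `parse_apt_search` helper; the sample output is invented for illustration):

```python
def parse_apt_search(output: str, limit: int = 10) -> list[dict]:
    """Parse `apt-cache search` lines, splitting on the first ' - ' only
    so descriptions that contain ' - ' stay intact."""
    lines = output.strip().split("\n")[:limit]
    return [
        {"name": line.split(" - ", 1)[0], "description": line.split(" - ", 1)[1]}
        for line in lines
        if " - " in line
    ]

sample = (
    "curl - command line tool for transferring data with URL syntax\n"
    "vim - Vi IMproved - enhanced vi editor"
)
print(parse_apt_search(sample))
```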
**209-219**: Consider defensive checks around `df -h /` parsing

The happy path is fine, but a couple of small guards would make `_system_status` more robust if `df` output is unexpected:

- Wrap the `create_subprocess_exec` + `communicate` in a `try/except FileNotFoundError` and return a structured error instead of bubbling up.
- Check `len(parts) >= 4` before indexing `parts[1:4]` to avoid IndexError on odd platforms or locales.

Not strictly required, but would keep the MCP server resilient if the underlying environment is slightly different.
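Both guards together might look like this (a self-contained sketch; `disk_usage_root` is a hypothetical name, and the `parts[1:4]` slice assumes GNU `df` column order):

```python
import asyncio

async def disk_usage_root() -> dict:
    """Query `df -h /` with both guards: a missing binary and short/odd output."""
    try:
        process = await asyncio.create_subprocess_exec(
            "df", "-h", "/",
            stdout=asyncio.subprocess.PIPE,
            stderr=asyncio.subprocess.PIPE,
        )
        stdout, _ = await process.communicate()
    except FileNotFoundError:
        return {"error": "df not available"}

    lines = stdout.decode("utf-8").splitlines()
    if len(lines) < 2:
        return {"error": "unexpected df output"}
    parts = lines[1].split()  # header is line 0; the "/" row is line 1
    if len(parts) < 4:
        return {"error": "unexpected df output"}
    size, used, available = parts[1:4]
    return {"size": size, "used": used, "available": available}

print(asyncio.run(disk_usage_root()))
```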
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- `AGENTS.md` (1 hunks)
- `README_MCP.md` (1 hunks)
- `mcp/__init__.py` (1 hunks)
- `mcp/cortex_mcp_server.py` (1 hunks)
🧰 Additional context used
🪛 GitHub Check: SonarCloud Code Analysis
mcp/cortex_mcp_server.py
[failure] 204-204: Specify an exception class to catch or reraise the exception
[failure] 188-188: Use an asynchronous file API instead of synchronous open() in this async function.
[failure] 193-193: Specify an exception class to catch or reraise the exception
🪛 markdownlint-cli2 (0.18.1)
AGENTS.md
7-7: Bare URL used
(MD034, no-bare-urls)
80-80: Bare URL used
(MD034, no-bare-urls)
81-81: Bare URL used
(MD034, no-bare-urls)
🪛 Ruff (0.14.8)
mcp/cortex_mcp_server.py
40-40: Starting a process with a partial executable path
(S607)
135-135: Do not catch blind exception: Exception
(BLE001)
142-142: Consider [self._cortex_path, *args] instead of concatenation
Replace with [self._cortex_path, *args]
(RUF005)
171-171: Ambiguous variable name: l
(E741)
193-193: Do not use bare except
(E722)
204-204: Do not use bare except
(E722)
🔇 Additional comments (7)
AGENTS.md (1)
**5-56**: Quick start and dev/test flow look consistent with the project goals

Cloning, venv creation, editable install, API key setup, and pytest invocation all read clean and match the rest of the repo's tooling; I don't see any blocking issues here. You may just want to double‑check that `cortex-detect-hardware` is the correct CLI entrypoint name vs. a `cortex detect-hardware` subcommand to keep the doc perfectly aligned with the actual binary.

mcp/cortex_mcp_server.py (3)
**43-103**: Tool metadata and schemas align well with the documented MCP surface

The `list_tools` handler cleanly exposes the six tools with sensible defaults (notably `dry_run=True` for destructive operations) and minimal schemas; this lines up with the README and PR description and looks good as an initial public surface.
**174-183**: History and rollback wrappers are consistent with the CLI surface

These thin wrappers around `cortex history` and `cortex rollback` keep all the real logic/safety in the underlying CLI and just shape the JSON; the argument handling and default `dry_run=True` behavior look correct to me.
**221-231**: Entry point wiring with `stdio_server` looks appropriate

`CortexMCPServer.run()` using `stdio_server()` and `server.create_initialization_options()` matches the expected MCP stdio pattern, and the `main()` wrapper via `asyncio.run` gives you a clean CLI entrypoint.

mcp/__init__.py (1)
**1-4**: Package-level re-exports are clear and minimal

Re-exporting `CortexMCPServer` and `main` via `__all__` is a clean way to give consumers a stable import path (`from mcp import CortexMCPServer, main`); no issues here.

README_MCP.md (2)
**1-42**: README content matches the MCP server surface and safety model

Tool list, safety notes (dry‑run by default, audit logging, sandboxing), and the Claude Desktop JSON snippet all line up well with the server implementation and overall PR goals. This is a solid, succinct intro to the MCP server.
**7-23**: Verify package name and CLI entrypoint for `cortex-mcp-server`

The README assumes:

- `pip install cortex-mcp-server`
- A `cortex-mcp-server` CLI on `$PATH` that runs `main()`.

Please double‑check that your packaging/entrypoint config (e.g., `pyproject.toml` or `setup.cfg`) actually defines a `console_scripts` entry for `cortex-mcp-server`; otherwise users following this README will hit a "command not found".
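For reference, a `console_scripts` entry of that shape would look roughly like this in `pyproject.toml` (the module path `mcp.cortex_mcp_server:main` is an assumption based on the file layout in this PR, not verified against the repo's packaging config):

```toml
[project.scripts]
cortex-mcp-server = "mcp.cortex_mcp_server:main"
```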
```python
    async def _detect_hardware(self) -> dict:
        hardware = {}
        try:
            with open("/proc/cpuinfo") as f:
                for line in f:
                    if line.startswith("model name"):
                        hardware["cpu"] = line.split(":")[1].strip()
                        break
        except:
            hardware["cpu"] = "Unknown"

        try:
            process = await asyncio.create_subprocess_exec(
                "nvidia-smi", "--query-gpu=name", "--format=csv,noheader",
                stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
            )
            stdout, _ = await process.communicate()
            if process.returncode == 0:
                hardware["gpu"] = stdout.decode("utf-8").strip()
        except:
            hardware["gpu"] = None

        return hardware
```
Fix Sonar failures: avoid blocking I/O in async context and replace bare except blocks

The code has two issues flagged by Sonar:

- Synchronous `open()` on `/proc/cpuinfo` inside an `async def` blocks the event loop
- Two bare `except` blocks that suppress all exceptions indiscriminately
Move the blocking file read into a thread using asyncio.to_thread() and catch specific exceptions:
```diff
-    async def _detect_hardware(self) -> dict:
-        hardware = {}
-        try:
-            with open("/proc/cpuinfo") as f:
-                for line in f:
-                    if line.startswith("model name"):
-                        hardware["cpu"] = line.split(":")[1].strip()
-                        break
-        except:
-            hardware["cpu"] = "Unknown"
-
-        try:
-            process = await asyncio.create_subprocess_exec(
-                "nvidia-smi", "--query-gpu=name", "--format=csv,noheader",
-                stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
-            )
-            stdout, _ = await process.communicate()
-            if process.returncode == 0:
-                hardware["gpu"] = stdout.decode("utf-8").strip()
-        except:
-            hardware["gpu"] = None
-
-        return hardware
+    async def _detect_hardware(self) -> dict:
+        hardware: dict[str, Optional[str]] = {}
+
+        def _read_cpu_model() -> Optional[str]:
+            try:
+                with open("/proc/cpuinfo") as f:
+                    for line in f:
+                        if line.startswith("model name"):
+                            return line.split(":", 1)[1].strip()
+            except OSError:
+                return None
+            return None
+
+        hardware["cpu"] = await asyncio.to_thread(_read_cpu_model) or "Unknown"
+
+        try:
+            process = await asyncio.create_subprocess_exec(
+                "nvidia-smi",
+                "--query-gpu=name",
+                "--format=csv,noheader",
+                stdout=asyncio.subprocess.PIPE,
+                stderr=asyncio.subprocess.PIPE,
+            )
+            stdout, _ = await process.communicate()
+            if process.returncode == 0:
+                hardware["gpu"] = stdout.decode("utf-8").strip()
+            else:
+                hardware["gpu"] = None
+        except FileNotFoundError:
+            hardware["gpu"] = None
+
+        return hardware
```

This preserves the original behavior (CPU defaults to "Unknown", GPU defaults to None on failure) while satisfying Sonar's requirements. The project targets Python 3.10+, so `asyncio.to_thread()` is available.
🧰 Tools
🪛 GitHub Check: SonarCloud Code Analysis
[failure] 204-204: Specify an exception class to catch or reraise the exception
[failure] 188-188: Use an asynchronous file API instead of synchronous open() in this async function.
[failure] 193-193: Specify an exception class to catch or reraise the exception
🪛 Ruff (0.14.8)
193-193: Do not use bare except
(E722)
204-204: Do not use bare except
(E722)
🤖 Prompt for AI Agents
In mcp/cortex_mcp_server.py around lines 185 to 207, avoid blocking I/O and bare
excepts: move reading /proc/cpuinfo into a thread via asyncio.to_thread() and
parse the content there, catching FileNotFoundError and OSError to set
hardware["cpu"]="Unknown" on failure; for the nvidia-smi call, catch
FileNotFoundError and OSError around asyncio.create_subprocess_exec (and keep
the existing returncode check) and set hardware["gpu"]=None on failure — do not
use bare excepts and preserve the original defaults.
Hey @shashankxrm - This PR adds MCP server integration. Linux Foundation announced AAIF today - this positions Cortex as the first MCP-native package manager. Please review. 🚀
Summary
Implements Model Context Protocol (MCP) server for Cortex Linux, allowing any MCP-compatible AI assistant to manage packages.
What's Included
Tools Exposed

- install
- search
- history
- rollback
- detect_hardware

Safety Features

- Dry-run by default for destructive operations
- Audit logging
- Sandboxing
Why This Matters
Linux Foundation announced AAIF today (Dec 9, 2025) consolidating MCP, goose, and AGENTS.md. This positions Cortex as the first MCP-native package manager.
Testing
Ready for review.