Conversation

theumbrella1

Summary

This PR creates an MCP server for Amazon Bedrock AgentCore. It is inspired by the Strands Agents MCP Server, which is in turn based on LangChain's mcpdoc. It exposes two MCP tools: search_docs (returns the top-k most relevant docs) and fetch_doc (fetches the full content from a doc URL). An llms.txt will be dynamically generated by MkDocs for the AgentCore documentation.
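The two tools can be sketched as plain functions over an in-memory index. This is a minimal, self-contained illustration of their contract only — the toy corpus, keyword scoring, and field names are assumptions, not the PR's actual indexer:

```python
# Sketch of the search_docs / fetch_doc contract. In the real server these
# would be registered as FastMCP tools and backed by an index built from
# llms.txt; here a toy corpus and naive keyword overlap stand in for that.
from dataclasses import dataclass


@dataclass
class Doc:
    title: str
    url: str
    text: str


# Toy corpus standing in for entries parsed from llms.txt (illustrative URLs).
_DOCS = [
    Doc('Runtime', 'https://example.com/runtime', 'deploy agents to agentcore runtime'),
    Doc('Gateway', 'https://example.com/gateway', 'create a gateway from a smithy model'),
    Doc('Memory', 'https://example.com/memory', 'create and use memory resources'),
]


def search_docs(query: str, k: int = 5) -> list[dict]:
    """Return up to k docs ranked by naive keyword overlap with the query."""
    terms = query.lower().split()
    scored = [(sum(t in d.text for t in terms), d) for d in _DOCS]
    ranked = [d for score, d in sorted(scored, key=lambda s: -s[0]) if score > 0]
    return [{'title': d.title, 'url': d.url} for d in ranked[:k]]


def fetch_doc(url: str) -> str:
    """Return the full content for a known doc URL ('' if unknown)."""
    return next((d.text for d in _DOCS if d.url == url), '')
```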

Note: the default_factory of llm_texts_url in config.py needs to be updated once the llms.txt location is finalized.

Changes

  • MCP server with two tools, search_docs and fetch_doc
  • Data classes and util files for the tools
  • Docusaurus changes including md file, server-cards.json, and sidebars.ts
  • README (generated by Kiro)
  • Unit tests (generated by Kiro)
  • Docstrings (generated by Kiro)

User experience

Users can run the server locally with the MCP Inspector. It can also be easily installed as an MCP server on a client of the user's choosing, such as Kiro, Q CLI, Cursor, or Claude Code:

{
  "mcpServers": {
    "awslabs.amazon-bedrock-agentcore-mcp-server": {
      "command": "uvx",
      "args": ["awslabs.amazon-bedrock-agentcore-mcp-server@latest"],
      "env": {
        "FASTMCP_LOG_LEVEL": "ERROR"
      },
      "disabled": false,
      "autoApprove": []
    }
  }
}

Example use cases include:

  • “What tools do you have access to?”

  • “Convert this code to AgentCore” / “Transform to AgentCore” / “Make this AgentCore compatible”

  • “Troubleshoot errors” / “Do you see any errors on CloudWatch?”

  • “What agents can I invoke?” / “Can you test agentv3?” / “Say hello to the strandsv4 agent”

  • “Create a DynamoDB gateway” (downloads the Smithy model directly and deploys it)

  • “What tools are available in this gateway?”

  • “Can you find me some good examples for...” / “I need an example for hosting Strands on Runtime”

  • “Is strandsagentv6 ready to invoke?”

  • Deploy to Runtime with OAuth

  • Invoke Runtime with OAuth / “Can you generate a bearer token?”

  • List memory resources I can use

  • Create a memory resource / use this memory in my agent code

Checklist

If a checklist item doesn't apply to your change, please leave it unchecked.

  • I have reviewed the contributing guidelines
  • I have performed a self-review of this change
  • Changes have been tested
  • Changes are documented

Is this a breaking change? N

RFC issue number:

Checklist:

  • Migration process documented
  • Implement warnings (if it can live side by side)

Acknowledgment

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of the project license.


@Vivekbhadauria1 left a comment


A major reconsideration: reuse SQLite (or similar) instead of implementing a custom doc indexer/fetcher/cacher.

Comment on lines +1311 to +1312
"retrieval",
"citations",


why these?

"source_path": "src/amazon-bedrock-agentcore-mcp-server/",
"subcategory": "Knowledge & Search",
"tags": [
"ai-ml",


Add tags for agents, AI agents, agentic AI, etc.


llm_texts_url: list[str] = field(
default_factory=lambda: [
'https://gist.githubusercontent.com/ryanycoleman/d22cdd6c37ea261b055dc9504e08d1de/raw/5fa9facbd500dcc87dc940e1a43e825e2b3824b1/agentcore-llms-txt.md'


Why not ship llms.txt in the same package to start with? Eventually, we could use the https://github.com/aws/bedrock-agentcore-starter-toolkit/tree/main/documentation GitHub page.


Is that about the logistics of hosting the llms.txt file at a later date, or do you see an advantage in statically packaging the file? I'd rather avoid the potential for drift if the customer does not update the MCP server.
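The tradeoff in this thread — a remote llms.txt that is always current versus a packaged snapshot that works offline but can drift — could be reconciled with a fallback loader. This is a hypothetical sketch; the function name, the loader hook, and the snapshot constant are not from the PR:

```python
# Hedged sketch of a remote-first llms.txt loader: prefer the hosted copy
# (always current), fall back to a snapshot packaged with the server
# (works offline, but can drift behind the docs).
from typing import Callable, Tuple

# Stand-in for a snapshot shipped inside the package (assumption).
PACKAGED_LLMS_TXT = '# AgentCore docs (packaged snapshot)\n...'


def load_llms_txt(fetch_remote: Callable[[], str]) -> Tuple[str, str]:
    """Try the remote copy first; fall back to the packaged snapshot.

    Returns (source, text) where source is 'remote' or 'packaged'.
    """
    try:
        return 'remote', fetch_remote()
    except Exception:
        # Network failure, bad URL, etc.: serve the (possibly stale) snapshot.
        return 'packaged', PACKAGED_LLMS_TXT
```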

Comment on lines +20 to +25
# Global state
_INDEX: indexer.IndexSearch | None = None
# url -> Page (None if not fetched yet)
_URL_CACHE: Dict[str, doc_fetcher.Page | None] = {}
_URL_TITLES: Dict[str, str] = {} # url -> curated title from llms.txt
_LINKS_LOADED = False


This can be moved to a singleton class for better testing and readability.
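One way to realize this suggestion: gather the module-level globals into a single state object that can be instantiated fresh per test. The field names mirror the snippet above; the `ServerState` name is illustrative, and the `indexer`/`doc_fetcher` types are replaced with placeholders here:

```python
# Sketch of the reviewer's singleton suggestion: the module globals
# (_INDEX, _URL_CACHE, _URL_TITLES, _LINKS_LOADED) become fields on one
# dataclass. Tests construct their own ServerState instead of monkeypatching
# module state.
from dataclasses import dataclass, field
from typing import Dict, Optional


@dataclass
class ServerState:
    index: Optional[object] = None  # indexer.IndexSearch in the real code
    # url -> Page (None if not fetched yet); Page type elided here
    url_cache: Dict[str, Optional[object]] = field(default_factory=dict)
    url_titles: Dict[str, str] = field(default_factory=dict)  # curated titles from llms.txt
    links_loaded: bool = False

    def reset(self) -> None:
        """Drop all cached state (handy between tests)."""
        self.index = None
        self.url_cache.clear()
        self.url_titles.clear()
        self.links_loaded = False


# Module-level default instance for the running server; tests make their own.
STATE = ServerState()
```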



@dataclass(slots=True)
class Doc:


This looks very involved; why not use SQLite instead?
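A rough sketch of the SQLite alternative being suggested: a small `docs` table replaces the custom `Doc` dataclass and in-memory index. The schema and function names are illustrative only; a real implementation would likely use SQLite's FTS5 virtual tables rather than `LIKE` matching:

```python
# Minimal SQLite-backed doc store, standing in for the custom
# indexer/fetcher/cacher. Uses only the stdlib sqlite3 module.
import sqlite3


def open_doc_db(path: str = ':memory:') -> sqlite3.Connection:
    """Open (or create) the doc database with a single docs table."""
    conn = sqlite3.connect(path)
    conn.execute(
        'CREATE TABLE IF NOT EXISTS docs (url TEXT PRIMARY KEY, title TEXT, body TEXT)')
    return conn


def upsert_doc(conn: sqlite3.Connection, url: str, title: str, body: str) -> None:
    """Insert a doc, replacing any existing row for the same URL (the cache)."""
    conn.execute('INSERT OR REPLACE INTO docs VALUES (?, ?, ?)', (url, title, body))


def search(conn: sqlite3.Connection, term: str, k: int = 5) -> list:
    """Return up to k (title, url) pairs whose body contains the term."""
    cur = conn.execute(
        'SELECT title, url FROM docs WHERE body LIKE ? LIMIT ?', (f'%{term}%', k))
    return cur.fetchall()
```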
