Replies: 7 comments 1 reply
-
For reference, here is a video that explains how the Model Context Protocol (MCP) works and why it will be key to standardizing agentic AI:
-
FYI, Paulus spoke about MCP during the Home Assistant 2025.2 release party video; check out his explanation (starting at timestamp 56:40): Their voice development team also spoke about it in the Voice Chapter 9 video: Also check out:
-
Using the patterns from Fabric as tools in n8n lets me customize and improve each pattern, give it access to different integrations, and use it with an MCP server. The MCP server works like a USB port for tools, making it easy to plug in each pattern as a tool; it feels like giving superpowers to the AI. Using the Fabric MCP server in Claude, I can see how easily the AI works with the Model Context Protocol across different tools, building a chain of thought that produces more precise and smarter output. Claude only allows me a maximum of 182 tools, but for a Home Assistant integration that should be easy to set up.
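Since a Fabric pattern is essentially a markdown system prompt, exposing one as an MCP tool mostly means wrapping it in a tool definition (a name, a description, and a JSON-Schema input). Here is a minimal stdlib-only sketch of that idea; the helper name and pattern body are hypothetical, not Fabric's actual code:

```python
import json

def pattern_to_tool(name: str, system_md: str) -> dict:
    """Wrap a Fabric pattern (its system.md prompt) as an MCP-style
    tool definition: a name, a description, and a JSON-Schema input."""
    # Use the first non-empty line of the pattern as a rough description.
    description = next(
        (line.strip("# ").strip() for line in system_md.splitlines() if line.strip()),
        name,
    )
    return {
        "name": name,
        "description": description,
        "inputSchema": {
            "type": "object",
            "properties": {"input": {"type": "string"}},
            "required": ["input"],
        },
    }

# Made-up pattern body; real patterns live in patterns/<name>/system.md.
tool = pattern_to_tool("summarize", "# IDENTITY\nYou summarize content.")
print(json.dumps(tool, indent=2))
```

A real server would register one such definition per pattern directory and run the pattern's prompt against an LLM when the tool is called.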
-
A2A (Agent2Agent Protocol) integration support might also be relevant as a complementary feature to MCP support. A2A is a new open protocol enabling communication and interoperability between agentic AI applications (and is complementary to the MCP protocol). Any thoughts on whether support for Google's "A2A" (Agent2Agent) open protocol could be integrated?

A2A is an open protocol that complements Anthropic's Model Context Protocol (MCP), which provides helpful tools and context to agents. The A2A protocol is designed to address the challenges identified in deploying large-scale, multi-agent systems. A2A empowers developers to build agents capable of connecting with any other agent built using the protocol and offers users the flexibility to combine agents from various providers. Critically, businesses benefit from a standardized method for managing their agents across diverse platforms and cloud environments. We believe this universal interoperability is essential for fully realizing the potential of collaborative AI agents. https://youtube.com/watch?v=rAeqTaYj_aI

While introducing A2A, Google claims that building an agentic AI system demands two layers:
MCP focuses on the first layer: organizing what agents, tools, or users send into the model. A2A focuses on the second: coordination between intelligent agents. By separating tools from agents, Google is able to position A2A as complementary to, rather than in competition with, MCP. https://youtu.be/vIfagfHOLmI?si=pKEOugt3oZJlWRaj https://www.youtube.com/watch?v=voaKr_JHvF4

From Google's announcement: A2A is "an open protocol enabling communication and interoperability between opaque agentic applications." One of the biggest challenges in enterprise AI adoption is getting agents built on different frameworks and by different vendors to work together. That's why we created the open Agent2Agent (A2A) protocol, a collaborative way to help agents across different ecosystems communicate with each other. Google is driving this open protocol initiative for the industry because we believe this protocol will be critical to supporting multi-agent communication by giving your agents a common language, irrespective of the framework or vendor they are built on. With A2A, agents can show each other their capabilities and negotiate how they will interact with users (via text, forms, or bidirectional audio/video), all while working securely together.

See A2A in action: watch this demo video to see how A2A enables seamless communication between different agent frameworks.

Conceptual overview: the Agent2Agent (A2A) protocol facilitates communication between independent AI agents. Here are the core concepts:
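To make the discovery side of A2A concrete, here is a sketch of an Agent Card, the JSON document an A2A agent publishes (typically at /.well-known/agent.json) so peer agents can find it and learn its skills. The field layout follows Google's published A2A examples, but the agent, URL, and values below are made up for illustration:

```python
import json

# Illustrative A2A "Agent Card" for a hypothetical Fabric-backed agent.
agent_card = {
    "name": "Fabric Pattern Agent",
    "description": "Runs Fabric patterns (summarize, extract_wisdom, ...) on request.",
    "url": "https://example.com/a2a",  # hypothetical endpoint
    "version": "0.1.0",
    "capabilities": {"streaming": True, "pushNotifications": False},
    "defaultInputModes": ["text"],
    "defaultOutputModes": ["text"],
    "skills": [
        {
            "id": "summarize",
            "name": "Summarize",
            "description": "Summarize arbitrary text with the Fabric summarize pattern.",
            "tags": ["text", "summarization"],
        }
    ],
}

print(json.dumps(agent_card, indent=2))
```

Note how this complements MCP: the card advertises agent-level skills for peer-to-peer negotiation, while MCP advertises tools for a single model to call.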
-
Everyone, feel free to jump in on my request for comments in this post: #1454, where I'm seeking input on the Fabric MCP server that I'll implement in the next few days.
-
Please continue this discussion here: #1536. Fabric-MCP is released and it is very powerful!
-
FYI, the Home Assistant 2025.8 beta has a new AI Task building block integration. Check out the Home Assistant 2025.8 blog (release notes) for a summary of what can be done with it:

I also suggest watching this whole video to get a gist of what is possible in Home Assistant with the AI Task building block integration:

The text below was copied from the release notes blog, so I recommend reading all the sections about AI news there, especially AI Tasks:

AI in Home Assistant in 2025

We introduced our first AI integration in Home Assistant 2023.2, where users could let OpenAI handle their interactions with Home Assistant Voice. Since that time, AI has seen a big surge in popularity within the Home Assistant community for all kinds of use cases: funny notifications when the laundry is done, analyzing what's happening on a camera, or skipping the song when AI determines it's a country song 😅 Though AI gets many people excited, there are still people who would prefer not to have this technology in their smart homes. We want to accommodate everyone's choices, whether that's to use AI or not. These features won't appear unless you set up an AI integration and configure some specific settings.

Last year, we sat down to determine how all these use cases, all complicated to achieve, could be made accessible to everyone. The first thing that came out of this was integration sub-entries, which we shipped in the last release. It allows users to configure their Ollama server or API key for OpenAI once, and then create many different agents using different models or configurations underneath. In this release we're building two new things you can optionally enable via these new sub-entries for AI integrations: AI Tasks and Suggest with AI. We're also introducing a new integration, OpenRouter, a unified LLM interface giving access to over 400 extra LLM models. Big thanks to our AI community contributors: @AllenPorter, @shulyaka, @tronikos, @IvanLH, and @joostlek!
Streaming Text-to-Speech for Home Assistant Cloud

When you use Home Assistant Voice to talk to an AI, you can do a lot more than just control your home. LLMs can summarize the state of your home, and when using LLMs from Google and OpenAI, they can search the web to answer your questions with up-to-date information. This is great, but these answers can become quite long. Previously, voice responses wouldn't begin until the AI had finished generating the entire answer, so longer replies meant a longer wait before anything was read aloud. When a user waits for Home Assistant Voice to respond, long wait times really hurt the experience. We have overhauled Home Assistant so our Text-to-Speech system can start generating the response audio before the full response is done generating. Last release we launched this for Piper, our local Text-to-Speech system. In this release we're making this available to the voices included in Home Assistant Cloud, the best way of supporting the Home Assistant project. https://www.youtube.com/watch?v=AZ0-8wW0UL4&ab_channel=HomeAssistant This improvement will especially benefit users who use local AI (which can be slow in generating responses) or users who play long announcements on their speakers.

Integrate AI into your workflow using AI Task

AI Task is a new integration that allows you to generate data using AI. After you add the "AI Task" sub-entry in your AI of choice, the entity will appear in the integration. This allows you to attach files or cameras and ask it what is happening. The output can either be given as text or formatted in a data structure of your choice. This is all accessible from the new AI Task action. Below is an example of a template entity that updates every 5 minutes and counts the number of chickens in the coop. Example inspired by this blog post.
To help you get started with AI Task, we've prepared a blueprint to analyze camera footage:

Work faster with Suggest with AI buttons

The AI Task integration has one extra feature under its belt: default entities. You can go to Settings > System > General and configure which AI Task entity you want to use as the default. With a default set, you no longer have to specify an entity when generating data, making it easier to share blueprints. Setting a default also does more: when a default is configured, and only then, a new type of button will start showing up in different places in Home Assistant. This button is not visible by default and will only appear if you enable it in the "AI suggestions" settings. For this release, the button has been added to the save dialog for automations and scripts. It helps users come up with a name, description, category, and label, while taking into account your current labels and other automation/script names. Keep in mind that generating this text sends the full contents of the automation or script, along with the names of your other automations/scripts and labels, to the LLM. So, this may be a task you will want to relegate to your shiny new local LLM.
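To make the AI Task description above more concrete, here is a sketch of the kind of payload a client could send to Home Assistant's ai_task.generate_data action (e.g. via the REST service API). The field names follow the 2025.8 release notes, but the entity id and the structure schema below are illustrative guesses, not a tested configuration:

```python
import json

# Hypothetical payload for the ai_task.generate_data action,
# asking for structured output (a chicken count) instead of free text.
payload = {
    "task_name": "count_chickens",
    "instructions": "How many chickens are visible in the attached camera snapshot?",
    "entity_id": "ai_task.my_ai_task",  # hypothetical AI Task entity
    "structure": {
        "chickens": {
            "description": "Number of chickens in the coop",
            "selector": {"number": {"min": 0}},
        },
    },
}
print(json.dumps(payload, indent=2))
```

The point of the structure field is that the LLM's answer comes back as typed data a template entity or automation can use directly, rather than prose that has to be parsed.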
-
Anyone from this project (or the Home Assistant community) working on a “Fabric integration” AI interface component for Home Assistant?
This is indirectly related to this feature request -> #1387
Please read how the Open Home Foundation and Home Assistant's founders describe the idea/concept of AI agents in smart homes:
For reference, Home Assistant is currently the largest open-source project on GitHub, so most here have probably already heard of it. Here is a summary from its Wikipedia article: "Home Assistant is free and open-source software used for home automation. It serves as an integration platform and smart home hub, allowing users to control smart home devices." It includes its own built-in "Assist" virtual assistant with various pipelines, and the overall project is now owned by the non-profit Open Home Foundation, whose goals include supporting the development of open-source projects and of open connectivity and communication standards for smart homes. For a basic overview of what Home Assistant is all about, I suggest reading its Wikipedia article:
Anyway, it would be awesome to have integration(s) for Fabric so that Fabric's "Patterns" could be exposed through an MCP server (like "mcp-fetch MCP Server") as tools for AI agents / AI tooling in Home Assistant, both for its conversation agent and for AI agent frameworks driving automations. Again, please read their vision for how LLMs will play a role here:
Relevant to this: the Home Assistant 2025.2 release added support for a "Model Context Protocol" (MCP) server and client, which can be used to extend Home Assistant's AI capabilities through file access, database connections, API integrations, and other contextual services.
https://www.home-assistant.io/blog/2025/02/13/voice-chapter-9-speech-to-phrase/#model-context-protocol-brings-home-assistant-to-every-ai
https://www.home-assistant.io/integrations/mcp
See this TL;DR on Home Assistant's Model Context Protocol integration (made by @allenporter):
A follow-up question is whether Fabric can be used as a Model Context Protocol (MCP) server. Check out this collection for reference:
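Under the hood, MCP is JSON-RPC 2.0, so the minimum a Fabric MCP server would have to answer is a tools/list request advertising each pattern as a tool. A sketch of that exchange, with a made-up pattern entry:

```python
import json

# MCP speaks JSON-RPC 2.0. A minimal sketch of the tools/list exchange
# a Fabric MCP server would answer; the advertised pattern is illustrative.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

response = {
    "jsonrpc": "2.0",
    "id": 1,  # a client matches responses to requests by id
    "result": {
        "tools": [
            {
                "name": "extract_wisdom",
                "description": "Fabric pattern: extract key ideas from input text.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"input": {"type": "string"}},
                    "required": ["input"],
                },
            }
        ]
    },
}
print(json.dumps(response, indent=2))
```

An MCP client like Home Assistant's would follow this with tools/call requests naming the pattern and passing the input text.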
Home Assistant (now the top open-source project on GitHub by contributors) already features some AI tooling as well as integrations for many LLMs as conversation agents, but it does not provide specific AI tooling such as the "Patterns" that Fabric provides.
Today the AI integrations are for LLMs that normally just use Home Assistant's "Conversation" integration and at most integrate with Home Assistant's "Assist" (its voice assistant pipeline) via its "sentence trigger" to trigger automations.
AI conversation support for Assist in Home Assistant has, however, significantly improved over the past six months or so.
Note: Home Assistant's overall integration architecture is described for developers here:
PS: Slightly off-topic, but I highly recommend buying the Home Assistant Voice Preview Edition smart speaker to play with; that way you get a better understanding of its current scope and limitations, and can hopefully see the potential of what support for Fabric could add to the mix:
Note that the Home Assistant Voice Preview Edition is only reference hardware for a new fully open-source voice ecosystem/platform:
For more backstory on their open-source voice ecosystem/platform, also check out their official release blog here: