Speed of Light (SOL) is a native AI Agent for the Linux desktop.
You can extend its functionality using the Model Context Protocol (MCP), the USB-C port for AI applications. Features include:
- 🏠 Support for both local (default) and cloud LLM providers
- 🔧 Extensible via MCP, supports both STDIO and Streamable HTTP servers
- 🐧 Built-in tools that integrate with the Linux desktop (e.g., clipboard access)
- 🖱️ Computer use capabilities for desktop automation (screenshot, mouse, keyboard control)
- 🎨 Developed with GNOME Adwaita for a modern look and compatibility with any desktop environment
⚠️ Warning: Computer use is an experimental feature that is disabled by default. When enabled, it directly controls your actual desktop with real mouse clicks and keyboard input; it does not run in a sandboxed environment.
Clone this repo, install the dependencies in a virtual environment, and launch the app with Python:
```bash
$ sudo apt install gir1.2-gtksource-5
$ git clone git@github.com:zugaldia/speedoflight.git
$ cd speedoflight
$ python3 -m venv venv
$ source venv/bin/activate
$ pip3 install -r requirements.txt
$ python3 launch.py
```
SOL uses a `config.toml` file for configuration, stored in the standard location: `~/.config/io.speedoflight.App/`. On first run, if no configuration file exists, SOL will create a default one.
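If you want to tweak it by hand, the file lives at that fixed path; a minimal sketch, assuming your editor of choice is set in `$EDITOR`:

```bash
# Open SOL's configuration file (created on first run)
$EDITOR ~/.config/io.speedoflight.App/config.toml
```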
The configuration file uses TOML format and has the following structure:
llm = "ollama" # Your preferred provider
[llms.ollama]
model = "mistral-small:latest"
[llms.anthropic]
model = "claude-sonnet-4-0"
api_key = "YOUR-API-KEY-HERE" # Required for cloud providers

[mcps.example]
type = "stdio"
command = "/path/to/mcp-server"
args = ["--arg1", "value1"]
env = { "ENV_VAR" = "value" }
```
- `llm`: The LLM provider to use (e.g., `"ollama"`, `"anthropic"`)
- `[llms.<provider>]`: Provider-specific configuration sections:
  - For Ollama: `model` specifies the model name (e.g., `"mistral-small:latest"`)
  - For Anthropic: `model` and `api_key` (setting an API key is required), and optionally:
    - `enable_web_search` (defaults to `false`) to give Claude direct access to real-time web content with automatic source citations
    - `enable_computer_use` (defaults to `false`) to enable desktop automation capabilities
  - Additional providers coming soon.
- `[mcps.<server>]`: Configuration for MCP servers. This allows extending SOL with additional tools. For example, to add the Mapbox MCP pictured above:
```toml
[mcps.mapbox]
type = "stdio"
command = "npx"
args = ["-y", "@mapbox/mcp-server"]
env = { "MAPBOX_ACCESS_TOKEN" = "YOUR-MAPBOX-ACCESS-TOKEN-HERE" }
```
Or to add the GNOME MCP Server:
```toml
[mcps.gnome]
type = "stdio"
command = "gnome-mcp-server"
```
You can optionally limit which tools from an MCP server are exposed to the LLM using the `enabled_tools` setting. If, for example, you only wanted to use Mapbox for static map generation, you could configure:
```toml
[mcps.mapbox]
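# type, command, args, and env stay as configured above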
enabled_tools = ["static_map_image_tool"]
```
When `enabled_tools` is empty (the default), all tools from the server are available. When specified, only the listed tools are exposed to the LLM. Reducing the number of exposed tools tends to make the LLM more effective at picking the right one, particularly for smaller local models.
- `max_iterations`: Controls the maximum number of LLM iterations allowed in a single conversation turn (defaults to `25`). This is a safety mechanism to prevent infinite loops when the LLM repeatedly invokes tools without reaching a conclusion. It is also helpful to control API costs when using cloud providers.
```toml
max_iterations = 25 # Adjust based on your needs and cost tolerance
```
- `target_monitor` (optional): For multi-monitor setups, specifies which monitor to use for screenshots and coordinate mapping (e.g., `"DP-6"`). If not set, the first monitor will be used. Run SOL once to see the available monitor IDs in the logs.
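For example (the monitor ID is illustrative; use the one reported in your logs):

```toml
target_monitor = "DP-6"
```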
Streamable HTTP servers are also supported:
```toml
[mcps.everything]
type = "streamable_http"
url = "http://localhost:3001/mcp"
```
Note that MCP servers are optional. SOL can work with no servers configured, in which case you would be talking to the LLM directly without any additional tools.
To extend SOL's capabilities, you need to make more "tools" available to the app. In the current context of LLMs, tools can have different origins and implementations, as described below.
We currently support:
- MCP tools: This is the primary mechanism for users to extend the tools available to SOL. MCP is a provider-agnostic standard that enables integrating with third-party providers and on-device functionality (see the sketch after this list for a minimal custom server).
- Cloud tools: These are pre-built tools that are provider-specific and executed on the provider's server. They are configured per model and don't require local implementation. Examples include web search tools available from providers like Google, Anthropic, and OpenAI.
- Built-in tools: These are tools defined and implemented by SOL and available together with the other tools above. For example, we include tools that allow SOL to read and write the clipboard content. One possibility is to eventually graduate these built-in tools as their own MCP servers to simplify SOL's architecture and make these tools available to any MCP client.
- Computer use: Desktop automation capabilities that allow the AI to take screenshots, move the mouse, click, type, and interact with your desktop. Currently supported with Anthropic models when `enable_computer_use` is configured. These tools require `xdotool` to be installed on your system.
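On Debian/Ubuntu, for example, `xdotool` is available from the standard repositories:

```bash
$ sudo apt install xdotool
```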
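As a starting point for your own MCP tools, here is a minimal sketch of a custom STDIO server built with the official `mcp` Python SDK (`pip install mcp`); the server name, tool, and paths are illustrative and not part of SOL:

```python
# my_server.py: a minimal STDIO MCP server exposing one tool
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("my-tools")

@mcp.tool()
def shout(text: str) -> str:
    """Return the input text in uppercase."""
    return text.upper()

if __name__ == "__main__":
    mcp.run()  # Defaults to the STDIO transport
```

You would then register it in `config.toml` like any other STDIO server (paths are hypothetical):

```toml
[mcps.mytools]
type = "stdio"
command = "/path/to/venv/bin/python"
args = ["/path/to/my_server.py"]
```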
If you encounter any bugs, have feature requests, or need help with Speed of Light, please open an issue on our GitHub repository.
When reporting issues, please include:
- Your operating system and version
- The model and configuration you're using
- Steps to reproduce the issue
- Any relevant error messages or logs