Commit e77a10a

Add LM Studio support (#2425)
1 parent e4307ae

File tree

15 files changed: 298 additions, 1 deletion

docs/components/embedders/config.mdx

Lines changed: 1 addition & 0 deletions

@@ -84,6 +84,7 @@ Here's a comprehensive list of all parameters that can be used across different
 | `memory_add_embedding_type` | The type of embedding to use for the add memory action | VertexAI |
 | `memory_update_embedding_type` | The type of embedding to use for the update memory action | VertexAI |
 | `memory_search_embedding_type` | The type of embedding to use for the search memory action | VertexAI |
+| `lmstudio_base_url` | Base URL for LM Studio API | LM Studio |
 </Tab>
 <Tab title="TypeScript">
 | Parameter | Description | Provider |

docs/components/embedders/models/lmstudio.mdx

Lines changed: 38 additions & 0 deletions

@@ -0,0 +1,38 @@
You can use embedding models from LM Studio to run Mem0 locally.

### Usage

```python
import os
from mem0 import Memory

os.environ["OPENAI_API_KEY"] = "your_api_key"  # For the LLM

config = {
    "embedder": {
        "provider": "lmstudio",
        "config": {
            "model": "nomic-embed-text-v1.5-GGUF/nomic-embed-text-v1.5.f16.gguf"
        }
    }
}

m = Memory.from_config(config)
messages = [
    {"role": "user", "content": "I'm planning to watch a movie tonight. Any recommendations?"},
    {"role": "assistant", "content": "How about a thriller movie? They can be quite engaging."},
    {"role": "user", "content": "I'm not a big fan of thriller movies but I love sci-fi movies."},
    {"role": "assistant", "content": "Got it! I'll avoid thriller recommendations and suggest sci-fi movies in the future."}
]
m.add(messages, user_id="john")
```

### Config

Here are the parameters available for configuring the LM Studio embedder:

| Parameter | Description | Default Value |
| --- | --- | --- |
| `model` | The name of the embedding model to use | `nomic-embed-text-v1.5-GGUF/nomic-embed-text-v1.5.f16.gguf` |
| `embedding_dims` | Dimensions of the embedding model | `1536` |
| `lmstudio_base_url` | Base URL for LM Studio connection | `http://localhost:1234/v1` |
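
For quick reference, here is a minimal sketch (not part of this commit's diff) combining all three table parameters in one config; the values shown are simply the defaults from the table above:

```python
import os
from mem0 import Memory

os.environ["OPENAI_API_KEY"] = "your_api_key"  # still needed for the LLM side

# All three embedder parameters from the table, set explicitly to their defaults.
config = {
    "embedder": {
        "provider": "lmstudio",
        "config": {
            "model": "nomic-embed-text-v1.5-GGUF/nomic-embed-text-v1.5.f16.gguf",
            "embedding_dims": 1536,
            "lmstudio_base_url": "http://localhost:1234/v1",
        }
    }
}

m = Memory.from_config(config)
```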

docs/components/embedders/overview.mdx

Lines changed: 1 addition & 0 deletions

@@ -22,6 +22,7 @@ See the list of supported embedders below.
 <Card title="Gemini" href="/components/embedders/models/gemini"></Card>
 <Card title="Vertex AI" href="/components/embedders/models/vertexai"></Card>
 <Card title="Together" href="/components/embedders/models/together"></Card>
+<Card title="LM Studio" href="/components/embedders/models/lmstudio"></Card>
 </CardGroup>

 ## Usage

docs/components/llms/config.mdx

Lines changed: 1 addition & 0 deletions

@@ -108,6 +108,7 @@ Here's a comprehensive list of all parameters that can be used across different
 | `azure_kwargs` | Azure LLM args for initialization | AzureOpenAI |
 | `deepseek_base_url` | Base URL for DeepSeek API | DeepSeek |
 | `xai_base_url` | Base URL for XAI API | XAI |
+| `lmstudio_base_url` | Base URL for LM Studio API | LM Studio |
 </Tab>
 <Tab title="TypeScript">
 | Parameter | Description | Provider |

docs/components/llms/models/lmstudio.mdx

Lines changed: 82 additions & 0 deletions

@@ -0,0 +1,82 @@
---
title: LM Studio
---

To use LM Studio with Mem0, you'll need to have LM Studio running locally with its server enabled. LM Studio provides a way to run local LLMs with an OpenAI-compatible API.

## Usage

<CodeGroup>
```python Python
import os
from mem0 import Memory

os.environ["OPENAI_API_KEY"] = "your-api-key"  # used for the embedding model

config = {
    "llm": {
        "provider": "lmstudio",
        "config": {
            "model": "lmstudio-community/Meta-Llama-3.1-70B-Instruct-GGUF/Meta-Llama-3.1-70B-Instruct-IQ2_M.gguf",
            "temperature": 0.2,
            "max_tokens": 2000,
            "lmstudio_base_url": "http://localhost:1234/v1",  # default LM Studio API URL
        }
    }
}

m = Memory.from_config(config)
messages = [
    {"role": "user", "content": "I'm planning to watch a movie tonight. Any recommendations?"},
    {"role": "assistant", "content": "How about a thriller movie? They can be quite engaging."},
    {"role": "user", "content": "I'm not a big fan of thriller movies but I love sci-fi movies."},
    {"role": "assistant", "content": "Got it! I'll avoid thriller recommendations and suggest sci-fi movies in the future."}
]
m.add(messages, user_id="alice", metadata={"category": "movies"})
```
</CodeGroup>

### Running Completely Locally

You can also use LM Studio for both the LLM and the embedder to run Mem0 entirely locally:

```python
from mem0 import Memory

# No external API keys needed!
config = {
    "llm": {
        "provider": "lmstudio"
    },
    "embedder": {
        "provider": "lmstudio"
    }
}

m = Memory.from_config(config)
messages = [
    {"role": "user", "content": "I'm planning to watch a movie tonight. Any recommendations?"},
    {"role": "assistant", "content": "How about a thriller movie? They can be quite engaging."},
    {"role": "user", "content": "I'm not a big fan of thriller movies but I love sci-fi movies."},
    {"role": "assistant", "content": "Got it! I'll avoid thriller recommendations and suggest sci-fi movies in the future."}
]
m.add(messages, user_id="alice123", metadata={"category": "movies"})
```

<Note>
When using LM Studio for both LLM and embedding, make sure you have:
1. An LLM model loaded for generating responses
2. An embedding model loaded for vector embeddings
3. The server enabled with the correct endpoints accessible
</Note>

<Note>
To use LM Studio, you need to:
1. Download and install [LM Studio](https://lmstudio.ai/)
2. Start a local server from the "Server" tab
3. Set the appropriate `lmstudio_base_url` in your configuration (the default is usually `http://localhost:1234/v1`)
</Note>

## Config

All available parameters for the `lmstudio` config are listed in the [Master List of All Params in Config](../config).
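
As a follow-up sketch (not part of the diff), stored memories can then be queried with Mem0's standard `search` call; the query string is illustrative, and the return shape is handled defensively since it has varied across mem0 versions:

```python
# Assumes the fully local config and `m` from the example above.
results = m.search("What kind of movies does the user like?", user_id="alice123")

# Depending on the mem0 version, search returns either a list of hits or a
# dict with a "results" key; handle both.
hits = results["results"] if isinstance(results, dict) else results
for hit in hits:
    print(hit.get("memory"))
```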

docs/components/llms/overview.mdx

Lines changed: 1 addition & 0 deletions

@@ -32,6 +32,7 @@ To view all supported llms, visit the [Supported LLMs](./models).
 <Card title="Gemini" href="/components/llms/models/gemini" />
 <Card title="DeepSeek" href="/components/llms/models/deepseek" />
 <Card title="xAI" href="/components/llms/models/xAI" />
+<Card title="LM Studio" href="/components/llms/models/lmstudio" />
 </CardGroup>

 ## Structured vs Unstructured Outputs

mem0/configs/embeddings/base.py

Lines changed: 7 additions & 0 deletions

@@ -30,6 +30,8 @@ def __init__(
         memory_add_embedding_type: Optional[str] = None,
         memory_update_embedding_type: Optional[str] = None,
         memory_search_embedding_type: Optional[str] = None,
+        # LM Studio specific
+        lmstudio_base_url: Optional[str] = "http://localhost:1234/v1",
     ):
         """
         Initializes a configuration class instance for the Embeddings.
@@ -58,6 +60,8 @@ def __init__(
         :type memory_update_embedding_type: Optional[str], optional
         :param memory_search_embedding_type: The type of embedding to use for the search memory action, defaults to None
         :type memory_search_embedding_type: Optional[str], optional
+        :param lmstudio_base_url: LM Studio base URL to be used, defaults to "http://localhost:1234/v1"
+        :type lmstudio_base_url: Optional[str], optional
         """

         self.model = model
@@ -82,3 +86,6 @@ def __init__(
         self.memory_add_embedding_type = memory_add_embedding_type
         self.memory_update_embedding_type = memory_update_embedding_type
         self.memory_search_embedding_type = memory_search_embedding_type
+
+        # LM Studio specific
+        self.lmstudio_base_url = lmstudio_base_url
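
A minimal sketch of the new keyword in use (hypothetical snippet, using only the constructor arguments visible in this diff):

```python
from mem0.configs.embeddings.base import BaseEmbedderConfig

# Omitted: falls back to LM Studio's standard local endpoint.
cfg = BaseEmbedderConfig(model="nomic-embed-text-v1.5-GGUF/nomic-embed-text-v1.5.f16.gguf")
print(cfg.lmstudio_base_url)  # http://localhost:1234/v1

# Override when the LM Studio server listens on a non-default port.
cfg = BaseEmbedderConfig(lmstudio_base_url="http://localhost:5678/v1")
```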

mem0/configs/llms/base.py

Lines changed: 7 additions & 0 deletions

@@ -39,6 +39,8 @@ def __init__(
         deepseek_base_url: Optional[str] = None,
         # XAI specific
         xai_base_url: Optional[str] = None,
+        # LM Studio specific
+        lmstudio_base_url: Optional[str] = "http://localhost:1234/v1",
     ):
         """
         Initializes a configuration class instance for the LLM.
@@ -83,6 +85,8 @@ def __init__(
         :type deepseek_base_url: Optional[str], optional
         :param xai_base_url: XAI base URL to be used, defaults to None
         :type xai_base_url: Optional[str], optional
+        :param lmstudio_base_url: LM Studio base URL to be used, defaults to "http://localhost:1234/v1"
+        :type lmstudio_base_url: Optional[str], optional
         """

         self.model = model
@@ -116,3 +120,6 @@ def __init__(

         # XAI specific
         self.xai_base_url = xai_base_url
+
+        # LM Studio specific
+        self.lmstudio_base_url = lmstudio_base_url
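
And a similar hypothetical sketch for the LLM side, assuming `BaseLlmConfig` keeps the standard mem0 constructor arguments (`model`, `temperature`, `max_tokens`) alongside the new keyword:

```python
from mem0.configs.llms.base import BaseLlmConfig

llm_cfg = BaseLlmConfig(
    model="lmstudio-community/Meta-Llama-3.1-70B-Instruct-GGUF/Meta-Llama-3.1-70B-Instruct-IQ2_M.gguf",
    temperature=0.2,
    max_tokens=2000,
)
# The new keyword falls back to LM Studio's default endpoint when omitted.
print(llm_cfg.lmstudio_base_url)  # http://localhost:1234/v1
```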

mem0/embeddings/configs.py

Lines changed: 1 addition & 1 deletion

@@ -13,7 +13,7 @@ class EmbedderConfig(BaseModel):
     @field_validator("config")
     def validate_config(cls, v, values):
         provider = values.data.get("provider")
-        if provider in ["openai", "ollama", "huggingface", "azure_openai", "gemini", "vertexai", "together"]:
+        if provider in ["openai", "ollama", "huggingface", "azure_openai", "gemini", "vertexai", "together", "lmstudio"]:
             return v
         else:
             raise ValueError(f"Unsupported embedding provider: {provider}")
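
A small sketch of the validator's effect (hypothetical usage; it assumes `EmbedderConfig` exposes `provider` and `config` as pydantic fields, as the validator implies):

```python
from mem0.embeddings.configs import EmbedderConfig

# "lmstudio" now passes validation.
EmbedderConfig(provider="lmstudio", config={"model": "nomic-embed-text-v1.5-GGUF/nomic-embed-text-v1.5.f16.gguf"})

# An unknown provider still fails: pydantic surfaces the ValueError raised
# in validate_config as a ValidationError (a ValueError subclass).
try:
    EmbedderConfig(provider="not_a_provider", config={})
except ValueError as e:
    print(e)
```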

mem0/embeddings/lmstudio.py

Lines changed: 33 additions & 0 deletions

@@ -0,0 +1,33 @@
from typing import Literal, Optional

from openai import OpenAI

from mem0.configs.embeddings.base import BaseEmbedderConfig
from mem0.embeddings.base import EmbeddingBase


class LMStudioEmbedding(EmbeddingBase):
    def __init__(self, config: Optional[BaseEmbedderConfig] = None):
        super().__init__(config)

        self.config.model = self.config.model or "nomic-ai/nomic-embed-text-v1.5-GGUF/nomic-embed-text-v1.5.f16.gguf"
        self.config.embedding_dims = self.config.embedding_dims or 1536
        self.config.api_key = self.config.api_key or "lm-studio"

        self.client = OpenAI(base_url=self.config.lmstudio_base_url, api_key=self.config.api_key)

    def embed(self, text, memory_action: Optional[Literal["add", "search", "update"]] = None):
        """
        Get the embedding for the given text using LM Studio.

        Args:
            text (str): The text to embed.
            memory_action (optional): The type of embedding to use. Must be one of "add", "search", or "update". Defaults to None.

        Returns:
            list: The embedding vector.
        """
        text = text.replace("\n", " ")
        return (
            self.client.embeddings.create(input=[text], model=self.config.model)
            .data[0]
            .embedding
        )
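
A quick smoke-test sketch for the new embedder (assumes an LM Studio server is running locally with an embedding model loaded; otherwise the request will fail):

```python
from mem0.configs.embeddings.base import BaseEmbedderConfig
from mem0.embeddings.lmstudio import LMStudioEmbedding

# With a default config, __init__ fills in the model name, 1536 dims,
# the placeholder "lm-studio" api_key, and http://localhost:1234/v1.
embedder = LMStudioEmbedding(BaseEmbedderConfig())
vector = embedder.embed("LM Studio runs embedding models locally.")
print(len(vector))  # dimensionality reported by the loaded model
```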
