
Commit 19a7c06

feat(docs): update doc for ipex-llm (#1968)
1 parent: b687dc8

File tree

1 file changed (+6 −0 lines)


fern/docs/pages/manual/llms.mdx

Lines changed: 6 additions & 0 deletions
@@ -193,3 +193,9 @@ or
 
 When the server is started it will print a log *Application startup complete*.
 Navigate to http://localhost:8001 to use the Gradio UI or to http://localhost:8001/docs (API section) to try the API.
+
+### Using IPEX-LLM
+
+For a fully private setup on Intel GPUs (such as a local PC with an iGPU, or discrete GPUs like Arc, Flex, and Max), you can use [IPEX-LLM](https://github.com/intel-analytics/ipex-llm).
+
+To deploy Ollama and pull models using IPEX-LLM, please refer to [this guide](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/ollama_quickstart.html). Then, follow the same steps outlined in the [Using Ollama](#using-ollama) section to create a `settings-ollama.yaml` profile and run the private-GPT server.
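For reference, a minimal sketch of what the `settings-ollama.yaml` profile mentioned in the last added line might look like when Ollama is served through IPEX-LLM. The keys follow the Ollama profile shape used elsewhere in this doc; the model names and `api_base` endpoint are assumptions and should be adapted to whatever models were pulled in the IPEX-LLM Ollama setup:

```yaml
# Sketch of a settings-ollama.yaml profile (assumed values) pointing
# private-GPT at an Ollama instance served through IPEX-LLM.
server:
  env_name: ${APP_ENV:ollama}

llm:
  mode: ollama

embedding:
  mode: ollama

ollama:
  llm_model: mistral                 # assumption: any model pulled via the IPEX-LLM Ollama build
  embedding_model: nomic-embed-text  # assumption: embedding model pulled the same way
  api_base: http://localhost:11434   # default Ollama endpoint; adjust if served elsewhere
```

With a profile like this in place, the server is typically started with the profile active, e.g. `PGPT_PROFILES=ollama make run`, as described in the Using Ollama section of the same doc.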
