
Commit 62b2648

Fix typos
1 parent 17ce00f commit 62b2648

7 files changed (+59 −81 lines changed)

docs/features/core/output_types.md

Lines changed: 2 additions & 2 deletions
@@ -13,7 +13,7 @@ Outlines models accept a __prompt__ and an __output type__ when they are invoked
 Output types can be from the general Python ecosystem, including:
 - Most native Python types, such as `int` or `str`
 - Types from the `typing` module, such as `Literal`, `List`, `Dict`, `Enum`, etc
-- Types from popular theird party libraries such as Pydantic or GenSON.
+- Types from popular third party libraries such as Pydantic or GenSON.
 
 Outlines also provides special classes for certain output structures (more details below):
 - JSON schemas with `JsonSchema`
@@ -164,7 +164,7 @@ output_type = JsonSchema(schema_dict)
 
 ### Regex Patterns
 
-Outlines provides support for text generation constrained by regular expressions. Since regular expressions are expressed as simple raw string literals, regex strings must wrapped in an `outlines.types.Regex` object to clarify the expected return type.
+Outlines provides support for text generation constrained by regular expressions. Since regular expressions are expressed as simple raw string literals, regex strings must be wrapped in an `outlines.types.Regex` object.
 
 ```python
 from outlines.types import Regex
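
For reference, a minimal sketch of how the wrapped `Regex` type reads end to end; the model checkpoint, the phone-number pattern, and the prompt below are illustrative assumptions rather than examples taken from the documentation being changed:

```python
import outlines
import transformers
from outlines.types import Regex

# Any Outlines model works here; a small Transformers model is used for illustration.
model_name = "HuggingFaceTB/SmolLM2-135M-Instruct"
model = outlines.from_transformers(
    transformers.AutoModelForCausalLM.from_pretrained(model_name),
    transformers.AutoTokenizer.from_pretrained(model_name),
)

# Wrap the raw pattern so Outlines treats it as a regex constraint rather than a plain string.
phone_number = Regex(r"[0-9]{3}-[0-9]{3}-[0-9]{4}")
result = model("Give me a phone number.", phone_number)
print(result)  # e.g. '415-555-0132'
```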

docs/features/index.md

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ This section presents in details the different features of Outlines.
 - [Output Types](./core/output_types.md)
 - [Generators](./core/generator.md)
 
-## Utility
+## Utilities
 
 - [Applications](./utility/application.md)
 - [Templates](./utility/templates.md)

docs/features/models/index.md

Lines changed: 3 additions & 1 deletion
@@ -45,6 +45,8 @@ print(result) # '200'
 
 ## Features Matrix
 
+In alphabetical order:
+
 | | [Anthropic](../../models/anthropic) | [Dottxt](../../models/dottxt) | [Gemini](../../models/gemini) | [LlamaCpp](../../models/llamacpp) | [MLXLM](../../models/mlxlm) | [Ollama](../../models/ollama) | [OpenAI](../../models/openai) | [SGLang](../../models/sglang) | [TGI](../../models/tgi) | [Transformers](../../models/transformers) | [Transformers MultiModal](../../models/transformers_multimodal) | [VLLM](../../models/vllm) | [VLLMOffline](../../models/vllm_offline) |
 |---|---|---|---|---|---|---|---|---|---|---|---|---|---|
 | **Output Types** | | | | | | | | | | | | | |
@@ -63,7 +65,7 @@ print(result) # '200'
 
 Models can be divided into two categories: local models and server-based models.
 
-In the case of local models, the text generation happens within the inference library object used to instantite the model. This gives Outlines direct access to the generation process (through a logits processor) and means all structured generation output types are available.
+In the case of local models, the text generation happens within the inference library object used to instantiate the model. This gives Outlines direct access to the generation process (through a logits processor) and means all structured generation output types are available.
 
 The local models available are the following:
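
To make the local/server distinction concrete, a brief sketch that builds one model of each kind; the llama.cpp checkpoint and filename are illustrative assumptions, and the `llama-cpp-python` and `openai` client libraries are assumed to be installed:

```python
import llama_cpp
import outlines
from openai import OpenAI

# Local model: the inference object lives in this process, so Outlines can attach
# a logits processor and every structured output type is available.
local_model = outlines.from_llamacpp(
    llama_cpp.Llama.from_pretrained(
        repo_id="TheBloke/phi-2-GGUF",  # illustrative checkpoint
        filename="*Q4_K_M.gguf",
    )
)

# Server-based model: generation happens on the remote server, so the available
# output types depend on what that server's API exposes.
server_model = outlines.from_openai(OpenAI(), "gpt-4o")
```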

docs/features/utility/regex_dsl.md

Lines changed: 1 addition & 1 deletion
@@ -329,7 +329,7 @@ print(pattern)
 
 *Expected Output:*
 
-```
+```ascii
 └── Sequence
 ├── String('a')
 ├── KleenePlus(+)

docs/guide/getting_started.md

Lines changed: 50 additions & 74 deletions
@@ -28,45 +28,64 @@ The full list of available models along with detailed explanation on how to use
 
 For a quick start, you can find below an example of how to initialize all supported models in Outlines:
 
-=== "Anthropic"
+=== "vLLM"
 
 ```python
 import outlines
-from anthropic import Anthropic
+from openai import OpenAI
 
-# Create an Anthropic client
-anthropic_client = Anthropic()
+# You must have a separate vLLM server running
+# Create an OpenAI client with the base URL of the VLLM server
+openai_client = OpenAI(base_url="http://localhost:11434/v1")
 
 # Create an Outlines model
-model = outlines.from_anthropic(anthropic_client, "claude-3-haiku-20240307")
+model = outlines.from_vllm(openai_client, "microsoft/Phi-3-mini-4k-instruct")
 ```
 
-=== "Dottxt"
+=== "Ollama"
 
 ```python
 import outlines
-from dottxt.client import Dottxt
+from ollama import Client
 
-# Create an Dottxt client
-dottxt_client = Dottxt()
+# Create an Ollama client
+ollama_client = Client()
+
+# Create an Outlines model, the model must be available on your system
+model = outlines.from_ollama(ollama_client, "tinyllama")
+```
+
+=== "OpenAI"
+
+```python
+import outlines
+from openai import OpenAI
+
+# Create an OpenAI client instance
+openai_client = OpenAI()
 
 # Create an Outlines model
-model = outlines.from_dottxt(dottxt_client)
+model = outlines.from_openai(openai_client, "gpt-4o")
 ```
 
-=== "Gemini"
+=== "Transformers"
 
 ```python
 import outlines
-from google.generativeai import GenerativeModel
+import transformers
 
-# Create a Gemini client
-gemini_client = GenerativeModel()
+# Define the model you want to use
+model_name = "HuggingFaceTB/SmolLM2-135M-Instruct"
+
+# Create a HuggingFace model and tokenizer
+hf_model = transformers.AutoModelForCausalLM.from_pretrained(model_name)
+hf_tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
 
 # Create an Outlines model
-model = outlines.from_gemini(gemini_client, "gemini-1-5-flash")
+model = outlines.from_transformers(hf_model, hf_tokenizer)
 ```
 
+
 === "llama.cpp"
 
 ```python
@@ -84,43 +103,30 @@ For a quick start, you can find below an example of how to initialize all suppor
 model = outlines.from_llamacpp(llama_cpp_model)
 ```
 
-=== "mlx-lm"
-
-```python
-import outlines
-import mlx_lm
-
-# Create an MLXLM model with the output of mlx_lm.load
-# The model will be downloaded from the HuggingFace hub
-model = outlines.from_mlxlm(
-**mlx_lm.load("mlx-community/SmolLM-135M-Instruct-4bit")
-)
-```
-
-=== "Ollama"
+=== "Gemini"
 
 ```python
 import outlines
-from ollama import Client
+from google.generativeai import GenerativeModel
 
-# Create an Ollama client
-ollama_client = Client()
+# Create a Gemini client
+gemini_client = GenerativeModel()
 
-# Create an Outlines model, the model must be available on your system
-model = outlines.from_ollama(ollama_client, "tinyllama")
+# Create an Outlines model
+model = outlines.from_gemini(gemini_client, "gemini-1-5-flash")
 ```
 
-=== "OpenAI"
+=== "mlx-lm"
 
 ```python
 import outlines
-from openai import OpenAI
-
-# Create an OpenAI client instance
-openai_client = OpenAI()
+import mlx_lm
 
-# Create an Outlines model
-model = outlines.from_openai(openai_client, "gpt-4o")
+# Create an MLXLM model with the output of mlx_lm.load
+# The model will be downloaded from the HuggingFace hub
+model = outlines.from_mlxlm(
+    *mlx_lm.load("mlx-community/SmolLM-135M-Instruct-4bit")
+)
 ```
 
 === "SgLang"
@@ -153,23 +159,6 @@ For a quick start, you can find below an example of how to initialize all suppor
 model = outlines.from_tgi(tgi_client)
 ```
 
-=== "Transformers"
-
-```python
-import outlines
-from transformers
-
-# Define the model you want to use
-model_name = "HuggingFaceTB/SmolLM2-135M-Instruct"
-
-# Create a HuggingFace model and tokenizer
-hf_model = transformers.AutoModelForCausalLM.from_pretrained(model_name)
-hf_tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
-
-# Create an Outlines model
-model = outlines.from_transformers(hf_model, hf_tokenizer)
-```
-
 === "vLLM (offline)"
 
 ```python
@@ -183,19 +172,6 @@ For a quick start, you can find below an example of how to initialize all suppor
 model = outlines.from_vllm_offline(vllm_model)
 ```
 
-=== "vLLM (online)"
-
-```python
-import outlines
-from openai import OpenAI
-
-# You must have a separate vLLM server running
-# Create an OpenAI client with the base URL of the VLLM server
-openai_client = OpenAI(base_url="http://localhost:11434/v1")
-
-# Create an Outlines model
-model = outlines.from_vllm(openai_client, "microsoft/Phi-3-mini-4k-instruct")
-```
 
 ## Generating Text
 
@@ -221,7 +197,7 @@ for chunk in model.streaming("Write a short story about a cat.")
 
 ## Structured Generation
 
-Outlines follows a simple pattern that mirrors Python's own type system for structured output. Simply specify the desired output type as you would when using type hinting with a function, and Outlines will ensure your data matches that structure exactly.
+Outlines follows a simple pattern that mirrors Python's own type system for structured outputs. Simply specify the desired output type as you would when using type hinting with a function, and Outlines will ensure your data matches that structure exactly.
 
 Supported output types can be organized in 5 categories:
 
@@ -235,7 +211,7 @@ Consult the section on [Output Types](../../features/core/output_types.md) in th
 
 In the meantime, you can find below examples of using each of the five output type categories:
 
-=== "Basic Yypes"
+=== "Basic Types"
 
 ```python
 model = <your_model_as_defined_above>
@@ -332,7 +308,7 @@ In the meantime, you can find below examples of using each of the five output ty
 print(result) # '2 + 3'
 ```
 
-It's important to note that not all output types are available for all models due to limitations in the underline inference libraries. The [Models](../features/models/index.md) section of the features documentation includes a features matrix to easily visualize output type availabilities.
+It's important to note that not all output types are available for all models due to limitations in the underlying inference engines. The [Models](../features/models/index.md) section of the features documentation includes a features matrix that summarizes the availability of output types.
 
 ## Generators
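
To ground the structured-generation pattern described in the hunks above, a minimal sketch using a Pydantic class as the output type; the model checkpoint, the `Character` class, and the prompt are illustrative assumptions rather than examples from the documented tabs:

```python
import outlines
import transformers
from pydantic import BaseModel

class Character(BaseModel):
    name: str
    age: int

# Any Outlines model defined as in the quick-start tabs above works the same way.
model_name = "HuggingFaceTB/SmolLM2-135M-Instruct"
model = outlines.from_transformers(
    transformers.AutoModelForCausalLM.from_pretrained(model_name),
    transformers.AutoTokenizer.from_pretrained(model_name),
)

# Pass the Pydantic class as the output type; the result is a JSON string
# matching the schema, which can then be validated into an instance.
result = model("Create a fantasy character.", Character)
character = Character.model_validate_json(result)
print(character)  # e.g. name='Aria' age=27
```

Under this pattern, switching to another output type category (a `Literal`, a regex, or a different schema) only changes the type passed to the model call.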

docs/guide/installation.md

Lines changed: 1 addition & 1 deletion
@@ -20,7 +20,7 @@ source .venv/bin/activate
 uv pip install outlines
 ```
 
-or with basic pip:
+or with pip:
 
 ```shell
 pip install outlines

docs/guide/migration.md

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ Outlines 1.0 introduces some breaking changes that affect the way you use the li
 
 This guide will help you migrate your code to the new version.
 
-All previous functionality will be supported until Outlines version 1.1.0, but a warning message will be displayed to remind you to migrate your code and provide instructions to help you do so. Please migrate your code to the v1 as soon as possible.
+All previous functionalities will be supported until Outlines version 1.1.0, but a warning message will be displayed to remind you to migrate your code and provide instructions to help you do so. Please migrate your code to the v1 as soon as possible.
 
 ## Removed or modified features
 - [Generate functions](#generate-functions)
