@@ -82,7 +82,7 @@ Once you have configured the options above on all the GPU nodes in your
cluster, you can enable GPU support by deploying the following Daemonset:

```shell
- $ kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.7.0-rc.5/nvidia-device-plugin.yml
+ $ kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.7.0-rc.6/nvidia-device-plugin.yml
```
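Once the daemonset above is running, a quick way to confirm that GPUs are being advertised is to schedule a pod that requests the `nvidia.com/gpu` resource the plugin exposes. This is a minimal sketch; the pod name and CUDA image tag are illustrative and not part of the plugin itself:

```yaml
# Sketch of a pod that requests one GPU via the nvidia.com/gpu resource.
# The pod name and image tag are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
spec:
  restartPolicy: OnFailure
  containers:
    - name: cuda
      image: nvidia/cuda:10.0-base
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1
```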
**Note:** The `nvidia-device-plugin.yml` above is a simple static daemonset meant to demonstrate the basic
@@ -123,7 +123,7 @@ The preferred method to deploy the device plugin is as a daemonset using `helm`.
Instructions for installing `helm` can be found
[here](https://helm.sh/docs/intro/install/).

- The `helm` chart for the latest release of the plugin (`v0.7.0-rc.5`) includes
+ The `helm` chart for the latest release of the plugin (`v0.7.0-rc.6`) includes
a number of customizable values. The most commonly overridden ones are:

```
@@ -191,7 +191,7 @@ rationale behind this strategy can be found
Please take a look at the following `values.yaml` file to see the full set of
overridable parameters for the device plugin.

- * https://github.com/NVIDIA/k8s-device-plugin/blob/v0.7.0-rc.5/deployments/helm/nvidia-device-plugin/values.yaml
+ * https://github.com/NVIDIA/k8s-device-plugin/blob/v0.7.0-rc.6/deployments/helm/nvidia-device-plugin/values.yaml
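When several values need to be overridden at once, they can also be collected into a local file and passed to `helm install` with `-f` instead of repeated `--set` flags. The file below is only a sketch: it uses keys already shown in this section (`compatWithCPUManager`, `migStrategy`, and the `resources` requests/limits), and the filename is arbitrary:

```yaml
# my-values.yaml -- illustrative override file; only keys shown elsewhere in
# this README are used, with values chosen purely as an example.
compatWithCPUManager: true
migStrategy: mixed
resources:
  requests:
    cpu: 100m
  limits:
    memory: 512Mi
```

It could then be installed with something like `helm install --generate-name -f my-values.yaml nvdp/nvidia-device-plugin`.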

#### Installing via `helm install` from the `nvidia-device-plugin` `helm` repository
@@ -214,7 +214,7 @@ plugin with the various flags from above.
Using the default values for the flags:
```shell
$ helm install \
- --version=0.7.0-rc.5 \
+ --version=0.7.0-rc.6 \
--generate-name \
nvdp/nvidia-device-plugin
```
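After installing, the release and the daemonset it creates can be checked before moving on. The commands below are a sketch; with `--generate-name` the release and daemonset names are generated, so the exact names will differ:

```shell
# List the release created by --generate-name (the name differs per install).
$ helm list
# Confirm the plugin daemonset is running; the name contains "nvidia-device-plugin".
$ kubectl get daemonsets --all-namespaces | grep nvidia-device-plugin
```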
@@ -223,7 +223,7 @@ Enabling compatibility with the `CPUManager` and running with a request for
100ms of CPU time and a limit of 512MB of memory.
```shell
$ helm install \
- --version=0.7.0-rc.5 \
+ --version=0.7.0-rc.6 \
--generate-name \
--set compatWithCPUManager=true \
--set resources.requests.cpu=100m \
@@ -234,7 +234,7 @@ $ helm install \
Use the legacy Daemonset API (only available on Kubernetes < `v1.16`):
```shell
$ helm install \
- --version=0.7.0-rc.5 \
+ --version=0.7.0-rc.6 \
--generate-name \
--set legacyDaemonsetAPI=true \
nvdp/nvidia-device-plugin
@@ -243,7 +243,7 @@ $ helm install \
Enabling compatibility with the `CPUManager` and the `mixed` `migStrategy`
```shell
$ helm install \
- --version=0.7.0-rc.5 \
+ --version=0.7.0-rc.6 \
--generate-name \
--set compatWithCPUManager=true \
--set migStrategy=mixed \
@@ -261,7 +261,7 @@ Using the default values for the flags:
```shell
$ helm install \
--generate-name \
- https://nvidia.github.com/k8s-device-plugin/stable/nvidia-device-plugin-0.7.0-rc.5.tgz
+ https://nvidia.github.com/k8s-device-plugin/stable/nvidia-device-plugin-0.7.0-rc.6.tgz
```

Enabling compatibility with the `CPUManager` and running with a request for
@@ -272,15 +272,15 @@ $ helm install \
--set compatWithCPUManager=true \
--set resources.requests.cpu=100m \
--set resources.limits.memory=512Mi \
- https://nvidia.github.com/k8s-device-plugin/stable/nvidia-device-plugin-0.7.0-rc.5.tgz
+ https://nvidia.github.com/k8s-device-plugin/stable/nvidia-device-plugin-0.7.0-rc.6.tgz
```

Use the legacy Daemonset API (only available on Kubernetes < `v1.16`):
```shell
$ helm install \
--generate-name \
--set legacyDaemonsetAPI=true \
- https://nvidia.github.com/k8s-device-plugin/stable/nvidia-device-plugin-0.7.0-rc.5.tgz
+ https://nvidia.github.com/k8s-device-plugin/stable/nvidia-device-plugin-0.7.0-rc.6.tgz
```
Enabling compatibility with the `CPUManager` and the `mixed` `migStrategy`
@@ -289,31 +289,31 @@ $ helm install \
--generate-name \
--set compatWithCPUManager=true \
--set migStrategy=mixed \
- https://nvidia.github.com/k8s-device-plugin/stable/nvidia-device-plugin-0.7.0-rc.5.tgz
+ https://nvidia.github.com/k8s-device-plugin/stable/nvidia-device-plugin-0.7.0-rc.6.tgz
```
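For clusters where an earlier chart version is already installed, the same version bump can be applied with `helm upgrade` rather than a fresh `helm install`. This is a sketch: the release name below stands in for whatever name `--generate-name` produced on the original install (see `helm list`), and it assumes the `nvdp` repository from earlier in this section:

```shell
# Upgrade an existing release to the new chart version, keeping its values.
# The release name is illustrative -- substitute the one shown by `helm list`.
$ helm upgrade \
    --version=0.7.0-rc.6 \
    --reuse-values \
    nvidia-device-plugin-1234567890 \
    nvdp/nvidia-device-plugin
```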

## Building and Running Locally

The next sections are focused on building the device plugin locally and running it.
It is intended purely for development and testing, and not required by most users.
- It assumes you are pinning to the latest release tag (i.e. `v0.7.0-rc.5`), but can
+ It assumes you are pinning to the latest release tag (i.e. `v0.7.0-rc.6`), but can
easily be modified to work with any available tag or branch.

### With Docker

#### Build
Option 1, pull the prebuilt image from [Docker Hub](https://hub.docker.com/r/nvidia/k8s-device-plugin):
```shell
- $ docker pull nvidia/k8s-device-plugin:v0.7.0-rc.5
- $ docker tag nvidia/k8s-device-plugin:v0.7.0-rc.5 nvidia/k8s-device-plugin:devel
+ $ docker pull nvidia/k8s-device-plugin:v0.7.0-rc.6
+ $ docker tag nvidia/k8s-device-plugin:v0.7.0-rc.6 nvidia/k8s-device-plugin:devel
```

Option 2, build without cloning the repository:
```shell
$ docker build \
-t nvidia/k8s-device-plugin:devel \
-f docker/amd64/Dockerfile.ubuntu16.04 \
- https://github.com/NVIDIA/k8s-device-plugin.git#v0.7.0-rc.5
+ https://github.com/NVIDIA/k8s-device-plugin.git#v0.7.0-rc.6
```

Option 3, if you want to modify the code: