@@ -82,7 +82,7 @@ Once you have configured the options above on all the GPU nodes in your
cluster, you can enable GPU support by deploying the following Daemonset:
```shell
- $ kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.7.3/nvidia-device-plugin.yml
+ $ kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.8.0/nvidia-device-plugin.yml
```
**Note:** This is a simple static daemonset meant to demonstrate the basic
@@ -123,7 +123,7 @@ The preferred method to deploy the device plugin is as a daemonset using `helm`.
Instructions for installing `helm` can be found
[here](https://helm.sh/docs/intro/install/).
- The `helm` chart for the latest release of the plugin (`v0.7.3`) includes
+ The `helm` chart for the latest release of the plugin (`v0.8.0`) includes
a number of customizable values. The most commonly overridden ones are:
```
@@ -193,7 +193,7 @@ rationale behind this strategy can be found
Please take a look in the following `values.yaml` file to see the full set of
overridable parameters for the device plugin.
- * https://github.com/NVIDIA/k8s-device-plugin/blob/v0.7.3/deployments/helm/nvidia-device-plugin/values.yaml
+ * https://github.com/NVIDIA/k8s-device-plugin/blob/v0.8.0/deployments/helm/nvidia-device-plugin/values.yaml
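The most commonly overridden options can also be collected into a single values file instead of being passed as repeated `--set` flags. A minimal sketch, using only parameters that appear elsewhere in this section (the filename `custom-values.yaml` is illustrative):

```yaml
# custom-values.yaml -- illustrative override file; see the values.yaml
# linked above for the full set of supported parameters
compatWithCPUManager: true
migStrategy: mixed
resources:
  requests:
    cpu: 100m
  limits:
    memory: 512Mi
```

Such a file could then be passed to any of the `helm install` invocations below with `-f custom-values.yaml` in place of the individual `--set` flags.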
#### Installing via `helm install` from the `nvidia-device-plugin` `helm` repository
@@ -216,7 +216,7 @@ plugin with the various flags from above.
Using the default values for the flags:
```shell
$ helm install \
- --version=0.7.3 \
+ --version=0.8.0 \
--generate-name \
nvdp/nvidia-device-plugin
```
@@ -225,7 +225,7 @@ Enabling compatibility with the `CPUManager` and running with a request for
100ms of CPU time and a limit of 512MB of memory.
```shell
$ helm install \
- --version=0.7.3 \
+ --version=0.8.0 \
--generate-name \
--set compatWithCPUManager=true \
--set resources.requests.cpu=100m \
@@ -236,7 +236,7 @@ $ helm install \
Use the legacy Daemonset API (only available on Kubernetes < `v1.16`):
```shell
$ helm install \
- --version=0.7.3 \
+ --version=0.8.0 \
--generate-name \
--set legacyDaemonsetAPI=true \
nvdp/nvidia-device-plugin
@@ -245,7 +245,7 @@ $ helm install \
Enabling compatibility with the `CPUManager` and the `mixed` `migStrategy`
```shell
$ helm install \
- --version=0.7.3 \
+ --version=0.8.0 \
--generate-name \
--set compatWithCPUManager=true \
--set migStrategy=mixed \
@@ -263,7 +263,7 @@ Using the default values for the flags:
```shell
$ helm install \
--generate-name \
- https://nvidia.github.io/k8s-device-plugin/stable/nvidia-device-plugin-0.7.3.tgz
+ https://nvidia.github.io/k8s-device-plugin/stable/nvidia-device-plugin-0.8.0.tgz
```
Enabling compatibility with the `CPUManager` and running with a request for
@@ -274,15 +274,15 @@ $ helm install \
--set compatWithCPUManager=true \
--set resources.requests.cpu=100m \
--set resources.limits.memory=512Mi \
- https://nvidia.github.io/k8s-device-plugin/stable/nvidia-device-plugin-0.7.3.tgz
+ https://nvidia.github.io/k8s-device-plugin/stable/nvidia-device-plugin-0.8.0.tgz
```
Use the legacy Daemonset API (only available on Kubernetes < `v1.16`):
```shell
$ helm install \
--generate-name \
--set legacyDaemonsetAPI=true \
- https://nvidia.github.io/k8s-device-plugin/stable/nvidia-device-plugin-0.7.3.tgz
+ https://nvidia.github.io/k8s-device-plugin/stable/nvidia-device-plugin-0.8.0.tgz
```
Enabling compatibility with the `CPUManager` and the `mixed` `migStrategy`
@@ -291,31 +291,31 @@ $ helm install \
--generate-name \
--set compatWithCPUManager=true \
--set migStrategy=mixed \
- https://nvidia.github.io/k8s-device-plugin/stable/nvidia-device-plugin-0.7.3.tgz
+ https://nvidia.github.io/k8s-device-plugin/stable/nvidia-device-plugin-0.8.0.tgz
```
## Building and Running Locally
The next sections are focused on building the device plugin locally and running it.
It is intended purely for development and testing, and not required by most users.
- It assumes you are pinning to the latest release tag (i.e. `v0.7.3`), but can
+ It assumes you are pinning to the latest release tag (i.e. `v0.8.0`), but can
easily be modified to work with any available tag or branch.
### With Docker
#### Build
Option 1, pull the prebuilt image from [Docker Hub](https://hub.docker.com/r/nvidia/k8s-device-plugin):
```shell
- $ docker pull nvidia/k8s-device-plugin:v0.7.3
- $ docker tag nvidia/k8s-device-plugin:v0.7.3 nvidia/k8s-device-plugin:devel
+ $ docker pull nvidia/k8s-device-plugin:v0.8.0
+ $ docker tag nvidia/k8s-device-plugin:v0.8.0 nvidia/k8s-device-plugin:devel
```
Option 2, build without cloning the repository:
```shell
$ docker build \
-t nvidia/k8s-device-plugin:devel \
-f docker/amd64/Dockerfile.ubuntu16.04 \
- https://github.com/NVIDIA/k8s-device-plugin.git#v0.7.3
+ https://github.com/NVIDIA/k8s-device-plugin.git#v0.8.0
```
Option 3, if you want to modify the code:
@@ -369,6 +369,11 @@ $ ./k8s-device-plugin --pass-device-specs
## Changelog
+ ### Version v0.8.0
+
+ - Raise an error if a device has migEnabled=true but has no MIG devices
+ - Allow mig.strategy=single on nodes with non-MIG GPUs
+
### Version v0.7.3
- Update vendoring to include bug fix for `nvmlEventSetWait_v2`