doc/proposals: Managing Dockerfiles #431
Conversation
shawn-hurley commented on Aug 27, 2018
- Add options for managing Dockerfiles
- Discuss the best solution and the changes needed for this solution.
the content is wrong.
Force-pushed from 7d84217 to bfc6e88
@xiang90 thanks for the comment. 🤦‍♂️
@shawn-hurley This approach seems fine to me. With multiple Dockerfiles, users can choose and manage the base image and add other configurations for their operator image as needed. One thing I wanted to bring up for consideration was making each file a multi-stage Dockerfile. That should address #167 as well.

```dockerfile
# Build the operator binary
FROM golang:1.10.3 as builder

WORKDIR /go/src/github.com/example-inc/app-operator
COPY . /go/src/github.com/example-inc/app-operator
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o /go/bin/app-operator github.com/example-inc/app-operator/cmd/app-operator/main.go

# Copy over to the base image with other customizations
FROM alpine:3.6
COPY --from=builder /go/bin/app-operator /usr/local/bin/app-operator
...
ENTRYPOINT ["app-operator"]
```
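For context, building a Dockerfile like the one above requires Docker 17.05 or newer, since that is when multi-stage builds landed. A minimal sketch of the invocation, assuming the file lives at `build/Dockerfile` and using an illustrative image tag:

```sh
# Multi-stage builds need Docker 17.05+; the path and tag here are illustrative.
docker build -f build/Dockerfile -t example-inc/app-operator:v0.0.1 .
```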
@shawn-hurley You mentioned offline that we can't do multi-stage Dockerfiles. Can you elaborate on why that's the case? /cc @spahl Still need your input on this.
Multi-stage builds are only a feature in Docker versions greater than 1.13, and RHEL is stuck at 1.13. I believe buildah supports multi-stage builds now though, so maybe that is a good alternative?
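For comparison, a minimal sketch of what a buildah-based build could look like, reusing the illustrative path and tag from above:

```sh
# buildah's "bud" (build-using-dockerfile) subcommand interprets Dockerfiles,
# including multi-stage ones; the path and tag below are illustrative.
buildah bud -f build/Dockerfile -t example-inc/app-operator:v0.0.1 .
```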
…h` instead of `docker`

`buildah` is a tool that builds OCI images, from its CLI interface or by interpreting Dockerfiles. RHEL does not support docker-ce above v1.13, preventing the SDK from leveraging the multi-stage build Dockerfile feature; `buildah` supports this feature, and will continue to be supported in RHEL. See operator-framework#431 for relevant discussion.
Force-pushed from 67df57c to b316c2e
This solution still only works with buildah on RHEL. Many users will not be constrained by RHEL docker versioning, though. Should we impose an image builder version check, e.g. docker v17.05+ || buildah, or let the image builders themselves generate errors?
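A minimal sketch of what such a check might look like, written here as a standalone shell snippet rather than anything the SDK actually ships:

```sh
# Hypothetical pre-build check (not SDK code): accept Docker >= 17.05,
# otherwise require buildah to be available.
required="17.05"
if command -v docker >/dev/null 2>&1; then
  version="$(docker version --format '{{.Server.Version}}')"
  # sort -V puts the lowest version first; if that is $required, docker is new enough.
  if [ "$(printf '%s\n%s\n' "$required" "$version" | sort -V | head -n1)" = "$required" ]; then
    echo "docker $version supports multi-stage builds"
    exit 0
  fi
fi
command -v buildah >/dev/null 2>&1 || { echo "need docker >= $required or buildah" >&2; exit 1; }
echo "falling back to buildah"
```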
@estroz Yeah... this feels like something we are going to have to do, unless we want to constantly have issues. If we want to layer builds like this then I don't see a way around it. Another option is to concatenate the files together into a temp file and then use that file for the build. If we don't want to do the check, this could be an option.
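For illustration only, the concatenation idea could be as simple as the sketch below. The file names follow the proposal (`base.Dockerfile`, `sdk.Dockerfile`), but their location, ordering, and the temp-file flow itself are assumptions:

```sh
# Hypothetical flow (not SDK code): stitch the user-managed base Dockerfile and
# the SDK-owned Dockerfile into one temporary file, then build from it.
tmpfile="$(mktemp)"
cat build/base.Dockerfile build/sdk.Dockerfile > "$tmpfile"
docker build -f "$tmpfile" -t example-inc/app-operator:v0.0.1 .
rm -f "$tmpfile"
```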
I think a simple image version check is enough for us. We specifically mention that the minimum version of docker is 17.03 in our README, and …
The `operator-sdk build` command will add an option to pass in the base image, for example: `operator-sdk build --base-dockerfile centos-base.Dockerfile`. The `new` command will generate a default base, as it does now, called `base.Dockerfile`. If no option is passed in, `base.Dockerfile` will be selected.

#### Operator-SDK Changes

The `operator-sdk` will now add an `sdk.Dockerfile`. This Dockerfile will contain all the steps needed to set up the operator image correctly for whichever type of operator this is. It will be owned, changed, and updated by the operator-sdk, and we will not expect or respect any changes made by a user to this file. This allows us to completely rewrite it as we see fit. The testing framework will not be changed as part of this proposal.
Just a thought, but how important is it to name the top-level Dockerfile `sdk.Dockerfile`? Wondering if we can keep this as `build/Dockerfile` so it doesn't break users with older projects. I know it's a small change to update existing projects to move `build/Dockerfile` to `build/sdk.Dockerfile`, but it would be nice if we could avoid even that if the name change isn't strictly necessary. Overall this approach SGTM; just a nit on how this affects existing projects.
We have decided to hold off on implementing this until real-world problems arise and we can better understand the problem.