Closed
Changes from 31 commits
Commits
57 commits
e2e7223
add spec template configs
onursatici Aug 17, 2018
ea4dde6
start from template for driver
onursatici Aug 17, 2018
4f088db
start from template for executor
onursatici Aug 17, 2018
f2f9a44
wip
onursatici Aug 17, 2018
368d0a4
volume executor podspec template
onursatici Aug 17, 2018
0005ea5
move logic to apply functions
onursatici Aug 17, 2018
dda5cc9
find containers
onursatici Aug 17, 2018
d0f41aa
style
onursatici Aug 17, 2018
c4c1231
remove import
onursatici Aug 17, 2018
74de0e5
compiles
yifeih Aug 21, 2018
205ddd3
tests pass
yifeih Aug 21, 2018
4ae6fc6
adding TemplateVolumeStepSuite
yifeih Aug 21, 2018
c0bcfea
DriverBuilder test
yifeih Aug 21, 2018
b9e4263
WIP trying to write tests for KubernetesDriverBuilder constructor
yifeih Aug 21, 2018
c5e1ea0
fix test
yifeih Aug 21, 2018
56a6b32
fix test, and move loading logic to util method
yifeih Aug 22, 2018
7d0d928
validate that the executor pod template is good in the driver
yifeih Aug 22, 2018
1da79a8
cleaning
yifeih Aug 22, 2018
8ef756e
Merge branch 'apache/master' into yh/pod-template
yifeih Aug 22, 2018
7f3cb04
redo mounting file
yifeih Aug 23, 2018
cc8d3f8
rename to TemplateConfigMapStep
yifeih Aug 23, 2018
4119899
Pass initialPod constructor instead of Spec constructor
yifeih Aug 23, 2018
1d0a8fa
make driver and executor container names configurable
yifeih Aug 23, 2018
81e5a66
create temp file correctly?
yifeih Aug 23, 2018
7f4ff5a
executor initial pod test
yifeih Aug 23, 2018
3097aef
add docs
yifeih Aug 23, 2018
9b1418a
addressing some comments
yifeih Aug 23, 2018
ebacc96
integration tests attempt 1
yifeih Aug 24, 2018
98acd29
fix up docs
yifeih Aug 24, 2018
95f8b8b
rename a variable
yifeih Aug 24, 2018
da5dff5
fix style?
yifeih Aug 24, 2018
7fb76c7
fix docs to remove container name conf and further clarify
yifeih Aug 25, 2018
d86bc75
actually add the pod template test
yifeih Aug 25, 2018
f2720a5
remove containerName confs
yifeih Aug 25, 2018
4b3950d
test tag and indent
onursatici Aug 27, 2018
3813fcb
extension
onursatici Aug 28, 2018
ec04323
use resources for integartion tests templates
onursatici Aug 28, 2018
f3b6082
rat
onursatici Aug 28, 2018
fd503db
fix path
onursatici Aug 29, 2018
eeb2492
prevent having duplicate containers
onursatici Aug 29, 2018
4801e8e
Merge remote-tracking branch 'origin/master' into pod-template
onursatici Aug 29, 2018
36a70ad
do not use broken removeContainer
onursatici Aug 29, 2018
ece7a7c
nits
onursatici Aug 29, 2018
8b8aa48
inline integration test methods, add volume to executor builder unit …
onursatici Aug 30, 2018
1ed95ab
do not raise twice on template parse failuer
onursatici Aug 31, 2018
a4fde0c
add comprehensive test for supported template features
onursatici Aug 31, 2018
140e89c
generalize tests to cover both driver and executor builders
onursatici Aug 31, 2018
838c2bd
docs
onursatici Sep 4, 2018
5faea62
Merge remote-tracking branch 'origin/master' into pod-template
onursatici Oct 29, 2018
9e6a4b2
fix tests, templates does not support changing executor pod names
onursatici Oct 29, 2018
c8077dc
config to select spark containers in pod templates
onursatici Oct 29, 2018
3d6ff3b
more readable select container logic
onursatici Oct 29, 2018
83087eb
fix integration tests
onursatici Oct 29, 2018
a46b885
Merge remote-tracking branch 'origin/master' into pod-template
onursatici Oct 29, 2018
80b56c1
address comments
onursatici Oct 29, 2018
8f7f571
rename pod template volume name
onursatici Oct 30, 2018
3707e6a
imports
onursatici Oct 30, 2018
191 changes: 191 additions & 0 deletions docs/running-on-kubernetes.md
@@ -185,6 +185,18 @@ To use a secret through an environment variable use the following options to the
--conf spark.kubernetes.executor.secretKeyRef.ENV_NAME=name:key
```

## Pod Template
Kubernetes allows defining pods from [template files](https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/#pod-templates).
Spark users can similarly use template files to define the driver or executor pod configurations that Spark configurations do not support.
To do so, specify the spark properties `spark.kubernetes.driver.podTemplateFile` and `spark.kubernetes.executor.podTemplateFile`
to point to local files accessible to the `spark-submit` process. To allow the driver pod to access the executor pod template
Contributor:
Have you considered a single config map created at submission time, from which both driver and executors pull their appropriate templates?

Contributor:
Don't think we considered this, but, an interesting proposal. I think that can be a follow up feature if requested.

file, the file will be automatically mounted onto a volume in the driver pod when it's created.

It is important to note that Spark is opinionated about certain pod configurations so there are values in the
pod template that will always be overwritten by Spark. Therefore, users of this feature should note that specifying
the pod template file only lets Spark start with a template pod instead of an empty pod during the pod-building process.
For details, see the [full list](#pod-template-properties) of pod template values that will be overwritten by Spark.
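As an illustration of the workflow described above (the file path, label, and `securityContext` below are hypothetical, not taken from this PR):

```shell
# Write a minimal, hypothetical driver pod template. Everything Spark is
# opinionated about (pod name, namespace, image, ...) will still be overwritten.
cat > /tmp/driver-pod-template.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  labels:
    team: data-platform              # example label, merged with Spark's own labels
spec:
  containers:
    - name: spark-kubernetes-driver  # must match the configured driver container name
      securityContext:
        runAsNonRoot: true
EOF

# The template would then be referenced at submission time, for example:
#   spark-submit \
#     --conf spark.kubernetes.driver.podTemplateFile=/tmp/driver-pod-template.yaml \
#     ...
echo "wrote $(wc -l < /tmp/driver-pod-template.yaml) lines"
```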

## Introspection and Debugging

These are the different ways in which you can investigate a running/completed Spark application, monitor progress, and
@@ -775,4 +787,183 @@ specific to Spark on Kubernetes.
This sets the major Python version of the docker image used to run the driver and executor containers. Can either be 2 or 3.
</td>
</tr>
<tr>
<td><code>spark.kubernetes.driver.containerName</code></td>
<td><code>"spark-kubernetes-driver"</code></td>
<td>
This sets the driver container name. If you are specifying a driver [pod template](#pod-template), you can match this name to the
Contributor:
What does "match this name to the name set in the template" mean?

Contributor:
Does it mean "this is the name to look for in a list of containers coming in from the template?"

Contributor:
Yea it means that it'll look for that container in the template to be the driver container, and throw if it can't find it

driver container name set in the template.
</td>
</tr>
<tr>
<td><code>spark.kubernetes.executor.containerName</code></td>
<td><code>"spark-kubernetes-executor"</code></td>
<td>
    This sets the executor container name. If you are specifying an executor [pod template](#pod-template), you can match this name to the
    executor container name set in the template.
</td>
</tr>
<tr>
<td><code>spark.kubernetes.driver.podTemplateFile</code></td>
<td>(none)</td>
<td>
Specify the local file that contains the driver [pod template](#pod-template). For example
    <code>spark.kubernetes.driver.podTemplateFile=/path/to/driver-pod-template.yaml</code>
</td>
</tr>
<tr>
<td><code>spark.kubernetes.executor.podTemplateFile</code></td>
<td>(none)</td>
<td>
Specify the local file that contains the executor [pod template](#pod-template). For example
    <code>spark.kubernetes.executor.podTemplateFile=/path/to/executor-pod-template.yaml</code>
</td>
</tr>
</table>

#### Pod template properties

See the table below for the full list of pod specifications that will be overwritten by Spark.

#### Pod Metadata

<table class="table">
<tr><th>Pod metadata key</th><th>Modified value</th><th>Description</th></tr>
<tr>
<td>name</td>
<td>Value of <code>spark.kubernetes.driver.pod.name</code></td>
<td>
The driver pod name will be overwritten with either the configured or default value of
<code>spark.kubernetes.driver.pod.name</code>. The executor pod names will be unaffected.
</td>
</tr>
<tr>
<td>namespace</td>
<td>Value of <code>spark.kubernetes.namespace</code></td>
<td>
Spark makes strong assumptions about the driver and executor namespaces. Both driver and executor namespaces will
Contributor:
if the spark conf value for namespace isn't set, can spark use the template setting, or will spark's conf default also override the template?

Contributor:
It will also be replaced with the spark conf's default value. I'll update this description to match that of name right above this

Contributor:
It should be the default namespace.

be replaced by either the configured or default spark conf value.
</td>
</tr>
<tr>
<td>labels</td>
<td>Adds the labels from <code>spark.kubernetes.{driver,executor}.label.*</code></td>
<td>
    Spark will add the additional labels specified by the Spark configuration.
</td>
</tr>
<tr>
<td>annotations</td>
<td>Adds the annotations from <code>spark.kubernetes.{driver,executor}.annotation.*</code></td>
<td>
    Spark will add the additional annotations specified by the Spark configuration.
</td>
</tr>
</table>

#### Pod Spec

<table class="table">
<tr><th>Pod spec key</th><th>Modified value</th><th>Description</th></tr>
<tr>
<td>imagePullSecrets</td>
<td>Adds image pull secrets from <code>spark.kubernetes.container.image.pullSecrets</code></td>
<td>
    Additional pull secrets will be added from the Spark configuration to both driver and executor pods.
</td>
</tr>
<tr>
<td>nodeSelector</td>
<td>Adds node selectors from <code>spark.kubernetes.node.selector.*</code></td>
<td>
    Additional node selectors will be added from the Spark configuration to both driver and executor pods.
</td>
</tr>
<tr>
<td>restartPolicy</td>
    <td><code>"Never"</code></td>
<td>
Spark assumes that both drivers and executors never restart.
</td>
</tr>
<tr>
<td>serviceAccount</td>
<td>Value of <code>spark.kubernetes.authenticate.driver.serviceAccountName</code></td>
<td>
Spark will override <code>serviceAccount</code> with the value of the spark configuration for only
Contributor:
similar Q to namespace: if no spark-conf is set, will spark's conf default override here as well?

driver pods, and only if the spark configuration is specified. Executor pods will remain unaffected.
</td>
</tr>
<tr>
<td>serviceAccountName</td>
<td>Value of <code>spark.kubernetes.authenticate.driver.serviceAccountName</code></td>
<td>
Spark will override <code>serviceAccountName</code> with the value of the spark configuration for only
driver pods, and only if the spark configuration is specified. Executor pods will remain unaffected.
</td>
</tr>
<tr>
<td>volumes</td>
<td>Adds volumes from <code>spark.kubernetes.{driver,executor}.volumes.[VolumeType].[VolumeName].mount.path</code></td>
<td>
Spark will add volumes as specified by the spark conf, as well as additional volumes necessary for passing
Contributor:
I think we should also document the volume names (and optionally mount points) for all the "internal" spark volumes (pod download, config map, etc).
As people will use the pod template to also mount their own volumes, we should document the existing names so we can avoid name clashes.

Member:
+1

As I've commented elsewhere it is easily possible to create invalid specs with this feature because Spark will create certain volumes, config maps etc with known name patterns that users need to avoid

Contributor:
I have done exactly that even w/o this feature - I was trying to use the spark operator to mount a config map and accidentally hit upon the spark config volume on my first try :)

Contributor:
+1. As I mentioned before we need to know the implications of whatever property we expose.

Contributor:
One way we can avoid conflicting volumes entirely is by randomizing the name of the volumes added by features, e.g. appending some UUID or at least some large integer. I think keeping running documentation on all volumes we add from features is too much overhead. If we run into these conflicts often then we can do this, but I think it's fine not to block merging on that.

Either way though I think again, the validation piece can be done separately from this PR. I wouldn't consider that documentation as blocking on this merging. Thoughts?

spark conf and pod template files.
</td>
</tr>
</table>

#### Container spec

The following affect the driver and executor containers. All other containers in the pod spec will be unaffected.

<table class="table">
<tr><th>Container spec key</th><th>Modified value</th><th>Description</th></tr>
<tr>
<td>env</td>
<td>Adds env variables from <code>spark.kubernetes.driverEnv.[EnvironmentVariableName]</code></td>
<td>
Spark will add driver env variables from <code>spark.kubernetes.driverEnv.[EnvironmentVariableName]</code>, and
executor env variables from <code>spark.executorEnv.[EnvironmentVariableName]</code>.
</td>
</tr>
<tr>
<td>image</td>
<td>Value of <code>spark.kubernetes.{driver,executor}.container.image</code></td>
<td>
The image will be defined by the spark configurations.
</td>
</tr>
<tr>
<td>imagePullPolicy</td>
<td>Value of <code>spark.kubernetes.container.image.pullPolicy</code></td>
<td>
Spark will override the pull policy for both driver and executors.
</td>
</tr>
<tr>
<td>name</td>
<td>See description.</td>
<td>
The driver and executor container names are defined by <code>spark.kubernetes.{driver,executor}.containerName</code>.
If they don't exist in the pod template file, then an error will be thrown.
</td>
</tr>
<tr>
<td>resources</td>
<td>See description</td>
<td>
    The CPU limit is set by <code>spark.kubernetes.{driver,executor}.limit.cores</code>. The CPU request is set by
    <code>spark.{driver,executor}.cores</code>. The memory request and limit are set by summing the values of
    <code>spark.{driver,executor}.memory</code> and <code>spark.{driver,executor}.memoryOverhead</code>.

</td>
</tr>
<tr>
<td>volumeMounts</td>
    <td>Adds volume mounts from <code>spark.kubernetes.driver.volumes.[VolumeType].[VolumeName].mount.{path,readOnly}</code></td>
Contributor:
Just checking, is it add or replace? I'm hoping one could use this to mount unsupported volume types, like config maps or secrets, in addition to those managed by spark.

Contributor:
@aditanase that was the purpose of the design doc: https://docs.google.com/document/d/1pcyH5f610X2jyJW9WbWHnj8jktQPLlbbmmUwdeK4fJk capture what behavior we want for each case.

Contributor:
Volumes should be additive. If names are duplicated though I'd expect K8s to throw an error.

<td>
    Spark will add volume mounts as specified by the Spark conf, as well as additional mounts necessary for passing the
    Spark conf and pod template files.
</td>
</tr>
</table>
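The memory arithmetic in the resources row above can be sketched as follows; the 10% overhead factor and 384 MiB floor are assumed defaults for illustration, not values stated in this table:

```shell
# Sketch of how the container memory request/limit is derived:
# spark.{driver,executor}.memory plus the memory overhead
# (the larger of a percentage of the heap and a fixed minimum).
mem_mib=1024            # e.g. spark.driver.memory=1g
overhead_pct=10         # assumed default overhead factor (10%)
min_overhead_mib=384    # assumed minimum overhead

overhead_mib=$(( mem_mib * overhead_pct / 100 ))
if [ "$overhead_mib" -lt "$min_overhead_mib" ]; then
  overhead_mib=$min_overhead_mib
fi

total_mib=$(( mem_mib + overhead_mib ))
echo "${total_mib}Mi"   # prints 1408Mi
```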
@@ -225,6 +225,30 @@ private[spark] object Config extends Logging {
"Ensure that major Python version is either Python2 or Python3")
.createWithDefault("2")

val KUBERNETES_DRIVER_CONTAINER_NAME =
ConfigBuilder("spark.kubernetes.driver.containerName")
Contributor:
Is there any way to do this via the containers container array from the pod template?

Contributor:
I prefer to make it explicit, rather than e.g. "pick the first container in the list".

Contributor:
If you're saying that manipulating containers doesn't give specific control over which name is assigned to which container, then I agree

Contributor:
@mccheah it raises interesting question: a user might use this feature to add containers, e.g. side-car proxies, service mesh, etc. In fact, I'd want users to be able to do this. If there are multiple containers defined, we may need a convention for identifying which one is driver/executor. Alternatives might include additive-only, like labels, or disallowing, but supporting this seems very desirable.

Contributor:
This feature should support using multiple containers, in which case the user needs to specify which container is running the Spark process. Using a configuration option for that seems like the most straightforward solution.

Contributor:
Similarly, what if the user gives the driver container name in the driver template but forgets to specify spark.kubernetes.driver.containerName? Requiring users to set the container name explicitly and additionally through another config key sounds a bit awkward to me, particularly when there's only the Spark container itself.

Contributor:
Is there confusion because there's an existing configuration option also? I think the existing configuration option sets the driver container name when no yml is specified. But perhaps the interpretation of this configuration value should change when the pod template is provided.

Contributor:
Yes, the semantic of this key will change with the template option. So with the template, there're two sources of driver container name, and we need to resolve conflicts in certain cases, e.g., if the name is specified in only one source or if the two sources mismatch.

Contributor:
Wait, I don't think there's an existing configuration option. I believe the container name is currently just hard coded in Constants.

However, I see your concern about using spark.kubernetes.driver.containerName for multiple purposes. In that case, it's definitely easier to reason about with fewer moving pieces, and it sounds like simplest is best. I'll remove the container name configuration and just stick with first container = driver container. This just means that the only way to configure the container name will be through a pod template. I'll also clarify in the docs

Contributor:
You are right. We were confusing the one for driver pod name with this one.

.doc("The name of the driver container within the driver pod template file")
.stringConf
.createWithDefault("spark-kubernetes-driver")

val KUBERNETES_EXECUTOR_CONTAINER_NAME =
ConfigBuilder("spark.kubernetes.executor.containerName")
.doc("The name of the executor container within the executor pod template file")
.stringConf
.createWithDefault("spark-kubernetes-executor")

val KUBERNETES_DRIVER_PODTEMPLATE_FILE =
ConfigBuilder("spark.kubernetes.driver.podTemplateFile")
.doc("File containing a template pod spec for the driver")
.stringConf
.createOptional

val KUBERNETES_EXECUTOR_PODTEMPLATE_FILE =
ConfigBuilder("spark.kubernetes.executor.podTemplateFile")
.doc("File containing a template pod spec for executors")
.stringConf
.createOptional

val KUBERNETES_AUTH_SUBMISSION_CONF_PREFIX =
"spark.kubernetes.authenticate.submission"

@@ -74,8 +74,14 @@ private[spark] object Constants {
val ENV_R_PRIMARY = "R_PRIMARY"
val ENV_R_ARGS = "R_APP_ARGS"

// Pod spec templates
val EXECUTOR_POD_SPEC_TEMPLATE_FILE_NAME = "pod-spec-template.yml"
val EXECUTOR_POD_SPEC_TEMPLATE_MOUNTHPATH = "/opt/spark/pod-template"
Contributor:
Spelling, think we want EXECUTOR_POD_SPEC_TEMPLATE_MOUNTPATH?

Contributor:
Quick ping here

val POD_TEMPLATE_VOLUME = "podspec-volume"
Contributor (@skonto, Aug 29, 2018):

nit: s/podspec-volume/pod-template-volume
You are passing the whole template right?

Contributor:
Ping here

val POD_TEMPLATE_CONFIGMAP = "podspec-configmap"
val POD_TEMPLATE_KEY = "podspec-configmap-key"

// Miscellaneous
val KUBERNETES_MASTER_INTERNAL_URL = "https://kubernetes.default.svc"
val DRIVER_CONTAINER_NAME = "spark-kubernetes-driver"
val MEMORY_OVERHEAD_MIN_MIB = 384L
}
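Taken together, the template constants in the diff above suggest where the executor template lands inside the driver-created pod; assuming the mount path and file name are joined with a `/` (an illustrative assumption, not shown in this hunk):

```shell
# Illustrative only: combine the two constants from the diff above.
mount_path="/opt/spark/pod-template"   # EXECUTOR_POD_SPEC_TEMPLATE_MOUNTHPATH
file_name="pod-spec-template.yml"      # EXECUTOR_POD_SPEC_TEMPLATE_FILE_NAME
template_path="${mount_path}/${file_name}"
echo "$template_path"                  # prints /opt/spark/pod-template/pod-spec-template.yml
```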
@@ -24,8 +24,9 @@ private[spark] case class KubernetesDriverSpec(
systemProperties: Map[String, String])

private[spark] object KubernetesDriverSpec {
def initialSpec(initialProps: Map[String, String]): KubernetesDriverSpec = KubernetesDriverSpec(
SparkPod.initialPod(),
Seq.empty,
initialProps)
def initialSpec(initialConf: KubernetesConf[KubernetesDriverSpecificConf]): KubernetesDriverSpec =
KubernetesDriverSpec(
SparkPod.initialPod(),
Seq.empty,
Contributor:
NIT: For clarity can you write as:

    KubernetesDriverSpec(
      SparkPod.initialPod(),
      driverKubernetesResources = Seq.empty,
      initialConf.sparkConf.getAll.toMap)

initialConf.sparkConf.getAll.toMap)
}
@@ -16,10 +16,16 @@
*/
package org.apache.spark.deploy.k8s

import org.apache.spark.SparkConf
import java.io.File

import io.fabric8.kubernetes.client.KubernetesClient
import scala.collection.JavaConverters._

import org.apache.spark.{SparkConf, SparkException}
import org.apache.spark.internal.Logging
import org.apache.spark.util.Utils

private[spark] object KubernetesUtils {
private[spark] object KubernetesUtils extends Logging {

/**
* Extract and parse Spark configuration properties with a given name prefix and
@@ -59,5 +65,22 @@ }
}
}

def loadPodFromTemplate(
kubernetesClient: KubernetesClient,
templateFile: File,
containerName: String): SparkPod = {
try {
val pod = kubernetesClient.pods().load(templateFile).get()
val containers = pod.getSpec.getContainers.asScala
require(containers.map(_.getName).contains(containerName))
SparkPod(pod, containers.filter(_.getName == containerName).head)
} catch {
case e: Exception =>
logError(
s"Encountered exception while attempting to load initial pod spec from file", e)
throw new SparkException("Could not load driver pod from template file.", e)
Contributor:
This error message is misleading, it throws when both executor and driver pod failed to load from its own template. Either remove "driver" or be more specific that its executor or driver.

}
}

def parseMasterUrl(url: String): String = url.substring("k8s://".length)
}
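The `parseMasterUrl` helper above simply strips the `k8s://` scheme prefix from the master URL; the equivalent transformation in shell, for illustration:

```shell
# Same prefix-stripping that parseMasterUrl performs on the master URL.
master="k8s://https://kubernetes.default.svc:443"
api_server="${master#k8s://}"
echo "$api_server"   # prints https://kubernetes.default.svc:443
```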
@@ -80,7 +80,7 @@ private[spark] class BasicDriverFeatureStep(
)
val driverUIPort = SparkUI.getUIPort(conf.sparkConf)
val driverContainer = new ContainerBuilder(pod.container)
.withName(DRIVER_CONTAINER_NAME)
.withName(conf.get(KUBERNETES_DRIVER_CONTAINER_NAME))
.withImage(driverContainerImage)
.withImagePullPolicy(conf.imagePullPolicy())
.addNewPort()
@@ -129,7 +129,7 @@ private[spark] class BasicExecutorFeatureStep(
}

val executorContainer = new ContainerBuilder(pod.container)
.withName("executor")
.withName(kubernetesConf.get(KUBERNETES_EXECUTOR_CONTAINER_NAME))
.withImage(executorContainerImage)
.withImagePullPolicy(kubernetesConf.imagePullPolicy())
.withNewResources()
@@ -163,8 +163,8 @@ private[spark] class BasicExecutorFeatureStep(
val executorPod = new PodBuilder(pod.pod)
.editOrNewMetadata()
.withName(name)
.withLabels(kubernetesConf.roleLabels.asJava)
.withAnnotations(kubernetesConf.roleAnnotations.asJava)
.addToLabels(kubernetesConf.roleLabels.asJava)
.addToAnnotations(kubernetesConf.roleAnnotations.asJava)
.addToOwnerReferences(ownerReference.toSeq: _*)
.endMetadata()
.editOrNewSpec()