When trying to run `hash` or `cmd` commands with Spark in cluster mode, we hit the same problem we used to have with `ml`: the workers do not have the `apollo` lib, and it is not added to the Spark session using `addPyFile`.
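For context, here is a minimal sketch of the step that is missing in cluster mode: zipping the `apollo` package and shipping it to the workers with `addPyFile`. The master URL and the package path are placeholder assumptions, not the project's actual layout.

```python
# Sketch only: "spark://master:7077" and the "apollo" package path are
# placeholder assumptions, not the actual deployment values.
import shutil

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .master("spark://master:7077")   # non-local master, i.e. cluster mode
    .appName("apollo-hash")
    .getOrCreate()
)

# Zip the local apollo package, keeping the apollo/ prefix so that
# `import apollo` resolves inside worker-side closures and UDFs.
archive = shutil.make_archive("/tmp/apollo_dep", "zip",
                              root_dir=".", base_dir="apollo")
spark.sparkContext.addPyFile(archive)
```

This is roughly what any `--dep-zip`-style mechanism would have to do for every caller that is not `ml` itself.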
I think we should either modify the way the `--dep-zip` flag works so that it adds it, or change the logic:
- when the `-s` flag of `ml` is used to specify a master that is not local, we should add `ml`, `engine` and all other dependencies if the call is not made by a command of the `ml` library, e.g. `apollo` and its dependencies.
- the `--dep-zip` flag should be used to add `ml` dependencies. It will be of no use for us since our workers use the `ml-core` image and already have them, but it will be useful for other users.
- as was pointed out in this issue, I think we should add to the Spark conf by default the flags that will clean up after us, because otherwise it ends up taking a lot of memory (see the sketch after this list).
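On the last point: the issue does not name the exact flags, so the following is only one plausible set of defaults, using Spark's driver-side `ContextCleaner` settings. (The standalone worker cleanup options, `spark.worker.cleanup.*`, are configured on the workers themselves via `SPARK_WORKER_OPTS`, not in the application conf.)

```python
# Plausible defaults only; the exact cleanup flags the issue has in mind
# may differ.
from pyspark import SparkConf

conf = (
    SparkConf()
    # Periodically trigger a driver GC so the ContextCleaner drops
    # out-of-scope shuffle files, broadcasts and cached blocks.
    .set("spark.cleaner.periodicGC.interval", "15min")
    # Block on cleanup calls so resources are actually released before
    # the next job allocates more (Spark's default, made explicit here).
    .set("spark.cleaner.referenceTracking.blocking", "true")
)
```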