google_api_dataproc v0.6.0
GoogleApi.Dataproc.V1.Model.SparkJob
A Cloud Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN.
Attributes
- archiveUris ([String.t]): Optional. HCFS URIs of archives to be extracted in the working directory of Spark drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip. Defaults to: null.
- args ([String.t]): Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. Defaults to: null.
- fileUris ([String.t]): Optional. HCFS URIs of files to be copied to the working directory of Spark drivers and distributed tasks. Useful for naively parallel tasks. Defaults to: null.
- jarFileUris ([String.t]): Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks. Defaults to: null.
- loggingConfig (LoggingConfig): Optional. The runtime log config for job execution. Defaults to: null.
- mainClass (String.t): The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris. Defaults to: null.
- mainJarFileUri (String.t): The HCFS URI of the jar file that contains the main class. Defaults to: null.
- properties (%{optional(String.t) => String.t}): Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code. Defaults to: null.
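As a minimal sketch of how these attributes map onto the struct, assuming the struct fields mirror the camelCase attribute names above; the bucket, jar, arguments, and property values are hypothetical placeholders. A given job identifies its driver with either mainClass or mainJarFileUri, not both.

```elixir
# A minimal sketch: a SparkJob whose driver class is packaged in a jar on
# Cloud Storage. All URIs and values here are hypothetical placeholders.
job = %GoogleApi.Dataproc.V1.Model.SparkJob{
  # Either mainJarFileUri or mainClass identifies the driver, not both.
  mainJarFileUri: "gs://example-bucket/jars/word-count.jar",
  args: ["gs://example-bucket/input/", "gs://example-bucket/output/"],
  properties: %{"spark.executor.memory" => "4g"},
  loggingConfig: %GoogleApi.Dataproc.V1.Model.LoggingConfig{
    driverLogLevels: %{"root" => "INFO"}
  }
}
```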
Summary
Functions
Unwrap a decoded JSON object into its complex fields.
Types
Functions
decode(value, options)
Unwrap a decoded JSON object into its complex fields.
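A hedged usage sketch: in clients generated this way, decode/2 is typically invoked through the Poison.Decoder protocol when JSON is decoded with an as: target struct, so nested complex fields such as loggingConfig come back as model structs rather than plain maps. The payload below is hypothetical.

```elixir
# Hypothetical JSON payload for a SparkJob. Decoding with an `as:` target
# struct lets decode/2 rebuild the nested loggingConfig map as a
# %GoogleApi.Dataproc.V1.Model.LoggingConfig{} struct.
json = ~s({
  "mainClass": "org.example.WordCount",
  "jarFileUris": ["gs://example-bucket/jars/word-count.jar"],
  "loggingConfig": {"driverLogLevels": {"root": "INFO"}}
})

job = Poison.decode!(json, as: %GoogleApi.Dataproc.V1.Model.SparkJob{})
```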