GoogleApi.Dataproc.V1.Model.SparkBatch (google_api_dataproc v0.54.0)
A configuration for running an Apache Spark (https://spark.apache.org/) batch workload.
Attributes

- archiveUris (type: list(String.t), default: nil) - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args (type: list(String.t), default: nil) - Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- fileUris (type: list(String.t), default: nil) - Optional. HCFS URIs of files to be placed in the working directory of each executor.
- jarFileUris (type: list(String.t), default: nil) - Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
- mainClass (type: String.t, default: nil) - Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.
- mainJarFileUri (type: String.t, default: nil) - Optional. The HCFS URI of the jar file that contains the main class.
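As a minimal sketch of how these attributes fit together, the struct below configures a batch whose driver entry point is given by mainJarFileUri (alternatively, mainClass plus jarFileUris could be used). All bucket paths and file names are illustrative assumptions, not values from this documentation.

```elixir
# Hypothetical SparkBatch configuration; gs:// paths are made-up examples.
batch = %GoogleApi.Dataproc.V1.Model.SparkBatch{
  # Entry point: a jar whose manifest names the main class.
  mainJarFileUri: "gs://example-bucket/jars/wordcount.jar",
  # Arguments passed to the driver (never pass --conf here; see `args` above).
  args: ["--input", "gs://example-bucket/data/input.txt"],
  # Extra jars added to the driver and task classpath.
  jarFileUris: ["gs://example-bucket/jars/deps.jar"]
}
```

Since mainClass and mainJarFileUri both identify the driver entry point, typically only one of them is set on a given batch.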
Summary
Functions
Unwrap a decoded JSON object into its complex fields.