GoogleApi.Dataproc.V1.Model.SparkRBatch (google_api_dataproc v0.54.0)

A configuration for running an Apache SparkR (https://spark.apache.org/docs/latest/sparkr.html) batch workload.

Attributes

  • archiveUris (type: list(String.t), default: nil) - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
  • args (type: list(String.t), default: nil) - Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
  • fileUris (type: list(String.t), default: nil) - Optional. HCFS URIs of files to be placed in the working directory of each executor.
  • mainRFileUri (type: String.t, default: nil) - Required. The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.
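
As a sketch of how these attributes fit together, a `SparkRBatch` struct might be built as follows. The bucket and file URIs are hypothetical, used only for illustration:

```elixir
# Hypothetical GCS URIs; only mainRFileUri is required.
batch = %GoogleApi.Dataproc.V1.Model.SparkRBatch{
  mainRFileUri: "gs://my-bucket/scripts/analysis.R",
  # Plain driver arguments; avoid flags like --conf that are set via batch properties.
  args: ["--input", "gs://my-bucket/data/input.csv"],
  # Files copied into each executor's working directory.
  fileUris: ["gs://my-bucket/data/lookup.csv"],
  # Archives extracted into each executor's working directory.
  archiveUris: ["gs://my-bucket/deps/packages.tar.gz"]
}
```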


Types

@type t() :: %GoogleApi.Dataproc.V1.Model.SparkRBatch{
  archiveUris: [String.t()] | nil,
  args: [String.t()] | nil,
  fileUris: [String.t()] | nil,
  mainRFileUri: String.t() | nil
}

Functions

decode(value, options)

@spec decode(struct(), keyword()) :: struct()

Unwrap a decoded JSON object into its complex fields.
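
Because `SparkRBatch` contains only scalar and string-list fields, `decode/2` has no nested models to unwrap here; it matters for models that embed other model structs. A hedged sketch of decoding JSON into this struct, assuming the Poison JSON library that the google_api clients depend on:

```elixir
# Hypothetical response body for illustration.
json = ~s({"mainRFileUri": "gs://my-bucket/scripts/analysis.R", "args": ["--input", "in.csv"]})

# Poison's :as option decodes directly into the model struct.
batch = Poison.decode!(json, as: %GoogleApi.Dataproc.V1.Model.SparkRBatch{})

batch.mainRFileUri
# "gs://my-bucket/scripts/analysis.R"
```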