GoogleApi.Dataproc.V1.Model.SparkSqlJob (google_api_dataproc v0.59.0)


A Dataproc job for running Apache Spark SQL (https://spark.apache.org/sql/) queries.

Attributes

  • jarFileUris (type: list(String.t), default: nil) - Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
  • loggingConfig (type: GoogleApi.Dataproc.V1.Model.LoggingConfig.t, default: nil) - Optional. The runtime log config for job execution.
  • properties (type: map(), default: nil) - Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API might be overwritten.
  • queryFileUri (type: String.t, default: nil) - The HCFS URI of the script that contains SQL queries.
  • queryList (type: GoogleApi.Dataproc.V1.Model.QueryList.t, default: nil) - A list of queries.
  • scriptVariables (type: map(), default: nil) - Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
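As an illustration, the attributes above map one-to-one onto the JSON body of a SparkSqlJob. The sketch below builds such a payload as a plain Elixir map; the bucket path, query, and variable values are hypothetical, and `scriptVariables` supplies the value substituted for the `${region}` placeholder:

```elixir
# Illustrative SparkSqlJob payload; all concrete values are made up.
spark_sql_job = %{
  "queryList" => %{
    "queries" => ["SELECT * FROM events WHERE region = '${region}'"]
  },
  # Equivalent to running `SET region="emea";` before the queries.
  "scriptVariables" => %{"region" => "emea"},
  # Extra jars added to the Spark CLASSPATH.
  "jarFileUris" => ["gs://example-bucket/udfs.jar"],
  # SparkConf properties for the job.
  "properties" => %{"spark.sql.shuffle.partitions" => "24"}
}
```

A map like this could then be passed as the `sparkSqlJob` field of a job submission.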

Summary

Types

t()

Functions

decode(value, options)

Unwrap a decoded JSON object into its complex fields.

Types

t()

@type t() :: %GoogleApi.Dataproc.V1.Model.SparkSqlJob{
  jarFileUris: [String.t()] | nil,
  loggingConfig: GoogleApi.Dataproc.V1.Model.LoggingConfig.t() | nil,
  properties: map() | nil,
  queryFileUri: String.t() | nil,
  queryList: GoogleApi.Dataproc.V1.Model.QueryList.t() | nil,
  scriptVariables: map() | nil
}

Functions

decode(value, options)

@spec decode(struct(), keyword()) :: struct()

Unwrap a decoded JSON object into its complex fields.
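To illustrate what "unwrap into complex fields" means here, the following self-contained sketch uses minimal stand-in structs (not the real generated modules) and promotes the nested `queryList` map into its struct, analogous to what `decode/2` does for this model:

```elixir
defmodule QueryList do
  # Stand-in for GoogleApi.Dataproc.V1.Model.QueryList.
  defstruct queries: nil
end

defmodule SparkSqlJob do
  # Stand-in for GoogleApi.Dataproc.V1.Model.SparkSqlJob.
  defstruct [:jarFileUris, :queryList, :scriptVariables]

  # Minimal analogue of decode/2: promote the nested queryList
  # map into a QueryList struct, leaving scalar fields untouched.
  def decode(%__MODULE__{queryList: %{} = ql} = job, _opts) do
    %{job | queryList: struct(QueryList, ql)}
  end

  def decode(job, _opts), do: job
end

job =
  struct(SparkSqlJob, %{
    jarFileUris: ["gs://example-bucket/udfs.jar"],
    queryList: %{queries: ["SELECT 1"]}
  })

decoded = SparkSqlJob.decode(job, [])
```

After decoding, `decoded.queryList` is a `%QueryList{}` struct rather than a bare map, which is the shape the rest of the generated client expects.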