google_api_dataproc v0.6.0
GoogleApi.Dataproc.V1.Model.Job
A Cloud Dataproc job resource.
Attributes
- driverControlFilesUri (String.t): Output only. If present, the location of miscellaneous control files which may be used as part of job setup and handling. If not present, control files may be placed in the same location as driver_output_uri. Defaults to: `null`.
- driverOutputResourceUri (String.t): Output only. A URI pointing to the location of the stdout of the job's driver program. Defaults to: `null`.
- hadoopJob (HadoopJob): Job is a Hadoop job. Defaults to: `null`.
- hiveJob (HiveJob): Job is a Hive job. Defaults to: `null`.
- jobUuid (String.t): Output only. A UUID that uniquely identifies a job within the project over time. This is in contrast to a user-settable reference.job_id that may be reused over time. Defaults to: `null`.
- labels (%{optional(String.t) => String.t}): Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but if present must contain 1 to 63 characters and must conform to RFC 1035. No more than 32 labels can be associated with a job. Defaults to: `null`.
- pigJob (PigJob): Job is a Pig job. Defaults to: `null`.
- placement (JobPlacement): Required. Job information, including how, when, and where to run the job. Defaults to: `null`.
- pysparkJob (PySparkJob): Job is a PySpark job. Defaults to: `null`.
- reference (JobReference): Optional. The fully qualified reference to the job, which can be used to obtain the equivalent REST path of the job resource. If this property is not specified when a job is created, the server generates a job_id. Defaults to: `null`.
- scheduling (JobScheduling): Optional. Job scheduling configuration. Defaults to: `null`.
- sparkJob (SparkJob): Job is a Spark job. Defaults to: `null`.
- sparkSqlJob (SparkSqlJob): Job is a SparkSql job. Defaults to: `null`.
- status (JobStatus): Output only. The job status. Additional application-specific status information may be contained in the type_job and yarn_applications fields. Defaults to: `null`.
- statusHistory ([JobStatus]): Output only. The previous job status. Defaults to: `null`.
- yarnApplications ([YarnApplication]): Output only. The collection of YARN applications spun up by this job. Beta Feature: This report is available for testing purposes only. It may be changed before final release. Defaults to: `null`.
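As a rough illustration of how these attributes fit together, the sketch below builds a Job struct for a PySpark job. The nested field names (`mainPythonFileUri` on PySparkJob, `clusterName` on JobPlacement) are assumptions taken from the corresponding nested model modules and are not documented on this page.

```elixir
# A minimal sketch, not a verified request against a live cluster.
# Assumed nested fields: PySparkJob.mainPythonFileUri, JobPlacement.clusterName.
job = %GoogleApi.Dataproc.V1.Model.Job{
  placement: %GoogleApi.Dataproc.V1.Model.JobPlacement{
    clusterName: "example-cluster"
  },
  pysparkJob: %GoogleApi.Dataproc.V1.Model.PySparkJob{
    mainPythonFileUri: "gs://example-bucket/jobs/word_count.py"
  },
  labels: %{"team" => "data-eng", "env" => "dev"}
}
```

Output-only fields such as status, statusHistory, and jobUuid are populated by the service and would normally be left unset when submitting a job.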
Summary
Functions
decode(value, options)
Unwrap a decoded JSON object into its complex fields.
Types
t()
t() :: %GoogleApi.Dataproc.V1.Model.Job{
driverControlFilesUri: any(),
driverOutputResourceUri: any(),
hadoopJob: GoogleApi.Dataproc.V1.Model.HadoopJob.t(),
hiveJob: GoogleApi.Dataproc.V1.Model.HiveJob.t(),
jobUuid: any(),
labels: map(),
pigJob: GoogleApi.Dataproc.V1.Model.PigJob.t(),
placement: GoogleApi.Dataproc.V1.Model.JobPlacement.t(),
pysparkJob: GoogleApi.Dataproc.V1.Model.PySparkJob.t(),
reference: GoogleApi.Dataproc.V1.Model.JobReference.t(),
scheduling: GoogleApi.Dataproc.V1.Model.JobScheduling.t(),
sparkJob: GoogleApi.Dataproc.V1.Model.SparkJob.t(),
sparkSqlJob: GoogleApi.Dataproc.V1.Model.SparkSqlJob.t(),
status: GoogleApi.Dataproc.V1.Model.JobStatus.t(),
statusHistory: [GoogleApi.Dataproc.V1.Model.JobStatus.t()],
yarnApplications: [GoogleApi.Dataproc.V1.Model.YarnApplication.t()]
}
Functions
decode(value, options)
Unwrap a decoded JSON object into its complex fields.
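For a sense of how this is typically used, the snippet below assumes the usual pattern for generated google_api models: the module implements the Poison.Decoder protocol, so passing the struct via Poison's `as:` option triggers decode/2 and rebuilds the nested structs. The JSON payload is illustrative only, not a real API response.

```elixir
# A sketch under the assumption that Poison is the JSON codec in use.
json = ~s({"jobUuid": "b3c7d1e2", "status": {"state": "RUNNING"}})

{:ok, job} = Poison.decode(json, as: %GoogleApi.Dataproc.V1.Model.Job{})

# The nested "status" map is unwrapped into a JobStatus struct by decode/2.
job.jobUuid
#=> "b3c7d1e2"
```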