GoogleApi.Dataproc.V1.Model.ClusterConfig (google_api_dataproc v0.48.0)

The cluster config.

Attributes

  • autoscalingConfig (type: GoogleApi.Dataproc.V1.Model.AutoscalingConfig.t, default: nil) - Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
  • configBucket (type: String.t, default: nil) - Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
  • encryptionConfig (type: GoogleApi.Dataproc.V1.Model.EncryptionConfig.t, default: nil) - Optional. Encryption settings for the cluster.
  • endpointConfig (type: GoogleApi.Dataproc.V1.Model.EndpointConfig.t, default: nil) - Optional. Port/endpoint configuration for this cluster.
  • gceClusterConfig (type: GoogleApi.Dataproc.V1.Model.GceClusterConfig.t, default: nil) - Optional. The shared Compute Engine config settings for all instances in a cluster.
  • gkeClusterConfig (type: GoogleApi.Dataproc.V1.Model.GkeClusterConfig.t, default: nil) - Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. This field is mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
  • initializationActions (type: list(GoogleApi.Dataproc.V1.Model.NodeInitializationAction.t), default: nil) - Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget):

        ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role)
        if [[ "${ROLE}" == 'Master' ]]; then
          ... master specific actions ...
        else
          ... worker specific actions ...
        fi
  • lifecycleConfig (type: GoogleApi.Dataproc.V1.Model.LifecycleConfig.t, default: nil) - Optional. Lifecycle setting for the cluster.
  • masterConfig (type: GoogleApi.Dataproc.V1.Model.InstanceGroupConfig.t, default: nil) - Optional. The Compute Engine config settings for the master instance in a cluster.
  • metastoreConfig (type: GoogleApi.Dataproc.V1.Model.MetastoreConfig.t, default: nil) - Optional. Metastore configuration.
  • secondaryWorkerConfig (type: GoogleApi.Dataproc.V1.Model.InstanceGroupConfig.t, default: nil) - Optional. The Compute Engine config settings for additional worker instances in a cluster.
  • securityConfig (type: GoogleApi.Dataproc.V1.Model.SecurityConfig.t, default: nil) - Optional. Security settings for the cluster.
  • softwareConfig (type: GoogleApi.Dataproc.V1.Model.SoftwareConfig.t, default: nil) - Optional. The config settings for software inside the cluster.
  • tempBucket (type: String.t, default: nil) - Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
  • workerConfig (type: GoogleApi.Dataproc.V1.Model.InstanceGroupConfig.t, default: nil) - Optional. The Compute Engine config settings for worker instances in a cluster.
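Since every attribute above is optional, a typical config sets only a handful of them. The sketch below shows such a config as a plain Elixir map with the same field names; in practice the library wraps these in %GoogleApi.Dataproc.V1.Model.ClusterConfig{} and its nested model structs. The bucket names, image version, and machine types here are hypothetical values, not defaults.

```elixir
# Minimal cluster config sketch (field names match the attributes above;
# all concrete values are illustrative).
cluster_config = %{
  # Bucket *names*, not gs:// URIs (see configBucket/tempBucket above).
  configBucket: "my-staging-bucket",
  tempBucket: "my-temp-bucket",
  softwareConfig: %{imageVersion: "2.0"},
  masterConfig: %{numInstances: 1, machineTypeUri: "n1-standard-4"},
  workerConfig: %{numInstances: 2, machineTypeUri: "n1-standard-4"}
}
```

Fields left out (autoscalingConfig, securityConfig, and so on) simply stay nil and take the service-side defaults described above.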

Summary

Functions

Unwrap a decoded JSON object into its complex fields.

Types

Specs

t() :: %GoogleApi.Dataproc.V1.Model.ClusterConfig{
  autoscalingConfig: GoogleApi.Dataproc.V1.Model.AutoscalingConfig.t() | nil,
  configBucket: String.t() | nil,
  encryptionConfig: GoogleApi.Dataproc.V1.Model.EncryptionConfig.t() | nil,
  endpointConfig: GoogleApi.Dataproc.V1.Model.EndpointConfig.t() | nil,
  gceClusterConfig: GoogleApi.Dataproc.V1.Model.GceClusterConfig.t() | nil,
  gkeClusterConfig: GoogleApi.Dataproc.V1.Model.GkeClusterConfig.t() | nil,
  initializationActions:
    [GoogleApi.Dataproc.V1.Model.NodeInitializationAction.t()] | nil,
  lifecycleConfig: GoogleApi.Dataproc.V1.Model.LifecycleConfig.t() | nil,
  masterConfig: GoogleApi.Dataproc.V1.Model.InstanceGroupConfig.t() | nil,
  metastoreConfig: GoogleApi.Dataproc.V1.Model.MetastoreConfig.t() | nil,
  secondaryWorkerConfig:
    GoogleApi.Dataproc.V1.Model.InstanceGroupConfig.t() | nil,
  securityConfig: GoogleApi.Dataproc.V1.Model.SecurityConfig.t() | nil,
  softwareConfig: GoogleApi.Dataproc.V1.Model.SoftwareConfig.t() | nil,
  tempBucket: String.t() | nil,
  workerConfig: GoogleApi.Dataproc.V1.Model.InstanceGroupConfig.t() | nil
}

Functions

Specs

decode(struct(), keyword()) :: struct()

Unwrap a decoded JSON object into its complex fields.
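After JSON decoding, complex fields such as softwareConfig still hold plain maps; decode/2 converts each of them into its model struct. A minimal pure-Elixir sketch of that unwrapping, assuming a simple field-to-module mapping (the module and helper names here are illustrative, not the library's internals):

```elixir
defmodule DecodeSketch do
  # Hypothetical simplified version of the decode/2 unwrapping: for each
  # {field, module} pair, replace the plain map stored under `field` with
  # a struct of `module` built from that map's string keys.
  def unwrap(data, field_modules) do
    Enum.reduce(field_modules, data, fn {field, module}, acc ->
      case Map.get(acc, field) do
        nil -> acc
        %{} = plain -> Map.put(acc, field, struct(module, atomize(plain)))
      end
    end)
  end

  defp atomize(map),
    do: Map.new(map, fn {k, v} -> {String.to_atom(k), v} end)
end
```

The real decode/2 additionally recurses into nested models and lists (for example, each element of initializationActions), but the shape is the same: simple fields pass through untouched, complex fields are rebuilt as structs.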