GoogleApi.Dataproc.V1.Model.SparkStandaloneAutoscalingConfig (google_api_dataproc v0.54.0)

Basic autoscaling configurations for Spark Standalone.

Attributes

  • gracefulDecommissionTimeout (type: String.t, default: nil) - Required. Timeout for graceful decommissioning of Spark workers. Specifies the duration to wait for Spark workers to complete Spark decommissioning tasks before forcefully removing them. Only applicable to downscaling operations. Bounds: [0s, 1d].
  • removeOnlyIdleWorkers (type: boolean(), default: nil) - Optional. Remove only idle workers when scaling down the cluster.
  • scaleDownFactor (type: float(), default: nil) - Required. Fraction of required executors to remove from Spark Standalone clusters. A scale-down factor of 1.0 will result in scaling down so that there are no more executors for the Spark Job (more aggressive scaling). A scale-down factor closer to 0 will result in a smaller magnitude of scaling down (less aggressive scaling). Bounds: [0.0, 1.0].
  • scaleDownMinWorkerFraction (type: float(), default: nil) - Optional. Minimum scale-down threshold as a fraction of total cluster size before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at least a 2-worker scale-down for the cluster to scale. A threshold of 0 means the autoscaler will scale down on any recommended change. Bounds: [0.0, 1.0]. Default: 0.0.
  • scaleUpFactor (type: float(), default: nil) - Required. Fraction of required workers to add to Spark Standalone clusters. A scale-up factor of 1.0 will result in scaling up so that there are no more required workers for the Spark Job (more aggressive scaling). A scale-up factor closer to 0 will result in a smaller magnitude of scaling up (less aggressive scaling). Bounds: [0.0, 1.0].
  • scaleUpMinWorkerFraction (type: float(), default: nil) - Optional. Minimum scale-up threshold as a fraction of total cluster size before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at least a 2-worker scale-up for the cluster to scale. A threshold of 0 means the autoscaler will scale up on any recommended change. Bounds: [0.0, 1.0]. Default: 0.0. See the sketch after this list for an example configuration.
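
The following is a minimal sketch of building this struct directly in Elixir. All field values are hypothetical illustrations (a 30-minute decommission timeout, mid-range scaling factors), not library defaults or recommendations:

config = %GoogleApi.Dataproc.V1.Model.SparkStandaloneAutoscalingConfig{
  # Hypothetical: wait up to 30 minutes for Spark workers to finish
  # decommissioning before forceful removal (bounds: 0s to 1d).
  gracefulDecommissionTimeout: "1800s",
  # Only remove workers that are idle when scaling down.
  removeOnlyIdleWorkers: true,
  # Remove half of the removable executors on each scale-down.
  scaleDownFactor: 0.5,
  # Scale down only when at least 10% of the cluster would be removed,
  # e.g. at least 2 workers in a 20-worker cluster.
  scaleDownMinWorkerFraction: 0.1,
  # Add all required workers at once on scale-up (most aggressive).
  scaleUpFactor: 1.0,
  # Scale up on any recommended change (the default threshold).
  scaleUpMinWorkerFraction: 0.0
}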

Summary

Functions

decode(value, options)

Unwrap a decoded JSON object into its complex fields.

Types

@type t() :: %GoogleApi.Dataproc.V1.Model.SparkStandaloneAutoscalingConfig{
  gracefulDecommissionTimeout: String.t() | nil,
  removeOnlyIdleWorkers: boolean() | nil,
  scaleDownFactor: float() | nil,
  scaleDownMinWorkerFraction: float() | nil,
  scaleUpFactor: float() | nil,
  scaleUpMinWorkerFraction: float() | nil
}

Functions

decode(value, options)

@spec decode(struct(), keyword()) :: struct()

Unwrap a decoded JSON object into its complex fields.
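
In the generated google_api_* clients, decode/2 is normally invoked indirectly through Poison when a JSON response body is deserialized into this model. A minimal sketch, assuming a raw JSON string bound to json (a hypothetical example payload):

# Hypothetical JSON payload for illustration.
json = ~s({"gracefulDecommissionTimeout": "1800s", "scaleUpFactor": 1.0})

# Decode the JSON into the model struct via Poison's `as:` option.
config =
  Poison.decode!(json,
    as: %GoogleApi.Dataproc.V1.Model.SparkStandaloneAutoscalingConfig{}
  )

# config.scaleUpFactor is now 1.0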