GoogleApi.MachineLearning.V1.Model.GoogleCloudMlV1_TrainingInput (google_api_machine_learning v0.28.1)

Represents input parameters for a training job. When using the gcloud command to submit your training job, you can specify the input parameters as command-line arguments and/or in a YAML configuration file referenced from the --config command-line argument. For details, see the guide to submitting a training job.
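
The snippet below is a minimal sketch of submitting a training job through this library instead of through gcloud. It assumes the GoogleCloudMlV1_Job model, the Connection module, and the Projects API's ml_projects_jobs_create function generated elsewhere in this package, plus a valid OAuth2 token; the project ID, bucket, package, and module names are placeholders.

# Build the training input and wrap it in a job (all values are illustrative).
conn = GoogleApi.MachineLearning.V1.Connection.new("your-oauth2-token")

training_input = %GoogleApi.MachineLearning.V1.Model.GoogleCloudMlV1_TrainingInput{
  scaleTier: "BASIC",
  packageUris: ["gs://my-bucket/trainer-0.1.tar.gz"],
  pythonModule: "trainer.task",
  region: "us-central1",
  runtimeVersion: "2.11",
  pythonVersion: "3.7",
  jobDir: "gs://my-bucket/output"
}

job = %GoogleApi.MachineLearning.V1.Model.GoogleCloudMlV1_Job{
  jobId: "training_job_001",
  trainingInput: training_input
}

# Create the job under the given project; the job body is passed as an optional param.
{:ok, _created_job} =
  GoogleApi.MachineLearning.V1.Api.Projects.ml_projects_jobs_create(
    conn,
    "my-project-id",
    body: job
  )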

Attributes

  • args (type: list(String.t), default: nil) - Optional. Command-line arguments passed to the training application when it starts. If your job uses a custom container, then the arguments are passed to the container's ENTRYPOINT command.
  • enableWebAccess (type: boolean(), default: nil) - Optional. Whether you want AI Platform Training to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by TrainingOutput.web_access_uris or HyperparameterOutput.web_access_uris (within TrainingOutput.trials).
  • encryptionConfig (type: GoogleApi.MachineLearning.V1.Model.GoogleCloudMlV1_EncryptionConfig.t, default: nil) - Optional. Options for using customer-managed encryption keys (CMEK) to protect resources created by a training job, instead of using Google's default encryption. If this is set, then all resources created by the training job will be encrypted with the customer-managed encryption key that you specify. Learn how and when to use CMEK with AI Platform Training.
  • evaluatorConfig (type: GoogleApi.MachineLearning.V1.Model.GoogleCloudMlV1_ReplicaConfig.t, default: nil) - Optional. The configuration for evaluators. You should only set evaluatorConfig.acceleratorConfig if evaluatorType is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training. Set evaluatorConfig.imageUri only if you build a custom image for your evaluator. If evaluatorConfig.imageUri has not been set, AI Platform uses the value of masterConfig.imageUri. Learn more about configuring custom containers.
  • evaluatorCount (type: String.t, default: nil) - Optional. The number of evaluator replicas to use for the training job. Each replica in the cluster will be of the type specified in evaluator_type. This value can only be used when scale_tier is set to CUSTOM. If you set this value, you must also set evaluator_type. The default value is zero.
  • evaluatorType (type: String.t, default: nil) - Optional. Specifies the type of virtual machine to use for your training job's evaluator nodes. The supported values are the same as those described in the entry for masterType. This value must be consistent with the category of machine type that masterType uses. In other words, both must be Compute Engine machine types or both must be legacy machine types. This value must be present when scaleTier is set to CUSTOM and evaluatorCount is greater than zero.
  • hyperparameters (type: GoogleApi.MachineLearning.V1.Model.GoogleCloudMlV1_HyperparameterSpec.t, default: nil) - Optional. The set of Hyperparameters to tune.
  • jobDir (type: String.t, default: nil) - Optional. A Google Cloud Storage path in which to store training outputs and other data needed for training. This path is passed to your TensorFlow program as the '--job-dir' command-line argument. The benefit of specifying this field is that Cloud ML validates the path for use in training.
  • masterConfig (type: GoogleApi.MachineLearning.V1.Model.GoogleCloudMlV1_ReplicaConfig.t, default: nil) - Optional. The configuration for your master worker. You should only set masterConfig.acceleratorConfig if masterType is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training. Set masterConfig.imageUri only if you build a custom image. Only one of masterConfig.imageUri and runtimeVersion should be set. Learn more about configuring custom containers.
  • masterType (type: String.t, default: nil) - Optional. Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field; see the list of compatible Compute Engine machine types. Alternatively, you can use certain legacy machine types in this field; see the list of legacy machine types. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPUs.
  • network (type: String.t, default: nil) - Optional. The full name of the Compute Engine network to which the Job is peered. For example, projects/12345/global/networks/myVPC. The format of this field is projects/{project}/global/networks/{network}, where {project} is a project number (like 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the Job is not peered with any network. Learn about using VPC Network Peering.
  • packageUris (type: list(String.t), default: nil) - Required. The Google Cloud Storage location of the packages with the training program and any additional dependencies. The maximum number of package URIs is 100.
  • parameterServerConfig (type: GoogleApi.MachineLearning.V1.Model.GoogleCloudMlV1_ReplicaConfig.t, default: nil) - Optional. The configuration for parameter servers. You should only set parameterServerConfig.acceleratorConfig if parameterServerType is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training. Set parameterServerConfig.imageUri only if you build a custom image for your parameter server. If parameterServerConfig.imageUri has not been set, AI Platform uses the value of masterConfig.imageUri. Learn more about configuring custom containers.
  • parameterServerCount (type: String.t, default: nil) - Optional. The number of parameter server replicas to use for the training job. Each replica in the cluster will be of the type specified in parameter_server_type. This value can only be used when scale_tier is set to CUSTOM. If you set this value, you must also set parameter_server_type. The default value is zero.
  • parameterServerType (type: String.t, default: nil) - Optional. Specifies the type of virtual machine to use for your training job's parameter server. The supported values are the same as those described in the entry for master_type. This value must be consistent with the category of machine type that masterType uses. In other words, both must be Compute Engine machine types or both must be legacy machine types. This value must be present when scaleTier is set to CUSTOM and parameter_server_count is greater than zero.
  • pythonModule (type: String.t, default: nil) - Required. The Python module name to run after installing the packages.
  • pythonVersion (type: String.t, default: nil) - Optional. The version of Python used in training. You must either specify this field or specify masterConfig.imageUri. The following Python versions are available: Python '3.7' is available when runtime_version is set to '1.15' or later; Python '3.5' is available when runtime_version is set to a version from '1.4' to '1.14'; Python '2.7' is available when runtime_version is set to '1.15' or earlier. Read more about the Python versions available for each runtime version.
  • region (type: String.t, default: nil) - Required. The region to run the training job in. See the available regions for AI Platform Training.
  • runtimeVersion (type: String.t, default: nil) - Optional. The AI Platform runtime version to use for training. You must either specify this field or specify masterConfig.imageUri. For more information, see the runtime version list and learn how to manage runtime versions.
  • scaleTier (type: String.t, default: nil) - Required. Specifies the machine types and the number of replicas for workers and parameter servers (a CUSTOM configuration is sketched after this list).
  • scheduling (type: GoogleApi.MachineLearning.V1.Model.GoogleCloudMlV1_Scheduling.t, default: nil) - Optional. Scheduling options for a training job.
  • serviceAccount (type: String.t, default: nil) - Optional. The email address of a service account to use when running the training application. You must have the iam.serviceAccounts.actAs permission for the specified service account. In addition, the AI Platform Training Google-managed service account must have the roles/iam.serviceAccountAdmin role for the specified service account. Learn more about configuring a service account. If not specified, the AI Platform Training Google-managed service account is used by default.
  • useChiefInTfConfig (type: boolean(), default: nil) - Optional. Use chief instead of master in the TF_CONFIG environment variable when training with a custom container. Defaults to false. Learn more about this field. This field has no effect for training jobs that don't use a custom container.
  • workerConfig (type: GoogleApi.MachineLearning.V1.Model.GoogleCloudMlV1_ReplicaConfig.t, default: nil) - Optional. The configuration for workers. You should only set workerConfig.acceleratorConfig if workerType is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training. Set workerConfig.imageUri only if you build a custom image for your worker. If workerConfig.imageUri has not been set, AI Platform uses the value of masterConfig.imageUri. Learn more about configuring custom containers.
  • workerCount (type: String.t, default: nil) - Optional. The number of worker replicas to use for the training job. Each replica in the cluster will be of the type specified in worker_type. This value can only be used when scale_tier is set to CUSTOM. If you set this value, you must also set worker_type. The default value is zero.
  • workerType (type: String.t, default: nil) - Optional. Specifies the type of virtual machine to use for your training job's worker nodes. The supported values are the same as those described in the entry for masterType. This value must be consistent with the category of machine type that masterType uses. In other words, both must be Compute Engine machine types or both must be legacy machine types. If you use cloud_tpu for this value, see special instructions for configuring a custom TPU machine. This value must be present when scaleTier is set to CUSTOM and workerCount is greater than zero.
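
Because scaleTier, the machine-type fields, and the replica counts constrain one another, the following sketch shows a CUSTOM configuration; the machine types, counts, bucket, and module names are illustrative placeholders only.

# CUSTOM scale tier: masterType is required, and each non-zero count requires
# the matching *Type field. Note that the count fields are strings in this model.
custom_input = %GoogleApi.MachineLearning.V1.Model.GoogleCloudMlV1_TrainingInput{
  scaleTier: "CUSTOM",
  masterType: "n1-highmem-8",
  workerType: "n1-highmem-8",
  workerCount: "4",
  parameterServerType: "n1-standard-4",
  parameterServerCount: "2",
  packageUris: ["gs://my-bucket/trainer-0.1.tar.gz"],
  pythonModule: "trainer.task",
  region: "us-central1",
  runtimeVersion: "2.11",
  pythonVersion: "3.7"
}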

Summary

Functions

decode(value, options)

Unwrap a decoded JSON object into its complex fields.

Types

@type t() :: %GoogleApi.MachineLearning.V1.Model.GoogleCloudMlV1_TrainingInput{
  args: [String.t()] | nil,
  enableWebAccess: boolean() | nil,
  encryptionConfig:
    GoogleApi.MachineLearning.V1.Model.GoogleCloudMlV1_EncryptionConfig.t()
    | nil,
  evaluatorConfig:
    GoogleApi.MachineLearning.V1.Model.GoogleCloudMlV1_ReplicaConfig.t() | nil,
  evaluatorCount: String.t() | nil,
  evaluatorType: String.t() | nil,
  hyperparameters:
    GoogleApi.MachineLearning.V1.Model.GoogleCloudMlV1_HyperparameterSpec.t()
    | nil,
  jobDir: String.t() | nil,
  masterConfig:
    GoogleApi.MachineLearning.V1.Model.GoogleCloudMlV1_ReplicaConfig.t() | nil,
  masterType: String.t() | nil,
  network: String.t() | nil,
  packageUris: [String.t()] | nil,
  parameterServerConfig:
    GoogleApi.MachineLearning.V1.Model.GoogleCloudMlV1_ReplicaConfig.t() | nil,
  parameterServerCount: String.t() | nil,
  parameterServerType: String.t() | nil,
  pythonModule: String.t() | nil,
  pythonVersion: String.t() | nil,
  region: String.t() | nil,
  runtimeVersion: String.t() | nil,
  scaleTier: String.t() | nil,
  scheduling:
    GoogleApi.MachineLearning.V1.Model.GoogleCloudMlV1_Scheduling.t() | nil,
  serviceAccount: String.t() | nil,
  useChiefInTfConfig: boolean() | nil,
  workerConfig:
    GoogleApi.MachineLearning.V1.Model.GoogleCloudMlV1_ReplicaConfig.t() | nil,
  workerCount: String.t() | nil,
  workerType: String.t() | nil
}

Functions

decode(value, options)

@spec decode(struct(), keyword()) :: struct()

Unwrap a decoded JSON object into its complex fields.
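
In the generated models this function is typically reached through the Poison.Decoder implementation rather than called directly. A minimal sketch, assuming Poison is used for JSON decoding as in the package's other generated models, with placeholder field values:

# Decoding with `as:` builds the struct and lets decode/2 unwrap nested models
# such as masterConfig into their own structs.
json = ~s({"scaleTier": "CUSTOM", "masterType": "n1-highmem-8", "masterConfig": {"imageUri": "gcr.io/my-project/trainer:latest"}})

training_input =
  Poison.decode!(json,
    as: %GoogleApi.MachineLearning.V1.Model.GoogleCloudMlV1_TrainingInput{}
  )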