GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1beta2_VideoAnnotationResults (google_api_video_intelligence v0.33.0)


Annotation results for a single video.

Attributes

  • error (type: GoogleApi.VideoIntelligence.V1.Model.GoogleRpc_Status.t, default: nil) - If set, indicates an error. Note that for a single AnnotateVideoRequest some videos may succeed and some may fail.
  • explicitAnnotation (type: GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1beta2_ExplicitContentAnnotation.t, default: nil) - Explicit content annotation.
  • faceAnnotations (type: list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1beta2_FaceAnnotation.t), default: nil) - Deprecated. Please use face_detection_annotations instead.
  • faceDetectionAnnotations (type: list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1beta2_FaceDetectionAnnotation.t), default: nil) - Face detection annotations.
  • frameLabelAnnotations (type: list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1beta2_LabelAnnotation.t), default: nil) - Label annotations on frame level. There is exactly one element for each unique label.
  • inputUri (type: String.t, default: nil) - Video file location in Cloud Storage.
  • logoRecognitionAnnotations (type: list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1beta2_LogoRecognitionAnnotation.t), default: nil) - Annotations for the logos detected, tracked, and recognized in the video.
  • objectAnnotations (type: list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1beta2_ObjectTrackingAnnotation.t), default: nil) - Annotations for the objects detected and tracked in the video.
  • personDetectionAnnotations (type: list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1beta2_PersonDetectionAnnotation.t), default: nil) - Person detection annotations.
  • segment (type: GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1beta2_VideoSegment.t, default: nil) - Video segment on which the annotation is run.
  • segmentLabelAnnotations (type: list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1beta2_LabelAnnotation.t), default: nil) - Topical label annotations on video level or user-specified segment level. There is exactly one element for each unique label.
  • segmentPresenceLabelAnnotations (type: list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1beta2_LabelAnnotation.t), default: nil) - Presence label annotations on video level or user-specified segment level. There is exactly one element for each unique label. Compared to the existing topical segment_label_annotations, this field presents more fine-grained, segment-level labels detected in video content and is made available only when the client sets LabelDetectionConfig.model to "builtin/latest" in the request.
  • shotAnnotations (type: list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1beta2_VideoSegment.t), default: nil) - Shot annotations. Each shot is represented as a video segment.
  • shotLabelAnnotations (type: list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1beta2_LabelAnnotation.t), default: nil) - Topical label annotations on shot level. There is exactly one element for each unique label.
  • shotPresenceLabelAnnotations (type: list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1beta2_LabelAnnotation.t), default: nil) - Presence label annotations on shot level. There is exactly one element for each unique label. Compared to the existing topical shot_label_annotations, this field presents more fine-grained, shot-level labels detected in video content and is made available only when the client sets LabelDetectionConfig.model to "builtin/latest" in the request.
  • speechTranscriptions (type: list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1beta2_SpeechTranscription.t), default: nil) - Speech transcriptions.
  • textAnnotations (type: list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1beta2_TextAnnotation.t), default: nil) - OCR text detection and tracking. Annotations for the detected text snippets; each carries a list of associated frame information.
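Most of the list-valued attributes above default to nil rather than an empty list, so callers should guard before enumerating. A minimal, dependency-free sketch (using a plain map with the same camelCase keys in place of the actual struct, which lives in the library):

```elixir
# Sketch only: a plain map stands in for the VideoAnnotationResults struct.
# Field names mirror the attributes above; list fields may be nil.
defmodule AnnotationSummary do
  # Summarizes a decoded result: input URI, shot count, and error presence.
  # `results[:key] || []` tolerates the nil defaults documented above.
  def summarize(results) do
    %{
      input_uri: results[:inputUri],
      shot_count: length(results[:shotAnnotations] || []),
      has_error: results[:error] != nil
    }
  end
end

results = %{inputUri: "gs://bucket/video.mp4", shotAnnotations: [%{}, %{}], error: nil}
AnnotationSummary.summarize(results)
# => %{input_uri: "gs://bucket/video.mp4", shot_count: 2, has_error: false}
```

The same guard applies to every list attribute in the struct, since a detector that found nothing leaves its field at nil.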

Summary

Functions

decode(value, options)
Unwrap a decoded JSON object into its complex fields.

Types

t()

@type t() ::
  %GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1beta2_VideoAnnotationResults{
    error: GoogleApi.VideoIntelligence.V1.Model.GoogleRpc_Status.t() | nil,
    explicitAnnotation:
      GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1beta2_ExplicitContentAnnotation.t()
      | nil,
    faceAnnotations:
      [
        GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1beta2_FaceAnnotation.t()
      ]
      | nil,
    faceDetectionAnnotations:
      [
        GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1beta2_FaceDetectionAnnotation.t()
      ]
      | nil,
    frameLabelAnnotations:
      [
        GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1beta2_LabelAnnotation.t()
      ]
      | nil,
    inputUri: String.t() | nil,
    logoRecognitionAnnotations:
      [
        GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1beta2_LogoRecognitionAnnotation.t()
      ]
      | nil,
    objectAnnotations:
      [
        GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1beta2_ObjectTrackingAnnotation.t()
      ]
      | nil,
    personDetectionAnnotations:
      [
        GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1beta2_PersonDetectionAnnotation.t()
      ]
      | nil,
    segment:
      GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1beta2_VideoSegment.t()
      | nil,
    segmentLabelAnnotations:
      [
        GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1beta2_LabelAnnotation.t()
      ]
      | nil,
    segmentPresenceLabelAnnotations:
      [
        GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1beta2_LabelAnnotation.t()
      ]
      | nil,
    shotAnnotations:
      [
        GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1beta2_VideoSegment.t()
      ]
      | nil,
    shotLabelAnnotations:
      [
        GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1beta2_LabelAnnotation.t()
      ]
      | nil,
    shotPresenceLabelAnnotations:
      [
        GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1beta2_LabelAnnotation.t()
      ]
      | nil,
    speechTranscriptions:
      [
        GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1beta2_SpeechTranscription.t()
      ]
      | nil,
    textAnnotations:
      [
        GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1beta2_TextAnnotation.t()
      ]
      | nil
  }

Functions

decode(value, options)

@spec decode(struct(), keyword()) :: struct()

Unwrap a decoded JSON object into its complex fields.
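After generic JSON parsing, nested fields such as segment or shotAnnotations are still plain maps; decode/2 rewraps them into their model structs. A conceptual, dependency-free sketch of that unwrapping (VideoSegment below is a local stand-in for the real GoogleCloudVideointelligenceV1beta2_VideoSegment module, not the library's implementation):

```elixir
# Stand-in for the real VideoSegment model struct.
defmodule VideoSegment do
  defstruct [:startTimeOffset, :endTimeOffset]
end

defmodule Unwrap do
  # Rewraps the "segment" map of a parsed JSON result into a VideoSegment
  # struct, mirroring what decode/2 does for each complex field.
  def unwrap_segment(%{"segment" => seg}) when is_map(seg) do
    %VideoSegment{
      startTimeOffset: seg["startTimeOffset"],
      endTimeOffset: seg["endTimeOffset"]
    }
  end
end

Unwrap.unwrap_segment(%{"segment" => %{"startTimeOffset" => "0s", "endTimeOffset" => "120.5s"}})
# => %VideoSegment{startTimeOffset: "0s", endTimeOffset: "120.5s"}
```

In practice decode/2 is invoked by the library's JSON decoding machinery rather than called directly by application code.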