GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p3beta1_StreamingVideoAnnotationResults (google_api_video_intelligence v0.33.0)
Streaming annotation results corresponding to a portion of the video that is currently being processed. Only ONE type of annotation will be specified in the response.
Attributes
- explicitAnnotation (type: GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p3beta1_ExplicitContentAnnotation.t, default: nil) - Explicit content annotation results.
- frameTimestamp (type: String.t, default: nil) - Timestamp of the processed frame in microseconds.
- labelAnnotations (type: list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p3beta1_LabelAnnotation.t), default: nil) - Label annotation results.
- objectAnnotations (type: list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p3beta1_ObjectTrackingAnnotation.t), default: nil) - Object tracking results.
- shotAnnotations (type: list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p3beta1_VideoSegment.t), default: nil) - Shot annotation results. Each shot is represented as a video segment.
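Because only one annotation type is populated per streaming response, a consumer can simply branch on whichever field is non-nil. A minimal sketch of such a dispatcher follows; the MyApp.StreamingHandler module and the tagged return values are illustrative and not part of this library.

defmodule MyApp.StreamingHandler do
  alias GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p3beta1_StreamingVideoAnnotationResults,
    as: StreamingResults

  # Exactly one annotation field is set per streaming response, so match on
  # whichever one is present and tag the result for downstream handling.
  def handle_results(%StreamingResults{shotAnnotations: shots}) when is_list(shots), do: {:shots, shots}
  def handle_results(%StreamingResults{labelAnnotations: labels}) when is_list(labels), do: {:labels, labels}
  def handle_results(%StreamingResults{objectAnnotations: objects}) when is_list(objects), do: {:objects, objects}
  def handle_results(%StreamingResults{explicitAnnotation: explicit}) when explicit != nil, do: {:explicit, explicit}
  def handle_results(%StreamingResults{}), do: :no_annotations
end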
Summary
Functions
- decode(value, options) - Unwrap a decoded JSON object into its complex fields.
Types
@type t() :: %GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p3beta1_StreamingVideoAnnotationResults{
        explicitAnnotation:
          GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p3beta1_ExplicitContentAnnotation.t()
          | nil,
        frameTimestamp: String.t() | nil,
        labelAnnotations:
          [GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p3beta1_LabelAnnotation.t()]
          | nil,
        objectAnnotations:
          [GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p3beta1_ObjectTrackingAnnotation.t()]
          | nil,
        shotAnnotations:
          [GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p3beta1_VideoSegment.t()]
          | nil
      }
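These generated model structs implement Poison's decoder protocol (via GoogleApi.Gax.ModelBase), so a raw streaming payload can be unwrapped directly into the struct with Poison's as: option. A minimal sketch, assuming the client's usual Poison-based decoding and using an illustrative payload:

alias GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p3beta1_StreamingVideoAnnotationResults,
  as: StreamingResults

# Illustrative streaming payload: only the frame timestamp and shot
# annotations are present, matching the one-annotation-per-response contract.
json = ~s({"frameTimestamp": "1200000", "shotAnnotations": []})

{:ok, %StreamingResults{frameTimestamp: timestamp}} =
  Poison.decode(json, as: %StreamingResults{})

IO.puts("processed frame at " <> timestamp <> " microseconds")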