AWS.Evidently (aws-elixir v0.14.1)

You can use Amazon CloudWatch Evidently to safely validate new features by serving them to a specified percentage of your users while you roll out the feature.

You can monitor the performance of the new feature to help you decide when to ramp up traffic to your users. This helps you reduce risk and identify unintended consequences before you fully launch the feature.

You can also conduct A/B experiments to make feature design decisions based on evidence and data. An experiment can test as many as five variations at once. Evidently collects experiment data and analyzes it using statistical methods. It also provides clear recommendations about which variations perform better. You can test both user-facing features and backend features.

Summary

Functions

batch_evaluate_feature/4 - Assigns feature variations to user sessions.

create_experiment/4 - Creates an Evidently experiment.

create_feature/4 - Creates an Evidently feature that you want to launch or test.

create_launch/4 - Creates a launch of a given feature.

create_project/3 - Creates a project, the logical object in Evidently that can contain features, launches, and experiments.

create_segment/3 - Defines a segment of your audience.

delete_experiment/5 - Deletes an Evidently experiment.

delete_feature/5 - Deletes an Evidently feature.

delete_launch/5 - Deletes an Evidently launch.

delete_project/4 - Deletes an Evidently project.

delete_segment/4 - Deletes a segment.

evaluate_feature/5 - Assigns a feature variation to one given user session.

get_experiment/4 - Returns the details about one experiment.

get_experiment_results/5 - Retrieves the results of a running or completed experiment.

get_feature/4 - Returns the details about one feature.

get_launch/4 - Returns the details about one launch.

get_project/3 - Returns the details about one project.

get_segment/3 - Returns information about the specified segment.

list_experiments/6 - Returns configuration details about all the experiments in the specified project.

list_features/5 - Returns configuration details about all the features in the specified project.

list_launches/6 - Returns configuration details about all the launches in the specified project.

list_projects/4 - Returns configuration details about all the projects in the current Region in your account.

list_segment_references/6 - Finds which experiments or launches are using a specified segment.

list_segments/4 - Returns a list of audience segments that you have created in your account in this Region.

list_tags_for_resource/3 - Displays the tags associated with an Evidently resource.

put_project_events/4 - Sends performance events to Evidently.

start_experiment/5 - Starts an existing experiment.

start_launch/5 - Starts an existing launch.

stop_experiment/5 - Stops an experiment that is currently running.

stop_launch/5 - Stops a launch that is currently running.

tag_resource/4 - Assigns one or more tags (key-value pairs) to the specified CloudWatch Evidently resource.

test_segment_pattern/3 - Tests a rules pattern that you plan to use to create an audience segment.

untag_resource/4 - Removes one or more tags from the specified resource.

update_experiment/5 - Updates an Evidently experiment.

update_feature/5 - Updates an existing feature.

update_launch/5 - Updates a launch of a given feature.

update_project/4 - Updates the description of an existing project.

update_project_data_delivery/4 - Updates the data storage options for this project.

Functions

batch_evaluate_feature(client, project, input, options \\ [])

This operation assigns feature variations to user sessions.

For each user session, you pass in an entityID that represents the user. Evidently then checks the evaluation rules and assigns the variation.

The first rules that are evaluated are the override rules. If the user's entityID matches an override rule, the user is served the variation specified by that rule.

Next, if there is a launch of the feature, the user might be assigned to a variation in the launch. The chance of this depends on the percentage of users that are allocated to that launch. If the user is enrolled in the launch, the variation they are served depends on the allocation of the various feature variations used for the launch.

If the user is not assigned to a launch, and there is an ongoing experiment for this feature, the user might be assigned to a variation in the experiment. The chance of this depends on the percentage of users that are allocated to that experiment. If the user is enrolled in the experiment, the variation they are served depends on the allocation of the various feature variations used for the experiment.

If the user is not assigned to a launch or experiment, they are served the default variation.
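The evaluation order above can be sketched with a minimal aws-elixir request (project, feature, and entity names are illustrative, not real resources):

```elixir
# Request shape for BatchEvaluateFeature: one entry per user session.
# "entityId" identifies the user; "feature" names the feature to evaluate.
input = %{
  "requests" => [
    %{"entityId" => "user-1", "feature" => "new-checkout-flow"},
    %{"entityId" => "user-2", "feature" => "new-checkout-flow"}
  ]
}

# With a client built via AWS.Client.create/3, the call is:
#
#   {:ok, %{"results" => results}, _http_response} =
#     AWS.Evidently.batch_evaluate_feature(client, "my-project", input)
#
# Each result reports the variation served and the reason it was chosen
# (an override rule, a launch, an experiment, or the default variation).
```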

create_experiment(client, project, input, options \\ [])

Creates an Evidently experiment.

Before you create an experiment, you must create the feature to use for the experiment.

An experiment helps you make feature design decisions based on evidence and data. An experiment can test as many as five variations at once. Evidently collects experiment data, analyzes it using statistical methods, and provides clear recommendations about which variations perform better.

You can optionally specify a segment so that the experiment considers only certain audience types, such as user sessions from a certain location or sessions that use a certain internet browser.

Don't use this operation to update an existing experiment. Instead, use UpdateExperiment.

create_feature(client, project, input, options \\ [])

Creates an Evidently feature that you want to launch or test.

You can define up to five variations of a feature, and use these variations in your launches and experiments. A feature must be created in a project. For information about creating a project, see CreateProject. Don't use this operation to update an existing feature. Instead, use UpdateFeature.
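As a sketch of the request shape, here is a feature with two boolean variations (all names are illustrative):

```elixir
# "defaultVariation" names the variation served when no override,
# launch, or experiment applies.
input = %{
  "name" => "new-checkout-flow",
  "variations" => [
    %{"name" => "control", "value" => %{"boolValue" => false}},
    %{"name" => "treatment", "value" => %{"boolValue" => true}}
  ],
  "defaultVariation" => "control"
}

# With a configured client:
#
#   {:ok, %{"feature" => feature}, _http_response} =
#     AWS.Evidently.create_feature(client, "my-project", input)
```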

create_launch(client, project, input, options \\ [])

Creates a launch of a given feature.

Before you create a launch, you must create the feature to use for the launch.

You can use a launch to safely validate new features by serving them to a specified percentage of your users while you roll out the feature. You can monitor the performance of the new feature to help you decide when to ramp up traffic to more users. This helps you reduce risk and identify unintended consequences before you fully launch the feature.

Don't use this operation to update an existing launch. Instead, use UpdateLaunch.

create_project(client, input, options \\ [])

Creates a project, which is the logical object in Evidently that can contain features, launches, and experiments.

Use projects to group similar features together.

To update an existing project, use UpdateProject.
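A minimal request sketch (the project name and description are illustrative):

```elixir
# Minimal CreateProject request body.
input = %{
  "name" => "my-project",
  "description" => "Feature flags and experiments for the checkout service"
}

# With a configured client:
#
#   {:ok, %{"project" => project}, _http_response} =
#     AWS.Evidently.create_project(client, input)
```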

create_segment(client, input, options \\ [])

Use this operation to define a segment of your audience.

A segment is a portion of your audience that shares one or more characteristics. Examples could be Chrome browser users, users in Europe, or Firefox browser users in Europe who also fit other criteria that your application collects, such as age.

Using a segment in an experiment limits that experiment to evaluate only the users who match the segment criteria. Using one or more segments in a launch allows you to define different traffic splits for the different audience segments.

For more information about segment pattern syntax, see Segment rule pattern syntax.

The pattern that you define for a segment is matched against the value of evaluationContext, which is passed into Evidently in the EvaluateFeature operation, when Evidently assigns a feature variation to a user.
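A sketch of the request shape (the segment name and the context keys are illustrative; the keys must match whatever your application sends in evaluationContext):

```elixir
# The segment pattern is a JSON document passed as a string.
input = %{
  "name" => "chrome-europe",
  "pattern" => ~s({"browser": ["Chrome"], "region": ["EU"]})
}

# With a configured client:
#
#   {:ok, %{"segment" => segment}, _http_response} =
#     AWS.Evidently.create_segment(client, input)
```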

delete_experiment(client, experiment, project, input, options \\ [])

Deletes an Evidently experiment.

The feature used for the experiment is not deleted.

To stop an experiment without deleting it, use StopExperiment.

delete_feature(client, feature, project, input, options \\ [])

Deletes an Evidently feature.

delete_launch(client, launch, project, input, options \\ [])

Deletes an Evidently launch.

The feature used for the launch is not deleted.

To stop a launch without deleting it, use StopLaunch.

delete_project(client, project, input, options \\ [])

Deletes an Evidently project.

Before you can delete a project, you must delete all the features that the project contains. To delete a feature, use DeleteFeature.

delete_segment(client, segment, input, options \\ [])

Deletes a segment.

You can't delete a segment that is being used in a launch or experiment, even if that launch or experiment is not currently running.

evaluate_feature(client, feature, project, input, options \\ [])

This operation assigns a feature variation to one given user session.

You pass in an entityID that represents the user. Evidently then checks the evaluation rules and assigns the variation.

The first rules that are evaluated are the override rules. If the user's entityID matches an override rule, the user is served the variation specified by that rule.

If there is a current launch with this feature that uses segment overrides, and if the user session's evaluationContext matches a segment rule defined in a segment override, the configuration in the segment overrides is used. For more information about segments, see CreateSegment and Use segments to focus your audience.

If there is a launch with no segment overrides, the user might be assigned to a variation in the launch. The chance of this depends on the percentage of users that are allocated to that launch. If the user is enrolled in the launch, the variation they are served depends on the allocation of the various feature variations used for the launch.

If the user is not assigned to a launch, and there is an ongoing experiment for this feature, the user might be assigned to a variation in the experiment. The chance of this depends on the percentage of users that are allocated to that experiment.

If the experiment uses a segment, then only user sessions with evaluationContext values that match the segment rule are used in the experiment.

If the user is enrolled in the experiment, the variation they are served depends on the allocation of the various feature variations used for the experiment.

If the user is not assigned to a launch or experiment, they are served the default variation.
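The flow above can be sketched as a single-session evaluation (entity ID, feature, project, and context keys are illustrative):

```elixir
# EvaluateFeature request for one user session; evaluationContext is an
# optional JSON document matched against segment rules.
input = %{
  "entityId" => "user-1",
  "evaluationContext" => ~s({"browser": "Chrome", "region": "EU"})
}

# With a configured client:
#
#   {:ok, %{"variation" => variation, "value" => value, "reason" => reason}, _http_response} =
#     AWS.Evidently.evaluate_feature(client, "new-checkout-flow", "my-project", input)
#
# "reason" indicates why the variation was served (override, launch,
# experiment, or default).
```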

get_experiment(client, experiment, project, options \\ [])

Returns the details about one experiment.

You must already know the experiment name. To retrieve a list of experiments in your account, use ListExperiments.

get_experiment_results(client, experiment, project, input, options \\ [])

Retrieves the results of a running or completed experiment.

No results are available until there have been 100 events for each variation and at least 10 minutes have passed since the start of the experiment. To increase the statistical power, Evidently performs an additional offline p-value analysis at the end of the experiment. Offline p-value analysis can detect statistical significance in some cases where the anytime p-values used during the experiment do not find statistical significance.

Experiment results are available up to 63 days after the start of the experiment. They are not available after that because of CloudWatch data retention policies.

get_feature(client, feature, project, options \\ [])

Returns the details about one feature.

You must already know the feature name. To retrieve a list of features in your account, use ListFeatures.

get_launch(client, launch, project, options \\ [])

Returns the details about one launch.

You must already know the launch name. To retrieve a list of launches in your account, use ListLaunches.

get_project(client, project, options \\ [])

Returns the details about one project.

You must already know the project name. To retrieve a list of projects in your account, use ListProjects.

get_segment(client, segment, options \\ [])

Returns information about the specified segment.

Specify the segment that you want to view by providing its ARN.

list_experiments(client, project, max_results \\ nil, next_token \\ nil, status \\ nil, options \\ [])

Returns configuration details about all the experiments in the specified project.

list_features(client, project, max_results \\ nil, next_token \\ nil, options \\ [])

Returns configuration details about all the features in the specified project.

list_launches(client, project, max_results \\ nil, next_token \\ nil, status \\ nil, options \\ [])

Returns configuration details about all the launches in the specified project.

list_projects(client, max_results \\ nil, next_token \\ nil, options \\ [])

Returns configuration details about all the projects in the current Region in your account.

list_segment_references(client, segment, max_results \\ nil, next_token \\ nil, type, options \\ [])

Use this operation to find which experiments or launches are using a specified segment.

list_segments(client, max_results \\ nil, next_token \\ nil, options \\ [])

Returns a list of audience segments that you have created in your account in this Region.

list_tags_for_resource(client, resource_arn, options \\ [])

Displays the tags associated with an Evidently resource.

put_project_events(client, project, input, options \\ [])

Sends performance events to Evidently.

These events can be used to evaluate a launch or an experiment.
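A hedged sketch of sending one custom event; the payload keys and the epoch-seconds timestamp format shown here are assumptions for illustration:

```elixir
# A custom performance event: "data" carries a JSON payload and
# "timestamp" marks when the event occurred (epoch seconds assumed here;
# the shape inside "data" is up to your metric definitions).
event = %{
  "type" => "aws.evidently.custom",
  "timestamp" => System.os_time(:second),
  "data" => ~s({"details": {"pageLoadTime": 800.0}, "userDetails": {"userId": "user-1"}})
}

input = %{"events" => [event]}

# With a configured client:
#
#   {:ok, %{"failedEventCount" => failed}, _http_response} =
#     AWS.Evidently.put_project_events(client, "my-project", input)
```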

start_experiment(client, experiment, project, input, options \\ [])

Starts an existing experiment.

To create an experiment, use CreateExperiment.

start_launch(client, launch, project, input, options \\ [])

Starts an existing launch.

To create a launch, use CreateLaunch.

stop_experiment(client, experiment, project, input, options \\ [])

Stops an experiment that is currently running.

If you stop an experiment, you can't resume it or restart it.

stop_launch(client, launch, project, input, options \\ [])

Stops a launch that is currently running.

After you stop a launch, you will not be able to resume it or restart it. Also, it will not be evaluated as a rule for traffic allocation, and the traffic that was allocated to the launch will instead be available to the feature's experiment, if there is one. Otherwise, all traffic will be served the default variation after the launch is stopped.

tag_resource(client, resource_arn, input, options \\ [])

Assigns one or more tags (key-value pairs) to the specified CloudWatch Evidently resource.

Projects, features, launches, and experiments can be tagged.

Tags can help you organize and categorize your resources. You can also use them to scope user permissions by granting a user permission to access or change only resources with certain tag values.

Tags don't have any semantic meaning to Amazon Web Services and are interpreted strictly as strings of characters.

You can use the TagResource action with a resource that already has tags. If you specify a new tag key for the resource, this tag is appended to the list of tags associated with the resource. If you specify a tag key that is already associated with the resource, the new tag value that you specify replaces the previous value for that tag.

You can associate as many as 50 tags with a resource.

For more information, see Tagging Amazon Web Services resources.
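A sketch of tagging a project (the account ID, ARN, and tag values are illustrative):

```elixir
# TagResource takes the resource ARN and a map of tag keys to values.
project_arn = "arn:aws:evidently:us-east-1:123456789012:project/my-project"

input = %{"tags" => %{"team" => "growth", "stage" => "beta"}}

# With a configured client:
#
#   {:ok, _body, _http_response} =
#     AWS.Evidently.tag_resource(client, project_arn, input)
```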

test_segment_pattern(client, input, options \\ [])

Use this operation to test a rules pattern that you plan to use to create an audience segment.

For more information about segments, see CreateSegment.
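A sketch of testing a candidate pattern against a sample payload (both keys and values are illustrative):

```elixir
# TestSegmentPattern takes the pattern and a sample payload, both JSON
# documents passed as strings.
input = %{
  "pattern" => ~s({"browser": ["Chrome"]}),
  "payload" => ~s({"browser": "Chrome"})
}

# With a configured client, the response reports whether the payload
# matches the pattern:
#
#   {:ok, %{"match" => match?}, _http_response} =
#     AWS.Evidently.test_segment_pattern(client, input)
```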

untag_resource(client, resource_arn, input, options \\ [])

Removes one or more tags from the specified resource.

update_experiment(client, experiment, project, input, options \\ [])

Updates an Evidently experiment.

Don't use this operation to update an experiment's tag. Instead, use TagResource.

update_feature(client, feature, project, input, options \\ [])

Updates an existing feature.

You can't use this operation to update the tags of an existing feature. Instead, use TagResource.

update_launch(client, launch, project, input, options \\ [])

Updates a launch of a given feature.

Don't use this operation to update the tags of an existing launch. Instead, use TagResource.

update_project(client, project, input, options \\ [])

Updates the description of an existing project.

To create a new project, use CreateProject. Don't use this operation to update the data storage options of a project. Instead, use UpdateProjectDataDelivery.

Don't use this operation to update the tags of a project. Instead, use TagResource.

update_project_data_delivery(client, project, input, options \\ [])

Updates the data storage options for this project.

If you store evaluation events, you can keep them and analyze them on your own. If you choose not to store evaluation events, Evidently deletes them after using them to produce metrics and other experiment results that you can view.

You can't specify both cloudWatchLogs and s3Destination in the same operation.
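A sketch of choosing S3 as the destination (the bucket and prefix are illustrative); pass a cloudWatchLogs key instead to use CloudWatch Logs, but never both in one call:

```elixir
# Store evaluation events in S3.
input = %{
  "s3Destination" => %{"bucket" => "my-evidently-events", "prefix" => "checkout/"}
}

# With a configured client:
#
#   {:ok, %{"project" => project}, _http_response} =
#     AWS.Evidently.update_project_data_delivery(client, "my-project", input)
```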