AWS.Evidently (aws-elixir v1.0.4)
You can use Amazon CloudWatch Evidently to safely validate new features by serving them to a specified percentage of your users while you roll out the feature.
You can monitor the performance of the new feature to help you decide when to ramp up traffic to your users. This helps you reduce risk and identify unintended consequences before you fully launch the feature.
You can also conduct A/B experiments to make feature design decisions based on evidence and data. An experiment can test as many as five variations at once. Evidently collects experiment data and analyzes it using statistical methods. It also provides clear recommendations about which variations perform better. You can test both user-facing features and backend features.
Summary
Functions
This operation assigns a feature variation to user sessions.
Creates an Evidently experiment.
Creates an Evidently feature that you want to launch or test.
Creates a launch of a given feature.
Creates a project, which is the logical object in Evidently that can contain features, launches, and experiments.
Use this operation to define a segment of your audience.
Deletes an Evidently experiment.
Deletes an Evidently feature.
Deletes an Evidently launch.
Deletes an Evidently project.
Deletes a segment.
This operation assigns a feature variation to one given user session.
Returns the details about one experiment.
Retrieves the results of a running or completed experiment.
Returns the details about one feature.
Returns the details about one launch.
Returns the details about one project.
Returns information about the specified segment.
Returns configuration details about all the experiments in the specified project.
Returns configuration details about all the features in the specified project.
Returns configuration details about all the launches in the specified project.
Returns configuration details about all the projects in the current Region in your account.
Use this operation to find which experiments or launches are using a specified segment.
Returns a list of audience segments that you have created in your account in this Region.
Displays the tags associated with an Evidently resource.
Sends performance events to Evidently.
Starts an existing experiment.
Starts an existing launch.
Stops an experiment that is currently running.
Stops a launch that is currently running.
Assigns one or more tags (key-value pairs) to the specified CloudWatch Evidently resource.
Use this operation to test a rules pattern that you plan to use to create an audience segment.
Removes one or more tags from the specified resource.
Updates an Evidently experiment.
Updates an existing feature.
Updates a launch of a given feature.
Updates the description of an existing project.
Updates the data storage options for this project.
Functions
This operation assigns a feature variation to user sessions. For each user session, you pass in an entityID that represents the user. Evidently then checks the evaluation rules and assigns the variation.
The first rules that are evaluated are the override rules. If the user's entityID matches an override rule, the user is served the variation specified by that rule.
Next, if there is a launch of the feature, the user might be assigned to a variation in the launch. The chance of this depends on the percentage of users that are allocated to that launch. If the user is enrolled in the launch, the variation they are served depends on the allocation of the various feature variations used for the launch.
If the user is not assigned to a launch, and there is an ongoing experiment for this feature, the user might be assigned to a variation in the experiment. The chance of this depends on the percentage of users that are allocated to that experiment. If the user is enrolled in the experiment, the variation they are served depends on the allocation of the various feature variations used for the experiment.
If the user is not assigned to a launch or experiment, they are served the default variation.
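The evaluation flow above can be exercised from aws-elixir roughly as follows. This is a minimal sketch: the credentials, project and feature names, and the exact shape of the response map are illustrative assumptions, not values from this documentation.

```elixir
# Evaluate a feature for several user sessions in one call.
# Credentials and all names below are placeholders.
client = AWS.Client.create("ACCESS_KEY_ID", "SECRET_ACCESS_KEY", "us-east-1")

input = %{
  "requests" => [
    %{"entityId" => "user-1", "feature" => "new-checkout"},
    %{"entityId" => "user-2", "feature" => "new-checkout"}
  ]
}

case AWS.Evidently.batch_evaluate_feature(client, "my-project", input) do
  {:ok, %{"results" => results}, _http_response} ->
    # Each result reports the variation served and the reason
    # (override rule, launch, experiment, or default variation).
    Enum.each(results, fn r ->
      IO.inspect({r["entityId"], r["variation"], r["reason"]})
    end)

  {:error, reason} ->
    IO.inspect(reason)
end
```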
Creates an Evidently experiment.
Before you create an experiment, you must create the feature to use for the experiment.
An experiment helps you make feature design decisions based on evidence and data. An experiment can test as many as five variations at once. Evidently collects experiment data and analyzes it by statistical methods, and provides clear recommendations about which variations perform better.
You can optionally specify a segment to have the experiment consider only certain audience types, such as user sessions from a certain location or sessions that use a certain internet browser.
Don't use this operation to update an existing experiment. Instead, use UpdateExperiment.
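A minimal sketch of creating an experiment from aws-elixir, assuming `client` is an `AWS.Client` struct built with `AWS.Client.create/3`. The experiment name, treatments, and metric definition below are illustrative assumptions.

```elixir
# Create a two-treatment experiment on an existing feature.
# All names and the metric definition are placeholders; the metric's
# entityIdKey and valueKey must match the event data you later send.
input = %{
  "name" => "checkout-experiment",
  "treatments" => [
    %{"name" => "control", "feature" => "new-checkout", "variation" => "off"},
    %{"name" => "treatment", "feature" => "new-checkout", "variation" => "on"}
  ],
  "metricGoals" => [
    %{
      "desiredChange" => "INCREASE",
      "metricDefinition" => %{
        "name" => "purchases",
        "entityIdKey" => "userDetails.userId",
        "valueKey" => "details.purchaseValue"
      }
    }
  ]
}

{:ok, _experiment, _http_response} =
  AWS.Evidently.create_experiment(client, "my-project", input)
```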
Creates an Evidently feature that you want to launch or test.
You can define up to five variations of a feature, and use these variations in your launches and experiments. A feature must be created in a project. For information about creating a project, see CreateProject. Don't use this operation to update an existing feature. Instead, use UpdateFeature.
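For example, a boolean feature with two variations might be created like this (a sketch; `client` is an `AWS.Client` struct from `AWS.Client.create/3`, and the names are illustrative):

```elixir
# Create a feature with "off" and "on" variations.
# "defaultVariation" is served when no launch or experiment applies.
input = %{
  "name" => "new-checkout",
  "variations" => [
    %{"name" => "off", "value" => %{"boolValue" => false}},
    %{"name" => "on", "value" => %{"boolValue" => true}}
  ],
  "defaultVariation" => "off"
}

{:ok, _feature, _http_response} =
  AWS.Evidently.create_feature(client, "my-project", input)
```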
Creates a launch of a given feature.
Before you create a launch, you must create the feature to use for the launch.
You can use a launch to safely validate new features by serving them to a specified percentage of your users while you roll out the feature. You can monitor the performance of the new feature to help you decide when to ramp up traffic to more users. This helps you reduce risk and identify unintended consequences before you fully launch the feature.
Don't use this operation to update an existing launch. Instead, use UpdateLaunch.
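A sketch of creating a launch that serves a variation to a slice of traffic, assuming `client` is an `AWS.Client` struct from `AWS.Client.create/3`. The names, start time, and weight below are illustrative; in the Evidently API, group weights are expressed out of 100,000, so 10_000 corresponds to 10% of traffic.

```elixir
# Launch the "on" variation of an existing feature to 10% of traffic.
# Names, the epoch-seconds start time, and the weight are placeholders.
input = %{
  "name" => "checkout-launch",
  "groups" => [
    %{"name" => "early-access", "feature" => "new-checkout", "variation" => "on"}
  ],
  "scheduledSplitsConfig" => %{
    "steps" => [
      %{"groupWeights" => %{"early-access" => 10_000}, "startTime" => 1_700_000_000}
    ]
  }
}

{:ok, _launch, _http_response} =
  AWS.Evidently.create_launch(client, "my-project", input)
```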
Creates a project, which is the logical object in Evidently that can contain features, launches, and experiments.
Use projects to group similar features together.
To update an existing project, use UpdateProject.
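Creating a project is the usual first step; a sketch (the name and description are illustrative, and `client` is an `AWS.Client` struct from `AWS.Client.create/3`):

```elixir
# Create the project that will hold features, launches, and experiments.
input = %{
  "name" => "my-project",
  "description" => "Checkout-flow feature flags and experiments"
}

{:ok, _project, _http_response} = AWS.Evidently.create_project(client, input)
```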
Use this operation to define a segment of your audience.
A segment is a portion of your audience that share one or more characteristics. Examples could be Chrome browser users, users in Europe, or Firefox browser users in Europe who also fit other criteria that your application collects, such as age.
Using a segment in an experiment limits that experiment to evaluate only the users who match the segment criteria. Using one or more segments in a launch allows you to define different traffic splits for the different audience segments.
For more information about segment pattern syntax, see Segment rule pattern syntax.
The pattern that you define for a segment is matched against the value of evaluationContext, which is passed into Evidently in the EvaluateFeature operation, when Evidently assigns a feature variation to a user.
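A sketch of defining a segment, assuming `client` is an `AWS.Client` struct from `AWS.Client.create/3`. The segment name and the pattern document are illustrative; the pattern is a JSON document that Evidently matches against evaluationContext.

```elixir
# Define a segment for Chrome users. The pattern is matched against the
# evaluationContext passed to EvaluateFeature; both values are placeholders.
input = %{
  "name" => "chrome-users",
  "pattern" => ~s({"browser": ["Chrome"]})
}

{:ok, _segment, _http_response} = AWS.Evidently.create_segment(client, input)
```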
delete_experiment(client, experiment, project, input, options \\ [])
Deletes an Evidently experiment.
The feature used for the experiment is not deleted.
To stop an experiment without deleting it, use StopExperiment.
Deletes an Evidently feature.
Deletes an Evidently launch.
The feature used for the launch is not deleted.
To stop a launch without deleting it, use StopLaunch.
Deletes an Evidently project.
Before you can delete a project, you must delete all the features that the project contains. To delete a feature, use DeleteFeature.
Deletes a segment.
You can't delete a segment that is being used in a launch or experiment, even if that launch or experiment is not currently running.
This operation assigns a feature variation to one given user session.
You pass in an entityID that represents the user. Evidently then checks the evaluation rules and assigns the variation.
The first rules that are evaluated are the override rules. If the user's entityID matches an override rule, the user is served the variation specified by that rule.
If there is a current launch with this feature that uses segment overrides, and if the user session's evaluationContext matches a segment rule defined in a segment override, the configuration in the segment overrides is used. For more information about segments, see CreateSegment and Use segments to focus your audience.
If there is a launch with no segment overrides, the user might be assigned to a variation in the launch. The chance of this depends on the percentage of users that are allocated to that launch. If the user is enrolled in the launch, the variation they are served depends on the allocation of the various feature variations used for the launch.
If the user is not assigned to a launch, and there is an ongoing experiment for this feature, the user might be assigned to a variation in the experiment. The chance of this depends on the percentage of users that are allocated to that experiment.
If the experiment uses a segment, then only user sessions with evaluationContext values that match the segment rule are used in the experiment.
If the user is enrolled in the experiment, the variation they are served depends on the allocation of the various feature variations used for the experiment.
If the user is not assigned to a launch or experiment, they are served the default variation.
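The single-session evaluation above can be sketched as follows, assuming `client` is an `AWS.Client` struct from `AWS.Client.create/3`. The feature, project, and context keys are illustrative assumptions.

```elixir
# Evaluate one feature for one user session, passing an evaluationContext
# that segment rules can match against. All values are placeholders.
input = %{
  "entityId" => "user-1",
  "evaluationContext" => ~s({"browser": "Chrome", "location": "EU"})
}

case AWS.Evidently.evaluate_feature(client, "new-checkout", "my-project", input) do
  {:ok, %{"variation" => variation, "reason" => reason}, _http_response} ->
    # "reason" indicates whether an override, launch, experiment, or the
    # default variation produced this assignment.
    IO.puts("serving #{variation} (#{reason})")

  {:error, reason} ->
    IO.inspect(reason)
end
```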
Returns the details about one experiment.
You must already know the experiment name. To retrieve a list of experiments in your account, use ListExperiments.
get_experiment_results(client, experiment, project, input, options \\ [])
Retrieves the results of a running or completed experiment.
No results are available until there have been 100 events for each variation and at least 10 minutes have passed since the start of the experiment. To increase the statistical power, Evidently performs an additional offline p-value analysis at the end of the experiment. Offline p-value analysis can detect statistical significance in some cases where the anytime p-values used during the experiment do not find statistical significance.
Experiment results are available up to 63 days after the start of the experiment. They are not available after that because of CloudWatch data retention policies.
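A sketch of fetching results, assuming `client` is an `AWS.Client` struct from `AWS.Client.create/3`. The experiment, metric, and treatment names are illustrative.

```elixir
# Fetch results for one metric across both treatments. No results exist
# until each variation has 100 events and 10 minutes have elapsed.
input = %{
  "metricNames" => ["purchases"],
  "treatmentNames" => ["control", "treatment"]
}

{:ok, results, _http_response} =
  AWS.Evidently.get_experiment_results(client, "checkout-experiment", "my-project", input)

IO.inspect(results)
```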
Returns the details about one feature.
You must already know the feature name. To retrieve a list of features in your account, use ListFeatures.
Returns the details about one launch.
You must already know the launch name. To retrieve a list of launches in your account, use ListLaunches.
Returns the details about one project.
You must already know the project name. To retrieve a list of projects in your account, use ListProjects.
Returns information about the specified segment.
Specify the segment you want to view by specifying its ARN.
list_experiments(client, project, max_results \\ nil, next_token \\ nil, status \\ nil, options \\ [])
Returns configuration details about all the experiments in the specified project.
list_features(client, project, max_results \\ nil, next_token \\ nil, options \\ [])
Returns configuration details about all the features in the specified project.
list_launches(client, project, max_results \\ nil, next_token \\ nil, status \\ nil, options \\ [])
Returns configuration details about all the launches in the specified project.
list_projects(client, max_results \\ nil, next_token \\ nil, options \\ [])
Returns configuration details about all the projects in the current Region in your account.
list_segment_references(client, segment, max_results \\ nil, next_token \\ nil, type, options \\ [])
Use this operation to find which experiments or launches are using a specified segment.
list_segments(client, max_results \\ nil, next_token \\ nil, options \\ [])
Returns a list of audience segments that you have created in your account in this Region.
Displays the tags associated with an Evidently resource.
Sends performance events to Evidently.
These events can be used to evaluate a launch or an experiment.
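A sketch of sending a custom metric event, assuming `client` is an `AWS.Client` struct from `AWS.Client.create/3`. The event payload keys are illustrative assumptions; in practice the JSON in "data" must contain the entityIdKey and valueKey fields referenced by your experiment's metric definition.

```elixir
# Report a custom event for a user session. The type, timestamp, and
# data payload below are placeholders.
input = %{
  "events" => [
    %{
      "type" => "aws.evidently.custom",
      "timestamp" => 1_700_000_000,
      "data" =>
        ~s({"userDetails": {"userId": "user-1"}, "details": {"purchaseValue": 59.0}})
    }
  ]
}

{:ok, _result, _http_response} =
  AWS.Evidently.put_project_events(client, "my-project", input)
```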
start_experiment(client, experiment, project, input, options \\ [])
Starts an existing experiment.
To create an experiment, use CreateExperiment.
Starts an existing launch.
To create a launch, use CreateLaunch.
stop_experiment(client, experiment, project, input, options \\ [])
Stops an experiment that is currently running.
If you stop an experiment, you can't resume it or restart it.
Stops a launch that is currently running.
After you stop a launch, you will not be able to resume it or restart it. Also, it will not be evaluated as a rule for traffic allocation, and the traffic that was allocated to the launch will instead be available to the feature's experiment, if there is one. Otherwise, all traffic will be served the default variation after the launch is stopped.
Assigns one or more tags (key-value pairs) to the specified CloudWatch Evidently resource.
Projects, features, launches, and experiments can be tagged.
Tags can help you organize and categorize your resources. You can also use them to scope user permissions by granting a user permission to access or change only resources with certain tag values.
Tags don't have any semantic meaning to Amazon Web Services and are interpreted strictly as strings of characters.
You can use the TagResource action with a resource that already has tags. If you specify a new tag key for the resource, this tag is appended to the list of tags associated with the resource. If you specify a tag key that is already associated with the resource, the new tag value that you specify replaces the previous value for that tag.
You can associate as many as 50 tags with a resource.
For more information, see Tagging Amazon Web Services resources.
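For example, tagging a project by its ARN might look like this (a sketch; `client` is an `AWS.Client` struct from `AWS.Client.create/3`, and the ARN and tag values are illustrative):

```elixir
# Tag a project. The account ID, project name, and tags are placeholders.
arn = "arn:aws:evidently:us-east-1:123456789012:project/my-project"
input = %{"tags" => %{"team" => "growth", "stage" => "beta"}}

{:ok, _result, _http_response} = AWS.Evidently.tag_resource(client, arn, input)
```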
Use this operation to test a rules pattern that you plan to use to create an audience segment.
For more information about segments, see CreateSegment.
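A sketch of testing a pattern against a sample payload before creating the segment, assuming `client` is an `AWS.Client` struct from `AWS.Client.create/3`. Both JSON documents are illustrative.

```elixir
# Check whether a sample evaluationContext payload matches a pattern.
input = %{
  "pattern" => ~s({"browser": ["Chrome"]}),
  "payload" => ~s({"browser": "Chrome"})
}

{:ok, %{"match" => matched}, _http_response} =
  AWS.Evidently.test_segment_pattern(client, input)

IO.inspect(matched)
```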
Removes one or more tags from the specified resource.
update_experiment(client, experiment, project, input, options \\ [])
Updates an Evidently experiment.
Don't use this operation to update an experiment's tag. Instead, use TagResource.
Updates an existing feature.
You can't use this operation to update the tags of an existing feature. Instead, use TagResource.
Updates a launch of a given feature.
Don't use this operation to update the tags of an existing launch. Instead, use TagResource.
Updates the description of an existing project.
To create a new project, use CreateProject. Don't use this operation to update the data storage options of a project. Instead, use UpdateProjectDataDelivery.
Don't use this operation to update the tags of a project. Instead, use TagResource.
update_project_data_delivery(client, project, input, options \\ [])
Updates the data storage options for this project.
If you store evaluation events, you can keep them and analyze them on your own. If you choose not to store evaluation events, Evidently deletes them after using them to produce metrics and other experiment results that you can view.
You can't specify both cloudWatchLogs and s3Destination in the same operation.
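A sketch of routing evaluation events to S3, assuming `client` is an `AWS.Client` struct from `AWS.Client.create/3`. The bucket and prefix are illustrative; to use CloudWatch Logs instead, pass a "cloudWatchLogs" key rather than "s3Destination" (never both in one call).

```elixir
# Store evaluation events in an S3 bucket. Bucket and prefix are placeholders.
input = %{
  "s3Destination" => %{"bucket" => "my-evidently-events", "prefix" => "evaluations/"}
}

{:ok, _project, _http_response} =
  AWS.Evidently.update_project_data_delivery(client, "my-project", input)
```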