Experiment Events

Overview

The goal of incorporating experiment events is to help drive analyses of A/B tests. Implementing them is required for teams that run A/B tests and want to use Looker or have the team’s Data member analyze an experiment’s performance.

Experiment Naming Convention

To keep experiment information consistent across games for easier analysis, please use the following naming convention:

{game_code}_{platform}_{test.name}

  • game_code - Three-letter game code, e.g. FNI, SBP, FMT, etc.
    • If the game doesn’t have a three-letter code, use the game’s name in dot notation instead, e.g. game.that.does.not.have.a.code
  • platform
    • ios - iOS
    • and - Android
    • all - use when the test runs on both platforms at the same time (start and end dates should be the same)
  • test.name - any test name you like; if it includes multiple words, use the dot symbol . instead of spaces.
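The convention above can be sketched as a small helper. This is an illustrative function, not part of any SDK; the function name and validation rules are assumptions:

```python
# Hypothetical helper for building {game_code}_{platform}_{test.name} names.
def build_experiment_name(game_code: str, platform: str, test_name: str) -> str:
    if platform not in ("ios", "and", "all"):
        raise ValueError("platform must be 'ios', 'and', or 'all'")
    # Multi-word test names use dots instead of spaces
    test_name = test_name.replace(" ", ".")
    return f"{game_code}_{platform}_{test_name}"

print(build_experiment_name("FNI", "ios", "new inter timer"))
# FNI_ios_new.inter.timer
```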

Available Events

  • ab_cohort: Indicates an individual user’s membership in an experiment. The event should be fired in every session the user logs in

Critical Parameters

  • Required
    • experiment_name: the name of the experiment, e.g., “midLevelInter”
    • experiment_cohort: the name of the variant or cohort within the experiment, e.g., “inter_no”, “inter_yes”

Example

  • Cake Sort Puzzle 3D
    • The game ran an experiment called new_inter_timer to evaluate the impact of different interstitial timings in the game.

    • The setup of the experiment is as follows:

      Variant              interBetweenTime  interStartTime
      Aggressive (33.3%)   60                60
      Baseline (33.3%)     90                90
      Passive (33.3%)      120               120
    • Each variant received a different value for the experiment’s interBetweenTime and interStartTime remote configs. It is crucial to note that each variant used both remote configurations.

    • ✅ The team should have fired the ab_cohort event based on the user’s allocation to the variant as either of the following:

      • LionAnalytics.AbCohort("new_inter_timer", "Aggressive")
      • LionAnalytics.AbCohort("new_inter_timer", "Passive")
      • LionAnalytics.AbCohort("new_inter_timer", "Baseline")
    • ❌ Instead, the team incorrectly fired the ab_cohort event as either of the following:

      • LionAnalytics.AbCohort("interBetweenTime", "60")
      • LionAnalytics.AbCohort("interBetweenTime", "90")
      • LionAnalytics.AbCohort("interBetweenTime", "120")
      • LionAnalytics.AbCohort("interStartTime", "60")
      • LionAnalytics.AbCohort("interStartTime", "90")
      • LionAnalytics.AbCohort("interStartTime", "120")
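The correct pattern above can be sketched as follows. The `VARIANTS` table, `apply_remote_configs`, and `fire_ab_cohort` are illustrative stand-ins (the latter for `LionAnalytics.AbCohort`), not real SDK calls:

```python
# Each variant sets BOTH remote configs, but the ab_cohort event reports
# only the experiment name and variant name - never the config values.
VARIANTS = {
    "Aggressive": {"interBetweenTime": 60, "interStartTime": 60},
    "Baseline":   {"interBetweenTime": 90, "interStartTime": 90},
    "Passive":    {"interBetweenTime": 120, "interStartTime": 120},
}

def apply_remote_configs(configs: dict) -> None:
    pass  # stand-in: push interBetweenTime / interStartTime to the game

def fire_ab_cohort(experiment_name: str, cohort: str) -> tuple:
    return (experiment_name, cohort)  # stand-in for LionAnalytics.AbCohort(...)

def assign(variant: str) -> tuple:
    apply_remote_configs(VARIANTS[variant])          # both configs applied
    return fire_ab_cohort("new_inter_timer", variant)  # variant name reported

print(assign("Aggressive"))
# ('new_inter_timer', 'Aggressive')
```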

Implementation

  • The event should be fired in every session the user logs in. If a user is in multiple tests, fire the event once per test
  • Once fired, this event will be attached as a global parameter and included in the payload of all subsequent events
  • Multiple experiments can be attached at once
  • Only fire this event for players included in the experiment. Firing it for excluded players incurs unnecessary data costs for pre-processing data that will not be used in evaluating experiment results.
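The rules above can be sketched as a session-start hook. This is an illustrative sketch; `on_session_start` and the event tuples are assumptions, not the SDK’s API:

```python
# Fire one ab_cohort event per experiment the player is enrolled in,
# and nothing at all for players outside every experiment.
def on_session_start(player_experiments: dict) -> list:
    events = []
    for experiment_name, cohort in player_experiments.items():
        events.append(("ab_cohort", experiment_name, cohort))  # stand-in for SDK call
    return events

# Player in two tests -> two ab_cohort events this session
print(on_session_start({"new_inter_timer": "Baseline",
                        "mid_level_inters": "midlevel_no"}))
# Player in no tests -> no events, no wasted data
print(on_session_start({}))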

Event Validation

  • If you do not have access to Looker, please use the LionAnalytics QA Tool following the instructions here: LionAnalytics QA Tool

  • If you have access to Looker, ensure that the critical parameter values are present [ref]

    • The value of the parameter should follow the following format:

      • {"experiment_name": ["cohort_name"]}
    • Example:

      {"mid_level_inters":["midlevel_no"]}
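A quick way to check that a value follows this format is a small validator. This is a hypothetical sketch; the function name is illustrative:

```python
import json

# Check that a raw string matches {"experiment_name": ["cohort_name", ...]}.
def is_valid_experiment_payload(raw: str) -> bool:
    try:
        payload = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return (
        isinstance(payload, dict)
        and all(
            isinstance(name, str)
            and isinstance(cohorts, list)
            and all(isinstance(c, str) for c in cohorts)
            for name, cohorts in payload.items()
        )
    )

print(is_valid_experiment_payload('{"mid_level_inters": ["midlevel_no"]}'))  # True
print(is_valid_experiment_payload('{"mid_level_inters": "midlevel_no"}'))    # False
```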
      

Dashboard & Analysis

Analyses

General A/B test performance

  • Whether the test result validates the product hypothesis
    • How much uplift/decrease was there in the KPIs that the experiment aimed at moving?
    • Are there any significant differences in the changes in the target KPIs?

Metrics

  • DAU
  • Installs
  • Ad Impressions and Revenue
  • Ad and IAP revenue (by Cohort/Tenure day), which can be used for calculating RPI
  • Avg. Sessions
  • Total Usage
  • Retention

Explore

AB Test Daily KPI: this Explore provides all the essential Daily KPIs and supports grouping by Experiment and Experiment Cohorts

Dashboards

Experiment Dashboard: this dashboard provides statistical significance calculation for the selected KPI

AB - Studio Management: this dashboard focuses on showing multiple KPIs and their differences at once