    Industry-leading Best Practices for Tracker Studies

    Mark Simon, Managing Director, North America/Managing Director, Toluna Digital

    Toluna’s best practices for transitioning trackers are industry standards for ensuring the integrity of tracker data. Approved by Nielsen’s Data Science Team, the practices constitute a framework for seamless tracker transitions, ensuring that any observed changes in metrics are genuine and not artifacts of changes in sample or methodology.

    Here is how the framework plays out.

    1. Assessing the potential complexity of the transitioning process based on trackers’ characteristics and parameters
      The process begins with an assessment of the complexity of the transition against several criteria: sample definition and specific requirements; market; categories and topics addressed; survey design, length, and questionnaire specifics; and incidence rate. Trackers with similar levels of complexity are grouped together.
    2. In each designated group, identifying one or more trackers to run as a pilot
      In cases where trackers meet criteria for relatively simple transitioning, a number of studies can be grouped together into a single test cell. If, on the other hand, trackers are more complex, additional cells will be established to ensure pilots will result in meaningful recommendations. In the case of very complex trackers, studies could be tested on an individual basis to ensure accuracy.
    3. Conducting parallel testing on pilot studies, collecting a full Toluna sample simultaneously with the current provider (1-2 waves)
      For each pilot study a parallel Toluna study is conducted using the following guidelines:

      • The studies are fielded exactly the same way. The same or comparable sample size is used. The studies are treated identically in field, and started and stopped at the same time.
      • Data is processed exactly the same way. All aspects of data coding, editing, and weighting are replicated.
      • Full output, including field stats, is produced. Data is examined to ensure proper response capture and coding. Field statistics, including raw incidence, completion rates, and length of interview, are computed and compared to assess uniformity.
    4. Comparing data collected by Toluna with data from the current provider to determine whether differences occur that necessitate further testing
      Following parallel testing, a thorough Impact Analysis is conducted to zero in on the sources of any differences between the Toluna data and the current provider’s results. The test compares means, proportions, and variances, performing statistical tests on all tracker variables (including field statistics), and then models the degree of variance between the two studies.
    5. If significant differences occur, conducting a Bootstrap Test to determine the optimal proportion of Toluna sample to add to the next wave
      In the bootstrapping simulations, increasing numbers of Toluna respondents are added to the current provider’s sample. The combined sample is repeatedly compared with the original provider sample until the optimal Toluna share per wave can be determined.
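
To make the field-stat comparison in step 3 concrete, here is a minimal sketch in Python. The counts, interview lengths, and the 10% review tolerance are invented for illustration; they are not Toluna's actual figures or thresholds:

```python
# Hypothetical field stats from two parallel waves (illustrative numbers only).
def field_stats(starts, completes, qualified, interview_lengths):
    """Compute the field metrics compared across providers for one wave."""
    return {
        "raw_incidence": qualified / starts,        # share of starters who qualified
        "completion_rate": completes / qualified,   # share of qualified who finished
        "median_loi_min": sorted(interview_lengths)[len(interview_lengths) // 2],
    }

provider = field_stats(starts=2400, completes=1000, qualified=1180,
                       interview_lengths=[14, 15, 16, 15, 17])
toluna = field_stats(starts=2500, completes=1000, qualified=1150,
                     interview_lengths=[15, 15, 16, 14, 16])

# Flag any metric that diverges by more than a tolerance (assumed 10% here).
for key in provider:
    rel_diff = abs(provider[key] - toluna[key]) / provider[key]
    print(f"{key}: provider={provider[key]:.3f} toluna={toluna[key]:.3f} "
          f"{'OK' if rel_diff <= 0.10 else 'REVIEW'}")
```

In practice the flagged metrics would feed the Impact Analysis of step 4 rather than trigger an automatic pass/fail.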
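
The proportion comparisons in the Impact Analysis of step 4 can be illustrated with a standard two-sample z-test. The awareness figures below are assumed for illustration only, and the 0.05 significance threshold is a conventional choice, not one taken from Toluna's methodology:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sample z-test for a proportion (e.g. a brand-awareness metric)
    measured in the current provider's wave (a) and the parallel Toluna wave (b)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Illustrative: 52% vs 49% awareness on n=1000 per wave.
z, p = two_proportion_z(520, 1000, 490, 1000)
print(f"z={z:.2f}, p={p:.3f}")  # flag the variable for further testing if p < 0.05
```

Means would be compared analogously with a two-sample t-test, and variances with an F-test, one variable at a time.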
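
The bootstrapping simulations in step 5 could be sketched as follows. The respondent-level scores, sample sizes, and the idea of scanning a fixed grid of Toluna shares are all illustrative assumptions, not a description of Toluna's actual implementation:

```python
import random
from statistics import mean

random.seed(0)

# Illustrative respondent-level scores (e.g. a 0-10 rating) for one tracker metric.
provider = [random.gauss(7.0, 1.5) for _ in range(800)]
toluna = [random.gauss(7.5, 1.5) for _ in range(800)]

baseline = mean(provider)

def blended_shift(toluna_share, n=800, reps=200):
    """Average absolute shift in the metric when a given share of the
    combined sample is drawn from Toluna (bootstrap resampling)."""
    k = int(n * toluna_share)
    shifts = []
    for _ in range(reps):
        combined = random.choices(provider, k=n - k) + random.choices(toluna, k=k)
        shifts.append(abs(mean(combined) - baseline))
    return mean(shifts)

# Scan increasing Toluna shares; the largest share whose average shift stays
# inside a tolerance would be the recommended blend for the next wave.
for share in (0.1, 0.25, 0.5, 0.75, 1.0):
    print(f"{share:.0%} Toluna: mean shift {blended_shift(share):.3f}")
```

The same scan would be repeated per wave and per key metric, which is why the optimal Toluna share is expressed per wave rather than as a single number.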

    Following Impact Analysis and Bootstrap Testing, additional pilots can be executed to fine-tune recommendations for trackers, especially those of different complexity. These exhaustive measures ensure a smooth transitioning process.
