Overall Evaluation Criterion (OEC)
An Overall Evaluation Criterion (OEC) is an integrated evaluation criterion that quantifies the organizational objectives of an organizational experiment in a single measure.
- Context:
- It can (typically) align with Organizational Objectives.
- It can (typically) demand the creation of Product Tracking Tools (to record key metrics automatically).
- It can (often) require Iterative Development (to be refined once additional understanding is obtained).
- It can (often) involve identifying certain metrics to quantify performance, even for goals that are difficult to measure.
- It can (often) guide the experimentation process by aligning to a key metric.
- It can be determined collaboratively, to reach agreement on what to measure.
- It can require balancing the costs and benefits of its measurement.
- ...
- Example(s):
- A B2B Company Website OEC, such as Lead Conversion Efficiency.
- An Interactive Contract Review Online Product OEC, such as a Contract Review Accuracy and User Satisfaction (AUS) Combined Score (based on contract review accuracy and user satisfaction; see the illustrative sketch below).
- A Marketing Campaign OEC, such as return on advertising spend (ROAS).
- ...
- Counter-Example(s):
- a Website Bounce Rate, because it can be readily improved in ways that harm organizational objectives.
- a Physical Store Customer Footfall Measure, because it can be manipulated in ways that don't necessarily translate to sales or customer loyalty.
- a Number of Cars Sold Measure, because it can be boosted in ways detrimental to the company's future.
- a New Bank Accounts Opened Measure, because it can be artificially inflated, risking the bank's reputation.
- ...
- See: Hypothesis Testing, Customer Value, A/B Testing.
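The combined-score example above can be made concrete with a small sketch. The code below is illustrative only: the function name aus_score, the component weights, and the 1–5 satisfaction scale are assumptions rather than values from any referenced source; an actual OEC would use weights agreed upon collaboratively and tied to organizational objectives.

```python
def aus_score(review_accuracy: float, user_satisfaction: float,
              accuracy_weight: float = 0.6) -> float:
    """Hypothetical Accuracy and User Satisfaction (AUS) combined score.

    review_accuracy   -- fraction of contract clauses reviewed correctly, in [0, 1]
    user_satisfaction -- mean survey rating on a 1-5 scale
    accuracy_weight   -- assumed weight; the complement goes to satisfaction
    """
    if not 0.0 <= review_accuracy <= 1.0:
        raise ValueError("review_accuracy must be in [0, 1]")
    if not 1.0 <= user_satisfaction <= 5.0:
        raise ValueError("user_satisfaction must be on a 1-5 scale")

    # Normalize satisfaction to [0, 1] so both components share a scale.
    satisfaction_norm = (user_satisfaction - 1.0) / 4.0

    # Single weighted score: higher is better for both components.
    return accuracy_weight * review_accuracy + (1.0 - accuracy_weight) * satisfaction_norm


# Example: 92% review accuracy and a 4.3/5 satisfaction rating.
print(round(aus_score(0.92, 4.3), 3))  # -> 0.882
```

Comparing treatment and control on this single number, rather than on accuracy and satisfaction separately, forces the tradeoff between the two goals to be decided once, in the weights, instead of being re-argued for every experiment.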
References
2018
- (Kohavi, 2018) ⇒ Ronny Kohavi. (2018). https://linkedin.com/pulse/overall-evaluation-criterion-oec-ronny-kohavi/
- It should be determined early to clarify goals and align the organization.
- It requires multiple iterations to refine as understanding evolves.
- It should be determined collaboratively to get agreement on what to measure.
- It requires creating instrumentation to track key metrics automatically.
- It requires making clear whether more or less of something is better to drive improvement.
- It requires identifying metrics to quantify performance, even for difficult to measure goals.
- It requires balancing the costs and benefits of measurement for optimization.
- It requires anchoring the experimentation process by aligning to a north star metric.
- It may require adjustment after interpretation of initial experimental results.
- It ultimately requires tying metrics to core business or product goals.
2009
- (Kohavi et al., 2009) ⇒ Ron Kohavi, Roger Longbotham, Dan Sommerfield, and Randal M. Henne. (2009). “Controlled Experiments on the Web: survey and practical guide.” In: Data Mining and Knowledge Discovery, 18(1). doi:10.1007/s10618-008-0114-1
- QUOTE: Overall Evaluation Criterion (OEC) (Roy 2001). A quantitative measure of the experiment’s objective. In statistics this is often called the Response or Dependent Variable (Mason et al. 1989; Box et al. 2005); other synonyms include Outcome, Evaluation metric, Performance metric, or Fitness Function (Quarto-vonTivadar 2006). Experiments may have multiple objectives and a scorecard approach might be taken (Kaplan and Norton 1996), although selecting a single metric, possibly as a weighted combination of such objectives is highly desired and recommended (Roy 2001, p. 50). A single metric forces tradeoffs to be made once for multiple experiments and aligns the organization behind a clear objective. A good OEC should not be short-term focused (e.g., clicks); to the contrary, it should include factors that predict long-term goals, such as predicted lifetime value and repeat visits. Ulwick describes some ways to measure what customers want (although not specifically for the web) (Ulwick 2005).
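Written out, the single-metric recommendation in the quote amounts to a weighted combination of the k component objectives. The notation below is ours, not the paper's, with each component assumed to be rescaled so that larger values are better:

```latex
\mathrm{OEC} \;=\; \sum_{i=1}^{k} w_i \, \tilde{m}_i ,
\qquad \sum_{i=1}^{k} w_i = 1
```

where \tilde{m}_i is the i-th component metric after rescaling and the weights w_i encode, once and for all experiments, the tradeoffs among objectives (e.g., short-term clicks versus predicted lifetime value and repeat visits).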
2005
- (Ulwick, 2005) ⇒ A. Ulwick. (2005). “What Customers Want: Using Outcome-driven Innovation to Create Breakthrough Products and Services.” McGraw-Hill. ISBN: 0071408673.
2001
- (Roy, 2001) ⇒ Ranjit K. Roy. (2001). “Design of Experiments Using the Taguchi Approach: 16 Steps to Product and Process Improvement.” John Wiley & Sons.