2013 Counterfactual Reasoning and Learning Systems: The Example of Computational Advertising
- (Bottou et al., 2013) ⇒ Léon Bottou, Jonas Peters, Joaquin Quiñonero-Candela, Denis X. Charles, D. Max Chickering, Elon Portugaly, Dipankar Ray, Patrice Simard, and Ed Snelson. (2013). “Counterfactual Reasoning and Learning Systems: The Example of Computational Advertising.” In: The Journal of Machine Learning Research, 14(1).
Subject Headings: Counterfactual Reasoning, Controlled Online Experiment.
Notes
Cited By
- http://scholar.google.com/scholar?q=%222013%22+Counterfactual+Reasoning+and+Learning+Systems%3A+The+Example+of+Computational+Advertising
- http://dl.acm.org/citation.cfm?id=2567709.2567766&preflayout=flat#citedby
Quotes
Abstract
This work shows how to leverage causal inference to understand the behavior of complex learning systems interacting with their environment and predict the consequences of changes to the system. Such predictions allow both humans and algorithms to select the changes that would have improved the system performance. This work is illustrated by experiments on the ad placement system associated with the Bing search engine.
1. Introduction
Statistical machine learning technologies in the real world are never without a purpose. Using their predictions, humans or machines make decisions whose circuitous consequences often violate the modeling assumptions that justified the system design in the first place. Such contradictions appear very clearly in the case of the learning systems that power web-scale applications such as search engines, ad placement engines, or recommendation systems. For instance, the placement of advertisements on the result pages of Internet search engines depends on the bids of advertisers and on scores computed by statistical machine learning systems. Because the scores affect the contents of the result pages proposed to the users, they directly influence the occurrence of clicks and the corresponding advertiser payments. They also have important indirect effects. Ad placement decisions impact the satisfaction of the users and therefore their willingness to frequent this web site in the future. They also impact the return on investment observed by the advertisers and therefore their future bids. Finally they change the nature of the data collected for training the statistical models in the future.
These complicated interactions are clarified by important theoretical works. Under simplified assumptions, mechanism design (Myerson, 1981) leads to an insightful account of the advertiser feedback loop (Varian, 2007; Edelman et al., 2007). Under simplified assumptions, multiarmed bandits theory (Robbins, 1952; Auer et al., 2002; Langford and Zhang, 2008) and reinforcement learning (Sutton and Barto, 1998) describe the exploration/exploitation dilemma associated with the training feedback loop. However, none of these approaches gives a complete account of the complex interactions found in real-life systems.
This contribution proposes a novel approach: we view these complicated interactions as manifestations of the fundamental difference that separates correlation and causation. Using the ad placement example as a model of our problem class, we therefore argue that the language and the methods of causal inference provide flexible means to describe such complex machine learning systems and give sound answers to the practical questions facing the designer of such a system. Is it useful to pass a new input signal to the statistical model? Is it worthwhile to collect and label a new training set? What about changing the loss function or the learning algorithm? In order to answer such questions and improve the operational performance of the learning system, one needs to unravel how the information produced by the statistical models traverses the web of causes and effects and eventually produces measurable performance metrics.
Readers with an interest in causal inference will find in this paper (i) a real world example demonstrating the value of causal inference for large-scale machine learning applications, (ii) causal inference techniques applicable to continuously valued variables with meaningful confidence intervals, and (iii) quasi-static analysis techniques for estimating how small interventions affect certain causal equilibria. Readers with an interest in real-life applications will find (iv) a selection of practical counterfactual analysis techniques applicable to many real-life machine learning systems. Readers with an interest in computational advertising will find a principled framework that (v) explains how to soundly use machine learning techniques for ad placement, and (vi) conceptually connects machine learning and auction theory in a compelling manner.
The paper is organized as follows. Section 2 gives an overview of the advertisement placement problem which serves as our main example. In particular, we stress some of the difficulties encountered when one approaches such a problem without a principled perspective. Section 3 provides a condensed review of the essential concepts of causal modeling and inference. Section 4 centers on formulating and answering counterfactual questions such as “how would the system have performed during the data collection period if certain interventions had been carried out on the system?” We describe importance sampling methods for counterfactual analysis, with clear conditions of validity and confidence intervals. Section 5 illustrates how the structure of the causal graph reveals opportunities to exploit prior information and vastly improve the confidence intervals. Section 6 describes how counterfactual analysis provides essential signals that can drive learning algorithms. Assume that we have identified interventions that would have caused the system to perform well during the data collection period. Which guarantee can we obtain on the performance of these same interventions in the future? Section 7 presents counterfactual differential techniques for the study of equilibria. Using data collected when the system is at equilibrium, we can estimate how a small intervention displaces the equilibrium. This provides an elegant and effective way to reason about long-term feedback effects. Various appendices complete the main text with information that we think more relevant to readers with specific backgrounds.
2. Causation Issues in Computational Advertising
After giving an overview of the advertisement placement problem, which serves as our main example, this section illustrates some of the difficulties that arise when one does not pay sufficient attention to the causal structure of the learning system.
2.1 Advertisement Placement
All Internet users are now familiar with the advertisement messages that adorn popular web pages. Advertisements are particularly effective on search engine result pages because users who are searching for something are good targets for advertisers who have something to offer. Several actors take part in this Internet advertisement game:
- Advertisers create advertisement messages, and place bids that describe how much they are willing to pay to see their ads displayed or clicked.
- Publishers provide attractive web services, such as, for instance, an Internet search engine. They display selected ads and expect to receive payments from the advertisers. The infrastructure to collect the advertiser bids and select ads is sometimes provided by an advertising network on behalf of its affiliated publishers. For the purposes of this work, we simply consider a publisher large enough to run its own infrastructure.
- Users reveal information about their current interests, for instance, by entering a query in a search engine. They are offered web pages that contain a selection of ads (Figure 1). Users sometimes click on an advertisement and are transported to a web site controlled by the advertiser where they can initiate some business.
A conventional bidding language is necessary to precisely define under which conditions an advertiser is willing to pay the bid amount. In the case of Internet search advertisement, each bid specifies (a) the advertisement message, (b) a set of keywords, (c) one of several possible matching criteria between the keywords and the user query, and (d) the maximal price the advertiser is willing to pay when a user clicks on the ad after entering a query that matches the keywords according to the specified criterion.
Whenever a user visits a publisher web page, an advertisement placement engine runs an auction in real time in order to select winning ads, determine where to display them in the page, and compute the prices charged to advertisers, should the user click on their ad. Since the placement engine is operated by the publisher, it is designed to further the interests of the publisher. Fortunately for everyone else, the publisher must balance short term interests, namely the immediate revenue brought by the ads displayed on each web page, and long term interests, namely the future revenues resulting from the continued satisfaction of both users and advertisers. Auction theory explains how to design a mechanism that optimizes the revenue of the seller of a single object (Myerson, 1981; Milgrom, 2004) under various assumptions about the information available to the buyers regarding the intentions of the other buyers. In the case of the ad placement problem, the publisher runs multiple auctions and sells opportunities to receive a click. When nearly identical auctions occur thousands of times per second, it is tempting to consider that the advertisers have perfect information about each other. This assumption gives support to the popular generalized second price rank-score auction (Varian, 2007; Edelman et al., 2007):
Figure 1: Mainline and sidebar ads on a search result page. Ads placed in the mainline are more likely to be noticed, increasing both the chances of a click if the ad is relevant and the risk of annoying the user if the ad is not relevant.
• Let x represent the auction context information, such as the user query, the user profile, the date, the time, etc. The ad placement engine first determines all eligible ads a_1, …, a_n and the corresponding bids b_1, …, b_n on the basis of the auction context x and of the matching criteria specified by the advertisers.
• For each selected ad a_i and each potential position p on the web page, a statistical model outputs the estimate q_{i,p}(x) of the probability that ad a_i displayed in position p receives a user click. The rank-score r_{i,p}(x) = b_i q_{i,p}(x) then represents the purported value associated with placing ad a_i at position p.
• Let L represent a possible ad layout, that is, a set of positions that can simultaneously be populated with ads, and let $\mathcal{L}$ be the set of possible ad layouts, including of course the empty layout. The optimal layout and the corresponding ads are obtained by maximizing the total rank-score

$$\max_{L \in \mathcal{L}} \; \max_{i_1, i_2, \dots} \; \sum_{p \in L} r_{i_p, p}(x), \tag{1}$$

subject to reserve constraints $\forall p \in L,\; r_{i_p,p}(x) \ge R_p(x)$, and also subject to diverse policy constraints, such as, for instance, preventing the simultaneous display of multiple ads belonging to the same advertiser. Under mild assumptions, this discrete maximization problem is amenable to computationally efficient greedy algorithms (see appendix A); a minimal sketch of this greedy selection, together with the pricing rule described next, appears after this list.
• The advertiser payment associated with a user click is computed using the generalized second price (GSP) rule: the advertiser pays the smallest bid that it could have entered without changing the solution of the discrete maximization problem, all other bids remaining equal. In other words, the advertiser could not have manipulated its bid and obtained the same treatment for a better price.
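To make these two steps concrete, here is a minimal, hypothetical Python sketch of greedy rank-score selection with GSP pricing. It assumes position-factorized click probabilities (q_{i,p}(x) = pclick_i × factor_p), one ad per position, and invented numbers; it illustrates the mechanism described above, not the production placement logic.

```python
# Hypothetical sketch of greedy rank-score selection (equation 1) with
# GSP pricing. Assumes q_{i,p} = pclick_i * factor_p and one ad per
# position; all names and numbers are illustrative.
from dataclasses import dataclass

@dataclass
class Ad:
    name: str
    bid: float      # maximal price the advertiser pays per click
    pclick: float   # estimated click probability in the top position

def select_and_price(ads, position_factors, reserves):
    # Sort by rank-score b_i * q_i; with factorized positions this greedy
    # order also maximizes the total rank-score over layouts.
    ranked = sorted(ads, key=lambda ad: ad.bid * ad.pclick, reverse=True)
    placements = []
    for p, (factor, reserve) in enumerate(zip(position_factors, reserves)):
        if p >= len(ranked):
            break
        ad = ranked[p]
        if ad.bid * ad.pclick * factor < reserve:
            continue  # the ad does not clear the reserve R_p for this slot
        # GSP price: the smallest bid that keeps the same placement, i.e.
        # just enough to beat the next rank-score (or to clear the reserve).
        nxt = ranked[p + 1].bid * ranked[p + 1].pclick if p + 1 < len(ranked) else 0.0
        price = max(nxt, reserve / factor) / ad.pclick
        placements.append((p, ad.name, round(min(price, ad.bid), 4)))
    return placements

ads = [Ad("A", 2.0, 0.10), Ad("B", 1.0, 0.30), Ad("C", 1.5, 0.10)]
print(select_and_price(ads, position_factors=[1.0, 0.5], reserves=[0.05, 0.05]))
```

With these invented numbers, ad B wins the top position on the strength of its click probability, yet pays only the price needed to keep its rank-score ahead of ad A, which is the essence of the GSP rule.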
Under the perfect information assumption, the analysis suggests that the publisher simply needs to find which reserve prices Rp(x) yield the best revenue per auction. However, the total revenue of the publisher also depends on the traffic experienced by its web site. Displaying an excessive number of irrelevant ads can train users to ignore the ads, and can also drive them to competing web sites. Advertisers can artificially raise the rank-scores of irrelevant ads by temporarily increasing the bids. Indelicate advertisers can create deceiving advertisements that elicit many clicks but direct users to spam web sites. Experience shows that the continued satisfaction of the users is more important to the publisher than it is to the advertisers.
Therefore the generalized second price rank-score auction has evolved. Rank-scores have been augmented with terms that quantify the user satisfaction or the ad relevance. Bids receive adaptive discounts in order to deal with situations where the perfect information assumption is unrealistic. These adjustments are driven by additional statistical models. The ad placement engine should therefore be viewed as a complex learning system interacting with both users and advertisers.
2.2 Controlled Experiments
The designer of such an ad placement engine faces the fundamental question of testing whether a proposed modification of the ad placement engine results in an improvement of the operational performance of the system.
The simplest way to answer such a question is to try the modification. The basic idea is to randomly split the users into treatment and control groups (Kohavi et al., 2008). Users from the control group see web pages generated using the unmodified system. Users of the treatment groups see web pages generated using alternate versions of the system. Monitoring various performance metrics for a couple months usually gives sufficient information to reliably decide which variant of the system delivers the most satisfactory performance.
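A common way to implement such a random split is to hash a stable user identifier, so that assignment is effectively random across users yet consistent across repeated visits. The sketch below is a hypothetical illustration of this scheme (the function and experiment names are invented, not from the paper).

```python
# Sketch of deterministic user bucketing for a controlled experiment:
# hash (experiment, user_id) into a pseudo-uniform number and compare
# against the desired treatment share. Illustrative, not from the paper.
import hashlib

def bucket(user_id: str, experiment: str, treatment_share: float = 0.5) -> str:
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).digest()
    u = int.from_bytes(digest[:8], "big") / 2**64  # pseudo-uniform in [0, 1)
    return "treatment" if u < treatment_share else "control"

print(bucket("user-42", "new-click-model"))  # same answer on every visit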
Modifying an advertisement placement engine elicits reactions from both the users and the advertisers. Whereas it is easy to split users into treatment and control groups, splitting advertisers into treatment and control groups demands special attention because each auction involves multiple advertisers (Charles et al., 2012). Simultaneously controlling for both users and advertisers is probably impossible.
Controlled experiments also suffer from several drawbacks. They are expensive because they demand a complete implementation of the proposed modifications. They are slow because each experiment typically demands a couple months. Finally, although there are elegant ways to efficiently run overlapping controlled experiments on the same traffic (Tang et al., 2010), they are limited by the volume of traffic available for experimentation.
It is therefore difficult to rely on controlled experiments during the conception phase of potential improvements to the ad placement engine. It is similarly difficult to use controlled experiments to drive the training algorithms associated with click probability estimation models. Cheaper and faster statistical methods are needed to drive these essential aspects of the development of an ad placement engine. Unfortunately, interpreting cheap and fast data can be very deceiving.
2.3 Confounding Data
Assessing the consequence of an intervention using statistical data is generally challenging because it is often difficult to determine whether the observed effect is a simple consequence of the intervention or has other uncontrolled causes.
| | Overall | Patients with small stones | Patients with large stones |
| --- | --- | --- | --- |
| Treatment A: Open surgery | 78% (273/350) | 93% (81/87) | 73% (192/263) |
| Treatment B: Percutaneous nephrolithotomy | 83% (289/350) | 87% (234/270) | 69% (55/80) |
Table 1: A classic example of Simpson’s paradox. The table reports the success rates of two treatments for kidney stones (Charig et al., 1986, Tables I and II). Although the overall success rate of treatment B seems better, treatment B performs worse than treatment A on both patients with small kidney stones and patients with large kidney stones. See Section 2.3.

For instance, the empirical comparison of certain kidney stone treatments illustrates this difficulty (Charig et al., 1986). Table 1 reports the success rates observed on two groups of 350 patients treated with respectively open surgery (treatment A, with 78% success) and percutaneous nephrolithotomy (treatment B, with 83% success). Although treatment B seems more successful, it was more frequently prescribed to patients suffering from small kidney stones, a less serious condition. Did treatment B achieve a high success rate because of its intrinsic qualities or because it was preferentially applied to less severe cases? Further splitting the data according to the size of the kidney stones reverses the conclusion: treatment A now achieves the best success rate for both patients suffering from large kidney stones and patients suffering from small kidney stones. Such an inversion of the conclusion is called Simpson’s paradox (Simpson, 1951).
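The reversal is easy to verify directly from the raw counts in Table 1; the short sketch below recomputes the overall and per-stratum success rates.

```python
# Recomputing the rates of Table 1 from the raw counts, as a quick check
# of the reversal: B wins overall yet loses within each stratum.
counts = {
    "A (open surgery)": {"small": (81, 87), "large": (192, 263)},
    "B (nephrolithotomy)": {"small": (234, 270), "large": (55, 80)},
}
for treatment, strata in counts.items():
    successes = sum(s for s, n in strata.values())
    patients = sum(n for s, n in strata.values())
    by_stratum = {g: f"{s / n:.0%}" for g, (s, n) in strata.items()}
    print(f"{treatment}: overall {successes / patients:.0%}, {by_stratum}")
```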
The stone size in this study is an example of a confounding variable, that is, an uncontrolled variable whose consequences pollute the effect of the intervention. Doctors knew the size of the kidney stones, chose to treat the healthier patients with the least invasive treatment B, and therefore caused treatment B to appear more effective than it actually was. If we now decide to apply treatment B to all patients irrespective of the stone size, we break the causal path connecting the stone size to the outcome, we eliminate the illusion, and we will experience disappointing results. When we suspect the existence of a confounding variable, we can split the contingency tables and reach improved conclusions. Unfortunately we cannot fully trust these conclusions unless we are certain to have taken into account all confounding variables. The real problem therefore comes from the confounding variables we do not know.
Randomized experiments arguably provide the only correct solution to this problem (see Stigler, 1992). The idea is to randomly choose whether the patient receives treatment A or treatment B. Because this random choice is independent from all the potential confounding variables, known and unknown, they cannot pollute the observed effect of the treatments (see also Section 4.2). This is why controlled experiments in ad placement (Section 2.2) randomly distribute users between treatment and control groups, and this is also why, in the case of an ad placement engine, we should be somehow concerned by the practical impossibility to randomly distribute both users and advertisers.
| | Overall | q2 low | q2 high |
| --- | --- | --- | --- |
| q1 low | 6.2% (124/2000) | 5.1% (92/1823) | 18.1% (32/176) |
| q1 high | 7.5% (149/2000) | 4.8% (71/1500) | 15.6% (78/500) |

Table 2: Confounding data in ad placement. The table reports the click-through rates and the click counts of the second mainline ad. The overall counts suggest that the click-through rate of the second mainline ad increases when the click probability estimate q1 of the top ad is high. However, if we further split the pages according to the click probability estimate q2 of the second mainline ad, we reach the opposite conclusion. See Section 2.4.
2.4 Confounding Data in Ad Placement
Let us return to the question of assessing the value of passing a new input signal to the ad placement engine click prediction model. Section 2.1 outlines a placement method where the click probability estimates qi,p(x) depend on the ad and the position we consider, but do not depend on other ads displayed on the page. We now consider replacing this model by a new model that additionally uses the estimated click probability of the top mainline ad to estimate the click probability of the second mainline ad (Figure 1). We would like to estimate the effect of such an intervention using existing statistical data.
We have collected ad placement data for Bing search result pages served during three consecutive hours on a certain slice of traffic. Let q1 and q2 denote the click probability estimates computed by the existing model for respectively the top mainline ad and the second mainline ad. After excluding pages displaying fewer than two mainline ads, we form two groups of 2000 pages randomly picked among those satisfying the conditions q1 < 0.15 for the first group and q1 ≥ 0.15 for the second group. Table 2 reports the click counts and frequencies observed on the second mainline ad in each group. Although the overall numbers show that users click more often on the second mainline ad when the top mainline ad has a high click probability estimate q1, this conclusion is reversed when we further split the data according to the click probability estimate q2 of the second mainline ad.
Despite superficial similarities, this example is considerably more difficult to interpret than the kidney stone example. The overall click counts show that the actual click-through rate of the second mainline ad is positively correlated with the click probability estimate on the top mainline ad. Does this mean that we can increase the total number of clicks by placing regular ads below frequently clicked ads?
Remember that the click probability estimates depend on the search query which itself depends on the user intention. The most likely explanation is that pages with a high q1 are frequently associated with more commercial searches and therefore receive more ad clicks on all positions. The observed correlation occurs because the presence of a click and the magnitude of the click probability estimate q1 have a common cause: the user intention. Meanwhile, the click probability estimate q2 returned by the current model for the second mainline ad also depends on the query and therefore the user intention. Therefore, assuming that this dependence has comparable strength, and assuming that there are no other causal paths, splitting the counts according to the magnitude of q2 factors out the effects of this common confounding cause. We then observe a negative correlation which now suggests that a frequently clicked top mainline ad has a negative impact on the click-through rate of the second mainline ad.
If this is correct, we would probably increase the accuracy of the click prediction model by switching to the new model. This would decrease the click probability estimates for ads placed in the second mainline position on commercial search pages. These ads are then less likely to clear the reserve and therefore more likely to be displayed in the less attractive sidebar. The net result is probably a loss of clicks and a loss of money despite the higher quality of the click probability model. Although we could tune the reserve prices to compensate for this unfortunate effect, nothing in this data tells us where the performance of the ad placement engine will land. Furthermore, unknown confounding variables might completely reverse our conclusions. Making sense out of such data is just too complex!
2.5 A Better Way
It should now be obvious that we need a more principled way to reason about the effect of potential interventions. We provide one such more principled approach using the causal inference machinery (Section 3). The next step is then the identification of a class of questions that are sufficiently expressive to guide the designer of a complex learning system, and sufficiently simple to be answered using data collected in the past using adequate procedures (Section 4). A machine learning algorithm can then be viewed as an automated way to generate questions about the parameters of a statistical model, obtain the corresponding answers, and update the parameters accordingly (Section 6). Learning algorithms derived in this manner are very flexible: human designers and machine learning algorithms can cooperate seamlessly because they rely on similar sources of information.
3. Modeling Causal Systems
When we point out a causal relationship between two events, we describe what we expect to happen to the event we call the effect, should an external operator manipulate the event we call the cause. Manipulability theories of causation (von Wright, 1971; Woodward, 2005) raise this commonsense insight to the status of a definition of the causal relation. Difficult adjustments are then needed to interpret statements involving causes that we can only observe through their effects, “because they love me,” or that are not easily manipulated, “because the earth is round.”
Modern statistical thinking makes a clear distinction between the statistical model and the world. The actual mechanisms underlying the data are considered unknown. The statistical models do not need to reproduce these mechanisms to emulate the observable data (Breiman, 2001). Better models are sometimes obtained by deliberately avoiding to reproduce the true mechanisms (Vapnik, 1982, Section 8.6). We can approach the manipulability puzzle in the same spirit by viewing causation as a reasoning model (Bottou, 2011) rather than a property of the world. Causes and effects are simply the pieces of an abstract reasoning game. Causal statements that are not empirically testable acquire validity when they are used as intermediate steps when one reasons about manipulations or interventions amenable to experimental validation. This section presents the rules of this reasoning game. We largely follow the framework proposed by Pearl (2009) because it gives a clear account of the connections between causal models and probabilistic models.
x = f1(u, e1)        Query context x from user intent u.
a = f2(x, v, e2)     Eligible ads (a_i) from query x and inventory v.
b = f3(x, v, e3)     Corresponding bids (b_i).
q = f4(x, a, e4)     Scores (q_{i,p}, R_p) from query x and ads a.
s = f5(a, q, b, e5)  Ad slate s from eligible ads a, scores q and bids b.
c = f6(a, q, b, e6)  Corresponding click prices c.
y = f7(s, u, e7)     User clicks y from ad slate s and user intent u.
z = f8(y, c, e8)     Revenue z from clicks y and prices c.

Figure 2: A structural equation model for ad placement. The sequence of equations describes the flow of information. The functions f_k describe how effects depend on their direct causes. The additional noise variables e_k represent independent sources of randomness useful to model probabilistic dependencies.
3.1 The Flow of Information
Figure 2 gives a deterministic description of the operation of the ad placement engine. Variable u represents the user and his or her intention in an unspecified manner. The query and query context x is then expressed as an unknown function of u and of a noise variable e1. Noise variables in this framework are best viewed as independent sources of randomness useful for modeling a nondeterministic causal dependency. We shall only mention them when they play a specific role in the discussion. The set of eligible ads a and the corresponding bids b are then derived from the query x and the ad inventory v supplied by the advertisers. Statistical models then compute a collection of scores q such as the click probability estimates q_{i,p} and the reserves R_p introduced in Section 2.1. The placement logic uses these scores to generate the “ad slate” s, that is, the set of winning ads and their assigned positions. The corresponding click prices c are computed. The set of user clicks y is expressed as an unknown function of the ad slate s and the user intent u. Finally the revenue z is expressed as another function of the clicks y and the prices c.

Such a system of equations is named a structural equation model (Wright, 1921). Each equation asserts a functional dependency between an effect, appearing on the left hand side of the equation, and its direct causes, appearing on the right hand side as arguments of the function. Some of these causal dependencies are unknown. Although we postulate that the effect can be expressed as some function of its direct causes, we do not know the form of this function. For instance, the designer of the ad placement engine knows functions f2 to f6 and f8 because he has designed them. However, he does not know the functions f1 and f7 because whoever designed the user did not leave sufficient documentation.
Figure 3 represents the directed causal graph associated with the structural equation model. Each arrow connects a direct cause to its effect. The noise variables are omitted for simplicity. The structure of this graph reveals fundamental assumptions about our model. For instance, the user clicks y do not directly depend on the scores q or the prices c because users do not have access to this information.
We hold as a principle that causation obeys the arrow of time: causes always precede their effects. Therefore the causal graph must be acyclic. Structural equation models then support two fundamental operations, namely simulation and intervention.
Figure 3: Causal graph associated with the structural equation model of Figure 2. The mutually independent noise variables e1 to e8 are implicit. The variables a, b, q, s, c, and z depend on their direct causes in known ways. In contrast, the variables u and v are exogenous and the variables x and y depend on their direct causes through unknown functions.
• Simulation – Let us assume that we know both the exact form of all functional dependencies and the value of all exogenous variables, that is, the variables that never appear in the left hand side of an equation. We can compute the values of all the remaining variables by applying the equations in their natural time sequence.
• Intervention – As long as the causal graph remains acyclic, we can construct derived structural equation models using arbitrary algebraic manipulations of the system of equations. For instance, we can clamp a variable to a constant value by rewriting the right-hand side of the corresponding equation as the specified constant value.
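The following toy sketch illustrates both operations on a miniature structural equation model, assuming invented stand-in functions (the real f1 and f7 of Figure 2 are unknown); clamping a variable simply overrides its equation with a constant.

```python
# A toy sketch of the two operations on a structural equation model:
# simulation (evaluate equations in causal order) and intervention
# (clamp a variable to a constant). The functions below are invented
# stand-ins, not the unknown f1...f8 of the ad placement model.
import math
import random

def simulate(clamp=None):
    clamp = clamp or {}
    values = {}
    equations = [
        ("x", lambda v: random.gauss(0.0, 1.0)),               # query context
        ("q", lambda v: 1.0 / (1.0 + math.exp(-v["x"]))),      # click score
        ("s", lambda v: v["q"] > 0.5),                         # ad slate: show ad?
        ("y", lambda v: v["s"] and random.random() < v["q"]),  # user click
        ("z", lambda v: 0.5 if v["y"] else 0.0),               # revenue
    ]
    for name, f in equations:
        # Clamping rewrites the right-hand side as a constant value.
        values[name] = clamp[name] if name in clamp else f(values)
    return values

print(simulate())                    # plain simulation
print(simulate(clamp={"s": True}))   # intervention: always display the ad
```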
The algebraic manipulation of the structural equation models provides a powerful language to describe interventions on a causal system. This is not a coincidence. Many aspects of the mathematical notation were invented to support causal inference in classical mechanics. However, we no longer have to interpret the variable values as physical quantities: the equations simply describe the flow of information in the causal model (Wiener, 1948).
3.2 The Isolation Assumption
Let us now turn our attention to the exogenous variables, that is, variables that never appear in the left hand side of an equation of the structural model. Leibniz’s principle of sufficient reason claims that there are no facts without causes. This suggests that the exogenous variables are the effects of a network of causes not expressed by the structural equation model. For instance, the user intent u and the ad inventory v in Figure 3 have temporal correlations because both users and advertisers worry about their budgets when the end of the month approaches. Any structural equation model should then be understood in the context of a larger structural equation model potentially describing all things in existence.
Ads served on a particular page contribute to the continued satisfaction of both users and advertisers, and therefore have an effect on their willingness to use the services of the publisher in the future. The ad placement structural equation model shown in Figure 2 only describes the causal dependencies for a single page and therefore cannot account for such effects. Consider however a very large structural equation model containing a copy of the page-level model for every web page ever served by the publisher. Figure 4 shows how we can thread the page-level models corresponding to pages served to the same user. Similarly we could model how advertisers track the performance and the cost of their advertisements and model how their satisfaction affects their future bids. The resulting causal graphs can be very complex. Part of this complexity results from time-scale differences. Thousands of search pages are served in a second. Each page contributes a little to the continued satisfaction of one user and a few advertisers. The accumulation of these contributions produces measurable effects after a few weeks.

Figure 4: Conceptually unrolling the user feedback loop by threading instances of the single page causal graph (Figure 3). Both the ad slate s_t and the user clicks y_t have an indirect effect on the user intent u_{t+1} associated with the next query.
Many of the functional dependencies expressed by the structural equation model are left unspecified. Without direct knowledge of these functions, we must reason using statistical data. The most fundamental statistical data is collected from repeated trials that are assumed independent. When we consider the large structural equation model of everything, we can only have one large trial producing a single data point.1 It is therefore desirable to identify repeated patterns of identical equations that can be viewed as repeated independent trials. Therefore, when we study a structural equation model representing such a pattern, we need to make an additional assumption that expresses the idea that the outcome of one trial does not affect the other trials. We call such an assumption an isolation assumption by analogy with thermodynamics.2 This can be achieved by assuming that the exogenous variables are independently drawn from an unknown but fixed joint probability distribution. This assumption cuts the causation effects that could flow through the exogenous variables. The noise variables are also exogenous variables acting as independent sources of randomness. The noise variables are useful to represent the conditional distribution P(effect | causes) using the equation effect = f(causes, e). Therefore, we also assume joint independence between all the noise variables and any of the named exogenous variables.3 For instance, in the case of the ad placement model shown in Figure 2, we assume that the joint distribution of the exogenous variables factorizes as

$$P(u, v, e_1, \dots, e_8) = P(u, v)\, P(e_1) \cdots P(e_8).$$

Since an isolation assumption is only true up to a point, it should be expressed clearly and remain under constant scrutiny. We must therefore measure additional performance metrics that reveal how well the isolation assumption holds. For instance, the ad placement structural equation model and the corresponding causal graph (figures 2 and 3) do not take user feedback or advertiser feedback into account. Measuring the revenue is not enough because we could easily generate revenue at the expense of the satisfaction of the users and advertisers. When we evaluate interventions under such an isolation assumption, we also need to measure a battery of additional quantities that act as proxies for the user and advertiser satisfaction. Noteworthy examples include ad relevance estimated by human judges, and advertiser surplus estimated from the auctions (Varian, 2009).
1. See also the discussion on reinforcement learning, Section 3.5.
2. The concept of isolation is pervasive in physics. An isolated system in thermodynamics (Reichl, 1998, Section 2.D) or a closed system in mechanics (Landau and Lifshitz, 1969, §5) evolves without exchanging mass or energy with its surroundings. Experimental trials involving systems that are assumed isolated may differ in their initial setup and therefore have different outcomes. Assuming isolation implies that the outcome of each trial cannot affect the other trials.
3. Rather than letting two noise variables display measurable statistical dependencies because they share a common cause, we prefer to name the common cause and make the dependency explicit in the graph.
$$P(u, v, x, a, b, q, s, c, y, z) \;=\; P(u, v)\; P(x \mid u)\; P(a \mid x, v)\; P(b \mid x, v)\; P(q \mid x, a)\; P(s \mid a, q, b)\; P(c \mid a, q, b)\; P(y \mid s, u)\; P(z \mid y, c)$$

where the factors correspond respectively to the exogenous variables, the query, the eligible ads, the bids, the scores, the ad slate, the prices, the clicks, and the revenue.

Figure 5: Markov factorization of the structural equation model of Figure 2.

Figure 6: Bayesian network associated with the Markov factorization shown in Figure 5.

3.3 Markov Factorization

Conceptually, we can draw a sample of the exogenous variables using the distribution specified by the isolation assumption, and we can then generate values for all the remaining variables by simulating the structural equation model.
This process defines a generative probabilistic model representing the joint distribution of all variables in the structural equation model. The distribution readily factorizes as the product of the joint probability of the named exogenous variables, and, for each equation in the structural equation model, the conditional probability of the effect given its direct causes (Spirtes et al., 1993; Pearl, 2000). As illustrated by figures 5 and 6, this Markov factorization connects the structural equation model that describes causation, and the Bayesian network that describes the joint probability distribution followed by the variables under the isolation assumption.4 Structural equation models and Bayesian networks appear so intimately connected that it could be easy to forget the differences. The structural equation model is an algebraic object. As long as the causal graph remains acyclic, algebraic manipulations are interpreted as interventions on the causal system. The Bayesian network is a generative statistical model representing a class of joint probability distributions, and, as such, does not support algebraic manipulations. However, the symbolic representation of its Markov factorization is an algebraic object, essentially equivalent to the structural equation model.
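As a toy illustration of such a Markov factorization, the sketch below evaluates the joint probability of one configuration as the product of each variable's conditional probability given its direct causes; the three-variable chain and all numbers are invented for illustration.

```python
# A tiny sketch of evaluating a Markov factorization: the joint
# probability of one configuration is the product of each variable's
# conditional probability given its direct causes. The chain below
# (intent -> query type -> click) and all numbers are made up.
P_u = {"casual": 0.7, "commercial": 0.3}
P_x_given_u = {("casual", "nav"): 0.8, ("casual", "buy"): 0.2,
               ("commercial", "nav"): 0.1, ("commercial", "buy"): 0.9}
P_y_given_x = {("nav", 0): 0.95, ("nav", 1): 0.05,
               ("buy", 0): 0.7, ("buy", 1): 0.3}

def joint(u, x, y):
    return P_u[u] * P_x_given_u[(u, x)] * P_y_given_x[(x, y)]

print(joint("commercial", "buy", 1))  # 0.3 * 0.9 * 0.3 = 0.081
```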
3.4 Identification, Transportation, and Transfer Learning
Consider a causal system represented by a structural equation model with some unknown functional dependencies. Subject to the isolation assumption, data collected during the operation of this system follows the distribution described by the corresponding Markov factorization. Let us first assume that this data is sufficient to identify the joint distribution of the subset of variables we can observe. We can intervene on the system by clamping the value of some variables. This amounts to replacing the right-hand side of the corresponding structural equations by constants. The joint distribution of the variables is then described by a new Markov factorization that shares many factors with the original Markov factorization. Which conditional probabilities associated with this new distribution can we express using only conditional probabilities identified during the observation of the original system? This is called the identifiability problem. More generally, we can consider arbitrarily complex manipulations of the structural equation model, and we can perform multiple experiments involving different manipulations of the causal system. Which conditional probabilities pertaining to one experiment can be expressed using only conditional probabilities identified during the observation of other experiments? This is called the transportability problem. Pearl’s do-calculus completely solves the identifiability problem and provides useful tools to address many instances of the transportability problem (see Pearl, 2012). Assuming that we know the conditional probability distributions involving observed variables in the original structural equation model, do-calculus allows us to derive conditional distributions pertaining to the manipulated structural equation model.
Unfortunately, we must further distinguish the conditional probabilities that we know (because we designed them) from those that we estimate from empirical data. This distinction is important because estimating the distribution of continuous or high cardinality variables is notoriously difficult. Furthermore, do-calculus often combines the estimated probabilities in ways that amplify estimation errors. This happens when the manipulated structural equation model exercises the variables in ways that were rarely observed in the data collected from the original structural equation model.
4. Bayesian networks are directed graphs representing the Markov factorization of a joint probability distribution: the arrows no longer have a causal interpretation.
Therefore we prefer to use much simpler causal inference techniques (see sections 4.1 and 4.2). Although these techniques do not have the completeness properties of do-calculus, they combine estimation and transportation in a manner that facilitates the derivation of useful confidence intervals.
3.5 Special Cases
Three special cases of causal models are particularly relevant to this work.

• In the multi-armed bandit (Robbins, 1952), a user-defined policy function p determines the distribution of action a ∈ {1…K}, and an unknown reward function r determines the distribution of the outcome y given the action a (Figure 7). In order to maximize the accumulated rewards, the player must construct policies p that balance the exploration of the action space with the exploitation of the best action identified so far (Auer et al., 2002; Audibert et al., 2007; Seldin et al., 2012).
• The contextual bandit problem (Langford and Zhang, 2008) significantly increases the complexity of multi-armed bandits by adding one exogenous variable x to the policy function p and the reward function r (Figure 8).
• Both the multi-armed bandit and the contextual bandit are special cases of reinforcement learning (Sutton and Barto, 1998). In essence, a Markov decision process is a sequence of contextual bandits where the context is no longer an exogenous variable but a state variable that depends on the previous states and actions (Figure 9). Note that the policy function p, the reward function r, and the transition function s are independent of time. All the time dependencies are expressed using the states s_t.

These special cases have increasing generality. Many simple structural equation models can be reduced to a contextual bandit problem using appropriate definitions of the context x, the action a and the outcome y. For instance, assuming that the prices c are discrete, the ad placement structural equation model shown in Figure 2 reduces to a contextual bandit problem with context (u, v), actions (s, c) and reward z. Similarly, given a sufficiently intricate definition of the state variables s_t, all structural equation models with discrete variables can be reduced to a reinforcement learning problem. Such reductions lose the fine structure of the causal graph. We show in Section 5 how this fine structure can in fact be leveraged to obtain more information from the same experiments.

Modern reinforcement learning algorithms (see Sutton and Barto, 1998) leverage the assumption that the policy function, the reward function, the transition function, and the distributions of the corresponding noise variables, are independent of time. This invariance property provides great benefits when the observed sequences of actions and rewards are long in comparison with the size of the state space. Only Section 7 in this contribution presents methods that take advantage of such an invariance. The general question of leveraging arbitrary functional invariances in causal graphs is left for future work.
4. Counterfactual Analysis
We now return to the problem of formulating and answering questions about the value of proposed changes of a learning system. Assume for instance that we consider replacing the score computation
a = p(e)       Action a ∈ {1…K}
y = r(a, e′)   Reward y ∈ ℝ
Figure 7: Structural equation model for the multi-armed bandit problem. The policy p selects a discrete action a, and the reward function r determines the outcome y. The noise variables e and e′ represent independent sources of randomness useful to model probabilistic dependencies.
a = p(x, e)       Action a ∈ {1…K}
y = r(x, a, e′)   Reward y ∈ ℝ

Figure 8: Structural equation model for the contextual bandit problem. Both the action and the reward depend on an exogenous context variable x.

a_t = p(s_{t−1}, e_t)           Action
y_t = r(s_{t−1}, a_t, e′_t)     Reward y_t ∈ ℝ
s_t = s(s_{t−1}, a_t, e′′_t)    Next state

Figure 9: Structural equation model for reinforcement learning. The above equations are replicated for all t ∈ {0, …, T}. The context is now provided by a state variable s_{t−1} that depends on the previous states and actions.
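As an illustration of the multi-armed bandit model of Figure 7, here is a minimal sketch with an epsilon-greedy policy p; the true reward probabilities are invented, and epsilon-greedy is one simple way (not necessarily the paper's) to balance exploration and exploitation.

```python
# A minimal sketch of the Figure 7 model with an epsilon-greedy policy p:
# explore a random arm with probability eps, otherwise exploit the arm
# with the best empirical reward. The true click rates are invented.
import random

TRUE_RATES = [0.02, 0.05, 0.03]        # unknown reward function r
counts = [0] * len(TRUE_RATES)
totals = [0.0] * len(TRUE_RATES)

def policy(eps=0.1):
    if random.random() < eps or 0 in counts:
        return random.randrange(len(TRUE_RATES))   # explore
    return max(range(len(TRUE_RATES)), key=lambda a: totals[a] / counts[a])

for _ in range(20000):
    a = policy()                                        # action a = p(e)
    y = 1.0 if random.random() < TRUE_RATES[a] else 0.0  # reward y = r(a, e')
    counts[a] += 1
    totals[a] += y

print("pulls per arm:", counts)
print("estimated rates:",
      [round(totals[a] / counts[a], 3) for a in range(len(TRUE_RATES))])
```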
…
References
| | Author | volume | Date Value | title | type | journal | titleUrl | doi | note | year |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2013 CounterfactualReasoningandLearn | Léon Bottou, Joaquin Quiñonero-Candela, Jonas Peters, Denis X. Charles, D. Max Chickering, Elon Portugaly, Dipankar Ray, Patrice Simard, Ed Snelson | 14(1) | | Counterfactual Reasoning and Learning Systems: The Example of Computational Advertising | | The Journal of Machine Learning Research | | | | 2013 |