Ad Recommendation Task
An Ad Recommendation Task is an item recommendation task that involves selecting the most appropriate advertisement(s) to display to a given user.
- Context:
- It can (typically) aim to maximize User Engagement and Click-Through Rate (CTR).
- It can (typically) be solved by an Ad Recommendation System (that implements an ad recommendation algorithm; see the sketch after this list).
- It can (typically) require balancing the goals of Advertisers and the preferences of Users.
- It can be influenced by Regulatory Frameworks, particularly in sectors with sensitive data.
- It can impact User Experience on a platform, with potential effects on Brand Perception and User Retention.
- It can (often) require continuous optimization and testing, such as through A/B Testing.
- It can require consideration of the Ethical Implications of ad targeting and personalization.
- …
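As an illustration of the greedy, CTR-maximizing formulation referenced in the list above, the following is a minimal sketch (not from any cited source) of an epsilon-greedy bandit over a small ad inventory; the ad IDs, the simulated click model, and all parameters are illustrative assumptions.

```python
# Minimal epsilon-greedy ad-selection sketch; inventory and CTRs are assumed.
import random
from collections import defaultdict

ADS = ["travel_deal", "sneaker_sale", "credit_card"]  # hypothetical inventory
EPSILON = 0.1                                          # exploration rate

clicks = defaultdict(int)       # per-ad click counts
impressions = defaultdict(int)  # per-ad impression counts

def estimated_ctr(ad):
    """Empirical click-through rate, with an optimistic prior for unseen ads."""
    return clicks[ad] / impressions[ad] if impressions[ad] else 1.0

def select_ad():
    """Epsilon-greedy: usually exploit the best-CTR ad, occasionally explore."""
    if random.random() < EPSILON:
        return random.choice(ADS)
    return max(ADS, key=estimated_ctr)

def record_feedback(ad, clicked):
    """Update counts after observing whether the user clicked."""
    impressions[ad] += 1
    if clicked:
        clicks[ad] += 1

# Toy simulation with assumed true CTRs, purely for illustration.
TRUE_CTR = {"travel_deal": 0.06, "sneaker_sale": 0.03, "credit_card": 0.01}
for _ in range(10_000):
    ad = select_ad()
    record_feedback(ad, random.random() < TRUE_CTR[ad])

print({ad: round(estimated_ctr(ad), 3) for ad in ADS})
```

A sketch like this optimizes each impression greedily for a single step; the life-time value (LTV) approach discussed in the reference below instead optimizes over a user's multi-visit trajectory.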
- Example(s):
- Selecting targeted ads for a user on a Social Media Platform based on their interests and interactions.
- Displaying personalized travel deal ads to a user on a Travel Website based on their past booking history.
- …
- Counter-Example(s):
- A Product Recommendation Task.
- A Content Recommendation Task.
- A Music Recommendation Task, which focuses on suggesting music based on user preferences, not advertisements.
- A Search Engine Optimization Task, which is about improving website visibility in search engine results, not ad targeting.
- …
- See: Ad Recommendation System, User Profiling in Advertising, Ethical Considerations in Ad Recommendations, Targeted Advertising.
References
2015
- (Theocharous et al., 2015) ⇒ Georgios Theocharous, Philip S. Thomas, and Mohammad Ghavamzadeh. (2015). “Ad Recommendation Systems for Life-time Value Optimization.” In: Proceedings of the 24th International Conference on World Wide Web (WWW 2015), pp. 1305-1310.
- ABSTRACT: The main objective in the ad recommendation problem is to find a strategy that, for each visitor of the website, selects the ad that has the highest probability of being clicked. This strategy could be computed using supervised learning or contextual bandit algorithms, which treat two visits of the same user as two separate independent visitors, and thus, optimize greedily for a single step into the future. Another approach would be to use reinforcement learning (RL) methods, which differentiate between two visits of the same user and two different visitors, and thus, optimizes for multiple steps into the future or the life-time value (LTV) of a customer. While greedy methods have been well-studied, the LTV approach is still in its infancy, mainly due to two fundamental challenges: how to compute a good LTV strategy and how to evaluate a solution using historical data to ensure its "safety" before deployment. In this paper, we tackle both of these challenges by proposing to use a family of off-policy evaluation techniques with statistical guarantees about the performance of a new strategy. We apply these methods to a real ad recommendation problem, both for evaluating the final performance and for optimizing the parameters of the RL algorithm. Our results show that our LTV optimization algorithm equipped with these off-policy evaluation techniques outperforms the greedy approaches. They also give fundamental insights on the difference between the click through rate (CTR) and LTV metrics for performance evaluation in the ad recommendation problem.
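The off-policy evaluation step described in the abstract can be illustrated with a basic importance-sampling estimator over logged bandit data. Theocharous et al. use a family of estimators with statistical guarantees, so the following is only a minimal sketch under simplifying assumptions; the logged records, context fields, and target policy below are hypothetical.

```python
# Minimal off-policy evaluation sketch via importance sampling, assuming
# logged records of the form (context, ad, p_logging(ad|context), reward).
from typing import Callable, Sequence, Tuple

LoggedStep = Tuple[dict, str, float, float]  # (context, ad, logging prob, reward)

def importance_sampling_estimate(
    log: Sequence[LoggedStep],
    target_prob: Callable[[dict, str], float],
) -> float:
    """Estimate the target policy's expected reward from logged data."""
    total = 0.0
    for context, ad, p_log, reward in log:
        weight = target_prob(context, ad) / p_log  # likelihood ratio
        total += weight * reward
    return total / len(log)

# Toy logged data from a uniform-random logging policy over two ads.
log = [
    ({"segment": "traveler"}, "travel_deal", 0.5, 1.0),
    ({"segment": "traveler"}, "credit_card", 0.5, 0.0),
    ({"segment": "student"}, "credit_card", 0.5, 0.0),
    ({"segment": "student"}, "travel_deal", 0.5, 1.0),
]

# Hypothetical target policy: always show the travel deal.
def always_travel(context: dict, ad: str) -> float:
    return 1.0 if ad == "travel_deal" else 0.0

print(importance_sampling_estimate(log, always_travel))  # 1.0 on this toy log
```

Because the estimate uses only historical data, a new strategy can be checked for "safety" before deployment, which is the role such estimators play in the cited work.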