Ad Recommendation Engine
An Ad Recommendation Engine is an item recommendation system that implements an ad recommendation algorithm to solve an ad recommendation task.
- AKA: Ad Recommender.
- Context:
- It can (often) be integrated into Online Advertising Platforms.
- It can (often) utilize A/B Testing to refine its recommendation strategies (a minimal sketch appears after this list).
- It can (often) face challenges related to Privacy Concerns and Data Security.
- It can (typically) be subjected to Regulatory Compliance issues, especially in sectors like Healthcare Advertising.
- It can range from being an Offline Ad Recommendation System to being a Real-Time Ad Recommendation System.
- It can range from being a Personalized Ad Recommendation System to being a Non-Personalized Ad Recommendation System.
- It can range from being a Custom Ad Recommendation System to being a 3rd-Party Ad Recommender Platform-based Ad Recommendation System.
- …
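The A/B Testing point above can be made concrete with a small exploration/exploitation sketch. The following Python snippet (all class and method names are hypothetical, not drawn from any particular platform) tracks per-ad click-through rates and recommends the best-performing ad while occasionally exploring alternatives:

```python
import random
from collections import defaultdict

class EpsilonGreedyAdRecommender:
    """Minimal epsilon-greedy sketch: exploit the ad with the highest
    observed CTR, explore a random ad with probability epsilon.
    All names here are illustrative assumptions."""

    def __init__(self, ad_ids, epsilon=0.1):
        self.ad_ids = list(ad_ids)
        self.epsilon = epsilon
        self.impressions = defaultdict(int)  # ad_id -> times shown
        self.clicks = defaultdict(int)       # ad_id -> times clicked

    def ctr(self, ad_id):
        shown = self.impressions[ad_id]
        return self.clicks[ad_id] / shown if shown else 0.0

    def recommend(self):
        if random.random() < self.epsilon:
            return random.choice(self.ad_ids)  # explore
        return max(self.ad_ids, key=self.ctr)  # exploit

    def record(self, ad_id, clicked):
        self.impressions[ad_id] += 1
        if clicked:
            self.clicks[ad_id] += 1

# Hypothetical usage:
engine = EpsilonGreedyAdRecommender(["ad_a", "ad_b", "ad_c"])
shown_ad = engine.recommend()
engine.record(shown_ad, clicked=True)
```

A production A/B test would additionally apply a statistical significance test before adopting a variant; this sketch only shows the measurement-and-selection feedback loop.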
- Example(s):
- Counter-Example(s):
- A Song Recommender Engine, which recommends media items to users rather than advertisements.
- A Random Ad Placement System, which places advertisements without any underlying recommendation logic.
- …
- See: User Profiling in Advertising, Ethical Considerations in Ad Recommendations, Targeted Advertising.
References
2015
- (Theocharous et al., 2015) ⇒ Georgios Theocharous, Philip S. Thomas, and Mohammad Ghavamzadeh. (2015). “Ad Recommendation Systems for Life-time Value Optimization.” In: Proceedings of the 24th International Conference on World Wide Web, pp. 1305-1310.
- ABSTRACT: The main objective in the ad recommendation problem is to find a strategy that, for each visitor of the website, selects the ad that has the highest probability of being clicked. This strategy could be computed using supervised learning or contextual bandit algorithms, which treat two visits of the same user as two separate independent visitors, and thus, optimize greedily for a single step into the future. Another approach would be to use reinforcement learning (RL) methods, which differentiate between two visits of the same user and two different visitors, and thus, optimizes for multiple steps into the future or the life-time value (LTV) of a customer. While greedy methods have been well-studied, the LTV approach is still in its infancy, mainly due to two fundamental challenges: how to compute a good LTV strategy and how to evaluate a solution using historical data to ensure its "safety" before deployment. In this paper, we tackle both of these challenges by proposing to use a family of off-policy evaluation techniques with statistical guarantees about the performance of a new strategy. We apply these methods to a real ad recommendation problem, both for evaluating the final performance and for optimizing the parameters of the RL algorithm. Our results show that our LTV optimization algorithm equipped with these off-policy evaluation techniques outperforms the greedy approaches. They also give fundamental insights on the difference between the click through rate (CTR) and LTV metrics for performance evaluation in the ad recommendation problem.
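The "family of off-policy evaluation techniques" the abstract refers to includes importance-sampling (inverse propensity scoring, IPS) estimators. The following Python sketch shows the basic IPS point estimator under simplifying assumptions (single-step reward, logging propensities recorded with each impression); the data layout and names are illustrative, not the authors' code:

```python
import numpy as np

def ips_estimate(logged_data, target_policy):
    """Inverse propensity scoring: estimate the expected reward of
    target_policy from logs collected under a different logging policy.
    Each log entry is (context, shown_ad, reward, logging_prob), where
    logging_prob is the probability the logging policy assigned to the
    shown ad -- an assumed data layout."""
    weighted = [
        reward * target_policy(context, ad) / logging_prob
        for context, ad, reward, logging_prob in logged_data
    ]
    return float(np.mean(weighted))

# Hypothetical usage: evaluate a uniform-random policy over 3 ads
# using two logged impressions from a 50/50 logging policy.
logs = [({"segment": "new"}, "ad_a", 1.0, 0.5),
        ({"segment": "new"}, "ad_b", 0.0, 0.5)]
uniform = lambda context, ad: 1.0 / 3
print(ips_estimate(logs, uniform))
```

The statistical "safety" guarantees discussed in the paper come from concentration bounds on such estimators; this sketch omits them and returns only the point estimate.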