MoST-Rec will take place on November 7 as a half-day workshop colocated with the 28th ACM International Conference on Information and Knowledge Management (CIKM’19).
Time & Location: From 2:00 PM to 5:00 PM in Room 303A
The workshop program will start with a keynote talk on “The Impact of Personalization Services on Society” by Dr. Frank Hopfgartner, University of Sheffield, UK. This will be followed by short (15-minute) presentations of the five research papers accepted through the open call, and an open poster/discussion session allowing researchers to interact and discuss the presented papers in more detail.
For registration, please visit www.cikm2019.net/registration.html.
Accepted Papers
Model Selection and Hyperparameter Tuning In Maps Query Auto-Completion Ranking
Authors: Bhagirath Addepalli, Hua Li and Delbert Dueck
Abstract: In this article, we consider the ranking problem in Maps Query Auto-Complete and share insights on making ranking improvements. Query auto-completion (QAC) on Maps differs from traditional auto-completion in that it involves both query and entity retrieval and ranking. The QAC system on a Maps product is typically tasked with recommending up to five business, place, or address entities and/or completed query suggestions for a user-typed query prefix.
Several ideas were considered for ranking improvements. For conciseness, we examine the impact and relative importance of the following factors: a) ranking problem formulation, b) training data generation, size, and instance, c) model type and ensembling, d) new and high-value features, e) hyperparameter optimization, and f) data freshness and distribution drift. The performance of these factors was evaluated relative to a simple baseline model trained on Bing query logs. Incorporating the factors helped achieve approximately 37% of the maximum possible gain. We detail the intuition behind the performance of the various factors, along with the practical aspects and constraints of model development.
Cite paper: Bhagirath Addepalli, Hua Li and Delbert Dueck, “Model Selection and Hyperparameter Tuning In Maps Query Auto-Completion Ranking,” Workshop on Model Selection and Parameter Tuning in Recommender Systems (MoST-Rec) @ CIKM’19, Beijing, China, November 2019.
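As a rough illustration of the kind of pipeline the abstract describes, the sketch below trains a gradient-boosted ranker over per-candidate features and runs a small random hyperparameter search scored by NDCG@5. The data, feature set, and search space are hypothetical and stand in for the authors' actual setup.

```python
# Hypothetical QAC ranking sketch: a learning-to-rank model with a tiny random
# hyperparameter search. Not the authors' system; data and features are synthetic.
import numpy as np
import lightgbm as lgb
from sklearn.metrics import ndcg_score

rng = np.random.default_rng(0)

# Toy data: 200 query prefixes, 5 candidate suggestions each, 4 features per
# candidate (e.g. prefix-match score, popularity, distance, freshness).
n_queries, n_candidates, n_features = 200, 5, 4
X = rng.normal(size=(n_queries * n_candidates, n_features))
y = rng.integers(0, 2, size=n_queries * n_candidates)   # 1 = clicked suggestion
group = np.full(n_queries, n_candidates)                 # candidates per query

best_score, best_params = -np.inf, None
for _ in range(10):                                      # small random search
    params = {
        "objective": "lambdarank",
        "n_estimators": int(rng.integers(50, 300)),
        "learning_rate": float(rng.uniform(0.01, 0.2)),
        "num_leaves": int(rng.integers(8, 64)),
    }
    model = lgb.LGBMRanker(**params)
    model.fit(X, y, group=group)
    scores = model.predict(X).reshape(n_queries, n_candidates)
    rel = y.reshape(n_queries, n_candidates)
    score = ndcg_score(rel, scores, k=5)                 # NDCG@5 on the toy set
    if score > best_score:
        best_score, best_params = score, params

print("best NDCG@5:", round(best_score, 3), "with", best_params)
```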
Parameter Tuning for Self-optimizing Software at Scale
Authors: Dmytro Pukhkaiev and Uwe Assmann
Abstract: The efficiency of self-optimizing systems depends heavily on their optimization strategies, e.g., the choice of an exact or an approximate solver. This choice, in turn, is influenced by numerous factors, such as re-optimization time, the size of the problem, optimality constraints, etc. Exact solvers are domain-independent and can guarantee optimality but scale poorly, while approximate solvers offer a “good-enough” solution in exchange for a lack of generality and dependence on parameters. In this paper we discuss the trade-offs between exact and approximate optimizers for solving a quality-based software selection and hardware mapping problem from the scalability perspective. We show that even a simple heuristic can compete with thoroughly developed exact solvers, provided its parameters are tuned effectively. Moreover, we discuss the robustness of the obtained algorithm configuration. Last but not least, we present a software product line for parameter tuning, which comprises the main features of this process and can serve as a platform for further research in the area of parameter tuning.
Cite paper: Dmytro Pukhkaiev and Uwe Assmann, “Parameter Tuning for Self-optimizing Software at Scale,” Workshop on Model Selection and Parameter Tuning in Recommender Systems (MoST-Rec) @ CIKM’19, Beijing, China, November 2019.
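To make the exact-versus-approximate trade-off concrete, here is a toy sketch (not the authors' system): a quality-based selection problem solved exactly by exhaustive enumeration and approximately by simulated annealing, whose solution quality depends on tuned parameters such as the start temperature and cooling rate.

```python
# Toy quality-based selection: pick one variant per component to maximize total
# quality under a resource budget. Exact enumeration vs. a tunable heuristic.
import itertools, math, random, time

random.seed(0)
N_COMPONENTS, N_VARIANTS, BUDGET = 10, 3, 22.0
quality = [[random.uniform(1, 10) for _ in range(N_VARIANTS)] for _ in range(N_COMPONENTS)]
cost    = [[random.uniform(1, 5)  for _ in range(N_VARIANTS)] for _ in range(N_COMPONENTS)]

def value(assignment):
    c = sum(cost[i][v] for i, v in enumerate(assignment))
    q = sum(quality[i][v] for i, v in enumerate(assignment))
    return q - 100.0 * max(0.0, c - BUDGET)      # soft penalty for exceeding the budget

# Exact solver: exhaustive enumeration -- optimal, but exponential in the components.
t0 = time.time()
exact = max(itertools.product(range(N_VARIANTS), repeat=N_COMPONENTS), key=value)
print(f"exact : quality {value(exact):.2f} in {time.time() - t0:.2f}s")

# Approximate solver: simulated annealing. Its quality depends on tuned parameters
# (start temperature, cooling rate, iterations) -- the knobs a tuning framework explores.
def anneal(temp=5.0, cooling=0.999, iters=20000):
    cur = [random.randrange(N_VARIANTS) for _ in range(N_COMPONENTS)]
    best = cur[:]
    for _ in range(iters):
        cand = cur[:]
        cand[random.randrange(N_COMPONENTS)] = random.randrange(N_VARIANTS)
        delta = value(cand) - value(cur)
        if delta >= 0 or random.random() < math.exp(min(delta / temp, 0.0)):
            cur = cand
            if value(cur) > value(best):
                best = cur[:]
        temp *= cooling
    return best

t0 = time.time()
approx = anneal()
print(f"approx: quality {value(approx):.2f} in {time.time() - t0:.2f}s")
```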
Bayesian Optimization for Selecting Efficient Machine Learning Models
Authors: Lidan Wang, Franck Dernoncourt and Trung Bui
Abstract: The performance of many machine learning models depends on their hyper-parameter settings. Bayesian Optimization has become a successful tool for hyper-parameter optimization of machine learning algorithms, aiming to identify optimal hyper-parameters during an iterative sequential process. However, most Bayesian Optimization algorithms are designed to select models for effectiveness only and ignore the important issue of model training efficiency. Given that both model effectiveness and training time are important for real-world applications, models selected for effectiveness alone may not meet the strict training time requirements necessary for deployment in a production environment. In this work, we present a unified Bayesian Optimization framework for jointly optimizing models for both prediction effectiveness and training efficiency. We propose an objective that captures the trade-off between these two metrics and demonstrate how to jointly optimize them in a principled Bayesian Optimization framework. Experiments on model selection for recommendation tasks indicate that models selected this way significantly improve training efficiency while maintaining strong effectiveness, compared to state-of-the-art Bayesian Optimization algorithms.
Cite paper: Lidan Wang, Franck Dernoncourt and Trung Bui, “Bayesian Optimization for Selecting Efficient Machine Learning Models,” Workshop on Model Selection and Parameter Tuning in Recommender Systems (MoST-Rec) @ CIKM’19, Beijing, China, November 2019.
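A minimal illustration of the joint-objective idea, using the off-the-shelf scikit-optimize library rather than the paper's own framework: a single scalar objective combines test error with a training-time penalty, and Bayesian Optimization searches the hyperparameter space of a gradient-boosted classifier. The weight ALPHA, the model, and the search space are assumptions for the sketch.

```python
# Hedged sketch: Bayesian Optimization of a combined effectiveness/training-time
# objective. Not the paper's framework; ALPHA and the search space are assumptions.
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from skopt import gp_minimize
from skopt.space import Integer, Real

X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ALPHA = 0.3                                     # weight on the training-time term

def objective(params):
    n_estimators, learning_rate = params
    model = GradientBoostingClassifier(n_estimators=int(n_estimators),
                                       learning_rate=float(learning_rate),
                                       random_state=0)
    t0 = time.time()
    model.fit(X_tr, y_tr)
    train_time = time.time() - t0
    acc = accuracy_score(y_te, model.predict(X_te))
    # Lower is better: penalize both low accuracy and long training time.
    return float((1.0 - acc) + ALPHA * np.log1p(train_time))

space = [Integer(20, 400, name="n_estimators"),
         Real(0.01, 0.3, prior="log-uniform", name="learning_rate")]
result = gp_minimize(objective, space, n_calls=15, random_state=0)
print("best params:", result.x, "objective:", round(result.fun, 4))
```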
Combination of Individual and Group Patterns For Time-sensitive Purchase Recommendation
Authors: Anton Lysenko, Egor Shikov and Klavdiya Bochenina
Abstract: Due to the availability of large amounts of data, recommender systems have quickly gained popularity in the banking sphere. However, time-sensitive recommender systems, which take into account the temporal behavior and recurrent activities of users to predict the expected time and category of the next purchase, are still an active field of research. Many researchers tend to use population-level features or their low-rank approximations because a client’s purchase history is very sparse, with few observations for some time intervals and product categories. Such approaches, however, inevitably lead to a loss of accuracy. In this paper we present a generative model of client spending based on the temporal point process framework, which takes into account the individual purchase histories of clients. We also tackle the problem of poor statistics for people with low transactional activity using effective intensity function parametrizations and several other techniques, such as smoothing daily intensity levels and taking population-level purchase rates into account for clients with a small number of transactions. The model is highly interpretable, and its training time scales linearly to millions of transactions and cubically to hundreds of thousands of users. Different temporal point process models were tested; our model, with all the incorporated modifications, showed the best results in terms of both the error of time prediction and the accuracy of category prediction.
Cite paper: Anton Lysenko, Egor Shikov and Klavdiya Bochenina, “Combination of Individual and Group Patterns For Time-sensitive Purchase Recommendation,” Workshop on Model Selection and Parameter Tuning in Recommender Systems (MoST-Rec) @ CIKM’19, Beijing, China, November 2019.
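The shrinkage idea for clients with few transactions can be sketched as follows (illustrative only, not the paper's generative model): per-category purchase intensities estimated from an individual history are blended with population-level rates, with the blend weight growing with the client's transaction count.

```python
# Illustrative blending of individual and population-level purchase intensities.
# The shrinkage constant, categories, and rates are assumptions for the sketch.
import numpy as np

def blended_intensity(client_counts, client_days, population_rate, shrinkage=10.0):
    """
    client_counts   : purchases per category for this client
    client_days     : observation window length in days
    population_rate : average purchases/day per category across all clients
    shrinkage       : pseudo-count controlling how quickly individual data is trusted
    """
    client_rate = client_counts / client_days          # purchases per day
    n = client_counts.sum()
    w = n / (n + shrinkage)                             # more data -> weight shifts to individual
    return w * client_rate + (1.0 - w) * population_rate

population_rate = np.array([0.05, 0.20, 0.01])          # e.g. electronics, groceries, travel
sparse_client = blended_intensity(np.array([0, 2, 0]), 30.0, population_rate)
active_client = blended_intensity(np.array([1, 25, 0]), 30.0, population_rate)

for name, lam in [("sparse", sparse_client), ("active", active_client)]:
    # Under a constant-intensity (Poisson) assumption, expected days to the next
    # purchase in the top category is 1 / rate.
    print(name, "next category:", int(lam.argmax()),
          "expected days:", float(np.round(1.0 / lam.max(), 1)))
```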
Discovery Oriented Model Selection in Music Recommendation
Authors: Shubham Gautam, Pallavi Gupta, Satyajit Swain and Rohit Ranjan
Abstract: Music streaming services such as Spotify, Apple Music, Gaana, JioSaavn, YouTube Music, Wynk, etc. have become the go-to platforms for music listening in the Indian entertainment ecosystem, which spans 16+ vernacular regional languages besides English-language music. In such a diverse market with a huge music catalog across multiple languages, enabling music discovery is a challenge when relying purely on recommendations based on user listening history. In this paper, we present an approach to improve music discovery in a language, including long-tail content discovery and improved relevancy ranking of songs in a given recommendation set, through audio-fingerprinting-based techniques that tap into the inherent characteristics of a music audio file. We illustrate how model selection can be performed using appropriate parameter tuning strategies in machine-learning-based models, showing their effects in online as well as offline experimentation. We also present our evaluation approach on offline metrics and conclude with the trade-offs and balancing that need to be considered, as there is no one-size-fits-all approach given the diverse nature of music.
Cite paper: Shubham Gautam, Pallavi Gupta, Satyajit Swain and Rohit Ranjan, “Discovery Oriented Model Selection in Music Recommendation,” Workshop on Model Selection and Parameter Tuning in Recommender Systems (MoST-Rec) @ CIKM’19, Beijing, China, November 2019.
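As a hypothetical sketch of the re-ranking described above, the snippet below blends a listening-history relevance score with an audio-similarity score computed from content/fingerprint embeddings; the blend weights are the kind of parameters the paper tunes via offline and online experiments. All scores, embeddings, and weights here are synthetic.

```python
# Hypothetical discovery-oriented re-ranking: mix history-based relevance, audio
# similarity, and popularity so acoustically similar long-tail tracks can surface.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)
n_candidates, emb_dim = 8, 16

history_score = rng.uniform(0, 1, n_candidates)          # e.g. collaborative-filtering score
popularity    = rng.uniform(0, 1, n_candidates)          # normalized stream counts
audio_emb     = rng.normal(size=(n_candidates, emb_dim))  # content/fingerprint embeddings
user_profile  = rng.normal(size=(1, emb_dim))             # mean embedding of recent listens

audio_sim = cosine_similarity(audio_emb, user_profile).ravel()

# ALPHA trades off listening-history relevance against audio similarity; a small
# popularity weight keeps long-tail tracks competitive. Both are tunable knobs.
ALPHA, POP_WEIGHT = 0.6, 0.2
final_score = ALPHA * history_score + (1 - ALPHA) * audio_sim + POP_WEIGHT * popularity

ranking = np.argsort(-final_score)
print("re-ranked candidate indices:", ranking.tolist())
```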