Call For Papers

This workshop aims to foster discussion across several fields of interest to our growing community of recommender system builders. On the practical side, we encourage sharing of architecture and algorithm best practices for large-scale recommender systems as they are deployed in industry, along with particular challenges and pain points; we hope this will guide future research that is system-aware. On the research side, we focus on bringing in ideas and evaluations for scaling beyond the current generation of big data systems while improving recommendation metrics. We believe the brightest minds from both sides will benefit mutually from the discussions and accelerate problem solving.

We invite submissions in two formats: extended abstracts (1-8 pages) or slides (15-20 slides). By accepting slides, we hope to lower the writing burden for industry participants. However, since slide submissions are sometimes short on detail, we may request clarification or additional editing as a condition of acceptance. We encourage contributions on new theoretical research, practical solutions to particular aspects of scaling a recommender, best practices in scaling evaluation systems, and creative new applications of big data to large-scale recommendation systems.

Our topics of interest include, but are not limited to:

Data & Algorithms in Large-scale RS:

  • Scalable deep learning algorithms
  • Big data processing in offline/near-line/online modules
  • Data platforms for recommendation
  • Large, unstructured and social data for recommendation
  • Heterogeneous data fusion
  • Sampling techniques
  • Parallel algorithms
  • Algorithm validation and correctness checking

Systems for Large-scale RS:

  • Architecture
  • Programming models
  • Cloud platforms best suited to recommenders
  • Real-time recommendation
  • Online learning for recommendation
  • Scalability and robustness

Evaluation of Large-scale RS:

  • Comparison of algorithms’ applicability and effectiveness across domains
  • Consistency between offline optimization and online measurement
  • Alignment of evaluation metrics with product/project goals
  • Large-scale user studies
  • A/B testing methodology

Submissions

Submissions will be made through EasyChair: https://easychair.org/conferences/?conf=lsrs2017.
