Posts by Collection

publications

Robustness of Meta Matrix Factorization Against Strict Privacy Constraints

Peter Müllner, Dominik Kowald, and Elisabeth Lex. 43rd European Conference on IR Research (ECIR), 2021. [pdf, code, slides]

We explore the reproducibility of MetaMF, a meta matrix factorization framework introduced by Lin et al., and study the impact of meta learning on the accuracy of MetaMF’s recommendations. Also, we investigate the robustness of MetaMF against strict privacy constraints, i.e., how much data a user is willing to share with the recommender system. Our study shows that we can reproduce most of Lin et al.’s results, and that meta learning is essential for MetaMF’s robustness against strict privacy constraints.

Support the underground: characteristics of beyond-mainstream music listeners

Dominik Kowald, Peter Müllner, Eva Zangerle, Christine Bauer, Markus Schedl, and Elisabeth Lex. EPJ Data Science, 2021. [pdf, code]

To better understand the characteristics of beyond-mainstream music listeners, we analyze users of the Last.fm platform. Our analysis reveals four subgroups of beyond-mainstream music listeners that differ with respect to their preferred music and their demographic characteristics. Also, we find significant differences between the groups with respect to the quality of music recommendations. Specifically, our results show a positive correlation between a subgroup’s openness towards music listened to by members of other subgroups and recommendation accuracy.

Position Paper on Simulating Privacy Dynamics in Recommender Systems

Peter Müllner, Elisabeth Lex, and Dominik Kowald. Workshop on Simulation Methods for Recommender Systems co-located with the ACM Conference on Recommender Systems, 2021. [pdf]

In this position paper, we present a conceptual approach to integrate privacy into recommender system simulations, whose key elements are privacy agents. Also, we identify three critical topics for future research in privacy-aware recommender system simulations: (i) How could we model users’ privacy preferences and protect users from performing any privacy-threatening actions? (ii) To what extent do privacy agents modify the users’ document preferences? (iii) How do privacy preferences and privacy protections impact recommendations and the privacy of others?

Towards employing recommender systems for supporting data and algorithm sharing

Peter Müllner, Stefan Schmerda, Dieter Theiler, Stefanie Lindstaedt, and Dominik Kowald. 1st International Workshop on Data Economy (DE) co-located with the International Conference on emerging Networking EXperiments and Technologies (CoNEXT), 2022. [pdf, slides]

The efficient sharing of data and algorithms relies on the active interplay between users, data providers, and algorithm providers. We identify six recommendation scenarios for supporting data and algorithm sharing, four of which substantially differ from the traditional recommendation scenarios in e-commerce applications. We find that collaboration-based recommendations provide the most accurate recommendations in all scenarios. Moreover, the recommendation accuracy strongly depends on the specific scenario, e.g., algorithm recommendations for users are a more difficult problem than algorithm recommendations for datasets. Finally, the content-based approach generates the least popularity-biased recommendations that cover the most datasets and algorithms.

User Privacy in Recommender Systems

Peter Müllner. 45th European Conference on Information Retrieval (ECIR), 2023. [pdf, slides]

The utilization of user data for generating recommendations can pose severe threats to user privacy, e.g., the inadvertent leakage of user data to untrusted parties or other users. Instead of the plain application of privacy-enhancing techniques, which could lead to decreased accuracy, we tackle the problem itself, i.e., the utilization of user data. With this, we aim to equip recommender systems with means to provide high-quality recommendations that respect users’ privacy.

ReuseKNN: Neighborhood Reuse for Differentially Private KNN-Based Recommendations

Peter Müllner, Elisabeth Lex, Markus Schedl, and Dominik Kowald. ACM Transactions on Intelligent Systems and Technology (TIST), 2023. [pdf, code]

To reduce the privacy risk of users in k-nearest-neighbor recommender systems, existing work applies differential privacy by adding randomness to the neighbors’ ratings, which unfortunately reduces the accuracy of UserKNN. In this work, we introduce ReuseKNN, a novel differentially private KNN-based recommender system. The main idea is to identify small but highly reusable neighborhoods so that (i) only a minimal set of users requires protection with differential privacy and (ii) most users do not need to be protected with differential privacy since they are only rarely exploited as neighbors.

Differential privacy in collaborative filtering recommender systems: a review

Peter Müllner, Elisabeth Lex, Markus Schedl, and Dominik Kowald. Frontiers in Big Data - Recommender Systems, 2023. [pdf]

We review 26 recommendation approaches that apply differential privacy (DP), and we highlight research that improves the trade-off between recommendation quality and user privacy. Also, we classify these approaches based on how they apply DP, i.e., to the user representation, the model updates, or after model training. Finally, we discuss open issues of research on differentially private recommender systems, e.g., considering the relation between privacy and fairness, and the users’ different needs for privacy.

The Impact of Differential Privacy on Recommendation Accuracy and Popularity Bias

Peter Müllner, Elisabeth Lex, Markus Schedl, and Dominik Kowald. 46th European Conference on Information Retrieval (ECIR), 2024. [pdf, code, slides]

We study the impact of Differential Privacy (DP) on recommendation accuracy and popularity bias by comparing recommendation lists generated with and without DP. We find that nearly all users receive different recommendations than without DP and that large parts of the recommendation lists are different. Moreover, we observe a substantial drop in recommendation accuracy and a sharp increase in popularity bias. In particular, user groups that prefer unpopular items experience a substantial increase in popularity bias when DP is applied.

Making Alice Appear Like Bob: A Probabilistic Preference Obfuscation Method For Implicit Feedback Recommendation Models

Gustavo Escobedo, Marta Moscati, Peter Müllner, Simone Kopeinik, Dominik Kowald, and Elisabeth Lex. European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD), 2024. [pdf]

In this work, we introduce SBO, a novel probabilistic obfuscation method for user preference data designed to improve the accuracy–privacy trade-off for recommendations. Our experiments reveal that SBO outperforms comparable approaches with respect to the accuracy–privacy trade-off. Specifically, we can reduce the leakage of users’ protected attributes while maintaining on-par recommendation accuracy.

AI-Powered Immersive Assistance for Interactive Task Execution in Industrial Environments

Tomislav Duricic, Peter Müllner, Nicole Weidinger, Neven ElSayed, Dominik Kowald, and Eduardo Veas. 27th European Conference on Artificial Intelligence (ECAI), 2024. [pdf]

In this work, we demonstrate an immersive assistance system powered by Artificial Intelligence (AI) that supports users in performing complex tasks in industrial environments. Our system leverages a Virtual Reality (VR) environment that resembles a juice mixer setup. This digital twin of a physical setup simulates complex industrial machinery used to mix preparations or liquids (e.g., similar to the pharmaceutical industry) and includes various containers, sensors, pumps, and flow controllers. The core components of our multimodal AI assistant are a large language model and a speech-to-text model that process a video and audio recording of an expert performing the task in a VR environment. This demonstration showcases the potential of our AI-powered assistant to reduce cognitive load, increase productivity, and enhance safety in industrial environments.

Establishing and evaluating trustworthy AI: overview and research challenges

Dominik Kowald, Sebastian Scher, Viktoria Pammer-Schindler, Peter Müllner, Kerstin Waxnegger, Lea Demelius, Angela Fessl, Maximilian Toller, Inti Gabriel Mendoza Estrada, Ilija Simic, Vedran Sabol, Andreas Trügler, Eduardo Veas, Roman Kern, Tomislav Nad, and Simone Kopeinik. Frontiers in Big Data (Sec. Machine Learning and Artificial Intelligence), 2024. [pdf]

In this paper, we synthesize existing conceptualizations of trustworthy AI along six requirements: (1) human agency and oversight, (2) fairness and non-discrimination, (3) transparency and explainability, (4) robustness and accuracy, (5) privacy and security, and (6) accountability. For each one, we provide a definition, describe how it can be established and evaluated, and discuss requirement-specific research challenges. Finally, we conclude this analysis by identifying overarching research challenges across the requirements with respect to (1) interdisciplinary research, (2) conceptual clarity, (3) context-dependency, (4) dynamics in evolving systems, and (5) investigations in real-world contexts.

talks