Collaborative filtering systems are essentially social systems that base their recommendations on the judgments of a large number of people. Like other social systems, however, they are vulnerable to manipulation: malicious users with an interest in promoting one item, or in downplaying the popularity of another, can spread lies and propaganda through biased ratings. By doing this systematically, using multiple identities or by recruiting more people, attackers can inject shilling profiles into a collaborative recommender system and significantly degrade its robustness. Existing detection algorithms exploit certain characteristics of shilling profiles, but they suffer from low precision and require a large amount of training data. The aim of this work is to explore simpler unsupervised alternatives that exploit the nature of shilling profiles and can easily be plugged into a collaborative filtering framework to add robustness. Two statistical methods are developed and experimentally shown to detect shilling attacks with high accuracy.
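The abstract does not specify which two statistics the paper develops, but the idea of an unsupervised, pluggable detector can be illustrated with a minimal sketch. The example below is an assumption-laden toy, not the paper's method: it injects "push" profiles shaped like the classic average attack (maximum rating on one target item, average ratings as filler) and flags them with a single simple statistic, per-profile rating variance, since such profiles cluster tightly around one value while genuine users spread their ratings. The 0.5 cutoff and all matrix sizes are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 50 genuine users rate 20 items uniformly at random on a 1-5 scale.
n_users, n_items = 50, 20
genuine = rng.integers(1, 6, size=(n_users, n_items)).astype(float)

# Inject 5 "push" shilling profiles: the target item (index 0) gets the maximum
# rating, all filler items get the scale midpoint of 3 -- an average-attack
# shape, which leaves each profile with very low rating variance.
n_shills = 5
shills = np.full((n_shills, n_items), 3.0)
shills[:, 0] = 5.0
all_profiles = np.vstack([genuine, shills])

# Unsupervised statistic: per-profile rating variance. Genuine uniform raters
# have variance near 2.0; the injected profiles sit near 0.2.
profile_var = all_profiles.var(axis=1)

# Flag profiles whose variance falls below an empirical cutoff (0.5 here).
suspects = np.where(profile_var < 0.5)[0]
print("suspected shill profiles:", suspects)
```

With this setup the flagged indices are exactly the injected profiles (rows 50-54); a real detector would of course need statistics robust to attacks that deliberately mimic genuine rating variance.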
Subjects: 1.10 Information Retrieval; 12. Machine Learning and Discovery
Submitted: Apr 30, 2007