In messages to be published on social networks, the system automatically protects words that may infringe the author’s privacy by replacing them with more general ones. Depending on how well readers know the author, they can access a more or less detailed version of the message. Only the social network operator holds the most protected version of the message.
Political opinions, diseases, work skills, relationships… Personal information of this sort can be deduced simply by reading the messages that users publish on social networks. Even if a profile contains no details and all the privacy settings offered by the network are applied, messages contain data that give clues about our private lives that we may not want everybody to know. What is more, the network has access to this information forever: it effectively owns it and can sell it to third parties. Indeed, some networks pass this personal information on to other companies that, for example, want to screen the profiles of candidates for certain job vacancies, or to insurance companies that want to know whether prospective clients are as healthy as they claim.
In the face of these user privacy problems and the strategies of the networks to exploit them, a URV research team consisting of David Sánchez and Alexandre Viejo – from the research group CRISES of the Department of Computer Engineering and Mathematics – designed a system that automatically adapts messages published on the social networks so that they reach readers with one level of detail or another, depending on how well they know the author. Only the network operator has access to the most protected version of the message.
Users, then, can choose which elements they regard as private and with whom they wish to share them. When they write a message, the system automatically changes the words that it judges to infringe the author’s privacy, so the message published on the network is a ‘filtered’ one that gives the author maximum protection, while readers see versions with differing levels of detail depending on how well they know the author and what the author has authorized. To do this, the system encrypts more detailed versions of the protected terms and conceals them in images attached to the message. When readers authorized by the author access the message, the system automatically delivers a more detailed version (or even the original), replacing the protected terms published on the network with the ones stored in the image that the author has authorized for them. Authorized readers are therefore sent versions with more or less detail depending on how well they know the author. These versions are quite different from the one published on the network and seen by external users, which will not contain private data under any circumstances.
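The mechanism described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors’ implementation: the generalization hierarchy is a hand-built toy dictionary (the paper uses automatic sanitization), and the XOR “cipher” merely stands in for a real symmetric encryption scheme and for the steganographic embedding in an image.

```python
import base64
import hashlib

# Toy generalization hierarchy: each sensitive term maps to increasingly
# general substitutes. Index 0 = the original term; higher = more protected.
# (Illustrative data only -- an assumption, not the system's actual ontology.)
HIERARCHY = {
    "AIDS": ["AIDS", "immune disorder", "disease", "condition"],
    "Barcelona": ["Barcelona", "Catalonia", "Spain", "Europe"],
}

def sanitize(message: str, level: int) -> str:
    """Replace each sensitive term with its generalization at `level`."""
    for term, chain in HIERARCHY.items():
        idx = min(level, len(chain) - 1)
        message = message.replace(term, chain[idx])
    return message

def xor_cipher(data: bytes, key: str) -> bytes:
    # NOT real encryption -- a toy XOR keystream standing in for the
    # symmetric cipher that would protect the hidden versions.
    block = hashlib.sha256(key.encode()).digest()
    keystream = (block * (len(data) // len(block) + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, keystream))

def publish(message: str, reader_keys: dict) -> tuple:
    """Return the most-protected public text, plus encrypted finer-grained
    versions (one per trust level) that would be hidden in an attached image."""
    public = sanitize(message, level=3)  # maximally generalized version
    hidden = {}
    for level, key in reader_keys.items():
        version = sanitize(message, level)
        hidden[level] = base64.b64encode(xor_cipher(version.encode(), key)).decode()
    return public, hidden

def read(hidden: dict, level: int, key: str) -> str:
    """An authorized reader decrypts the version matching their trust level."""
    blob = base64.b64decode(hidden[level])
    return xor_cipher(blob, key).decode()
```

For example, publishing "I was treated for AIDS in Barcelona" with keys for a close friend (level 0) and an acquaintance (level 2) would expose only "I was treated for condition in Europe" on the network, while each authorized reader recovers the version matching their trust level from the hidden payload.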
At present, the researchers have designed the system, which has been published in the scientific journal Expert Systems with Applications. It can be implemented as an application (mobile or desktop) or as a connector for web browsers.
Reference: Alexandre Viejo, David Sánchez, “Enforcing transparent access to private content in social networks by means of automatic sanitization”, Expert Systems with Applications, Volume 42, Issue 23, 15 December 2015, pages 9366–9378.