Project Description
TagUBig – Taming your Big Data
Every day, we generate vast amounts of data to and from various devices, and blindly share those data across different platforms, service providers and individual users all over the globe. Today, an individual cannot be sure that what s/he has shared is exactly what s/he intended to share, or with whom. Individuals have no control over their Big Data (BiDa) before it is released “in the wild”. On the other hand, an individual’s personal BiDa can be a useful source for understanding which data are used in the most common interactions between the individual and the various systems and, once we know this, which measures are most adequate to protect them. In summary, we can better tame what we know better. The main goal of TagUBig is to answer the following research question: can individuals use their BiDa to control and improve transparency, privacy and usability when interacting with an application?
Within the scope of this project, an access control decision model is being developed that automatically learns from an individual’s BiDa and from live data collected at every interaction the user makes, comprising the human, social and technical context at that moment (e.g., time, location, previous interactions, type of connection/device, etc.), and decides on the most transparent, secure and usable way to both submit each request to the application at hand and retrieve its results. The BiDa stays on the user’s side and is constantly analysed to control and improve the security of shared data and of interactions with an application, by: a) learning and adapting data protection and visualization according to the user’s purpose; b) detecting security vulnerabilities and inconsistencies in the user’s interactions with the application; and c) providing more fine-grained accountability and auditing information to better detect policy violations. A minimal sketch of such a context-dependent decision is given below.
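As a rough illustration of the kind of decision the model makes, the Python sketch below maps a snapshot of interaction context (time, location, device, connection type, previous interactions) to presentation and auditing choices. This is not the project’s implementation: the class and function names are hypothetical, and the hand-written rules stand in for what TagUBig would learn from the user’s BiDa.

```python
# Illustrative sketch only: names and rules are assumptions, not TagUBig code.
from dataclasses import dataclass


@dataclass
class InteractionContext:
    """Live context captured at the moment of a user's request."""
    hour: int                 # hour of day, 0-23
    location: str             # e.g. "home", "work", "public"
    device: str               # e.g. "personal_laptop", "shared_tablet"
    connection: str           # e.g. "trusted_wifi", "public_wifi"
    prior_interactions: int   # count of similar past requests


def decide_presentation(ctx: InteractionContext) -> dict:
    """Decide how transparently and securely to ask for and display results.

    A learned model would be trained on the user's BiDa; here simple
    hand-written rules illustrate the shape of the decision.
    """
    exposed = ctx.location == "public" or ctx.connection == "public_wifi"
    shared_device = ctx.device.startswith("shared")

    return {
        # Mask sensitive fields when the context looks exposed.
        "mask_sensitive_fields": exposed or shared_device,
        # Ask for explicit confirmation for unfamiliar interaction patterns.
        "require_confirmation": ctx.prior_interactions < 3,
        # Record extra audit detail to support accountability and
        # later detection of policy violations.
        "detailed_audit_log": exposed or shared_device,
    }


if __name__ == "__main__":
    ctx = InteractionContext(hour=14, location="public",
                             device="personal_laptop",
                             connection="public_wifi",
                             prior_interactions=1)
    print(decide_presentation(ctx))
```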
Abstract
The TagUBig framework comprises several components that automatically learn from an individual’s BiDa (Big Data) and from live data collected at every interaction the user makes, comprising the human, social and technical context at that moment (e.g., time, location, previous interactions, type of connection/device, etc.), and decide on the most transparent, secure and usable way to both submit each request to the application at hand and retrieve its results.
Funding Institution
FCT
Global Budget
50.000€
CINTESIS Budget
50.000€
Reference
IF/00693/2015
Duration
01/01/2017 – 31/12/2021
CINTESIS researchers involved
Joana Muchagata – research fellow