Machine learning (ML) systems are now ubiquitous and perform well in many applications, but how much information they can leak remains relatively unexplored. ML is no longer just an academic research topic but a field that is increasingly applied and affects our lives. It is well known that ML is powered by data, but what is less known is that this data is often collected without our consent; what is worse, some of it is sensitive in nature. Is it then safe to assume that our data is securely hidden inside these machine learning black boxes? Can the data used for training be inferred from a machine learning model? And how easy is it to steal the model itself just by looking at its predictions?
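To give a flavor of the last question, here is a minimal sketch of a model extraction ("stealing") attack. Everything in it is an illustrative assumption, not the blog post's method: the victim is a simple logistic regression that the attacker can only query through its predictions, and the attacker trains a surrogate model on those query/response pairs alone.

```python
# Minimal model-extraction sketch (illustrative assumptions throughout):
# the attacker never sees the victim's training data or weights,
# only the labels the victim returns for chosen query inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Victim model, trained on "private" data the attacker cannot access.
X_private = rng.normal(size=(500, 4))
y_private = (X_private[:, 0] + X_private[:, 1] > 0).astype(int)
victim = LogisticRegression().fit(X_private, y_private)

# Attacker: send queries of its own choosing, record only the outputs.
X_queries = rng.normal(size=(2000, 4))
stolen_labels = victim.predict(X_queries)

# Train a surrogate purely on the stolen query/response pairs.
surrogate = DecisionTreeClassifier(max_depth=5, random_state=0)
surrogate.fit(X_queries, stolen_labels)

# How closely does the surrogate mimic the victim on fresh inputs?
X_test = rng.normal(size=(1000, 4))
agreement = (surrogate.predict(X_test) == victim.predict(X_test)).mean()
print(f"surrogate/victim agreement: {agreement:.2f}")
```

Even this toy surrogate typically agrees with the victim on a large majority of fresh inputs, which is the core of the concern: the predictions alone carry enough signal to approximate the model.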
Find out in our new blog post, Machine Learning Leaks and Where to Find Them, written by Maria Rigaki, a bright PhD student in cybersecurity working in the Stratosphere Lab, together with Sebastián García, Head of the Laboratory, who also contributed to the article.