Rice University computer scientists have discovered an inexpensive way for tech companies to implement a rigorous form of personal data privacy when using or sharing large databases for machine learning.
The researchers' new method uses a technique called locality sensitive hashing to create a small summary, or sketch, of an enormous database of sensitive records. The sketch is both safe to make publicly available and useful for algorithms that compute kernel sums, one of the basic building blocks of machine learning, and for machine-learning programs that perform common tasks like classification, ranking and regression analysis.
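The article does not give implementation details, but the general idea of summarizing a dataset with locality sensitive hashing so that kernel sums can be estimated from small counter arrays can be sketched as follows. This is a minimal illustration, assuming a SimHash LSH family and a hypothetical `RaceSketch` class; it is not the researchers' actual code.

```python
import numpy as np

class RaceSketch:
    """Illustrative LSH-based sketch: R independent SimHash functions
    each index into a small array of counters.  The counters are a
    compact summary of the data from which kernel sums (here, under
    the angular kernel induced by SimHash) can be estimated."""

    def __init__(self, dim, reps=50, bits=4, seed=0):
        rng = np.random.default_rng(seed)
        # One set of `bits` random hyperplanes per repetition.
        self.planes = rng.normal(size=(reps, bits, dim))
        self.counts = np.zeros((reps, 2 ** bits))

    def _hash(self, x):
        # SimHash: the sign pattern of random projections gives a
        # bucket index for each of the `reps` repetitions.
        signs = (self.planes @ x > 0).astype(int)            # (reps, bits)
        return signs @ (2 ** np.arange(signs.shape[1]))      # (reps,)

    def add(self, x):
        # Each data point increments exactly one counter per repetition.
        self.counts[np.arange(len(self.counts)), self._hash(x)] += 1

    def kernel_sum(self, q):
        # Each counter holds the number of points colliding with q
        # under that hash; averaging across repetitions estimates the
        # sum of LSH collision probabilities (a kernel sum) over the data.
        return self.counts[np.arange(len(self.counts)), self._hash(q)].mean()
```

Note that the sketch's size depends only on the number of repetitions and buckets, not on the number of records, which is what makes a summary like this cheap to store, distribute and, with added noise, release.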
The method also allows companies to both reap the benefits of large-scale, distributed machine learning and uphold a rigorous form of data privacy called differential privacy. Differential privacy, which is used by more than one tech giant, is based on the idea of adding random noise to obscure individual information.
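The core idea of differential privacy mentioned above, adding random noise so that no individual's presence in the data can be inferred, is commonly realized with the Laplace mechanism. A minimal sketch, with an illustrative function name of my own choosing:

```python
import numpy as np

def laplace_count(true_count, epsilon, rng=None):
    """Release a count with epsilon-differential privacy by adding
    Laplace noise.  The noise scale is sensitivity / epsilon; for a
    simple count the sensitivity is 1, because adding or removing one
    person changes the count by at most 1."""
    if rng is None:
        rng = np.random.default_rng()
    return true_count + rng.laplace(scale=1.0 / epsilon)
```

Smaller values of epsilon mean more noise and stronger privacy; the released value is accurate on average but any single release reveals little about any one individual.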
There are elegant and powerful techniques that meet today's differential privacy standards, but none of them scale: computational overhead and memory requirements grow exponentially with the dimensionality of the data. And data is increasingly high-dimensional, meaning it contains both many observations and many individual features about each observation.
There are many cases where machine learning could benefit society if data privacy could be ensured. There is huge potential for improving medical treatments or finding patterns of discrimination, for example, if we could train machine learning systems to search for patterns in large databases of medical or financial records. Today, that’s essentially impossible because data privacy methods do not scale.
– Anshumali Shrivastava, Associate Professor, Computer Science, Rice University
The new method scales to high-dimensional data. The sketches are small, and the computation and memory needed to construct them are easy to distribute. Today, engineers who want to use kernel sums must sacrifice either their budget or the privacy of their users. The new method changes the economics of releasing high-dimensional information with differential privacy: it is simple, fast and 100 times less expensive to run than existing methods.
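One way to see how a small sketch makes private release cheap is that noise only has to be added to the counters, not to the raw records. The following is a hedged illustration, not the researchers' method: it assumes each data point touches exactly one counter per row of the sketch, so perturbing each of the R rows with Laplace noise of scale R / epsilon (simple sequential composition across rows) suffices for epsilon-differential privacy of the released array.

```python
import numpy as np

def privatize_sketch(counts, epsilon, rng=None):
    """Release a (rows x buckets) counter array under differential
    privacy.  Assumption: one data point increments one counter per
    row, so each row has sensitivity 1, and splitting the budget
    epsilon across `rows` rows gives per-row noise scale rows/epsilon."""
    if rng is None:
        rng = np.random.default_rng()
    rows = counts.shape[0]
    noise = rng.laplace(scale=rows / epsilon, size=counts.shape)
    return counts + noise
```

Because the noise is added once to a fixed-size summary, the cost of privatization no longer grows with the number of records in the database.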
This is the latest innovation from the researchers who have developed numerous algorithmic strategies to make machine learning and data science faster and more scalable. They and their collaborators have found a more efficient way for social media companies to keep misinformation from spreading online and discovered how to train large-scale deep learning systems up to 10 times faster for “extreme classification” problems.
Big data has enormous potential in the public sector. A government's everyday activities, such as managing social benefits, collecting taxes, monitoring national health and education systems, recording traffic data and issuing official documents, generate vast amounts of data. Information that is readily available in real time enables government agencies and departments to identify areas in need of attention, make more informed decisions more quickly, and implement necessary changes.
Since big data is so versatile, it can be used in a variety of industries and settings, including healthcare. As reported by OpenGov Asia, the COVID-19 pandemic revealed how big data and analytics technologies are being used in the public health sector.
For example, governments and organisations developed contact tracing, in which phone numbers and location data from mobile devices were combined with lab results in public health systems to issue alerts when an individual came in contact with a confirmed COVID-19 patient. This information empowered people to preemptively self-isolate and/or seek rapid testing.
Because big data proved essential during the pandemic, public health agencies must understand how to use it effectively. They should also start planning how to protect end-user privacy and comply with evolving laws around personal data privacy.