April 27, 2017
Evan Wright

Data Privacy in a World of Outsourced Artificial Intelligence

<p><strong>Artificial intelligence (AI)</strong> and <strong>deep learning</strong> can lead to powerful business insights.  Many executives are ready to harness this technology, but one main challenge holds them back: hiring technical talent for cybersecurity is hard enough in itself, and hiring technical talent for AI is a much bigger challenge.</p><p>The UK’s National Health Service (NHS) recently faced this problem.  Computer vision techniques have demonstrated tremendous results in identifying specific illnesses from scans of a patient’s body, and AI has a strong track record of effectively predicting medical conditions such as <a href="http://www.dailymail.co.uk/sciencetech/article-4343264/New-computer-program-detect-early-traces-cancer.html">cancer</a>, <a href="http://www.itweb.co.za/index.php?option=com_content&amp;view=article&amp;id=161089">heart attacks</a>, and many other image-based diagnoses.</p><p>Medical information is particularly sensitive for organizations like the NHS, but it is also among the most lucrative types of personally identifiable information (PII) for cybercriminals.  Many freely available AI/machine learning software packages exist, such as Theano, Torch, CNTK, and TensorFlow.  Despite the availability of these tools, many organizations like the NHS do not have sufficient access to experts who can run them effectively.  Without this kind of expert collaboration, many illnesses may go unidentified and people could die.  So the NHS* decided to partner with <a href="http://www.theverge.com/2017/3/16/14932764/deepmind-google-uk-nhs-health-data-analysis">DeepMind</a>, a company acquired by Alphabet/Google.  Researchers from the University of Cambridge and The Economist wrote an <a href="https://link.springer.com/article/10.1007/s12553-017-0179-1">article</a> detailing many aspects of the contract.</p><p>As a result, DeepMind gets access to 1.6 million medical records and a neat application of its technology, in addition to undisclosed <a href="http://www.businessinsider.com/nhs-deepmind-contract-foi-2016-12?r=UK&amp;IR=T">funding</a>. The data includes blood tests, medical diagnostics, and historical patient records, as well as even more sensitive information such as HIV diagnoses and prior drug use. The volume matters: in the sub-discipline of machine learning called deep learning, algorithms are particularly dependent on having a large data corpus.</p><p>When an organization is faced with outsourcing sensitive information to experts, what are its options?  Any organization outsourcing data should redact all personally identifiable information such as names and personal identifiers.  Each identifier can instead be represented by a pseudonym - a unique mapping such as a hash function - where the mapping between the pseudonym and the PII is held only by the trusted entity (the NHS in this case).  Furthermore, semi-sensitive information that has value to the machine learning model should be abstracted.  For example, geographical location may be a powerful indicator of an illness, but the raw data could be used to reverse-engineer the PII of a given patient.  In this case, binning the information so that only a little fidelity is lost - for instance, grouping specific addresses into zip codes or counties - is an effective trade-off between the AI’s predictive power and patient confidentiality.</p>
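<p>As a rough sketch of what this looks like in practice, the Python snippet below pseudonymizes an identifier and coarsens a postal code and an age before a record is shared. The field names, key, and record are entirely hypothetical illustrations, not the actual NHS/DeepMind pipeline; a keyed hash (HMAC) is used rather than a plain hash so that an outsider who can guess identifiers cannot simply recompute the pseudonyms.</p>
<pre><code>import hmac
import hashlib

# Secret key held ONLY by the data owner (the trusted entity, e.g. the NHS in this story);
# without it, pseudonyms cannot be linked back to real identities.
PSEUDONYM_KEY = b"replace-with-a-randomly-generated-secret"

def pseudonymize(patient_id):
    """Map a real identifier to a stable pseudonym via a keyed hash (HMAC-SHA256)."""
    return hmac.new(PSEUDONYM_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

def generalize_postcode(postcode):
    """Coarsen a full postal code to its area portion, e.g. 'NW3 2QG' becomes 'NW3'."""
    return postcode.split()[0]

def prepare_for_outsourcing(record):
    """Strip direct PII, pseudonymize the identifier, and bin semi-sensitive fields."""
    decade = (record["age"] // 10) * 10
    return {
        "pseudonym": pseudonymize(record["patient_id"]),
        "region": generalize_postcode(record["postcode"]),
        "age_band": "{}-{}".format(decade, decade + 9),
        "blood_test_results": record["blood_test_results"],  # the signal the model needs
    }

# Entirely made-up example record
raw = {
    "patient_id": "NHS-1234567",
    "name": "Jane Doe",                # dropped entirely before sharing
    "postcode": "NW3 2QG",
    "age": 47,
    "blood_test_results": [4.1, 0.9, 7.2],
}

print(prepare_for_outsourcing(raw))
# The data owner keeps a private table mapping pseudonyms back to patient_ids so that
# model outputs can be linked to real patients internally.
</code></pre>
<p>How coarse the binning needs to be depends on the data; the point is that the outside party only ever sees the generalized fields and the pseudonym, never the raw identifiers.</p>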
<p>The trade-off between security and predictive power will likely be a challenging problem for data owners. AI can combine many weak signals and often reach surprising conclusions.  In one study, <a href="http://www.pnas.org/content/106/27/10975.full.pdf">CMU</a> researchers found that Social Security numbers were surprisingly predictable, and their algorithms could often reconstruct an SSN from information such as an individual’s date and place of birth.  Guaranteeing that AI cannot reconstruct PII is therefore an unsolved problem, and one that is highly dependent on the data.   However, best-effort strategies like those outlined above can mitigate most concerns.</p><p>In the future this issue may change significantly.  Recent developments in <a href="https://research.googleblog.com/2017/04/federated-learning-collaborative.html">federated learning</a> may give organizations more flexibility to keep data on premises while still benefiting from shared models (a toy sketch of the idea appears at the end of this post).  A related technology, <a href="https://www.fedscoop.com/iarpa-director-new-homomorphic-crypto-math-magic/">homomorphic encryption</a>, has been in development for far longer.  With homomorphic encryption, computations occur on encrypted data without ever decrypting it, which would significantly reduce the security concern.  We are still years away from technology solving this problem directly, and in the interim the promise of AI’s benefits is too great for most organizations to wait.</p><p>At Anomali, we deal with sensitive information regularly, as we help many organizations around the world winnow down data from across the enterprise and focus on the applicable security threats.  We address privacy concerns with on-premises deployments such as Anomali Match, and with tight access controls and data isolation, such as the Trusted Circles feature for sharing threat intelligence in our Threat Intelligence Platform, <a href="https://www.anomali.com/products/threatstream">ThreatStream</a>.</p><p>*The agreement was signed by the Royal Free NHS Trust, a small subordinate component of the much larger NHS. The Royal Free Trust comprises three hospitals in London.</p>
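<p>To make the federated learning idea concrete, here is a toy sketch in plain NumPy. It is not Google’s federated learning protocol or DeepMind’s system, and the three "hospitals" and their data are synthetic; the point is only that each site fits a model on records that never leave its premises, and the coordinator sees and averages only model weights.</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, steps=50):
    """Run a few gradient steps of linear regression on one site's private data."""
    w = weights.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Three "hospitals", each holding its own private (synthetic) dataset.
true_w = np.array([2.0, -1.0])
sites = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    sites.append((X, y))

# Federated averaging: the coordinator only ever receives model weights,
# never the raw patient-level records.
global_w = np.zeros(2)
for _round in range(10):
    local_weights = [local_update(global_w, X, y) for X, y in sites]
    global_w = np.mean(local_weights, axis=0)

print("learned weights:", global_w)  # approaches [2.0, -1.0]
</code></pre>
<p>Averaging the weights is the simplest possible aggregation; the sketch is only meant to show where the data boundary sits when the records stay on premises.</p>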
