How to Protect Privacy in an AI-driven World

Privacy and data protection are questions on many people's minds. The development of artificial intelligence and the widespread availability of data have blurred the boundaries of data privacy. Responding to this issue requires a strategy for managing privacy well amid the increasingly boundless development of AI technology.
April 21, 2022

More data allows for more sophisticated and comprehensive analysis. Variety increases this power by enabling new and unexpected inferences and predictions, and velocity allows analysis and sharing in real time. Streams of data from mobile phones and other online devices increase the volume, variety, and velocity of data about every aspect of our lives, making privacy a global public-policy concern. Artificial intelligence will most likely accelerate this trend. Machine learning and algorithmic decisions already drive most of today's most privacy-sensitive data analysis, such as search algorithms, recommendation engines, and adtech networks. As artificial intelligence advances, it magnifies the ability to exploit personal information in ways that can impinge on privacy interests by increasing the power and speed with which personal data can be analyzed.

Privacy Issues in the World of AI

Facial recognition technology provides a foretaste of the privacy concerns that may arise. Facial recognition has progressed rapidly from fuzzy images of cats to rapid (though still imperfect) recognition of individual humans, thanks to rich databases of digital photographs available via social media, websites, driver's license registries, surveillance cameras, and other sources. Facial recognition technology is being used in cities and airports across the world.

This policy brief explores AI policy and the current privacy debate. As the government prepares comprehensive privacy legislation to close gaps in the current patchwork of federal and state privacy laws, it will have to decide whether and how to regulate the use of personal data in artificial intelligence systems. The challenge for the government is to craft privacy legislation that protects individuals from any adverse effects of AI's use of personal data without impeding AI research or entangling privacy legislation in complicated social and political quagmires.

The limitations and failures of AI systems, such as predictive policing that could disproportionately affect minorities, or Amazon's failed experiment with a hiring algorithm that replicated its existing, disproportionately male workforce, are frequently raised in the privacy debate. Both issues are serious, but privacy regulation is hard enough without folding in every social and political issue that can arise from data use. To assess AI's impact on privacy, it is important to distinguish between data issues that are endemic to all AI, such as false positives and false negatives or overfitting to patterns, and those that are specific to the use of personal data.
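
To make those endemic data issues concrete, here is a minimal Python sketch of how false positive and false negative rates are computed, and how they can diverge between groups even when a model looks accurate overall. The labels and predictions are invented purely for illustration.

```python
# A false positive flags someone incorrectly; a false negative misses
# someone who should have been flagged. Both are endemic to all AI systems.

def error_rates(y_true, y_pred):
    """Return (false_positive_rate, false_negative_rate) for binary labels."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    return fp / max(negatives, 1), fn / max(positives, 1)

# Hypothetical outcomes for two groups; in a biased system the rates diverge.
group_a = ([1, 0, 1, 0, 0, 1], [1, 0, 1, 0, 0, 1])  # errors rare
group_b = ([1, 0, 1, 0, 0, 1], [0, 1, 1, 1, 0, 0])  # errors common

for name, (y_true, y_pred) in {"A": group_a, "B": group_b}.items():
    fpr, fnr = error_rates(y_true, y_pred)
    print(f"group {name}: false positive rate={fpr:.2f}, false negative rate={fnr:.2f}")
```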

Issues with Algorithmic Decisions

Focusing on algorithmic decisions shifts attention from the collection of personal data to how it is used in AI. This approach centers on algorithmic bias and the potential for algorithms to produce unlawful or unintended discrimination in the decisions they affect. These are key concerns for civil rights and consumer organizations that represent people subject to unfair discrimination.

Addressing algorithmic discrimination raises fundamental questions about the scope of privacy legislation. To begin, to what extent can or should legislation address issues of algorithmic bias? Discrimination is not inherently a privacy issue: it raises broad social questions that exist even without the collection and use of personal data and that fall under the purview of numerous civil rights laws. Moreover, because of the sensitive political subjects those statutes address, and the several congressional committees with jurisdiction over them, opening them up for debate could effectively open a Pandora's box. Even so, discrimination based on personal attributes such as skin color, sexual orientation, and national origin persists.

A data gathering and processing approach could have a number of implications for AI and algorithmic discrimination:

  • The use of algorithmic decision-making could be illuminated by data openness or disclosure requirements, as well as individual rights to access information about themselves.
  • Data stewardship obligations, such as duties of fairness and loyalty, may militate against uses of personal information that are harmful or unfair to the people to whom the data pertains.
  • Data governance policies that mandate the appointment of privacy officers, the completion of privacy impact studies, or the creation of products using "privacy by design" may raise concerns about algorithm use.
  • Data collection and sharing rules may limit the aggregation of data that permits inferences and predictions, but they may come at the expense of the benefits of big and diversified datasets.

Several proposals address the subject directly, in addition to measures of general applicability that may indirectly affect algorithmic decisions.

Privacy Protection

There are two primary types of responses to AI currently being debated in privacy regulation. The first is a frontal attack on discrimination. A coalition of advocacy groups signed a joint letter urging the government to prohibit or monitor uses of personal information with discriminatory effects on "people of color, women, religious minorities, members of the LGBTQ+ community, persons with disabilities, people living on low income, immigrants, and other vulnerable populations." Existing federal, state, and local civil rights laws already allow people to bring discrimination claims for many of the harms described in algorithmic discrimination proposals, and these laws should not be undercut by any federal preemption or limitation on private rights of action in federal privacy legislation.

The second strategy takes a more indirect, risk-based approach, with accountability measures aimed at detecting discrimination in the handling of personal data. Several groups and businesses, as well as a number of legislators, have proposed such accountability measures. Their proposals take a variety of forms:

  • Transparency: the publication of information about the use of algorithmic decision-making. While most consumers do not read lengthy, detailed privacy policies, such policies give regulators and other privacy watchdogs a benchmark against which to evaluate a company's data handling and hold it accountable. This benchmark function could be strengthened by replacing current privacy policies with "privacy disclosures" that describe in detail what data is gathered and how it is used and safeguarded. Ensuring that these disclosures identify significant uses of personal data in algorithmic decisions would then help watchdogs and consumers spot potential problems.
  • Explainability: While transparency gives upfront notice of algorithmic decision-making, explainability provides information about the use of algorithms in specific decisions after the fact. This includes a "human-in-the-loop" component as well as a due-process element to catch anomalous or unfair outcomes. A sense of justice dictates that such a safety valve be available for algorithmic judgments that significantly affect people's lives. Explainability necessitates (1) the identification of algorithmic judgments, (2) the deconstruction of specific decisions, and (3) the establishment of a channel through which a person can seek an explanation (a minimal sketch of step 2 follows this list).

Reverse-engineering machine-learning algorithms can be difficult, if not impossible, and this difficulty grows as machine learning becomes more powerful.

  • Audits: audits review privacy practices after the fact. Most legislative proposals include basic accountability measures, such as self-audits or third-party audits, to verify that companies comply with their stated privacy practices. Combined with proactive risk assessments, auditing the outcomes of algorithmic decision-making can help match foresight with hindsight (a simple outcome audit is sketched after this list); nevertheless, auditing machine-learning routines, like explainability, is complex and continually evolving.
  • Risk assessment: risk assessments were first developed as "privacy impact assessments" within the federal government under the Privacy Act of 1974. Risk assessments for algorithmic decision-making allow for the detection of potential biases in design and data, as well as impacts on individuals. The depth of a risk assessment should be proportionate to the significance of the decision-making in question, as determined by the consequences of the decisions, the number of individuals and volume of data potentially affected, and the novelty and complexity of the algorithmic processing.
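
To make step (2) of explainability concrete, here is a minimal, hypothetical sketch of deconstructing a single decision. It assumes a toy linear scoring model with invented feature names, weights, and an invented approval threshold; real systems, especially machine-learning models, are far harder to decompose, as noted above.

```python
# Deconstructing one decision made by a simple linear scoring model.
# All weights, thresholds, and applicant data below are hypothetical.

WEIGHTS = {"income": 0.4, "debt": -0.6, "years_employed": 0.3}
THRESHOLD = 0.5  # score above this -> application approved (assumed rule)

def explain_decision(applicant):
    """Break a single score into per-feature contributions."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    decision = "approved" if score > THRESHOLD else "denied"
    return score, decision, contributions

applicant = {"income": 2.0, "debt": 1.5, "years_employed": 1.0}
score, decision, contributions = explain_decision(applicant)

print(f"decision: {decision} (score={score:.2f})")
for feature, value in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"  {feature}: {value:+.2f}")  # most negative factor listed first
```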
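
And here is a similarly hedged sketch of an outcome audit. It compares approval rates across two hypothetical groups and applies the "four-fifths rule," a common screening heuristic that flags a selection rate below 80% of the highest group's rate for closer review. The records are invented, and a real audit would examine far more than approval rates.

```python
# Comparing outcome rates across groups after the fact.
# Records and group labels are hypothetical, for illustration only.

from collections import defaultdict

records = [  # (group, decision) pairs
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 0), ("B", 1), ("B", 0), ("B", 0),
]

counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
for group, decision in records:
    counts[group][0] += decision
    counts[group][1] += 1

rates = {g: approved / total for g, (approved, total) in counts.items()}
for group, rate in rates.items():
    print(f"group {group}: approval rate {rate:.2f}")

# Four-fifths rule: flag if the lowest rate falls below 80% of the highest.
ratio = min(rates.values()) / max(rates.values())
print(f"selection-rate ratio: {ratio:.2f} -> {'flag for review' if ratio < 0.8 else 'ok'}")
```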

Conclusion

Because machine-learning outcomes cannot always be predicted and algorithmic judgments cannot always be reverse-engineered, no single measure can guarantee that perverse effects will not occur. It therefore makes sense to combine measures wherever algorithmic decisions are consequential. Advance measures such as transparency and risk assessment, in combination with retrospective checks such as audits and human review of decisions, could help identify and correct unfair outcomes. Used together, these measures can form a whole that is greater than the sum of its parts. By generating documentary evidence that might be used in litigation, risk assessments, transparency, explainability, and audits would also strengthen existing remedies for actionable discrimination. But because not all algorithmic decision-making is consequential, these obligations should be scaled to the actual risk.

Adapted from: Brookings.edu

Written by Denny Fardian