More data allows for more sophisticated and comprehensive analysis. Variety increases this power by opening the way to new and unexpected inferences and predictions, and velocity enables analysis and sharing in real time. Streams of data from mobile phones and other online devices increase the volume, variety, and velocity of data about every aspect of our lives, making privacy a global public policy concern. Artificial intelligence is likely to accelerate this trend. Machine learning and algorithmic decision-making already drive much of today's most privacy-sensitive data analysis, such as search algorithms, recommendation engines, and adtech networks. As artificial intelligence advances, it magnifies the ability to use personal information in ways that can intrude on privacy interests by increasing the power and speed with which personal data can be analyzed.
Facial recognition technology provides a foretaste of the privacy concerns that may arise. Thanks to rich databases of digital photographs available via social media, websites, driver's license registries, surveillance cameras, and other sources, facial recognition has advanced rapidly from fuzzy images of cats to fast (though still imperfect) identification of individual humans. The technology is now in use in cities and airports around the world.
This policy brief explores the intersection of AI with the current privacy debate. As government prepares comprehensive privacy legislation to close gaps in the current patchwork of federal and state privacy laws, it will have to decide whether and how to regulate the use of personal data in artificial intelligence systems. The challenge for government is to craft privacy law that protects individuals from harmful uses of their personal data in AI without impeding AI research or entangling privacy legislation in complicated social and political quagmires.
The limitations and failures of AI systems are frequently raised in the privacy debate, such as predictive policing that could disproportionately affect minorities, or Amazon's abandoned experiment with a hiring algorithm that replicated the company's existing, disproportionately male workforce. Both issues are serious, but privacy regulation is hard enough without folding in every social and political problem that can arise from data use. To assess AI's impact on privacy, it is important to distinguish between data issues that are endemic to all AI, such as the occurrence of false positives and negatives or overfitting to patterns, and those that are specific to the use of personal data.
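To make the distinction concrete, the "endemic to all AI" category covers purely statistical failure modes. The short Python sketch below is a hypothetical illustration (the data and the "high risk" scenario are invented, not drawn from this brief) of how an audit might measure false positive and false negative rates for any binary classifier, whether or not personal data is involved.

    # Hypothetical illustration: error rates for a binary classifier
    # (e.g., flagging an applicant as "high risk"). Data is invented.
    actual    = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]  # 1 = truly high risk
    predicted = [1, 1, 0, 0, 0, 1, 0, 1, 1, 0]  # classifier output

    false_pos = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))
    false_neg = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))

    # False positives wrongly flag people; false negatives wrongly clear them.
    print(f"False positive rate: {false_pos / actual.count(0):.0%}")
    print(f"False negative rate: {false_neg / actual.count(1):.0%}")

Both error rates afflict every AI system regardless of the data it consumes; concerns specific to personal data arise only when such errors attach to identifiable individuals.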
Attention to algorithmic decisions shifts the focus from the collection of personal data to how AI uses it. This focus centers on algorithmic bias and the potential for algorithms to produce unlawful or unintended discrimination in the decisions they influence. These are central concerns for the civil rights and consumer organizations that represent people subject to unfair discrimination.
Addressing algorithmic discrimination raises fundamental questions about the reach of privacy legislation. To begin with, to what extent can or should legislation address issues of algorithmic bias? Discrimination is not solely a privacy issue: it raises broad social questions that exist even without the collection and use of personal data, and it falls under the purview of numerous civil rights statutes. Moreover, because of the politically sensitive subjects those statutes address and the several congressional committees with jurisdiction over them, opening them up for debate could become a Pandora's box. Even so, discrimination continues to be practiced on the basis of personal characteristics such as skin color, sexual orientation, and national origin.
An approach centered on data collection and processing could have a number of implications for AI and algorithmic discrimination. Several proposals address the subject specifically, alongside measures of general applicability that may indirectly affect algorithmic decisions.
Two primary types of responses to AI are currently being debated in privacy regulation. The first is a direct attack on discrimination. A coalition of advocacy organizations signed a joint letter urging the government to prohibit or monitor uses of personal information that have discriminatory effects on "people of color, women, religious minorities, members of the LGBTQ+ community, persons with disabilities, people living on low income, immigrants, and other vulnerable populations." Existing federal, state, and local civil rights laws already allow individuals to bring claims for many of the kinds of discrimination described in the algorithmic discrimination proposals, and those laws should not be undercut by any federal preemption or limitation on private rights of action in federal privacy legislation.
The second strategy takes a more indirect, risk-based approach, relying on accountability measures aimed at detecting discrimination in the handling of personal data. Several groups and businesses, as well as a number of legislators, have proposed such accountability measures, and the proposals take a variety of forms.
All of these measures face a common obstacle: reverse-engineering machine-learning algorithms can be difficult, if not impossible, and the difficulty grows as machine learning becomes more powerful.
Conclusion
Because it is difficult to predict machine-learning outcomes or to reverse-engineer algorithmic decisions, no single measure can guarantee that perverse effects will not occur. It therefore makes sense, where algorithmic decisions are consequential, to combine measures so that they work together. Advance measures such as transparency and risk assessment, paired with retrospective checks such as audits and human review of decisions, could help identify and correct unfair outcomes, forming a whole greater than the sum of its parts. By generating documentary evidence that could be used in litigation, risk assessments, transparency, explainability, and audits would also strengthen existing remedies for actionable discrimination. Because not all algorithmic decision-making is consequential, however, these obligations should be tailored to the risk involved.
Adapted from: Brookings.edu