The California Privacy Rights Act (CPRA), Virginia Consumer Data Protection Act (VCDPA), Canada’s Consumer Privacy Protection Act (CPPA), and many more international regulations all highlight significant improvements made to data privacy in recent years. Under these laws, businesses can suffer serious consequences from mishandling consumer data.
In addition to the regulatory consequences of a data breach, laws such as the CCPA allow consumers to hold companies directly liable for data breaches under a private right of action.
While these regulations certainly amplify the consequences of consumer data misuse, they are still not enough, and perhaps never will be, to protect marginalized communities. Almost three-quarters of online households fear for their digital security and privacy, and those concerns are concentrated among disadvantaged populations.
Marginalized groups are often negatively impacted by technology and can be at great risk when automated decision-making tools such as artificial intelligence (AI) and machine learning (ML) encode bias against them or when their data is misused. AI technologies have even been shown to perpetuate discrimination in tenant selection, financial lending, hiring processes, and more.
Demographic bias in AI and ML tools is quite common, in part because design review teams lack the human diversity needed to ensure their prototypes are inclusive of everyone. Technology companies need to evolve their current approaches to AI and ML to ensure they don’t harm underserved communities. This article explores why diversity should play a critical role in data privacy and how companies can create more inclusive and ethical technologies.
The threats facing marginalized groups
Underserved communities are at great risk when sharing their data online, and unfortunately data privacy laws cannot protect them from overt discrimination. Even if current regulations were as inclusive as possible, there are many ways these populations could be harmed. For example, data brokers can still collect a person’s geolocation and sell it to groups targeting protesters. Information about a person’s participation in a rally or protest can be used in a number of intrusive, unethical and potentially illegal ways.
While this scenario is hypothetical, similar situations have occurred many times in practice. A 2020 research report detailed the data security and privacy risks LGBTQ people are exposed to on dating apps. Reported threats included blatant state surveillance, facial recognition monitoring, and app data shared with advertisers and data brokers. Minority groups have always been sensitive to such risks, but companies that proactively implement change can help mitigate them.
The lack of diversity in automated tools
While the technology industry has made progress in diversifying in recent years, a fundamental shift is needed to minimize the lingering bias in AI and ML algorithms. Reportedly, 66.1% of data scientists are white and almost 80% are male, highlighting a dire lack of diversity among AI teams. As a result, AI algorithms are trained on the insights and assumptions of the teams building them.
AI algorithms that are not trained to recognize certain groups of people can do significant damage. For example, the American Civil Liberties Union (ACLU) released research in 2018 showing that Amazon’s facial recognition software, Rekognition, falsely matched 28 members of Congress with mugshots. Notably, 40% of the false matches were people of color, even though people of color made up only 20% of Congress. To avoid future instances of AI bias, companies need to rethink their design review processes to ensure they are inclusive of everyone.
An inclusive design review process
There may be no single fix for bias, but there are many ways organizations can improve their design review processes. Here are four ways technology organizations can reduce bias in their products.
1. Ask challenging questions
Developing a list of questions to ask and answer during the design review process is one of the most effective methods of creating a more inclusive prototype. These questions can help AI teams identify problems they hadn’t thought of before.
Key questions include whether the datasets in use contain enough data to avoid certain types of bias, and whether teams have run tests to determine the quality of that data. By asking and answering difficult questions, data scientists can improve a prototype by determining whether they need to review additional data or bring a third-party expert into the design review process.
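The kinds of dataset tests described above can be automated in a straightforward way. Below is a minimal sketch of two common checks, assuming a pandas DataFrame with a hypothetical demographic column (`group`), ground-truth labels, and model predictions; column names and thresholds are illustrative, not part of any specific review framework.

```python
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str,
                          min_share: float = 0.05) -> pd.DataFrame:
    """Flag demographic groups that fall below a minimum share of the dataset."""
    shares = df[group_col].value_counts(normalize=True).rename("share").to_frame()
    shares["underrepresented"] = shares["share"] < min_share
    return shares

def false_positive_rates(df: pd.DataFrame, group_col: str,
                         label_col: str, pred_col: str) -> pd.Series:
    """Per-group false positive rate, P(pred=1 | label=0), a common fairness check."""
    negatives = df[df[label_col] == 0]
    return negatives.groupby(group_col)[pred_col].mean()

# Toy example: group "b" is both underrepresented and has a higher
# false positive rate than group "a".
data = pd.DataFrame({
    "group": ["a"] * 8 + ["b"] * 2,
    "label": [0, 0, 0, 0, 1, 1, 1, 1, 0, 0],
    "pred":  [0, 0, 0, 1, 1, 1, 1, 1, 1, 1],
})
print(representation_report(data, "group", min_share=0.25))
print(false_positive_rates(data, "group", "label", "pred"))
```

Large gaps in either report are a signal to collect more data for the affected group or to escalate the review before shipping the model.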
2. Hire a privacy professional
Like any other compliance professional, privacy experts were originally seen as innovation bottlenecks. However, as more and more data regulations have been introduced in recent years, chief privacy officers have become a core part of the C-suite.
In-house privacy professionals are essential to serve as experts in the design review process. Privacy experts can provide an unbiased opinion on the prototype, help introduce difficult questions data scientists hadn’t thought of before, and help create inclusive, safe and secure products.
3. Include diverse voices
Organizations can bring in different voices and perspectives by expanding their recruiting efforts to include candidates from different demographics and backgrounds. These efforts should extend to the C-suite and the board of directors, as they can act as representatives for employees and customers who may not have a voice.
Increasing diversity and inclusivity within the workforce creates more room for innovation and creativity. Research shows that racially diverse companies are 35% more likely to outperform their competitors, while organizations with highly gender-diverse executive teams achieve 21% higher profits than competitors.
4. Implement Diversity, Equity and Inclusion (DE&I) training
At the heart of any diverse and inclusive organization is a strong DE&I program. Workshops that educate employees about privacy, AI bias and ethics help them understand why DE&I initiatives matter. Currently, only 32% of companies enforce a DE&I employee training program. It is clear that DE&I initiatives must be prioritized to bring about real change within an organization and its products.
The future of ethical AI tools
While some organizations are well on their way to creating safer and more secure tools, others still need to make major improvements to create unbiased products. Incorporating the above recommendations into their design review processes will not only bring them a few steps closer to creating inclusive and ethical products, but also strengthen their innovation and digital transformation efforts. Technology can benefit society immensely, but it is up to every enterprise to make that happen.
Veronica Torres, Global Privacy and Regulatory Advisor at Jumio.