Most AI systems today are neural networks. Neural networks are algorithms that mimic a biological brain to process massive amounts of data. They’re known for being fast, but they’re inscrutable. Neural networks need huge amounts of data to learn to make decisions; however, the reasons for their decisions are hidden in countless layers of artificial neurons, all individually tuned to different parameters.
In other words, neural networks are “black boxes.” The developers of a neural network not only have no control over what the AI is doing — they don’t even know why it does what it does.
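The black-box problem shows up even at toy scale. The sketch below is a hypothetical illustration (not any real production system): a tiny two-layer network whose “approve/reject” decision is fully determined by its weights, yet inspecting any single weight explains nothing about why the decision came out the way it did.

```python
# Toy illustration: a decision buried in tuned parameters.
# All weights here are made up for demonstration; a real network
# would have millions of them, none individually interpretable.
import math

def forward(x, w1, b1, w2, b2):
    """One hidden layer (ReLU) with a sigmoid output -- a minimal 'black box'."""
    hidden = [max(0.0, sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w1, b1)]
    logit = sum(w * h for w, h in zip(w2, hidden)) + b2
    return 1.0 / (1.0 + math.exp(-logit))  # probability-like score in (0, 1)

# Hypothetical trained parameters standing in for millions of real ones.
w1 = [[0.9, -1.2], [-0.4, 0.8], [1.1, 0.3]]
b1 = [0.1, -0.2, 0.05]
w2 = [0.7, -1.5, 0.6]
b2 = -0.1

score = forward([1.0, 0.5], w1, b1, w2, b2)
decision = "approve" if score > 0.5 else "reject"
```

The network “approves” this input, but the only available explanation is the arithmetic itself — which is exactly the opacity regulators are trying to address.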
This is a horrifying reality. But it gets worse.
Despite the risk inherent in the technology, neural networks are beginning to manage core infrastructure for critical business and government functions. As AI systems expand, the list of dangerous neural networks grows longer every day.
These outcomes range from deadly to comical to grossly offensive. And as long as black-box neural networks are up and running, we are at risk of harm in many ways. Businesses and consumers are rightly concerned: as long as AI stays opaque, it stays dangerous.
There is a regulatory response
In response to such concerns, the EU has issued an AI Act – which becomes law in January – and the US has drafted an AI Bill of Rights Blueprint. Both tackle the problem of opacity head-on.
The EU’s AI Act states that high-risk AI systems must be built with transparency, enabling an organization to locate and analyze potentially biased data and remove it from all future analysis. It eliminates the black box entirely. The Act defines high-risk systems as those used in critical infrastructure, human resources, essential services, law enforcement, border control, the administration of justice and surveillance. Virtually any major AI application developed for government or business use will be classified as high-risk and thus be subject to the Act.
Similarly, the US AI Bill of Rights states that users must be able to understand the automated systems that affect their lives. It shares the EU AI Act’s purpose: to protect the public from the real risk that opaque AI becomes dangerous AI. For now, the Blueprint is a non-binding and therefore toothless white paper. That preliminary status can be a virtue, however, as it gives AI scientists and lawyers time to work with lawmakers to shape the law appropriately.
In any case, it seems likely that both the EU and the US will require organizations to adopt AI systems that provide interpretable output to their users. In short, the AI of the future may need to be transparent, not opaque.
But does it go far enough?
Setting up new regulatory regimes is always a challenge. History provides us with no shortage of examples of ill-advised legislation accidentally crushing promising new industries. But it also offers counterexamples where well-crafted legislation has benefited both private business and the common good.
For example, when the dotcom revolution began, copyright law lagged far behind the technology it was supposed to govern. As a result, the early years of the Internet era were marred by intense litigation against businesses and consumers. Eventually, the Digital Millennium Copyright Act (DMCA) was passed. Once businesses and consumers adjusted to the new rules, Internet companies began to thrive, and innovations such as social media that would have been impossible under the old laws were able to flourish.
The AI industry’s forward-thinking leaders have long understood that a similar legal framework will be needed for AI technology to reach its full potential. A well-established regulatory framework provides consumers with the assurance of legal protection for their data, privacy and security, while providing businesses with clear and objective rules under which they can confidently invest resources in innovative systems.
Unfortunately, neither the AI Act nor the AI Bill of Rights meets these objectives. Neither framework requires sufficient transparency from AI systems. Neither framework provides sufficient protection for the public or sufficient regulation for business.
A long series of analyses submitted to the EU has pointed out flaws in the AI Act. (Similar criticisms could be leveled at the AI Bill of Rights, with the added proviso that the US framework is not even intended to be binding policy.) These flaws include:
- It provides no criteria for defining unacceptable risk in AI systems, and no method for adding new risky uses to the law if such uses are found to pose a significant risk of harm. This is particularly problematic as AI systems become ever more broadly applicable.
- It requires companies to consider only harm to individuals, excluding indirect and aggregate harm to society. An AI system that has a very small effect on, say, each individual’s voting behavior can have a huge social impact in aggregate.
- It allows virtually no public scrutiny of whether an AI system meets the law’s requirements. Under the AI Act, companies self-assess their own systems for compliance without intervention from any government agency. This is equivalent to asking drug companies to decide for themselves whether their drugs are safe – a practice both the US and the EU have found harmful to the public.
- It fails to properly define who is responsible for assessing general-purpose AI. If a general-purpose AI can be put to a risky use, does the law apply to it? If so, is the creator of the general-purpose AI responsible for compliance, or the company deploying it for the high-risk use? This vagueness creates a loophole that encourages blame-shifting: each company can argue that the assessment was its partner’s responsibility, not its own.
In order for AI to spread safely in America and Europe, these flaws must be addressed.
What to do with dangerous AI until then
Until appropriate regulation is in place, black-box neural networks will continue to use personal and professional data in ways that are completely opaque to us. What can you do to protect yourself from opaque AI? At a minimum:
- Ask questions. If an algorithm discriminates against you or rejects you in any way, ask the company or supplier, “Why?” If they can’t answer that question, reconsider whether you should be doing business with them. You can’t trust an AI system to do the right thing if no one even knows why it does what it does.
- Think carefully about the data you share. Does every app on your smartphone need to know your location? Does every platform you use really need your primary email address? A little minimalism in data sharing goes a long way toward protecting your privacy.
- Whenever possible, only do business with companies that follow data protection best practices and use transparent AI systems.
- Most importantly, support regulations that promote interpretability and transparency. Everyone deserves to understand why an AI is impacting their lives so much.
The risks of AI are real, but so are the benefits. By addressing the risk of opaque AI leading to dangerous outcomes, the AI Bill of Rights and the AI Act set the right course for the future. But the level of regulation is not yet robust enough.
Michael Capps is CEO of Diveplane.
DataDecisionMakers
Welcome to the VentureBeat community!
DataDecisionMakers is where experts, including the technical people who do data work, can share data-related insights and innovation.
To read about cutting-edge ideas, up-to-date information, best practices, and the future of data and data technology, join DataDecisionMakers.
You might even consider contributing an article yourself!
Read more from DataDecisionMakers