Responsible AI, Ethical AI, Trustworthy AI.
Call it what you will – it’s a concept that’s impossible to ignore when you pay attention to the tech industry. As artificial intelligence (AI) has progressed and evolved rapidly, more and more voices have joined the call to ensure it stays safe. The nearly unanimous consensus is that AI can easily become biased, unethical, and even dangerous.
To address this ever-growing problem, the White House today issued a Blueprint for an AI Bill of Rights. This outlines five principles that should guide the design, use, and implementation of automated systems to protect Americans in this age of AI.
The issues with AI are well documented, the Blueprint points out — from unsafe systems used in patient care to discriminatory algorithms used in hiring and credit decisions.
“The blueprint for an AI Bill of Rights is a guide to a society that protects all people from these threats — and uses technologies in ways that reinforce our highest values,” it reads.
Blueprint for AI Bill of Rights — not far enough
The EU has taken the lead in working toward an ethical AI future with its proposed AI Act, and numerous organizations have floated the idea of an overarching framework.
But the US has been notably lagging behind in the discussion — prior to today, the federal government had not issued concrete guidelines for protecting citizens from AI threats, even as President Joe Biden has called for protections around privacy and data collection.
And many still say that the Blueprint, while a good start, doesn’t go far enough and doesn’t have what it takes to gain real traction.
“It is exciting to see the US join an international movement to help understand and master the impact of new computing technologies, especially artificial intelligence, to ensure the technologies improve human society in a positive way,” said James Hendler, chair of the Association for Computing Machinery (ACM) Technology Policy Council.
Hendler, a professor at Rensselaer Polytechnic Institute and one of the founders of the Semantic Web, pointed to recent statements including the Rome Call for AI Ethics, the proposed EU regulation on AI and statements from UN commissions.
“They all call for a greater understanding of the impact of increasingly autonomous systems on human rights and human values,” he said. “The ACM global Technology Policy Council has worked with our member committees to update previous statements on algorithmic accountability, as we believe regulation of this technology should be a global, not just national, effort.”
Likewise, the Algorithmic Justice League said on its Twitter page that the Blueprint is a “step in the right direction in the fight for algorithmic justice.” The League combines art and research to raise public awareness of racism, sexism, ableism and other harmful forms of discrimination that can be perpetuated by AI.
Others point out that the Blueprint does not recommend restrictions on controversial uses of AI, such as systems that identify people in real time through biometrics or facial recognition. Some also note that it fails to address the critical issues of lethal autonomous weapons and smart cities.
Five Critical Principles
The White House Office of Science and Technology Policy (OSTP), which advises the president on science and technology, first spoke about its vision for the blueprint last year.
The five identified principles:
— Safe and effective systems: people must be protected from unsafe or ineffective systems.
— Algorithmic discrimination protections: people should not face discrimination by algorithms, and systems should be used and designed in an equitable way.
— Data privacy: People should be protected from abusive data practices through built-in safeguards and should have control over how data about them is used.
— Notice and explanation: People need to know that an automated system is being used and understand how and why it contributes to outcomes that affect them.
— Human alternatives, consideration and fallback: people should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems they encounter.
The Blueprint is accompanied by a handbook, ‘From Principles to Practice’, which lays out detailed steps for moving these principles into practice in the technological design process.
It was drawn up based on insights from researchers, technologists, lawyers, journalists and policymakers, and notes that while automated systems have “provided extraordinary benefits,” they have also caused significant harms.
It concludes that “these principles help provide guidance when automated systems can meaningfully impact the public’s rights, opportunities, or access to critical needs.”