Artificial intelligence (AI) adoption continues to grow: more than nine out of ten of the country's leading companies invest in AI products and services on an ongoing basis. As this advanced technology spreads and more companies adopt it, the responsible use of AI, often referred to as "ethical AI," is becoming an important consideration for companies and their customers.
What is ethical AI?
AI poses a number of risks to individuals and businesses. On an individual level, this advanced technology can jeopardize an individual’s safety, security, reputation, freedom and equality; it can also discriminate against specific groups of individuals. At a higher level, it can pose threats to national security, such as political instability, economic inequality, and military conflict. At the corporate level, it can pose financial, operational, reputational and compliance risks.
Ethical AI can protect individuals and organizations from these and many other threats that result from misuse. For example, airport TSA scanners are designed to make air travel safer for everyone and can spot objects that normal metal detectors might miss. Yet a few bad actors were found to be using this technology to share nude silhouettes of passengers. The flaw has since been fixed, but it remains a telling example of how abuse can damage people's trust.
When such misuse of AI-based technology occurs, companies with a responsible AI policy and/or team will be better able to mitigate the problem.
Implement an ethical AI policy
A responsible AI policy can be a good first step toward ensuring your business is protected in the event of abuse. Before implementing such a policy, employers should conduct an AI risk assessment that asks: Where is AI being used across the company? Who uses the technology? What kinds of risks could result from this use, and when can those risks arise? For example, does your company use AI in a warehouse that external partners can access during the holiday season? And how can the company prevent and/or respond to abuse? One minimal way to record these answers is sketched below.
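Some teams keep the results of such an assessment in a simple, structured inventory so gaps are easy to spot. The sketch below is illustrative only, assuming hypothetical field names and an invented example system rather than any prescribed format:

```python
# A minimal sketch of an AI risk inventory; the fields mirror the assessment
# questions above, and the example system is hypothetical.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str                 # What AI system is in use?
    location: str             # Where in the company is it used?
    users: list[str]          # Who uses it (employees, partners, customers)?
    risks: list[str]          # What risks could result (safety, bias, privacy)?
    risk_windows: list[str]   # When can risks arise (e.g., peak seasons)?
    mitigations: list[str] = field(default_factory=list)  # Prevention/response steps

inventory = [
    AISystemRecord(
        name="warehouse-routing-model",  # hypothetical example system
        location="Distribution warehouse",
        users=["employees", "external holiday-season partners"],
        risks=["unauthorized access", "unsafe routing decisions"],
        risk_windows=["holiday season"],
        mitigations=["partner access review", "incident response playbook"],
    ),
]

# Flag any system with identified risks but no documented mitigation.
for record in inventory:
    if record.risks and not record.mitigations:
        print(f"Review needed: {record.name}")
```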
Once employers have taken a comprehensive look at the use of AI across their company, they can begin to develop policies that protect the company as a whole, including employees, customers and partners. To reduce the associated risks, companies should keep several considerations in mind: ensure that AI systems are designed to enhance cognitive, social and cultural skills; check whether the systems are fair; build transparency into every part of development; and hold any partners accountable.
In addition, companies should take into account the following three main components of an effective responsible AI policy:
- Legal AI: AI systems do not operate in a lawless world. A number of legally binding rules at the national and international levels already apply, or are relevant, to the development, deployment and use of these systems. Companies must ensure that the AI-based technologies they use comply with all applicable local, national and international laws.
- Ethical AI: Compliance with ethical standards is necessary for responsible use. Four ethical principles, rooted in fundamental rights, must be respected to ensure that AI systems are developed, deployed and used responsibly: respect for human autonomy, prevention of harm, fairness and explainability.
- Robust AI: AI systems must operate in a safe, secure and reliable manner and safeguards must be implemented to prevent unintended adverse effects. Therefore, the systems must be robust both from a technical perspective (ensuring the technical robustness of the system as appropriate in a given context, such as the application domain or lifecycle stage), and from a social perspective (taking into account the context and environment in which the system works).
It is important to note that different companies may require different policies based on the AI-assisted technologies they use. However, these guidelines can help from a broader point of view.
Build a responsible AI team
Once a policy is in place and employees, partners and stakeholders have been notified, it is vital that the company has a team to enforce it and hold violators accountable.
The team can be customized to the needs of the business, but here’s a common example of a robust team for companies using AI-enabled technology:
- Chief Ethics Officer: Often also called a chief compliance officer, this role is responsible for determining what data should be collected and how it should be used; monitoring AI misuse across the company; determining possible disciplinary action in response to abuse; and ensuring that teams train their employees on the policy.
- Responsible AI committee: This independent person or team performs risk management by assessing an AI-based technology's performance across different data sets (one such check is sketched after this list), as well as its legal framework and ethical implications. Once a reviewer approves the technology, the solution can be deployed internally or delivered to customers. This committee may include representatives from ethics, compliance, data protection, legal, innovation, technology and information security.
- Purchasing department: This role ensures policy compliance by other teams/departments as they acquire new AI-based technologies.
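As a concrete example of the committee's "different data sets" review, a reviewer might compare a model's accuracy across subgroups before approving it. The sketch below is a minimal illustration; the model outputs, group labels and the 10% threshold are all assumptions, not a specific vendor's API or a mandated standard:

```python
# A minimal sketch of a subgroup performance check for a committee review.
from collections import defaultdict

def subgroup_accuracy(predictions, labels, groups):
    """Return accuracy per subgroup so reviewers can spot disparities."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        correct[group] += int(pred == label)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical evaluation data: predictions and ground truth tagged by group.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
truth  = [1, 0, 0, 1, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

scores = subgroup_accuracy(preds, truth, groups)
print(scores)  # {'A': 0.75, 'B': 0.75}

# A large gap between groups is a signal to withhold approval pending review.
worst, best = min(scores.values()), max(scores.values())
if best - worst > 0.1:  # threshold is an assumption; set it per policy
    print("Flag: subgroup performance gap exceeds policy threshold")
```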
Ultimately, an effective responsible AI team can ensure that your company holds anyone who misuses AI accountable across the organization. Disciplinary action can range from HR intervention to suspension, and if misuse is discovered on a partner's side, the company may require that use of the offending products cease immediately.
As employers continue to adopt new AI-based technologies, they should strongly consider implementing a responsible AI policy and team to counter abuse effectively. By using the framework above, you can protect your employees, partners and stakeholders.
Mike Dunn is CTO at Prosegur Security.