As AI becomes as ubiquitous across industries as electricity and clean drinking water, the conversation around the technology is shifting toward how to implement AI responsibly. How is AI different from other software technologies we’ve used to build products, and is there a need for new regulations and new compliance frameworks?
That conversation has not yet spurred most organizations into action. In a recent study by Wakefield Research and Juniper Networks, 63% of companies said they are at least largely on their way to their planned goals for AI adoption, but only 9% have fully mature governance policies.
Any organization using or developing AI technology should pay close attention to its AI governance practices. Otherwise, it risks being caught off guard by AI legislation now in development, while endangering its business and customers through improperly developed AI.
The need for governance
Every new technology raises new questions about proper use and management, and AI is no exception. However, because AI solutions are designed to perform tasks similar to those performed by human domain experts, proper management of tasks that normally require human cognitive reasoning can be particularly difficult.
For example, AI in self-driving cars is trusted to make split-second decisions, without human intervention, that have huge impacts on people’s lives both inside and outside the car. Determining how AI should and should not be used, and what safeguards can protect users, bystanders, and manufacturers, raises questions that are fundamentally different from those addressed by previous forms of governance.
Business leaders, customers and regulators are also beginning to ask questions about how organizations use AI and data, from how models were developed to how they are governed. If AI makes decisions based on data, how do I know whether a model was trained on data the trainer was allowed to use? How do I know what that data is used for?
Government leaders are already turning their sights to regulation. The AI Act proposed in the European Union would have far-reaching consequences for how companies can and should use AI. Even the US is starting to issue guidelines on the use of AI: in early October, the White House Office of Science and Technology Policy released its much-anticipated “Blueprint for an AI Bill of Rights,” a set of guidelines for companies to use when developing and deploying AI systems. While these are not yet true mandates, companies should be prepared for that to change.
Organizations of all types must prioritize governance at the same pace as adoption to protect their business, their technology, and their customers.
Strategies for AI governance
Companies must develop a governance structure designed to mitigate both current and future risks as local, state, and federal governments begin crafting AI governance legislation.
Steps organizations can take today include:
- Clarify how and when AI will be used within the organization, for both current and planned future use. AI is still a vague term that many organizations use to describe vastly different technologies. Essentially, any technology that completes a task previously requiring human cognition or reasoning can be broadly understood as AI. Organizations need to clearly understand where and how they use AI, whether it is purchased or developed in-house.
- Develop consistent standards and ethics across the enterprise, including safeguards around ethical use of AI. Every major organization in the world already has standards, practices and safeguards for the technology it uses. AI is a new technology, and while it is similar to other software technologies, there are important differences. Because AI solutions tend to perform tasks similar to those of human domain experts, the same rules, ethics, and liabilities that apply to bad human behavior also apply to AI, and the behavior of an AI solution tends to change over time as models are retrained or the environment around them changes. After organizations review their use of AI, they should also review their standards and ethics statements, both public ones and those used to train employees, and ensure they are comprehensive enough to cover all of the organization’s use of AI.
- Ensure all governance policies are cross-functional and cover the entire AI ecosystem, including externally sourced solutions. After you have a clear understanding of how AI is used in your organization and what your standards are for its ethical use, it’s important to make sure that all external AI in use falls under the same standards. If you develop best practices but purchase AI-based solutions from a vendor that doesn’t follow the same governance rules, you leave your business and your customers vulnerable.
- Innovate with governance in mind. The best AI solutions are not worth creating if they cannot be responsibly integrated into existing governance policies. Once AI governance policies are clearly enshrined and established, it is critical to long-term success to ensure teams are trained on them and that alignment with goals remains the top priority.
Conclusion
AI is not just the future anymore; it is the current reality for many organizations. AI governance, however, still lags AI adoption, and it is an essential part of avoiding potholes on the path to AI success. IT teams that stay ahead of the curve will play a critical role in shaping future strategy within their organizations and their industries by leading the way in setting standards for AI.
Bob Friday is chief AI officer at Juniper Networks.