A recent PwC survey of 1,001 US business and technology executives found that 73% of respondents currently use or plan to use generative AI in their organizations.
However, only 58% of respondents have begun assessing the risks associated with AI. For PwC, responsible AI is about value, safety and trust, and should be part of a company’s risk management processes.
Jenn Kosar, AI assurance leader at PwC US, told VentureBeat that six months ago it would have been acceptable for companies to launch some AI projects without a responsible AI strategy in place, but that is no longer the case.
“We’re further along in the cycle now, so it’s time to build responsible AI,” Kosar said. “Previous projects were internal and limited to small teams, but now we’re seeing mass adoption of generative AI.”
She added that AI pilots actually yield a lot of insight for responsible AI strategies, because they let companies determine what works best for their teams and how they use AI systems.
Responsible AI and risk assessment have been at the forefront of the news cycle in recent days after Elon Musk’s xAI deployed a novel image-generating service via its Grok-2 model on social media platform X (formerly Twitter). Early adopters report that the model appears to be largely unconstrained, allowing users to create all sorts of controversial and inflammatory content, including deepfakes of politicians and pop stars engaging in violent or overtly sexual situations.
Priorities to focus on
Survey respondents were asked about 11 capabilities that PwC identified as “a subset of the capabilities that organizations currently appear to be prioritizing most.” They include:
- Upskilling
- Embedded AI risk specialists
- Periodic training
- Data privacy
- Data governance
- Cybersecurity
- Model testing
- Model management
- Third-party risk management
- Specialized software for AI risk management
- Monitoring and auditing
According to the PwC survey, more than 80% of respondents reported progress on these capabilities. However, only 11% said they had implemented all 11, and PwC cautioned, “We suspect that many overestimate progress.”
The firm added that some of these responsible AI capabilities can be difficult to manage, which may be why organizations struggle to implement them fully. It pointed to data governance, which must define AI models’ access to internal data and put guardrails around it. “Legacy” cybersecurity methods may also be insufficient to protect the model itself from attacks such as model poisoning.
Accountability and responsible AI go hand in hand
To help companies through their AI transformation, PwC has proposed ways to build a comprehensive responsible AI strategy.
One is establishing ownership, which Kosar said was among the challenges survey respondents faced. She said it is important that responsibility and ownership for responsible AI use and implementation be assigned to a single executive. That means treating AI safety as something that goes beyond the technology, and having a chief AI officer or responsible AI leader who works with stakeholders across the company to understand its business processes.
“Maybe AI will become a catalyst that bridges technology and operational risk,” Kosar said.
PwC also suggests rethinking the entire lifecycle of AI systems, moving beyond theory and implementing security and trust principles throughout the organization, preparing for any future regulations by doubling down on responsible AI practices, and developing a plan that is clear to stakeholders.
Kosar said what surprised her most about the survey were the comments from respondents who believe responsible AI adds commercial value to their businesses, which she believes will prompt more companies to consider the issue more seriously.
“Responsible AI as a concept is not just about risk; it should also create value. Organizations said they see responsible AI as a competitive advantage, one that lets them ground their services in trust,” she said.
