
Most U.S.-based companies have no idea how to mitigate AI risk. Credo AI wants to change that

AI Governance: A Necessity for Modern Businesses
At Fast Company’s annual Impact Council meeting, Navrina Singh, founder and CEO of Credo AI, highlighted the critical need for companies to implement AI governance.
Companies face a pivotal decision in AI adoption: embrace the technology, flaws and potential for spreading disinformation included, or risk becoming obsolete.
Navrina Singh emphasized at the Fast Company Impact Council annual meeting that we are in a “state of reinvention.” Embracing AI is no longer optional but essential for companies’ survival and success. It’s equally crucial for businesses to comprehend the risks AI poses to their organization.
“It’s really important to think about this lens of how is trust going to be built for responsible practices, rather than just trying to give in to the sphere of regulations?” Singh said.
Understanding the Risks
Singh founded Credo AI in 2020, having previously worked in the robotics industry when machine learning was emerging around 2010. While companies were enthusiastic about AI’s capabilities, Singh noticed a lack of discussion about potential dangers.
“When my daughter was born 10 years ago, I saw these powerful AI systems evolving as quickly as human brains. I realized that as engineers, we don’t take responsibility,” Singh said. “We’re excited by innovation and profit, but with AI, we can’t take that chance.”
Credo AI assists businesses in understanding the risks AI poses, mitigating those risks, and ensuring compliance with government standards. The company collaborates with both the European Commission and the Biden Administration on rights-based and risk-based regulations.
AI Governance in Europe and the U.S.
In Europe, where the EU AI Act passed in March, Singh noted that regulators have struck a balance between technological progress and prioritizing citizens’ rights and trust.
“In Europe, innovation is driven by putting citizens and their rights front and center,” Singh said. In contrast, the U.S. faces a more complex regulatory landscape due to its state-level approach to regulation.
Despite the lack of concrete federal AI regulations in the U.S., the Biden Administration’s October 2023 executive order directed federal agencies to appoint chief artificial intelligence officers. Singh stressed the importance of AI literacy across all job titles, not just among these officers.
“We need a multi-stakeholder oversight mechanism for AI,” she said. “If only AI experts manage oversight, they’ll be too removed from business outcomes like reputational damage, regulatory risk, and impact.”
Proactive AI Governance
Singh warned that the U.S. lags in AI literacy because of insufficient government oversight and reactive, rather than proactive, regulation. When companies outsource AI adoption, they introduce significant risks.
Companies using AI technologies like ChatGPT must assess the risk implications, from chatbots that hallucinate to the role live agents should play. Without standardized risk management, companies end up reacting instead of proactively governing AI.
“Governance needs to be front and center,” Singh said. “Organizations that tackle AI governance proactively understand where true AI or generative AI is used in their operations.”
Key Takeaways

AI Adoption: Essential for modern businesses, not optional.
Risk Understanding: Companies must recognize and mitigate AI risks.
Collaborations: Credo AI partners with government bodies to shape AI regulations.
Global Regulatory Landscape: Differences in AI governance approaches between Europe and the U.S.
AI Literacy: Crucial across all job titles, not just AI officers.
Proactive Governance: Prevents reactionary positions and manages AI risks effectively.

Source: fastcompany.com
