
Generative AI Security and Compliance Concerns Growing

As companies increasingly embrace generative AI, many find themselves unprepared for the potential security and compliance threats that come with this technology.
Generative AI has dominated the collaboration and communications landscape over the past two years, with nearly every vendor in the market now offering a generative AI solution. This focus on AI capabilities is driving customer adoption.
According to Metrigy’s recent report, “Workplace Collaboration and Contact Center Security and Compliance: 2024-25,” nearly 39% of the 338 participants have already deployed or plan to deploy generative AI by the end of this year. Another 35.3% are evaluating future deployment. Among those achieving the highest return on investment (ROI) from their collaboration spend, over 54% are using or planning to use generative AI tools such as Google Gemini, Microsoft Copilot, and Zoom AI Companion.
Security Concerns Hold Back Some Companies
Despite the rapid adoption, almost 20% of participants indicated that their organizations neither allow nor plan to evaluate generative AI collaboration tools. The main reasons are a lack of perceived ROI or benefits and significant concerns about privacy, security, and compliance. Specific issues include:

Ensuring the accuracy of responses and protecting language models from hallucinations and poisoning.
Safeguarding customer information within large language models (LLMs).
Preventing data leakage and ensuring only authorized access to sensitive content (a minimal redaction sketch follows this list).
Ensuring compliance for AI-generated content, including meeting summaries, transcripts, images, text, and other documents.
Maintaining the accuracy of transcripts and summaries.
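On the data-leakage point above: one common safeguard is to redact obvious personal data before any content reaches a third-party model. Below is a minimal sketch, assuming a simple regex-based redactor; the patterns and function names are illustrative, not from any specific vendor, and production systems use far more comprehensive detectors.

```python
import re

# Illustrative patterns only; real deployments typically add
# named-entity recognition and vendor-specific detectors.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace likely PII with labeled placeholders before the text
    is sent to an external LLM."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = redact("Summarize the call with jane.doe@example.com, +1 (555) 010-9999.")
print(prompt)  # PII replaced before leaving the company boundary
```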

Lack of Generative AI Security Strategies
Only 38.5% of companies currently using generative AI collaboration tools have developed and implemented a formal security strategy. However, among those with the highest collaboration ROI, more than 51% have a security and compliance strategy in place, indicating a correlation between high ROI and a proactive approach to generative AI security and compliance.
For those with strategies in place, testing AI response accuracy is a primary focus. The 2023 incident in which Air Canada's AI-driven chatbot invented a frequent-flier benefit on its own has prompted many organizations to verify the accuracy of generative AI chatbot responses. Techniques like retrieval-augmented generation (RAG), which allows chatbots to reference enterprise-designated knowledge bases and cite sources, are gaining interest.
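The idea behind RAG is that the model does not answer from its training data alone: the chatbot first retrieves passages from an enterprise-designated knowledge base, grounds its answer in them, and cites each source. Here is a minimal sketch; the keyword-overlap scoring, sample documents, and prompt format are illustrative assumptions, not any vendor's implementation.

```python
# Minimal RAG sketch: naive keyword-overlap retrieval over an
# in-memory knowledge base, then a grounded prompt with citations.
KNOWLEDGE_BASE = [
    {"id": "policy-041", "text": "Meeting recordings are retained for 90 days."},
    {"id": "policy-112", "text": "Customer PII may not be shared outside the EU region."},
]

def retrieve(question: str, k: int = 2) -> list[dict]:
    """Rank documents by keyword overlap with the question."""
    terms = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(terms & set(doc["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str) -> str:
    """Assemble a prompt that instructs the model to answer only from
    the retrieved passages and to cite them by id."""
    docs = retrieve(question)
    context = "\n".join(f"[{d['id']}] {d['text']}" for d in docs)
    return (
        "Answer using ONLY the sources below and cite their ids. "
        "If the sources do not cover the question, say so.\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )

print(build_prompt("How long are meeting recordings retained?"))
```

Because the model is told to answer only from cited sources, a fabricated policy like the one in the Air Canada case is easier to catch: an answer with no supporting citation is a red flag.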
Other common components of a generative AI security and compliance strategy include respecting document classifications to control content access and ensuring that generated content meets compliance requirements. Many companies also require vendors to support federated generative AI models, enhancing security, compliance, and data privacy. Additionally, about 42% of companies demand data localization capabilities to comply with regional regulations such as GDPR in the EU.
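Respecting document classifications typically means filtering what the model can see by the requesting user's clearance before retrieval ever happens. A minimal sketch follows, assuming a simple ordered classification scheme; the levels, field names, and sample documents are illustrative.

```python
# Ordered classification levels; a user may only see documents at or
# below their clearance level. Levels and fields are illustrative.
LEVELS = ["public", "internal", "confidential", "restricted"]

DOCUMENTS = [
    {"id": "faq-01", "classification": "public", "text": "Office hours are 9-5."},
    {"id": "fin-07", "classification": "restricted", "text": "Q3 forecast draft."},
]

def visible_documents(user_clearance: str) -> list[dict]:
    """Return only the documents the user is cleared to access, so
    classified content never enters the generative model's context."""
    max_rank = LEVELS.index(user_clearance)
    return [d for d in DOCUMENTS if LEVELS.index(d["classification"]) <= max_rank]

# An 'internal' user never sees the restricted forecast, even if a
# retrieval step would otherwise rank it highly.
print([d["id"] for d in visible_documents("internal")])  # ['faq-01']
```

Applying the filter before retrieval, rather than on the generated output, is the safer design: content the user cannot access never reaches the model at all.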
The Role of CISOs in Generative AI Security
Among companies with the highest ROI for their collaboration spend, nearly 74% involve their chief information security officer (CISO) in generative AI security and compliance planning, compared to fewer than 68% of less successful companies. Other key players include application owners and dedicated AI teams.
Generative AI security concerns extend beyond collaboration apps to other applications like CRM and HR. Many companies are also building their own generative AI platforms using offerings from Google, Microsoft, OpenAI, and others. Therefore, generative AI security and compliance strategies must integrate across all application areas.
Source: nojitter.com
