Govts, tech firms vow to cooperate against AI risks at Seoul summit
![govts,-tech-firms-vow-to-cooperate-against-ai-risks-at-seoul-summit](https://roboticulized.com/wp-content/uploads/2024/05/149299-govts-tech-firms-vow-to-cooperate-against-ai-risks-at-seoul-summit.jpg)
At the conclusion of a global summit in Seoul on Wednesday, more than a dozen countries and major tech firms pledged to cooperate in addressing the potential risks posed by artificial intelligence (AI), including its capacity to evade human control.
AI safety took center stage during the two-day gathering. In a joint statement, over two dozen nations, including the United States and France, committed to joint efforts against emerging threats from advanced AI technologies, identifying “severe risks” such as the potential for AI systems to aid non-state actors in activities related to chemical or biological weapons.
Moreover, concerns were raised regarding AI models that could potentially bypass human oversight through various means such as safeguard circumvention, manipulation, deception, or autonomous replication and adaptation.
The ministers’ statement followed a preceding commitment by major AI companies, including OpenAI and Google DeepMind, to transparently share their risk assessment methodologies and avoid deploying systems where risks exceed acceptable limits.
The Seoul summit, co-hosted by South Korea and Britain, aimed to build upon the consensus established at the inaugural AI safety summit last year. Michelle Donelan, UK technology secretary, stressed the urgency of matching the accelerating pace of AI development to effectively mitigate associated risks, emphasizing the need for broader societal resilience to AI-related risks.
Additionally, the summit witnessed the adoption of the Seoul AI Business Pledge by a consortium of tech companies, including Samsung Electronics and IBM, demonstrating a collective commitment to responsible AI development.
Christina Montgomery, IBM’s Chief Privacy and Trust Officer, underscored the importance of implementing safeguards to prevent AI misuse, highlighting the need for thoughtful consideration of AI’s societal implications.
While AI proponents tout its potential to revolutionize various sectors, concerns persist regarding its potential misuse, including election manipulation and deepfake disinformation. Calls for international standards to govern AI development and usage have been echoed, recognizing the significant regulatory challenges posed by AI’s rapid advancement.
Experts at the summit also called for inclusive AI development, addressing concerns about inequality in who benefits from the technology. Rumman Chowdhury, an AI ethics expert, highlighted the disproportionate distribution of AI’s benefits and stressed that inclusivity in development is essential to equitable outcomes.
In conclusion, the summit underscored the imperative of collaborative efforts among governments, tech firms, and academic experts to navigate the complex challenges posed by AI while ensuring its responsible and inclusive development for the benefit of all.
Source: france24.com
The post Govts, tech firms vow to cooperate against AI risks at Seoul summit appeared first on HIPTHER Alerts.
EU’s new AI rules: Industry opposed to revealing guarded trade secrets
![eu’s-new-ai-rules:-industry-opposed-to-revealing-guarded-trade-secrets](https://roboticulized.com/wp-content/uploads/2024/06/150199-eus-new-ai-rules-industry-opposed-to-revealing-guarded-trade-secrets.jpg)
New regulations in the European Union (EU) are set to compel companies to increase transparency regarding the data they use to train artificial intelligence (AI) systems, potentially unveiling closely guarded industry practices, reports the Times of India.
Since Microsoft-backed OpenAI introduced ChatGPT to the public 18 months ago, public interest and investment in generative AI have surged. The technology enables rapid generation of text, images, and audio content.
As the AI industry expands, concerns have emerged regarding how companies source data for training their models, particularly whether using content from popular books and movies without creators’ consent constitutes a breach of copyright.
The EU’s new AI Act, phased in over the next two years, mandates stricter regulations while allowing time for businesses to adjust to new requirements. Nevertheless, the practical implementation of these rules remains uncertain, notes the report.
Mandating “detailed summaries”
A contentious provision of the AI Act requires organizations deploying general-purpose AI models like ChatGPT to provide “detailed summaries” of their training data. The newly established AI Office plans to release a template for these summaries by early 2025 after consulting stakeholders. However, AI companies oppose disclosing their training data, arguing that it is a trade secret whose publication would unfairly benefit competitors, the report reveals.
In the past year, major tech firms including Google, OpenAI, and Stability AI have faced lawsuits alleging unauthorized use of content for AI training. While US President Joe Biden’s executive orders have addressed AI security risks, legal questions around copyright remain largely untested, the report adds.
Backlash against OpenAI
Amid heightened scrutiny, tech companies have struck content-licensing deals with media outlets and websites. OpenAI, for instance, has partnered with the Financial Times and The Atlantic, while Google has collaborated with NewsCorp and Reddit.
Despite these efforts, OpenAI drew criticism in March when Chief Technology Officer Mira Murati declined to confirm whether YouTube videos were used to train its video-generating tool, Sora, a practice that could violate the platform’s terms and conditions.
Source: business-standard.com
Picsart teams up with Getty to take on Adobe’s ‘commercially-safe’ AI
![picsart-teams-up-with-getty-to-take-on-adobe’s-‘commercially-safe’-ai](https://roboticulized.com/wp-content/uploads/2024/06/150201-picsart-teams-up-with-getty-to-take-on-adobes-commercially-safe-ai.jpg)
Picsart and Getty Images are joining forces to develop an AI image generator exclusively trained on Getty’s licensed stock content.
According to Picsart, their AI lab is constructing a bespoke model from the ground up to power this tool. It aims to provide the platform’s paying subscribers with the ability to generate images that come with full commercial rights. This initiative seeks to address persistent concerns about potential copyright violations associated with AI-generated content. The Picsart / Getty Images generator is slated for launch later this year and will be accessible through Picsart’s API services.
This collaboration bears similarities to Adobe’s Firefly AI model, initially introduced as a prompt-based image generation tool within Photoshop last year. Adobe has since expanded its integration across various Creative Cloud applications. Adobe’s model also emphasizes commercial safety by training on stock images from Adobe’s own library, along with openly licensed or out-of-copyright content. However, questions remain about the integrity of the training data and user trust in Adobe’s approach.
Getty Images has previously ventured into commercially focused AI products through partnerships with Bria AI and Runway, and by teaming up with Nvidia to introduce “Generative AI by Getty Images,” leveraging its extensive catalog of licensed images. Adobe’s widespread integration of the Firefly model into popular applications like Photoshop, Illustrator, Lightroom, and Express may make it hard for Picsart’s new offering to draw creatives away from Adobe’s established ecosystem.
Source: theverge.com
Microsoft to delay release of Recall AI feature on security concerns
![microsoft-to-delay-release-of-recall-ai-feature-on-security-concerns](https://roboticulized.com/wp-content/uploads/2024/06/150203-microsoft-to-delay-release-of-recall-ai-feature-on-security-concerns.jpg)
On June 13, Microsoft announced it will postpone next week’s planned rollout of its AI-powered “Recall” feature on new computers, citing privacy concerns. Instead, the tech giant plans to offer Recall as a preview to a smaller group later, after gathering feedback and conducting additional testing.
Recall is designed to track various activities from web browsing to voice chats, compiling a searchable history stored on the user’s computer. This allows users to easily retrieve past actions, even months later.
Originally slated for broad availability on June 18 for Copilot+ PC users, Recall will now undergo a preview phase exclusively within Microsoft’s Windows Insider Program (WIP) in the coming weeks. The decision, announced in a blog post by the Redmond, Washington-based company, underscores its commitment to ensuring a trusted, secure, and reliable experience for all customers.
Copilot+ PCs, introduced in May, feature advanced AI capabilities aimed at enhancing user interactions and productivity. The WIP, a platform for software testing, enables enthusiasts to preview upcoming Windows operating system features.
Microsoft intends to incorporate feedback from the WIP community before extending the Recall preview to all Copilot+ PC users in the near future.
Following the feature’s announcement, privacy concerns were swiftly voiced on social media, with some users fearing potential surveillance implications. Elon Musk, the prominent technologist and billionaire, likened Recall to a scenario from the dystopian series “Black Mirror,” highlighting societal apprehensions about the impact of advanced technologies.
Source: reuters.com