

New material found by AI could reduce lithium use in batteries

A brand new substance, which could reduce lithium use in batteries, has been discovered using artificial intelligence (AI) and supercomputing.

The findings were made by Microsoft and the Pacific Northwest National Laboratory (PNNL), which is part of the US Department of Energy.

Scientists say the material could potentially reduce lithium use by up to 70%.

Since its discovery, the new material has been used to power a lightbulb.

Microsoft researchers used AI and supercomputers to narrow down 32 million potential inorganic materials to 18 promising candidates in less than a week – a screening process that could have taken more than two decades to carry out using traditional lab research methods.
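
The article does not detail how this screening worked, but the funnel it describes – cheap, machine-learned property predictions pruning tens of millions of candidate materials before costly simulation and lab work – can be sketched roughly as follows. Everything in the sketch (the Candidate fields, the thresholds, the toy pool of three structures) is an illustrative assumption, not the actual Microsoft/PNNL pipeline.

```python
# Minimal sketch of a staged materials-screening funnel, assuming hypothetical
# property predictors and thresholds; not Microsoft's or PNNL's actual models.

from dataclasses import dataclass


@dataclass
class Candidate:
    formula: str                   # illustrative composition label
    predicted_stability: float     # cheap ML proxy for thermodynamic stability (eV/atom)
    predicted_conductivity: float  # cheap ML proxy for ionic conductivity (S/cm)


def screen(candidates, filters):
    """Apply successively stricter (and costlier) filters, keeping survivors."""
    survivors = list(candidates)
    for keep in filters:
        survivors = [c for c in survivors if keep(c)]
    return survivors


# Toy pool standing in for the ~32 million structures actually enumerated.
pool = [
    Candidate("A", 0.02, 3e-4),
    Candidate("B", 0.30, 5e-4),
    Candidate("C", 0.01, 2e-6),
]

# Cheap ML filters run first over the whole pool; only survivors would move on
# to expensive physics-based simulation and, finally, synthesis in the lab.
filters = [
    lambda c: c.predicted_stability < 0.05,     # keep roughly stable compositions
    lambda c: c.predicted_conductivity > 1e-4,  # keep plausible solid electrolytes
]

print([c.formula for c in screen(pool, filters)])  # -> ['A']
```

In the reported workflow, the surviving shortlist was then handed to human experts, which is the step described later in the article.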


The process from inception to the development of a working battery prototype took less than nine months.

The two organisations achieved this using advanced AI and high-performance computing, which combines large numbers of computers to solve complex scientific and mathematical tasks.

Executive vice president of Microsoft, Jason Zander, told the BBC one of the tech giant’s missions was to “compress 250 years of scientific discovery into the next 25”.

“And we think technology like this will help us do that. This is the way that this type of science I think is going to get done in the future,” he said.

The problem with lithium


Lithium is often referred to as “white gold” because of its market value and silvery colour. It is one of the key components in rechargeable batteries (lithium-ion batteries) that power everything from electric vehicles (EVs) to smartphones.

As demand for EVs rises and the need for the metal ramps up with it, the world could face a lithium shortage as soon as 2025, according to the International Energy Agency.

Demand for lithium-ion batteries is also expected to increase up to tenfold by 2030, according to the US Department of Energy, so manufacturers are constantly building battery plants to keep up.

Lithium mining can be controversial as it can take several years to develop and has a considerable impact on the environment. Extracting the metal requires large amounts of water and energy, and the process can leave huge scars in the landscape, as well as toxic waste.

Dr Nuria Tapia-Ruiz, who leads a team of battery researchers at the chemistry department at Imperial College London, said any material with reduced amounts of lithium and good energy storage capabilities is “the holy grail” in the lithium-ion battery industry.


“AI and supercomputing will become crucial tools for battery researchers in the upcoming years to help predict new high-performing materials,” she said.

But Dr Edward Brightman, lecturer in chemical engineering at the University of Strathclyde, said the tech would need to be “treated with a bit of caution”.

“It could throw up spurious results, or results that look good at first, and then turn out to either be a material that is known or that can’t be synthesised in the lab,” he said.

This AI-derived material, which at the moment is simply called N2116, is a solid-state electrolyte that has been tested by scientists who took it from a raw material to a working prototype.

It has the potential to be a sustainable energy storage solution because solid-state batteries are safer than traditional batteries that use liquid or gel-like lithium electrolytes.


In the near future, faster-charging solid-state lithium batteries promise to be even more energy-dense, with thousands of charge cycles.

How is this AI different?

The technology works by using a new type of AI, created by Microsoft and trained on molecular data, that can actually figure out chemistry.

“This AI is all based on scientific materials, database and properties,” explained Mr Zander.

“The data is very trustworthy for using it for scientific discovery.”


After the software narrowed the field to 18 candidates, battery experts at PNNL examined them and picked the final substance to work on in the lab.

Karl Mueller from PNNL said the AI insights from Microsoft pointed them “to potentially fruitful territory so much faster” than under normal working conditions.

“[We could] modify, test and tune the chemical composition of this new material and quickly evaluate its technical viability for a working battery, showing the promise of advanced AI to accelerate the innovation cycle,” he said.

 
Source: BBC

Meta and Apple Hold Back New AI Features For European Users Over Regulatory Concerns

American Big Tech vs. European Regulators: The AI Battleground
Artificial Intelligence has become the latest contentious issue between American tech giants and European regulators, who are closely monitoring AI services for potential privacy, antitrust, and consumer protection violations.
Apple and Meta Withhold AI Features in Europe
In recent weeks, Apple and Meta have delayed the release of new AI features in the European market to avoid potential penalties. However, the vast size of the European market might compel these companies to address regulatory concerns.
Apple Intelligence and DMA Compliance Fears
Apple recently unveiled Apple Intelligence, a suite of new AI features, including the integration of ChatGPT into iOS. However, due to concerns that its deal with OpenAI might violate the EU’s Digital Markets Act (DMA), Apple has postponed the rollout of these features for EU users.
“We are concerned that the interoperability requirements of the DMA could force us to compromise the integrity of our products in ways that risk user privacy and data security,” Apple stated.
While US customers will access the new AI features later this year, EU iPhone users will have to wait until Apple can ensure the software upgrade complies fully with the DMA.
Balancing Regulatory Compliance
Apple faces a challenging situation: non-compliance with DMA regulations could result in fines of up to 10% of its global turnover. On the other hand, not providing the same AI features and services in the EU as in other regions could harm its business in Europe.
This regulatory challenge is not unique to Apple. Meta is also navigating similar issues, although its concerns are more focused on privacy laws than antitrust regulations.
Meta’s AI Data Gathering and GDPR Complaints
In early June, Meta announced changes to its privacy policy that would allow the company to use personal data from millions of Europeans to train its AI models. This move drew criticism from Max Schrems, a well-known privacy advocate who has previously sued Meta for violating the EU’s General Data Protection Regulation (GDPR).
“Meta is basically saying that it can use ‘any data from any source for any purpose and make it available to anyone in the world’, as long as it’s done via ‘AI technology’. This is clearly the opposite of GDPR compliance,” said Schrems.
Through his campaign group noyb, Schrems filed complaints with 11 EU privacy watchdogs, including the Data Protection Commission (DPC) in Ireland.
Meta AI Launch Delayed in Europe
Following these complaints, Meta has suspended its plans to use data from EU and UK citizens to train its AI systems and has delayed the launch of Meta AI in the region. Meta AI, built on Llama 3, is intended to rival ChatGPT and integrate into platforms like Facebook Messenger, Instagram, and WhatsApp.
Stefano Fratta, Meta’s Global Engagement Director, defended the company’s approach, stating, “We are following the example set by others, including Google and OpenAI, both of which have already used data from Europeans to train AI. Our approach is more transparent and offers easier controls than many of our industry counterparts.”
The Likelihood of Compromise
Both Apple and Meta maintain that offering new AI services without violating European regulations is currently unfeasible. However, history suggests that regulators are unlikely to back down in such standoffs. In similar past situations, American companies have often been forced to compromise.
Meta, for instance, has previously threatened to withdraw its services from the EU but has never followed through. For Apple, permanently limiting AI features for European users is also not a viable long-term strategy. Ultimately, US tech giants will need to find ways to deploy AI services that comply with European regulations.
Source: ccn.com

Quickcode.ai Raises $1.1M in Seed Funding

Quickcode.ai, a McLean, VA-based provider of artificial intelligence software for the trade compliance industry, has secured $1.1 million in Seed funding.
The funding round was supported by PS27 Ventures and DataTribe.
The company plans to use the funds to expand its operations and enhance its development efforts.
Led by CEO Shannon Hynds, Quickcode.ai leverages large language models and a science-based user interface to streamline data management for trade compliance professionals. Its products include a 24/7 cloud-based product compliance monitoring platform, a specialized integration for SaaS providers needing access to trade compliance data, and an API that facilitates the assignment of compliance data to extensive product datasets.
Source: finsmes.com

Massachusetts Attorney General Clarifies Position on Artificial Intelligence

Massachusetts Attorney General Issues Advisory on AI Regulation
The Massachusetts Attorney General’s Office (AGO) has issued an advisory clarifying that existing Massachusetts law applies to artificial intelligence (AI) in the same way it applies to any other product in commerce.
Massachusetts Attorney General Andrea Campbell is the first AG in the country to provide such guidance about AI. The advisory acknowledges AI’s potential societal benefits and emphasizes Massachusetts’s significant role in guiding the technology’s development.
However, the primary purpose of the advisory is to warn AI developers, suppliers, and users that Massachusetts law, including the Massachusetts Consumer Protection Act (Chapter 93A), applies to AI. This Act prohibits unfair or deceptive business practices in Massachusetts.
The AGO provided the following examples of unfair or deceptive AI business practices:

Falsely advertising the quality, value, or usability of AI systems.
Supplying a defective, unusable, or impractical AI system for the advertised purpose.
Misrepresenting the reliability, performance, safety, or conditions of an AI system, including claims of bias-free operation.
Selling an AI system that breaches warranty by not being fit for its ordinary or specified purpose.
Misrepresenting audio or video of a person to deceive others into business transactions or sharing personal information, as in cases of deepfakes, voice cloning, or fraudulent chatbots.
Failing to comply with Massachusetts laws intended to protect public health, safety, or welfare.

The advisory also reminds businesses that AI systems must comply with privacy protection, anti-discrimination, and federal consumer protection laws.
Increasing AI Regulation
AI is expected to face increasing regulation and litigation at both state and federal levels. At the national level, the Biden administration issued an Executive Order in October 2023, directing federal agencies to address AI’s growing utility and risks. Following this, the Federal Trade Commission proposed a rule prohibiting AI from impersonating humans, and the Department of Labor announced principles for AI systems in the workplace. Other federal agencies are also taking action.
In 2024, Colorado and Utah passed AI laws likely to serve as models for other states. The Colorado Artificial Intelligence Act and Utah’s Artificial Intelligence Policy Act integrate AI use within existing state consumer protection laws. Reflecting the AGO’s warning, plaintiffs have begun asserting privacy and consumer claims based on AI technology on business websites.
Internationally, the EU Artificial Intelligence Act, enacted on March 13, 2024, categorizes AI applications by risk level and regulates them accordingly. Unacceptable risk applications are banned, while high-risk applications are subject to strict precautionary measures and oversight. AI developers and suppliers doing business in Europe should ensure compliance with the EU AI Act.
Preparing for AI Compliance, Enforcement, and Litigation Risks
Given the uncertainty surrounding future AI deployment and how laws will be applied, compliance obligations and enforcement risks are likely to increase. Businesses should consult with experienced counsel before deploying new AI tools to mitigate risks. Organizations should consider the following measures:

Develop an internal AI policy governing the use of AI in the workplace.
Update due diligence practices to understand third-party vendors’ use of AI, including data collection, transmission, storage, and usage in training AI tools.
Monitor state and federal laws for new legal developments affecting compliance obligations.
Ensure appropriate governance processes, including continuous monitoring and testing for AI quality and absence of bias.
Provide clear disclosure about AI tools, functions, and features, including notifications when customers engage with an AI assistant.
Modify privacy policies and terms and conditions to explain AI technology use and available opt-out or dispute resolution options for customers.
Review and update third-party contracts for AI-related terms, disclosure obligations, and liability allocation.

By taking these steps, businesses can better navigate the evolving regulatory landscape and ensure compliance with emerging AI laws.
Source: natlawreview.com