Amnesty Warns: Artificial Intelligence Stirs Global Tensions, Imperils Fundamental Rights

Amnesty International’s recent report presents a strong critique of artificial intelligence (AI) and highlights concerns regarding its growing involvement in various forms of discrimination that contribute to ethnic, racial, and social tensions.
The organization warns that although AI is touted as a solution for security and anti-discrimination efforts, in practice it exacerbates racism and other discriminatory practices, thereby undermining fundamental human rights.
Presenting the annual “State of the World’s Human Rights” report on April 24, Katia Roux, Amnesty International’s advocacy officer for technology and human rights, stressed that these technologies, often hailed as effective solutions, actually amplify racism and discrimination, perpetuating violations of fundamental rights.
These concerns are magnified by two primary factors: a significant election year, with more than half of the global population expected to vote, and ongoing high-intensity conflicts in regions like Gaza, Ukraine, and Sudan. In this volatile environment, AI thrives due to the absence of strict regulation and contributes to escalating tensions.
Algorithm-Driven Discrimination
Amnesty International highlights that various AI applications can lead to racial or ethnic discrimination, with different impacts depending on how they are used. During the conflicts in Ethiopia and Myanmar, for example, Meta’s (Facebook’s) algorithms were observed to amplify violent content, fueling hatred against the Tigrayans and the Rohingya. By prioritizing engaging but violent content, these systems exacerbate hate speech.
Moreover, newer technologies such as generative AI, capable of creating content autonomously, disrupt political processes, even in democracies. In the United Kingdom and the United States, AI-generated “deepfakes” and fabricated audio have been used to interfere with political campaigns, undermining the democratic process.
Facial recognition and emotion detection tools, both categories of AI, further entrench unequal treatment, particularly when used by governments. Amnesty International criticizes Europe for deploying the EU-funded “iBorder Control” software at the Latvian, Hungarian, and Greek borders, a system accused of facilitating discrimination.
Roux highlighted that such technologies, labeled as ‘neutral,’ are designed by humans and can be manipulated to target specific groups, posing a threat to vulnerable populations.
Challenges with AI Regulation
Amnesty International underscores that the widespread use of AI for security purposes remains only minimally regulated, even though its effectiveness and necessity have never been demonstrated. Although the European Parliament adopted the Artificial Intelligence Act in March 2024, aiming to establish rules across all AI domains, Roux believes it falls short, particularly on facial recognition.
With the Olympic Games approaching, France’s decision to legalize AI-driven video surveillance raises concerns for Amnesty International. Despite assurances from the Ministry of the Interior that its use will be limited, the organization fears that such systems may become permanent fixtures of daily life, posing risks to privacy and fundamental rights.
Source: international.la-croix.com

Unveiling the Complex Psychological Implications of Artificial Intelligence

In today’s world, the realm of artificial intelligence (AI) presents us with fascinating possibilities and unsettling dilemmas. From engaging in nuanced conversations with humanoid robots to grappling with the consequences of deepfake technology, the advancements in AI have far-reaching implications that extend into the realm of human psychology, as noted by Joel Pearson, a cognitive neuroscientist at the University of New South Wales.
While AI holds the promise of simplifying our lives, Pearson emphasizes that these developments can also have profound effects on our mental well-being, challenging our perceptions and emotional responses in ways we may not fully comprehend. Despite popular fears of killer robots and rogue self-driving cars, Pearson suggests that the psychological impacts of AI are just as significant, if not more so, even though they are less tangible.
One area of concern highlighted by Pearson is the tendency for humans to anthropomorphize AI entities, attributing human-like qualities to non-human agents such as chatbots. This phenomenon can lead to emotional attachments and vulnerabilities, as evidenced by individuals who develop romantic feelings for AI companions like Replika. Pearson underscores the need for further research into the implications of these human-AI relationships, particularly regarding their impact on interpersonal dynamics and emotional health.
Furthermore, Pearson raises alarm about the proliferation of deepfake technology, which has the potential to distort our perception of reality and erode trust in media. Deepfake images and videos, often used for nefarious purposes like non-consensual pornography, can leave lasting impressions on our psyche, even after their falsity is exposed. Pearson warns of the long-term effects of exposure to such content, particularly on vulnerable populations like teenagers whose developing brains may be more susceptible to manipulation.
In response to these challenges, Pearson calls for a nuanced understanding of AI’s psychological impact and advocates for a proactive approach to addressing its potential harms. He stresses the importance of prioritizing human connection and well-being in the face of technological uncertainty, urging individuals to reflect on their values and embrace activities that foster genuine human interaction.
Ultimately, Pearson’s message serves as a reminder that while AI offers immense potential, we must remain vigilant about its unintended consequences and prioritize our mental and emotional resilience in navigating an increasingly AI-driven world. By acknowledging the psychological implications of AI and engaging in thoughtful dialogue, we can work towards harnessing its benefits while mitigating its risks.
Source: abc.net.au

US official calls on China and Russia to affirm human, not AI, control over nuclear weapons

In a recent online briefing, Paul Dean, an arms control official from the State Department, called on China and Russia to align their declarations with those of the United States and other nations. He stressed the importance of ensuring that only humans, not artificial intelligence, are responsible for decisions regarding the deployment of nuclear weapons.
Dean highlighted Washington’s firm commitment to maintaining human control over nuclear weapons, a commitment echoed by France and Britain. He expressed the hope that China and Russia would issue similar statements, emphasizing the significance of this norm of responsible behavior, especially within the context of the five permanent members of the United Nations Security Council.
These remarks coincide with efforts by the administration of U.S. President Joe Biden to engage in separate discussions with China on nuclear weapons policy and the development of artificial intelligence.
While the Chinese defense ministry has yet to respond to these comments, discussions on artificial intelligence emerged during recent talks between U.S. Secretary of State Antony Blinken and China’s Foreign Minister Wang Yi in Beijing. Both parties agreed to hold their first bilateral talks on artificial intelligence in the coming weeks, aiming to address concerns about the technology’s risks and safety.
Although U.S. and Chinese officials resumed nuclear weapons discussions in January as part of efforts to normalize military communications, formal arms control negotiations are not expected in the near future. Meanwhile, China, which is expanding its nuclear capabilities, has previously suggested that the largest nuclear powers should prioritize negotiating a no-first-use treaty among themselves.
Source: reuters.com

Enterprise AI Faces Looming Energy Crisis

The widespread adoption of artificial intelligence (AI) has been remarkable, but it has come at a significant cost.
R K Anand, co-founder and chief product officer at Recogni, highlighted the exponential growth in data and compute power required to train modern AI systems. He emphasized that firms must invest substantial resources, both in terms of time and money, to train some of today’s largest foundational models.
Moreover, the expenditure doesn’t end once the models are trained. Meta, for instance, anticipates spending between $35 billion and $40 billion on AI and metaverse development this fiscal year. This substantial investment underscores the ongoing financial commitment necessary for AI development.
Given these challenges, Anand stressed the importance of developing next-generation AI inference solutions that prioritize performance and power efficiency while minimizing total ownership costs. He emphasized that inference is where the scale and demand of AI will be realized, making efficient technology essential from both a power cost and total cost of operations perspective.
AI inference, which follows AI training, is crucial for real-world applications of AI. Anand explained that while training builds the model, inference involves the AI system producing predictions or conclusions based on existing knowledge.
However, inference also represents a significant ongoing cost in terms of power and computing. To mitigate these expenses, Anand suggested methods such as weight pruning and precision reduction through quantization to design more efficient models.
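For readers unfamiliar with the two techniques Anand names, the sketch below shows what weight pruning and precision reduction via quantization can look like in practice. PyTorch is assumed here (the article does not name a framework), and the toy model and layer sizes are hypothetical.

```python
# Minimal sketch of the two efficiency techniques mentioned above: weight
# pruning and int8 quantization. PyTorch is assumed; the toy model is hypothetical.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Stand-in for a trained model; any nn.Module with Linear layers works the same way.
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))

# Weight pruning: zero out the 30% smallest-magnitude weights in each Linear
# layer, then make the mask permanent so the sparsity persists at inference.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")

# Precision reduction: dynamic quantization stores Linear weights as int8,
# shrinking the model and typically cutting CPU inference cost.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

# Inference runs on the smaller, lower-precision model.
with torch.no_grad():
    prediction = quantized(torch.randn(1, 512))
```

Because most of a model’s lifetime compute is spent serving predictions rather than training, shrinking the deployed model in this way is one route toward the favorable unit economics Anand describes.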
Since a large portion of an AI model’s lifespan is spent in inference mode, optimizing inference efficiency becomes crucial for lowering the overall cost of AI operations.
Anand highlighted the importance of efficient inference for enterprises, noting that it enables higher productivity and returns on investment. However, he cautioned that without favorable unit economics, the AI industry could face challenges, especially considering the increasing volume of data.
Ultimately, Anand emphasized the need for AI solutions that increase productivity without significantly increasing operating costs. He predicted a shift towards allocating a larger portion of computing resources to inference as AI becomes more integrated into day-to-day work.
Source: pymnts.com

 