
A congressman wanted to understand AI. So he went back to a college classroom to learn

Don Beyer’s car dealerships were among the first in the U.S. to set up a website. As a representative, the Virginia Democrat leads a bipartisan group focused on promoting fusion energy. He reads books about geometry for fun.
So when questions about regulating artificial intelligence emerged, the 73-year-old Beyer took what for him seemed like an obvious step, enrolling at George Mason University to get a master’s degree in machine learning. In an era when lawmakers and Supreme Court justices sometimes concede they don’t understand emerging technology, Beyer’s journey is an outlier, but it highlights a broader effort by members of Congress to educate themselves about artificial intelligence as they consider laws that would shape its development.
Frightening to some, thrilling to others, baffling to many: Artificial intelligence has been called a transformative technology, a threat to democracy or even an existential risk for humanity. It will fall to members of Congress to figure out how to regulate the industry in a way that encourages its potential benefits while mitigating the worst risks.
“I tend to be an AI optimist,” Beyer told The Associated Press following a recent afternoon class on George Mason’s campus in suburban Virginia. “We can’t even imagine how different our lives will be in five years, 10 years, 20 years, because of AI. … There won’t be robots with red eyes coming after us any time soon. But there are other deeper existential risks that we need to pay attention to.”
Risks like massive job losses in industries made obsolete by AI, programs that retrieve biased or inaccurate results, or deepfake images, video and audio that could be leveraged for political disinformation, scams or sexual exploitation. On the other side of the equation, onerous regulations could stymie innovation, leaving the U.S. at a disadvantage as other nations look to harness the power of AI.
Striking the right balance will require input not only from tech companies but also from the industry’s critics, as well as from the industries that AI may transform. While many Americans may have formed their ideas about AI from science fiction movies like “The Terminator” or “The Matrix,” it’s important that lawmakers have a clear-eyed understanding of the technology, said Rep. Jay Obernolte, R-Calif., chairman of the House’s AI Task Force.
When lawmakers have questions about AI, Obernolte is one of the people they seek out. He studied engineering and applied science at the California Institute of Technology and earned an M.S. in artificial intelligence at UCLA. The California Republican also started his own video game company. Obernolte said he’s been “very pleasantly impressed” with how seriously his colleagues on both sides of the aisle are taking their responsibility to understand AI.
That shouldn’t be surprising, Obernolte said. After all, lawmakers regularly vote on bills that touch on complicated legal, financial, health and scientific subjects. If you think computers are complicated, check out the rules governing Medicaid and Medicare.
Keeping up with the pace of technology has challenged Congress since the steam engine and the cotton gin transformed the nation’s industrial and agricultural sectors. Nuclear power and weaponry offer another example of a highly technical subject that lawmakers have had to contend with in recent decades, according to Kenneth Lowande, a University of Michigan political scientist who has studied expertise and how it relates to policy-making in Congress.
Federal lawmakers have created several offices, such as the Library of Congress and the Congressional Budget Office, to provide resources and specialized input when necessary. They also rely on staff with specialized expertise in subjects that include technology.
Then there’s another, more informal form of education that many members of Congress receive.
“They have interest groups and lobbyists banging down their door to give them briefings,” Lowande said.
Beyer said he’s had a lifelong interest in computers and that when AI emerged as a topic of public interest he wanted to know more. A lot more. Almost all of his fellow students are decades younger; most don’t seem that fazed when they discover their classmate is a congressman, Beyer said.
He said the classes, which he fits in around his busy congressional schedule, are already paying off. He’s learned about the development of the field, its challenges, such as biases and unreliable data, and its possibilities, like improved cancer diagnoses and more efficient supply chains.
Beyer is also learning how to write computer code.
“I’m finding that learning to code, which is thinking in this sort of mathematical, algorithmic, step-by-step way, is helping me think differently about a lot of other things: how you put together an office, how you work a piece of legislation,” Beyer said.
While a computer science degree isn’t required, it’s imperative that lawmakers understand AI’s implications for the economy, national defense, health care, education, personal privacy and intellectual property rights, according to Chris Pierson, CEO of the cybersecurity firm BlackCloak.
“AI is not good or bad,” said Pierson, who formerly worked in Washington for the Department of Homeland Security. “It’s how you use it.”
The work of safeguarding AI has already begun, though it’s the executive branch leading the way so far. Last month, the White House unveiled new rules that require federal agencies to show their use of AI isn’t harming the public. Under an executive order issued last year, AI developers must provide information on the safety of their products.
When it comes to more substantive action, America is playing catchup to the European Union, which recently enacted the world’s first significant rules governing the development and use of AI. The rules prohibit some uses — routine AI-enabled facial recognition by law enforcement, for one — while requiring other programs to submit information about safety and public risks. The landmark law is expected to serve as a blueprint for other nations as they contemplate their own AI laws.
As Congress begins that process, the focus must be on “mitigating potential harm,” said Obernolte, who said he’s optimistic that lawmakers from both parties can find common ground on ways to prevent the worst AI risks.
“Nothing substantive is going to get done that isn’t bipartisan,” he said.
To help guide the conversation, lawmakers created a new AI task force (Obernolte is co-chairman), as well as an AI Caucus made up of lawmakers with particular expertise or interest in the topic. They’ve invited experts to brief lawmakers on the technology and its impacts: not just computer scientists and tech gurus, but also representatives from the sectors that see their own risks and rewards in AI.
Rep. Anna Eshoo is the Democratic chairwoman of the caucus. She represents part of California’s Silicon Valley and recently introduced legislation that would require tech companies and social media platforms like Meta, Google or TikTok to identify and label AI-generated deepfakes to ensure the public isn’t misled. She said the caucus has already proved its worth as a “safe place” where lawmakers can ask questions, share resources and begin to craft consensus.
“There isn’t a bad or silly question,” she said. “You have to understand something before you can accept or reject it.”
Source: apnews.com
The post A congressman wanted to understand AI. So he went back to a college classroom to learn appeared first on HIPTHER Alerts.



Easyjet, eyeing record summer, boosts nerve center with AI


Easyjet’s cutting-edge facility, equipped with artificial intelligence (AI), is now operational near Luton airport, managing approximately 2,000 flights daily across Europe as the airline anticipates heightened demand for the summer season.
Known as the Integrated Control Center (ICC), this facility plays a pivotal role in Easyjet’s operations, facilitating swift adjustments to flight schedules and monitoring aircraft health in real-time. Situated near London, the center boasts advanced capabilities, including engine analysis and even monitoring of onboard facilities like toilets.
As the aviation industry rebounds from the impacts of COVID-19, Easyjet has ramped up its recruitment efforts, doubling the workforce at the control center to 266 employees, who work tirelessly around the clock, overseeing operations displayed on large curved screens.
Gill Baudot, Easyjet’s director of network control, emphasized the airline’s anticipation of a busy summer season, projecting a significant increase in passenger numbers to 300,000 daily. The ICC plays a crucial role in adapting swiftly to any disruptions, from adverse weather conditions to technical issues, ensuring smooth logistics and minimizing passenger inconvenience.
To enhance decision-making during critical situations, Easyjet has integrated an AI tool similar to ChatGPT, aiding staff in tasks such as aircraft rerouting and crew assignment. Despite technological advancements, Baudot stressed the importance of human judgment in making final decisions.
Throughout the pandemic, Easyjet, under the leadership of CEO Johan Lundgren, has recognized the potential of data-driven solutions to enhance customer experience and operational efficiency. Lundgren highlighted the role of technology in optimizing various aspects of airline operations, from predicting catering needs to facilitating predictive maintenance decisions.
Mark Garrett, overseeing customer disruption at Easyjet, emphasized the center’s focus on passenger welfare, ensuring prompt communication and prioritization of flights, particularly those with significant numbers of children or group travelers, in the event of disruptions.
As Easyjet navigates through the challenges posed by COVID-19, its investment in technology, automation, and AI reflects a commitment to adaptability and efficiency in serving passengers and maintaining operational excellence in the aviation sector.
Source: techxplore.com




Building better ethical standards for AI, for democracy


In our swiftly advancing digital realm, Artificial Intelligence (AI) presents both a formidable challenge to and a significant opportunity for the health of democracy. One emerging threat to democratic systems stems from malicious actors who seek to exploit AI to disrupt societal cohesion.
The unchecked proliferation of AI language models has underscored the urgent need for robust ethical standards. Core principles such as privacy, algorithm transparency, user safety, fairness, and inclusivity have often been overlooked amid the rapid progress of AI technologies. Establishing and rigorously testing clear ethical guidelines is crucial to ensuring that AI operates within ethical boundaries and contributes positively to the common good.
A comprehensive approach is necessary, one that protects free speech while empowering users to identify and address bias and harmful content. Initially, efforts should focus on detecting and analyzing disinformation, biases, discrimination, hate speech, and deepfakes. Sophisticated tools leveraging machine learning and natural language processing techniques can be developed to identify and scrutinize harmful content in real-time. These tools should be tested with diverse user groups to safeguard free speech.
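As a toy illustration of the kind of machine-learning tool described above, the sketch below trains a tiny Naive Bayes text classifier to flag suspect phrasing. Everything in it, from the two labels to the handful of training sentences, is invented for the example; real moderation systems rely on far larger datasets, richer models and human review.

```python
import math
import re
from collections import Counter

# Invented toy training data: a real system would use thousands of
# human-labeled examples, not six hand-written sentences.
TRAIN = [
    ("the election results were faked by aliens", "harmful"),
    ("this miracle cure ends all disease overnight", "harmful"),
    ("they are secretly controlling the weather", "harmful"),
    ("the city council meets on tuesday evening", "benign"),
    ("the new library opens next month downtown", "benign"),
    ("local team wins the regional championship", "benign"),
]

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

class NaiveBayes:
    def __init__(self, examples):
        self.word_counts = {"harmful": Counter(), "benign": Counter()}
        self.class_counts = Counter()
        for text, label in examples:
            self.class_counts[label] += 1
            self.word_counts[label].update(tokenize(text))
        self.vocab = set()
        for counts in self.word_counts.values():
            self.vocab.update(counts)

    def score(self, text, label):
        # Log prior plus log likelihood with add-one smoothing.
        total = sum(self.class_counts.values())
        logp = math.log(self.class_counts[label] / total)
        n = sum(self.word_counts[label].values())
        v = len(self.vocab)
        for word in tokenize(text):
            logp += math.log((self.word_counts[label][word] + 1) / (n + v))
        return logp

    def classify(self, text):
        return max(("harmful", "benign"), key=lambda lbl: self.score(text, lbl))

clf = NaiveBayes(TRAIN)
print(clf.classify("scientists secretly faked the results"))  # harmful
print(clf.classify("the library team meets tuesday"))         # benign
```

Even this crude sketch shows why such tools must be tested with diverse user groups before deployment: the classifier's judgments are only as balanced as the examples it was trained on.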
Furthermore, AI’s proactive engagement with misinformation can play a vital role in shaping public discourse. By actively countering falsehoods and promoting accurate information, AI-powered tools have the potential to guide conversations toward truthfulness and mitigate the spread of misinformation. This proactive approach not only limits the dissemination of harmful narratives but also fosters a culture of accountability and accuracy in digital spaces, enhancing public trust in AI’s ability to combat misinformation.
In addition to detection and engagement, the implementation of automated reporting systems is crucial in safeguarding democratic institutions from threats posed by state or terrorist-backed actors. These AI-powered systems can swiftly identify and flag harmful content to hosting platforms, enabling timely intervention and moderation. Streamlining the reporting process allows platforms to respond effectively, preserving the integrity of online discourse.
Transparency tools are also essential in building user trust and facilitating informed decision-making. By offering insights into the origins, legitimacy, and credibility of digital content, these tools empower individuals to navigate the digital landscape with discernment. From tracking sources to verifying links and fact-checking, transparency tools enable users to critically evaluate information and contribute to a safer online environment.
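A minimal sketch of one such transparency feature, source tracking, might look like the following. The list of recognized domains is a placeholder invented for this example; a real tool would consult curated source databases and fact-checking services.

```python
import re
from urllib.parse import urlparse

# Placeholder allowlist, invented for the example: a production tool
# would draw on maintained databases of credible sources.
KNOWN_SOURCES = {"apnews.com", "reuters.com", "europa.eu"}

URL_RE = re.compile(r"https?://[^\s]+")

def audit_links(text, known=KNOWN_SOURCES):
    """Return (domain, recognized?) for every URL found in the text."""
    report = []
    for url in URL_RE.findall(text):
        domain = urlparse(url).netloc.lower()
        # Strip a leading "www." so www.apnews.com matches apnews.com.
        if domain.startswith("www."):
            domain = domain[4:]
        report.append((domain, domain in known))
    return report

sample = ("Read the ruling at https://www.apnews.com/article/123 "
          "or the viral claim at http://totally-real-news.example/story")
for domain, recognized in audit_links(sample):
    print(domain, "recognized" if recognized else "unverified")
```

Surfacing where a link actually points is a small step, but it gives readers the raw material to evaluate credibility for themselves rather than outsourcing that judgment entirely.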
AI presents both opportunities and risks for democracy. It is imperative to ensure that AI aligns with our shared values to strengthen democratic institutions and uphold the highest standards of ethics and transparency. By prioritizing inclusivity, fairness, and accountability, we can harness AI’s potential to bolster resilience, safety, and trust in our democratic systems.
Source: diplomaticourier.com




AVPN launches $15M AI Opportunity Fund with Google.org, ADB support


AVPN, the premier network of social investors headquartered in Singapore, has unveiled the $15 million Artificial Intelligence (AI) Opportunity Fund: Asia-Pacific, in collaboration with Google.org and the Asian Development Bank (ADB).
In a statement released on Monday, AVPN outlined the fund’s objective as a three-year initiative aimed at equipping Asia’s workforce with essential AI knowledge and tools necessary for the evolving job market. The goal is to ensure that opportunities presented by AI are accessible to a broader spectrum of people in the region, particularly those from marginalized communities.
Through an open call, the fund seeks to identify and select non-profit organizations, social enterprises, and workforce associations across the Asia-Pacific region that effectively engage with workers most affected by AI-driven workforce transitions. Selected organizations will receive comprehensive support, including guidance, financial assistance, and customized AI training based on foundational courses developed by Google and its partners.
Scott Beaumont, President of Google Asia-Pacific, emphasized the transformative potential of AI in the region, stating, “AI presents tremendous opportunities for the Asia-Pacific region, but it’s crucial to equip people with the skills needed to thrive.” He expressed confidence that the AI Opportunity Fund will empower underserved communities and ensure widespread benefits from AI advancements.
The initiative comes at a time when employers in the Asia-Pacific region are increasingly optimistic about AI’s ability to enhance productivity. However, concerns persist among employees about potential job displacement due to AI automation, particularly among disadvantaged populations. Naina Subberwal Batra, CEO of AVPN, underscored the urgency of building an AI-ready workforce to unlock Asia’s full potential and address the socioeconomic impact of AI-driven job transitions.
Jason Rush, Principal Regional Cooperation Specialist at ADB, emphasized the importance of bridging skills and knowledge gaps in underserved communities to facilitate access to AI-driven opportunities. He highlighted the AI Opportunity Fund as a crucial tool to support organizations in upskilling and reskilling workers, enabling them to benefit from the evolving job landscape shaped by AI technology.
AVPN, comprising over 600 funders and resource providers across 33 markets, aims to enhance the flow and effectiveness of capital in Asia by enabling members to channel resources towards impactful initiatives. Google.org, Google’s philanthropic arm, collaborates with nonprofits, social enterprises, and civic entities to address societal challenges and create scalable, meaningful change. Through their combined efforts, AVPN and Google.org strive to harness the potential of AI to promote inclusive growth and opportunity across the Asia-Pacific region.
Source: technode.global


