Officials launch artificial intelligence research initiative

Officials announced an interdisciplinary research initiative on artificial intelligence’s real-life applications earlier this month.
The Trustworthy AI initiative, a plan to engage GW researchers across multiple fields, aims to improve existing AI models and to develop research strategies for AI’s societal applications that increase user trust. Zoe Szajnfarber, the director of strategic initiatives for GW Engineering and a faculty director for GW TAI, said GW TAI seeks to unite GW faculty who research AI under one initiative to create opportunities for research collaborations across disciplines.
Szajnfarber said many faculty members at GW have ongoing research projects related to AI, as well as larger programs like the Institute for Trustworthy AI in Law and Society and the Co-Design of Trustworthy AI Systems. She said it can be difficult for faculty to find joint research opportunities because AI research at the University spans numerous schools, and no program like GW TAI existed previously to connect AI researchers.
“The challenge is that the work is so diverse and distributed across many disciplines that it’s sometimes hard to keep up, let alone find whom to connect with relevant collaborators on any given project,” Szajnfarber said in an email.
Szajnfarber said GW TAI will serve as a means to facilitate research collaborations and joint projects across disciplines at GW and to share AI-related events and opportunities in one place.
“I see GW TAI as a platform to bring together researchers who want to contribute to this important problem space of TAI in systems and for society,” Szajnfarber said.
Faculty involved in the initiative said they hope to bolster current AI models and study the implications of AI use in areas like consumer behavior, social justice issues and medical decisions.
Erica Wortham — the director of the GW Innovation Center and a co-principal investigator of Designing Trustworthy AI Systems, a program for doctoral students to conduct AI research — said she teaches a summer course for computer science and systems engineering doctoral students on designing AI solutions to solve real-world problems, like AI use for cashierless grocery stores. She said the partnership between students in two fields is an example of the Trustworthy AI initiative’s multidisciplinary approach and allows those designing AI to focus on addressing problems for those who use AI.
“You have the folks making the models and building the algorithms talking to folks that study technical systems in context,” Wortham said.
Douglas Crawford, an assistant professor of interior architecture and a GW TAI faculty member, said he hopes to collaborate with faculty members who create AI to develop architecture-specific models through the initiative.
He said architecture students use AI’s graphic design capabilities to create “inspirational imagery” and to generate quick mock-ups for their designs. But because graphic AI is not specifically tailored to architecture, its outputs include “hallucinations” like staircases that lead to a wall without a doorway, he said.
“I’m excited to be included amongst that and be able to offer up the unique perspective of someone in the Corcoran School who is working the graphic AI side of things,” Crawford said.
Nils Olsen, an assistant professor of organizational sciences and a GW TAI faculty member, said he looks forward to further examining AI’s impacts on consumer decisions and its uses in the medical field, like determining diagnoses, as a researcher in the initiative.
“Certainly there are a lot of opportunities,” Olsen said. “My real value to add there would be on the cognitive underpinnings, how people make decisions, literally in their brain.”
Olsen said he has been conducting consumer behavior research since 2019 using AI bots rendered as cartoon versions of people from various racial groups, analyzing how consumers negotiate with the different bots over Airbnb prices. He said researchers aimed to assess whether consumers would negotiate more or less aggressively with a Black, Asian or white individual and found that consumers perceived the bot resembling a Black individual as the most competent, likable and human.
Olsen said researchers are now thinking about the implications of those findings, as AI bots could begin to facilitate negotiations and customer service more frequently.
“They also understand where AI already is being implemented and where there could be opportunities for future kind of introductions of AI,” Olsen said.
Alexa Alice Joubin, the director of the Digital Humanities Institute and a professor of English, said she studies societal biases using AI because she found that the biases embedded in AI algorithms surface in their responses, reflecting larger societal issues.
“My conclusion is that current AI is actually a social surveillance tool,” Joubin said. “Do you want to know about biases in society? Test it on AI. If you curate it correctly, what comes out actually reflects what the society collectively thinks.”
She said coders often think linearly about AI algorithms, while those in the humanities often consider alternative approaches to AI use, which she said demonstrates the value of researchers in different fields collaborating as part of the initiative.
“It’s so that you don’t lose sight of what it is for, it’s for humans,” Joubin said. “That’s why humanities are here.”
Doug Evans, the founder of the Behavioral Research Insights and Digital Health Technology Institute and a professor of prevention and community health, said he hopes to explore how researchers can use AI to influence health-related behaviors through GW TAI.
“There may be developments or collaboration opportunities that arise that could benefit my work,” Evans said. “So I was very interested in that sort of thing.”
Source: gwhatchet.com

Demystifying the EU AI Act for IT Leaders

As the EU AI Act approaches its final passage, organizations involved in both developing and deploying AI technologies will face new transparency and risk assessment requirements, although the exact rules are yet to be finalized.
The European Parliament’s mid-March vote to approve the EU AI Act marks a significant milestone: the act is the world’s first major legislation aimed at regulating the use and implementation of artificial intelligence applications.
While the vote does not signify the law’s definitive enactment, it does signal forthcoming regulatory changes that will impact many Chief Information Officers (CIOs) overseeing AI tool usage within their organizations. The legislation will not only affect entities directly engaged in AI development but also those simply utilizing AI technologies. Furthermore, these regulations will extend beyond the EU’s borders, impacting any organization interacting with EU residents.
The journey toward AI legislation has been years in the making, with the EU initially proposing the legislation in April 2021. Despite some advocacy for AI regulation from prominent figures like Elon Musk and Sam Altman, the EU AI Act also faces criticism.
The legislation will impose new obligations on organizations to validate, monitor, and audit the entire AI lifecycle. Kjell Carlsson, head of AI strategy at Domino Data Lab, expresses concern about the potential chilling effect of the law on AI research and adoption due to hefty fines and unclear definitions. However, ignoring the AI revolution to evade regulations is not a viable option, Carlsson emphasizes, as AI adoption is essential for organizational survival and growth.
The EU AI Act covers three main areas:

Banned uses of AI: Prohibitions include AI applications threatening human rights, such as biometric categorization systems based on sensitive physical features. Monitoring of employee or student emotions, social scoring, predictive policing based on personal profiles, and manipulation of human behavior are also banned.
Obligations for high-risk AI systems: Organizations utilizing high-risk AI tools must conduct risk assessments, mitigate risks, maintain use logs, ensure transparency, and provide human oversight. Examples of high-risk systems include those used in critical infrastructure, education, employment decisions, healthcare, and banking.
Transparency requirements: General-purpose AI systems must comply with transparency standards, including publishing detailed training data summaries. Additionally, deepfakes must be clearly labeled.

However, some challenges lie ahead, particularly regarding compliance with the transparency rules and the details of the impending regulations. Organizations may struggle to meet transparency requirements, especially if they lack extensive documentation or robust data management practices. While the law isn’t retroactive, it will apply to AI systems already in use, necessitating documentation of processes and data use.
EU regulators have up to 18 months from the law’s final passage to finalize specific definitions and rules, presenting additional uncertainties and challenges for compliance. The legislation’s focus on AI system effects rather than the systems themselves could pose difficulties given AI’s rapid evolution and unpredictability. As such, continued regulatory input and guidance will be essential for navigating the complexities of AI governance effectively.
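To make these obligations concrete, below is a minimal, hypothetical sketch of an internal AI-system inventory in Python. The risk tiers, field names, and gap checks are illustrative assumptions only, not the act’s official taxonomy, since regulators are still finalizing the definitions.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical tiers loosely mirroring the act's structure; the official
# definitions are still being finalized by EU regulators.
RISK_TIERS = ("prohibited", "high", "limited", "minimal")

@dataclass
class AISystemRecord:
    """One entry in an internal AI-system inventory.

    Fields echo the obligations summarized above: risk assessment,
    use logging, transparency documentation, and human oversight.
    """
    name: str
    purpose: str
    risk_tier: str
    training_data_summary: str       # supports transparency requirements
    human_oversight_contact: str     # named owner for high-risk systems
    last_risk_assessment: date | None = None
    use_logs_retained: bool = False

    def compliance_gaps(self) -> list[str]:
        """Return outstanding obligations for this system, per this sketch."""
        gaps = []
        if self.risk_tier == "prohibited":
            gaps.append("banned use: system must be retired")
        if self.risk_tier == "high":
            if self.last_risk_assessment is None:
                gaps.append("missing risk assessment")
            if not self.use_logs_retained:
                gaps.append("use logs not retained")
        return gaps

# Example: a CV-screening tool is treated as high-risk here, since
# employment decisions appear among the high-risk examples above.
record = AISystemRecord(
    name="cv-screener",
    purpose="rank job applicants",
    risk_tier="high",
    training_data_summary="internal HR data, 2018-2023",
    human_oversight_contact="hr-ml-owner@example.com",
)
print(record.compliance_gaps())
# ['missing risk assessment', 'use logs not retained']
```

Because the law will apply to systems already in use, building some such inventory early, whatever final shape the rules dictate, is a low-regret first step.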
Source: cio.com

How AI can drive career growth for mortgage professionals

Artificial Intelligence Reshapes Mortgage Industry Dynamics
The mortgage industry is undergoing a profound transformation driven by the adoption of artificial intelligence (AI). While some employees express concerns about potential job displacement, executives are assuring them that AI will primarily automate routine tasks, allowing for more focus on other areas of their roles.
Generative AI has emerged as a valuable tool for lenders, aiding in tasks such as content creation, marketing material development, and email responses. However, there’s recognition that AI’s output requires human oversight and refinement, especially in critical areas like marketing copy.
Companies are cautious about deploying AI in customer-facing roles due to regulatory uncertainties, though some lenders have begun experimenting with compliant AI chatbot solutions while others are still evaluating their potential applications.
Katherine Campbell, founder of consulting firm Leopard Job, believes AI can enhance employee satisfaction by automating mundane tasks, allowing humans to focus on higher-value activities. She emphasizes that AI’s role is to complement human expertise, not replace it.
For example, Mr. Cooper has integrated AI into fulfillment and due diligence roles but takes a cautious approach in front-office functions. Underwriters at Mr. Cooper work alongside AI in a co-pilot mode, reviewing AI-generated decisions before proceeding.
Executives see AI as an opportunity to enhance productivity rather than replace jobs. For instance, Mr. Cooper has significantly increased its mortgage servicing portfolio while maintaining a similar headcount, leveraging technology to handle a larger volume of loans.
Despite uncertainties, AI is expected to continue its growth trajectory in the mortgage industry. Companies are increasingly leveraging AI for internal functions like staff education and customer interactions. Tools powered by generative and machine learning models are already in use at companies like Blend and Rocket Mortgage, streamlining workflows and providing assistance to loan officers.
Source: nationalmortgagenews.com

Could a better understanding of how infants acquire language help us build smarter A.I. models?

From Baby Talk to Baby A.I.: Exploring the Connection Between Infant Language Acquisition and Artificial Intelligence
The journey from babbling babies to sophisticated artificial intelligence (A.I.) systems may seem worlds apart, but researchers are increasingly finding intriguing parallels between these seemingly disparate domains. Could a deeper understanding of how infants learn language pave the way for more intelligent A.I. models? Let’s delve into this fascinating intersection of neuroscience and machine learning.
Infant language acquisition is a remarkable process that unfolds rapidly during the first few years of life. Babies are born with an innate capacity for language, but they must learn to understand and produce speech through exposure to linguistic input from their caregivers and environment. This process involves complex cognitive abilities, such as pattern recognition, statistical learning, and social interaction.
Similarly, A.I. systems learn from data, albeit in a vastly different manner. Machine learning algorithms process vast amounts of information to identify patterns and make predictions, much like the way infants learn from exposure to language input. However, while A.I. models excel at tasks like language translation and speech recognition, they often struggle with understanding context, ambiguity, and nuance—areas where human language learners excel.
By studying the mechanisms underlying infant language acquisition, researchers hope to uncover insights that could inform the development of more intelligent A.I. systems. One key area of focus is statistical learning, the ability to extract regularities and patterns from the input data. Infants demonstrate remarkable statistical learning abilities, enabling them to discern the structure of their native language from the stream of auditory input.
Researchers believe that incorporating principles of statistical learning into A.I. algorithms could improve their ability to understand and generate natural language. By analyzing large datasets of text and speech, A.I. systems could learn to identify linguistic patterns and relationships, leading to more accurate language processing and generation.
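As a toy illustration of this idea, here is a minimal sketch in Python of statistical learning via transitional probabilities, in the spirit of classic infant word-segmentation experiments. The syllable stream and nonsense-word inventory are hypothetical; real systems operate on far larger corpora.

```python
import random
from collections import Counter, defaultdict

def transitional_probabilities(syllables):
    """Estimate P(next syllable | current syllable) from adjacent pairs.

    In infant word-segmentation studies, a low transitional probability
    between two syllables is treated as a cue to a likely word boundary.
    """
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    probs = defaultdict(dict)
    for (a, b), count in pair_counts.items():
        probs[a][b] = count / first_counts[a]
    return probs

# Hypothetical "language": three nonsense words concatenated in random
# order with no pauses, mimicking the continuous speech infants hear.
words = [["go", "la", "bu"], ["tu", "pi", "ro"], ["da", "ko", "ti"]]
random.seed(0)
stream = [syl for _ in range(200) for syl in random.choice(words)]

tp = transitional_probabilities(stream)
print(tp["go"]["la"])  # ~1.0: within-word transition, "la" always follows "go"
print(tp["bu"]["tu"])  # ~0.33: word boundary, next word is chosen at random
```

A learner, infant or model, that posits boundaries wherever these probabilities dip will begin to recover word units from unsegmented input, which is the core intuition behind applying statistical learning to language systems.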
Social interaction also plays a crucial role in infant language development, as babies learn from their caregivers through joint attention, imitation, and feedback. Similarly, A.I. systems could benefit from interactive learning paradigms that involve human interaction and feedback. By engaging in dialogue with users, A.I. agents could refine their language skills and adapt to individual preferences and contexts.
Moreover, insights from cognitive neuroscience could inspire novel architectures and algorithms for A.I. models. For example, neuroscientists have identified specialized brain regions involved in language processing, such as Broca’s area and Wernicke’s area. Mimicking these neural circuits in artificial neural networks could lead to more biologically inspired A.I. systems capable of robust language understanding and production.
In summary, the study of infant language acquisition offers valuable insights that could inform the development of more intelligent A.I. models. By understanding the cognitive mechanisms underlying language learning in infants, researchers hope to design A.I. systems that exhibit human-like language abilities, unlocking new possibilities for natural language understanding, communication, and interaction. As we continue to unravel the mysteries of the human mind, we may find that the key to smarter A.I. lies in the babbling of babies.
Source: nytimes.com
