

Unleashing the Power of Digital-ESG: Transforming Sustainability in the AI Age


DQ Institute, a renowned organization dedicated to setting global standards for digital intelligence, is proud to announce the launch of the Digital-ESG Assessment and Reporting Standards in collaboration with the Taejae Future Consensus Institute, a leading think tank in Korea. The “Digital Age, A Pathway to Sustainability” Forum was held at the Korea Foundation for Advanced Studies on July 13th, with the participation of Ban Ki-moon, the 8th UN Secretary-General; Irina Bokova, former Director-General of UNESCO; Kim Yong-Hak, former President of Yonsei University; and Dr. Yuhyun Park, founder of DQ Institute.

In response to escalating concerns surrounding the existential threats posed by AI, the DQ Institute aims to ensure the mitigation of AI risks and other digital-related risks, including child online safety issues, fake news, cyber-attacks, manipulation, and privacy invasions, while also addressing environmental, social, and governance issues. The introduction of the Digital-ESG Assessment and Reporting Standards thus marks a significant milestone in setting a new global agenda. The guidelines provide a comprehensive checklist and framework for companies and investment communities to address digital-related risks while fostering “human-centered technology” in the AI age.

The rapid advancement and widespread implementation of AI, big data, analytics, and other emerging technologies have triggered a seismic digital transformation of companies across all industries. However, this transformation has also given rise to various digital-related risks. While the United Nations Sustainable Development Goals (SDGs) and existing ESG frameworks primarily focus on climate-related issues, digital-related concerns have not been adequately addressed. This loophole has allowed businesses with potential or existing negative impacts on the digital environment to evade their ESG responsibilities without facing significant risks to their investment portfolios or reputations. By incorporating Digital-ESG into sustainability strategies, a balance can be struck between harnessing the potential of technology and protecting citizens, the digital and physical worlds, and global society.

Mr. Ban Ki-moon, the 8th UN Secretary-General, commented: “The private sector plays a crucial role in achieving sustainability and needs to be held accountable from an Environmental, Social, and Governance (ESG) perspective. Digital components need to be incorporated into ESG as we try to update our sustainability goals, with the global race for AI dominance raging on. We need to expand the traditional ESG framework to encompass various aspects of companies’ digital activities, such as digital inclusion, digital skills, digital safety and security, digital rights, and more.”

Recognizing these challenges, the DQ Institute emphasizes the urgency of addressing critical issues such as human-centered tech initiatives, environmental improvement across the economy, digital human capital development, digital inclusion, stakeholder digital safety and wellbeing, data security and system reliability, digital rights and ethics, and digital communication and stakeholder engagement.

Dr. Yuhyun Park, founder of DQ Institute, expressed her enthusiasm, stating: “We take great pride in introducing our new Digital-ESG Assessment and Reporting Standards at a valuable event with the participation of distinguished guests. Our aim is to provide industry leaders with a practical tool and guidance that will assist every company striving to adopt AI and digital technology. Digital-ESG empowers companies to drive their technological advancements with transparency, accountability, and ethical standards. With Digital-ESG, companies can maximize empowerment and resources while minimizing their harm and control.”

The DQ Institute has also developed the TDFD (Taskforce on Digital-related Financial Disclosure), an initiative whose organizational and individual members are drawn from all sectors, including academia, financial institutions, companies, and public agencies. The TDFD supports organizations in adopting, integrating, and reporting on the Digital-ESG goals, which go beyond risk management strategies and provide the foundation for purpose-driven business models, operations, and management, enabling companies to make positive contributions to society while maximizing their profits through their digital activities.

To learn more about TDFD (Taskforce on Digital-related Financial Disclosure) and Digital-ESG Assessment and Reporting Standards, please visit https://tdfd-global.org/


Demystifying the EU AI Act for IT Leaders


As the EU AI Act approaches its final passage, organizations involved in both developing and deploying AI technologies will face new transparency and risk assessment requirements, although the exact rules are yet to be finalized.
The European Parliament’s mid-March vote to approve the EU AI Act marks a significant milestone: the Act is the world’s first major legislation aimed at regulating the use and implementation of artificial intelligence applications.
While the vote does not signify the law’s definitive enactment, it does signal forthcoming regulatory changes that will impact many Chief Information Officers (CIOs) overseeing AI tool usage within their organizations. The legislation will not only affect entities directly engaged in AI development but also those simply utilizing AI technologies. Furthermore, these regulations will extend beyond the EU’s borders, impacting any organization interacting with EU residents.
The journey toward AI legislation has been years in the making, with the EU initially proposing the legislation in April 2021. Despite some advocacy for AI regulation from prominent figures like Elon Musk and Sam Altman, the EU AI Act also faces criticism.
The legislation will impose new obligations on organizations to validate, monitor, and audit the entire AI lifecycle. Kjell Carlsson, head of AI strategy at Domino Data Lab, expresses concern about the potential chilling effect of the law on AI research and adoption due to hefty fines and unclear definitions. However, ignoring the AI revolution to evade regulations is not a viable option, Carlsson emphasizes, as AI adoption is essential for organizational survival and growth.
The EU AI Act covers three main areas:

Banned uses of AI: Prohibitions include AI applications threatening human rights, such as biometric categorization systems based on sensitive physical features. Monitoring of employee or student emotions, social scoring, predictive policing based on personal profiles, and manipulation of human behavior are also banned.
Obligations for high-risk AI systems: Organizations utilizing high-risk AI tools must conduct risk assessments, mitigate risks, maintain use logs, ensure transparency, and provide human oversight. Examples of high-risk systems include those used in critical infrastructure, education, employment decisions, healthcare, and banking.
Transparency requirements: General-purpose AI systems must comply with transparency standards, including publishing detailed training data summaries. Additionally, deepfakes must be clearly labeled.
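The three areas above amount to a tiered lookup from use case to obligation. As a purely illustrative sketch (the tier names and use-case labels below are hypothetical shorthand, not taken from the Act's text, and real classification requires legal review), the structure might look like:

```python
# Toy mapping of AI use cases to the EU AI Act's broad tiers described above.
# Labels are illustrative only; the Act defines categories in far more detail.
PROHIBITED = {"social_scoring", "workplace_emotion_monitoring", "predictive_policing_profiles"}
HIGH_RISK = {"credit_scoring", "hiring_screen", "medical_triage", "critical_infrastructure"}

def risk_tier(use_case: str) -> str:
    """Return the obligation tier for a given (hypothetical) use-case label."""
    if use_case in PROHIBITED:
        return "prohibited"
    if use_case in HIGH_RISK:
        return "high-risk: risk assessment, logging, human oversight required"
    return "transparency obligations may still apply (e.g. labeling AI-generated content)"

print(risk_tier("hiring_screen"))
# → high-risk: risk assessment, logging, human oversight required
```

The point of the tiered design is that obligations scale with potential harm: most AI systems face only transparency duties, while a narrower set of high-risk uses carries the full compliance burden.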

However, challenges lie ahead, particularly around compliance with the transparency rules and the details of the forthcoming implementing regulations. Organizations may struggle to meet transparency requirements, especially if they lack extensive documentation or robust data management practices. While the law isn’t retroactive, it will apply to existing AI systems, necessitating documentation of processes and data use.
EU regulators have up to 18 months from the law’s final passage to finalize specific definitions and rules, presenting additional uncertainties and challenges for compliance. The legislation’s focus on AI system effects rather than the systems themselves could pose difficulties given AI’s rapid evolution and unpredictability. As such, continued regulatory input and guidance will be essential for navigating the complexities of AI governance effectively.
Source: cio.com

 
The post Demystifying the EU AI Act for IT Leaders appeared first on HIPTHER Alerts.


How AI can drive career growth for mortgage professionals


Artificial Intelligence Reshapes Mortgage Industry Dynamics
The mortgage industry is undergoing a profound transformation driven by the adoption of artificial intelligence (AI). While some employees express concerns about potential job displacement, executives are assuring them that AI will primarily automate routine tasks, allowing for more focus on other areas of their roles.
Generative AI has emerged as a valuable tool for lenders, aiding in tasks such as content creation, marketing material development, and email responses. However, there’s recognition that AI’s output requires human oversight and refinement, especially in critical areas like marketing copy.
Companies are cautious about deploying AI in customer-facing roles due to regulatory uncertainties. Even so, some lenders have begun experimenting with compliant AI chatbot solutions, while others are still evaluating their potential applications.
Katherine Campbell, founder of consulting firm Leopard Job, believes AI can enhance employee satisfaction by automating mundane tasks, allowing humans to focus on higher-value activities. She emphasizes that AI’s role is to complement human expertise, not replace it.
For example, Mr. Cooper has integrated AI into fulfillment and due diligence roles but takes a cautious approach in front-office functions. Underwriters at Mr. Cooper work alongside AI in a co-pilot mode, reviewing AI-generated decisions before proceeding.
Executives see AI as an opportunity to enhance productivity rather than replace jobs. For instance, Mr. Cooper has significantly increased its mortgage servicing portfolio while maintaining a similar headcount, leveraging technology to handle a larger volume of loans.
Despite uncertainties, AI is expected to continue its growth trajectory in the mortgage industry. Companies are increasingly leveraging AI for internal functions like staff education and customer interactions. Tools powered by generative and machine learning models are already in use at companies like Blend and Rocket Mortgage, streamlining workflows and providing assistance to loan officers.
Source: nationalmortgagenews.com


Could a better understanding of how infants acquire language help us build smarter A.I. models?


From Baby Talk to Baby A.I.: Exploring the Connection Between Infant Language Acquisition and Artificial Intelligence
The journey from babbling babies to sophisticated artificial intelligence (A.I.) systems may seem worlds apart, but researchers are increasingly finding intriguing parallels between these seemingly disparate domains. Could a deeper understanding of how infants learn language pave the way for more intelligent A.I. models? Let’s delve into this fascinating intersection of neuroscience and machine learning.
Infant language acquisition is a remarkable process that unfolds rapidly during the first few years of life. Babies are born with an innate capacity for language, but they must learn to understand and produce speech through exposure to linguistic input from their caregivers and environment. This process involves complex cognitive abilities, such as pattern recognition, statistical learning, and social interaction.
Similarly, A.I. systems learn from data, albeit in a vastly different manner. Machine learning algorithms process vast amounts of information to identify patterns and make predictions, much like the way infants learn from exposure to language input. However, while A.I. models excel at tasks like language translation and speech recognition, they often struggle with understanding context, ambiguity, and nuance—areas where human language learners excel.
By studying the mechanisms underlying infant language acquisition, researchers hope to uncover insights that could inform the development of more intelligent A.I. systems. One key area of focus is statistical learning, the ability to extract regularities and patterns from the input data. Infants demonstrate remarkable statistical learning abilities, enabling them to discern the structure of their native language from the stream of auditory input.
Researchers believe that incorporating principles of statistical learning into A.I. algorithms could improve their ability to understand and generate natural language. By analyzing large datasets of text and speech, A.I. systems could learn to identify linguistic patterns and relationships, leading to more accurate language processing and generation.
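Transitional-probability learning, the mechanism classic infant studies demonstrated, can be made concrete in a few lines. The sketch below, with an illustrative syllable stream and a hypothetical 0.75 boundary threshold (both my assumptions, not from the research described above), segments "words" out of a continuous stream purely from how predictably one syllable follows another:

```python
from collections import Counter

def transitional_probabilities(syllables):
    """P(next | current) for each adjacent syllable pair in the stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

def segment(syllables, threshold=0.75):
    """Insert a word boundary wherever transitional probability dips below threshold."""
    tps = transitional_probabilities(syllables)
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < threshold:          # low predictability = likely word edge
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# A toy stream built from three made-up "words": tupiro, golabu, bidaku
stream = "tu pi ro go la bu bi da ku tu pi ro bi da ku go la bu tu pi ro".split()
print(segment(stream))
# → ['tupiro', 'golabu', 'bidaku', 'tupiro', 'bidaku', 'golabu', 'tupiro']
```

Within-word transitions (tu→pi) always co-occur, so their probability is 1.0, while across-word transitions (ro→go) are unpredictable and score low; the boundary falls out of that contrast, with no dictionary needed.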
Social interaction also plays a crucial role in infant language development, as babies learn from their caregivers through joint attention, imitation, and feedback. Similarly, A.I. systems could benefit from interactive learning paradigms that involve human interaction and feedback. By engaging in dialogue with users, A.I. agents could refine their language skills and adapt to individual preferences and contexts.
Moreover, insights from cognitive neuroscience could inspire novel architectures and algorithms for A.I. models. For example, neuroscientists have identified specialized brain regions involved in language processing, such as Broca’s area and Wernicke’s area. Mimicking these neural circuits in artificial neural networks could lead to more biologically inspired A.I. systems capable of robust language understanding and production.
In summary, the study of infant language acquisition offers valuable insights that could inform the development of more intelligent A.I. models. By understanding the cognitive mechanisms underlying language learning in infants, researchers hope to design A.I. systems that exhibit human-like language abilities, unlocking new possibilities for natural language understanding, communication, and interaction. As we continue to unravel the mysteries of the human mind, we may find that the key to smarter A.I. lies in the babbling of babies.
Source: nytimes.com
