AI is set to transform hiring requirements: Report

A Survey by EY Reveals Shifts in Workforce Dynamics Due to AI Adoption
Results from an EY survey indicate that approximately 50% of technology leaders foresee a blend of layoffs and hiring within the next six months as a consequence of AI implementation.
The integration of AI is causing a seismic change in skill requirements, prompting companies to reassess their hiring needs, as highlighted in a report by EY. In a poll conducted by EY in March, involving over 250 industry leaders, half of the respondents anticipate a trend of simultaneous layoffs and recruitment, reflecting the evolving skillsets demanded in the AI era.
The report underscores the critical challenge faced by technology organizations: while AI offers unprecedented innovation and productivity, it simultaneously reshapes workforce dynamics, necessitating a delicate balance between hiring and layoffs. Vamsi Duvvuri, EY’s Technology, Media, and Telecommunications AI leader, emphasizes the need for companies to reshape their workforce to be more AI-savvy, anticipating a continuous cycle of strategic realignment.
Despite these challenges, there is enthusiasm surrounding AI’s potential to enhance competitiveness and scalability. However, sourcing top tech talent remains a daunting task, with 61% of technology leaders citing difficulties in acquiring talent, particularly in cybersecurity, data science, and AI engineering.
Ken Englund, EY Americas Technology, Media, and Telecommunications Leader, acknowledges AI’s transformative impact on work dynamics, creating both opportunities and challenges. Despite concerns about AI and the need for increased regulation, most technology business leaders (85%) perceive emerging technology as having a positive impact on workplace culture.
The report also indicates sustained AI investment, with 82% of tech leaders planning to increase AI funding next year. Cybersecurity analysts, data scientists, and AI engineers are among the roles receiving increased focus in hiring initiatives.
To support employees in navigating the complexities of the AI landscape, businesses are implementing various strategies, including internal development programs and technical certifications. Nearly two-thirds of technology business leaders (64%) report the implementation of internal development programs to keep pace with rapidly changing generative AI (GenAI), while three out of four leaders (76%) have introduced internal technical certifications to support employees in adapting to GenAI.
Source: cio.com

AI should be trained to respect a regulatory ‘constitution’ says BofE policy maker

Innovative AI models should be trained to respect a ‘constitution’ or a set of regulatory rules that would reduce the risk of harmful behaviour, argues a senior Bank of England policy maker.
In a speech at CityWeek in London, Randall Kroszner, an external member of the Bank of England’s financial policy committee, drew a distinction between fundamentally disruptive and more incremental innovation, and the different regulatory challenges each poses.
“When innovation is incremental it is easier for regulators to understand the consequences of their actions and to do a reasonable job of undertaking regulatory actions that align with achieving their financial stability goals,” he says.
However, in the case of AI, innovation comes thick and fast, and is more likely to be a disruptive force, making it “much more difficult for regulators to know what actions to take to achieve their financial stability goals and what the unintended consequences could be for both stability and for growth and innovation.”
Kroszner suggests that the central bank’s forthcoming Digital Securities Sandbox, which will allow firms to use developing technology, such as distributed ledger technology, in the issuance, trading and settlement of securities such as shares and bonds, may not be an applicable tool for dealing with artificial intelligence technology.
“Fundamentally disruptive innovations – such as ChatGPT and subsequent AI tools – often involve the potential for extraordinarily rapid scaling that test the limits of regulatory tools,” he notes. “In such a circumstance, a sandbox approach may not be applicable, and policymakers may themselves need to innovate further in the face of disruptive change.”
He points to a recent speech by FPC colleague Jon Hall, which highlighted the potential risks of neural networks becoming what Hall referred to as ‘deep trading agents’, and the potential for their incentives to become misaligned with those of regulators and the public good. This, Hall argued, could amplify shocks and reduce market stability.
One proposal to mitigate this risk was to train neural networks to respect a ‘constitution’ or a set of regulatory rules.
Kroszner suggests that the idea of a ‘constitution’ could be combined with, and tested in, a sandbox as a way of shepherding new innovation in a manner that supports financial stability.
“In the cases where fundamentally disruptive change scales so rapidly that a sandbox approach may not be applicable, a ‘constitutional’ approach may be the most appropriate one to take,” he says.
Source: finextra.com
 
OpenAI apologizes to Johansson, denies voice based on her

OpenAI chief Sam Altman apologized Tuesday to Scarlett Johansson after the movie star said she was “shocked” by a new synthetic voice released by the ChatGPT-maker, but he insisted the voice was not based on hers.
At issue is “Sky,” a voice OpenAI featured last week in the release of its more humanlike GPT-4o artificial intelligence technology.
In a demo, Sky was at times flirtatious and funny, seamlessly jumping from one topic to the next, unlike most existing chatbots.
The technology — and the sound of the voice — quickly drew comparisons to the Johansson-voiced AI character in the 2013 film “Her.”
Altman has previously pointed to the Spike Jonze-directed movie — a cautionary tale about the future in which a man falls in love with an AI chatbot — as inspiration for where he would like AI interactions to go.
He fueled that speculation last week with a single-word post on X, formerly Twitter: “her.”
“The voice of Sky is not Scarlett Johansson’s, and it was never intended to resemble hers,” Altman said in a statement on Tuesday in a response to the controversy.
“We cast the voice actor behind Sky’s voice before any outreach to Ms. Johansson.
“Out of respect for Ms. Johansson, we have paused using Sky’s voice in our products. We are sorry to Ms. Johansson that we didn’t communicate better.”
The statement came after Johansson on Monday expressed outrage, saying she was “shocked, angered, and in disbelief that Mr Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets couldn’t tell the difference.”
She said Altman had offered in September to hire her to work with OpenAI to create a synthetic voice, saying it might help people engaging with AI, but she declined.
Risk team disbanded
In a blogpost, the company explained that it began working to cast the voice actors in early 2023, “carefully considering the unique personality of each voice and their appeal to global audiences.”
Some of the characteristics sought were “a voice that feels timeless” and “an approachable voice that inspires trust,” the company said.
The five final actors were flown to San Francisco to record in June and July, it said, with their voices launched into ChatGPT last September.
“To protect their privacy, we cannot share the names of our voice talents,” OpenAI said.
“We believe that AI voices should not deliberately mimic a celebrity’s distinctive voice.”
So far in the AI frenzy, most tech giants have been reluctant to overly humanize chatbots, and some observers expressed concern that OpenAI’s demo last week had gone too far.
Microsoft Vice President Yusuf Mehdi cautioned that AI “should not be human.”
“It shouldn’t breathe. You should be able to…understand (it) is AI,” he told AFP.
The Johansson dispute came just days after OpenAI admitted it disbanded a team devoted to mitigating the long-term dangers of artificial intelligence.
OpenAI began dissolving the so-called “superalignment” group weeks ago, integrating members into other projects and research.
Source: france24.com

India’s next big focus: Artificial Intelligence

Have you heard about ChatGPT? It’s AI software that has become quite famous. Chances are, if you’ve been reading stories online, it has been involved in some way.
In today’s fast-changing world, where AI plays a big role, India has a great opportunity to grow. Rajeev Chandrashekhar, Minister of State for Electronics and Information Technology, believes India is ready to boost its tech economy. The plan is to channel both public and private capital into the digital economy and into digitising the real economy. Chandrashekhar highlights key areas of focus for the next 5-7 years: electronics and microelectronics, telecom, high-performance computing semiconductors, cybersecurity, the future of the internet, automotive and EVs.
Investing in Innovation: Fueling the IndiaAI Mission
Prime Minister Narendra Modi has allocated $12 billion as seed capital to a research and innovation fund that will finance R&D and invest in the next wave of startups, including those in deep tech, AI, and similar fields.
The government has recently approved over Rs 10,300 crore for the IndiaAI Mission, to be invested over the next five years. The investment aims to drive initiatives such as building AI computing capacity, establishing innovation centers, creating dataset platforms, and supporting AI startups. The goal is to build cutting-edge AI computing infrastructure of more than 10,000 GPUs through public-private collaboration.
Tailored Solutions for India: The IndiaAI Approach
India’s approach to AI is tailored to its specific needs. The IndiaAI mission aims to empower states like Kerala, which have untapped potential in the tech sector. By investing in such regions, the government hopes to unlock opportunities for young Indians and increase economic growth.
S Krishnan, secretary of the Ministry of Electronics and Information Technology (MeitY), notes the importance of developing AI models specific to India. While foreign models like ChatGPT 4 can handle Indian languages, they may carry biases due to the data they’re trained on.
Safe & Secure India: Learning from Global Experiences
India’s stance on AI regulation is pragmatic. By observing and learning from other countries’ experiences, India aims to develop effective regulations without hindering innovation. Krishnan also said that India might hold an advantage over other nations by regulating AI later, as it can learn from the mistakes other countries have made.
The MeitY secretary also raised concerns regarding the potential job losses due to AI. However, he pointed out that India might not be as heavily affected due to its substantial pool of engineers already familiar with the technology. Nonetheless, he underscored the necessity for significant efforts in retraining and upskilling. India’s prioritisation of technology, particularly AI, reflects its ambition to drive progress and prosperity in the digital age.
Source: ddnews.gov.in