Melanie Mitchell: ‘The big leap in artificial intelligence will come when it is inserted into robots that experience the world like a child’

In her latest book, the American researcher analyzes the real capabilities of this technology, which is incapable of human reasoning: 'There are a lot of things about knowledge that aren't encoded in language'
Are we overstating the potential of artificial intelligence (AI)? How intelligent is it? Will it ever reach the level of human intelligence? These are some of the questions that Melanie Mitchell, 55, asks in her book Artificial Intelligence: A Guide for Thinking Humans. Her answer is clear: we are very far from creating a superintelligence, no matter what some companies may say. One of the fundamental reasons is that machines do not reason as we do. They can perform many tasks better than any human, yet they understand the world worse than a one-year-old baby.
Mitchell provides a crucial context to gauge the phenomenon of AI, a technology that has dominated public discussion since tools like ChatGPT appeared two years ago. Politicians, business people and academics recently warned about the dangers of these systems, which have dazzled the world by generating elaborate texts and hyperrealistic images and videos.
In her book, Mitchell — Davis Professor of Complexity at the Santa Fe Institute and professor at Portland State University — describes how the most advanced AI systems work and contrasts them with human reasoning. Her conclusion: key capacities such as intuition and awareness of the surrounding environment are, for now, beyond the reach of any machine. Mitchell talks to EL PAÍS by video call from her home in Santa Fe, New Mexico.
Question. What is AI capable of today?
Answer. There was really a big jump in capabilities a couple of years ago with the advent of generative AI, which includes things like ChatGPT and Dall-E. But these systems, while they can do many things, do not have the same kind of understanding of the world as we do. They lack reliability, they have certain kinds of limitations that are often hard to predict. So I think that while these systems can be very useful, and I use them all the time, we have to be careful about how much we trust them, especially if there’s no human in the loop.
Q. Why?
A. They can make harmful mistakes. A clear example is self-driving cars. One of the reasons they are not with us yet is that they make mistakes a human rarely would, such as failing to identify a pedestrian or an obstacle. Another example is automatic facial recognition systems. Machines are extremely good at recognizing faces in images, but they are worse at identifying people with darker skin or women. With ChatGPT, we have seen countless cases where it has simply made things up.

Professor Mitchell uses AI tools daily, but recognizes their limitations and always checks their output. Photo: Kate Joyce
Q. Does the boom in generative AI help or harm the development of the discipline?
A. In a way, this hype raises people's expectations, and that then causes disappointment. This is something that has happened throughout the history of artificial intelligence. In the 1950s and 1960s, people were claiming that we'd have AI machines with human-level intelligence within a few years. That didn't happen. The so-called AI winter arrived: funding for research dried up and companies went out of business. We are now in a period of great expectation. Is this really going to be the time the optimists are right? Or will there be another big disappointment? It's hard to predict.
Q. Just three years ago, the future was going to be the metaverse. Today, no one is talking about it anymore. Do you think something similar could happen with AI?
A. It happens all the time with great technological innovations: there is a kind of big hype bubble, then expectations are not met and people are disappointed, and finally the technology comes out ahead. The development turns out to be useful, but not as brilliant as people expected. That’s likely what’s going to happen with AI.
Q. You argue that AI systems lack semantic understanding or common sense and therefore cannot be truly intelligent. Do you think that will change at some point?
A. It's possible. There is no reason why we couldn't have such a machine. The question is, how do we get there? ChatGPT has been trained on all available digital books and texts, as well as all videos and images on the internet. But there are a lot of things about common sense and knowledge that aren't encoded in language, that just come through experience. It may be that to get a machine to really have a human-like understanding, it will have to actually experience the world in the way we do. This is the subject of a big debate in the world of AI. I suspect the big leap will come when a machine is not just passively trained on language, but also actively experiences the world like a child does.
Q. When they are in robot form.
A. Yes. An AI inserted into a robot could have the same sort of education or development as a child. It is something that Alan Turing, one of the fathers of computing, speculated about in the 1950s. That idea makes more sense now.
Q. In the book, you describe how AI works and how little this process has to do with our own way of reasoning. Does the process matter if it fulfills its function?
A. It depends on what you want to use the system for. My car’s GPS can find a route to and from where I want to go. It doesn’t understand the concept of road or traffic, but it does a fantastic job. The question is, if we really want systems to interact more generally with our human world, to what extent will they need to understand it? There was a case where a self-driving car slammed on the brakes at a certain moment, and the driver didn’t know why. It turned out that there was a billboard with an ad that had a stop sign on it. Can you avoid mistakes like that? Only when you understand the world like we do.
Q. How far do you think AI can go?
A. I don't think there's any reason why we can't have machines with human-level intelligence. But it's going to be difficult to get there; I don't think we're that close right now. Back in the 1970s, people thought that if a machine could play chess at grandmaster level, that would require human-level intelligence. It turned out that it didn't. Then it was said that translating texts or holding conversations would require it. It didn't either. The whole history of AI has shown that our intuitions about our own intelligence are often wrong, and that intelligence is actually a lot more complex than we thought. And I think that will continue to be the case. We're going to learn a lot more about what it really means to be intelligent.

Saying that AI systems could get out of control and destroy humanity is, at the very least, a highly improbable and speculative claim.

Q. Then it will have been worth it.
A. One of the goals of AI is to shed light on what we mean by intelligence. And when we try to implement it in machines, we often realize that intelligence includes a lot of things we had never thought of.
Q. Some AI pioneers, such as Geoffrey Hinton, believe that the technology may become difficult to control. What do you think?
A. There are many kinds of dangers with AI. It can be used to produce disinformation and deepfakes. There are algorithmic biases, like the one I mentioned in the case of facial recognition. Hinton and others go further and say these systems could actually get out of control and destroy humanity. That claim is, to say the least, highly unlikely and speculative. If we developed a superintelligent system, I don't believe it would be indifferent to our values, such as the principle that killing all humans is wrong. Putting all the focus on this dramatic idea of existential threats to humanity only takes attention away from things that are really important right now.
Q. Do you think that, as a society, we are adequately addressing those threats we face today?
A. Yes, although it's hard for regulation and legislation to keep up with the technology. The EU has taken a first step with the European AI Act. One of the things we are seeing in the U.S. is a wave of lawsuits over copyright infringement. All of these systems are trained on huge amounts of text and images. If the companies have not paid for their use, is that copyright infringement? The law is unclear, because it was enacted before this technology existed. We'll see how this is resolved.
Q. What is the most impressive AI application you have seen lately?
A. What excites me most is the application of these systems to scientific problems. DeepMind, for instance, is working on using AI to predict the structure of proteins. It is also being used to develop new bioengineering techniques and medicines. We are in a sort of new era of science, perhaps as important as the one that began when computers were invented.
Q. You say in your book that those who calibrate deep learning systems, the most advanced AI technique, seem more like alchemists than scientists, because they adjust parameters in the machines without knowing exactly what they are doing.
A. Shortly after I wrote the book, people began to talk about prompt engineers [specialists who craft the instructions given to generative AI tools]. Their job is to try to make the system perform as well as possible. It turns out that there are people making a lot of money doing that work. But it's pure alchemy; there is no science behind it. It's just about trying stuff. Some things work and some don't, and we have no idea why.
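To make that trial-and-error concrete, here is a minimal sketch of the kind of loop a prompt engineer might run. The `query_model` function is a hypothetical stand-in for any generative AI API (here it returns a canned reply so the sketch runs end to end), and the scoring heuristic is invented for illustration; as Mitchell notes, there is no principled theory predicting which phrasing will win.

```python
# A minimal sketch of prompt engineering as trial and error.
# `query_model` is a hypothetical stand-in for a real generative AI API,
# and the scoring heuristic is invented for illustration.

PROMPT_VARIANTS = [
    "Summarize the following article in one sentence: {text}",
    "You are an expert editor. Give a one-sentence summary of: {text}",
    "TL;DR in exactly one sentence: {text}",
]

def query_model(prompt: str) -> str:
    """Placeholder for a call to a real generative AI API.
    Returns a canned reply so the sketch runs end to end."""
    return "A canned one-sentence summary standing in for model output."

def score(output: str) -> float:
    """Toy heuristic: prefer short, single-sentence outputs."""
    sentences = output.count(".") or 1
    return 1.0 / (sentences * max(len(output.split()), 1))

def best_prompt(text: str) -> str:
    """Pure trial and error: run every variant, keep the top scorer."""
    results = {p: score(query_model(p.format(text=text))) for p in PROMPT_VARIANTS}
    return max(results, key=results.get)

print(best_prompt("Some article text..."))
```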
Q. It is ironic that the people who are trying to optimize one of the most sophisticated technologies in the history of humanity are doing so blindly.
A. These systems are in some sense black boxes. They’re enormously complex software systems that have not been programmed explicitly to do things, but rather have been trained, have learned from data, and nobody can figure out why they work the way they do. Neuroscientists also don’t understand how the brain works, and they do experiments to try to make sense of it. That’s what’s happening now with generative AI.
Source: english.elpais.com

Demystifying the EU AI Act for IT Leaders

As the EU AI Act approaches its final passage, organizations involved in both developing and deploying AI technologies will face new transparency and risk assessment requirements, although the exact rules are yet to be finalized.
The European Parliament’s mid-March vote to approve the EU AI Act marks a significant milestone as the world’s first major legislation aimed at regulating the use and implementation of artificial intelligence applications.
While the vote does not signify the law’s definitive enactment, it does signal forthcoming regulatory changes that will impact many Chief Information Officers (CIOs) overseeing AI tool usage within their organizations. The legislation will not only affect entities directly engaged in AI development but also those simply utilizing AI technologies. Furthermore, these regulations will extend beyond the EU’s borders, impacting any organization interacting with EU residents.
The journey toward AI legislation has been years in the making, with the EU first proposing the law in April 2021. While prominent figures such as Elon Musk and Sam Altman have called for AI regulation, the EU AI Act has also drawn criticism.
The legislation will impose new obligations on organizations to validate, monitor, and audit the entire AI lifecycle. Kjell Carlsson, head of AI strategy at Domino Data Lab, expresses concern about the potential chilling effect of the law on AI research and adoption due to hefty fines and unclear definitions. However, ignoring the AI revolution to evade regulations is not a viable option, Carlsson emphasizes, as AI adoption is essential for organizational survival and growth.
The EU AI Act covers three main areas:

Banned uses of AI: Prohibitions include AI applications threatening human rights, such as biometric categorization systems based on sensitive physical features. Monitoring of employee or student emotions, social scoring, predictive policing based on personal profiles, and manipulation of human behavior are also banned.
Obligations for high-risk AI systems: Organizations utilizing high-risk AI tools must conduct risk assessments, mitigate risks, maintain use logs, ensure transparency, and provide human oversight. Examples of high-risk systems include those used in critical infrastructure, education, employment decisions, healthcare, and banking. A minimal sketch of such a use log appears after this list.
Transparency requirements: General-purpose AI systems must comply with transparency standards, including publishing detailed training data summaries. Additionally, deepfakes must be clearly labeled.
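
To make the logging and oversight obligations concrete, below is a minimal, illustrative sketch of the kind of use-log record a deployer of a high-risk system might keep. Every field name here is an assumption made for illustration; the Act sets out obligations, not a concrete schema.

```python
# An illustrative use-log record for a high-risk AI system, touching the
# obligations listed above: risk assessment, use logging, transparency,
# and human oversight. All field names are hypothetical; the EU AI Act
# does not prescribe a concrete schema.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class HighRiskAIUseRecord:
    system_name: str             # which AI system was invoked
    purpose: str                 # e.g. "credit scoring", "CV screening"
    risk_assessment_ref: str     # pointer to the documented risk assessment
    input_summary: str           # what went in (no raw personal data here)
    output_summary: str          # what the system produced
    human_reviewer: str          # who exercised human oversight
    human_overrode_output: bool  # did the reviewer change the decision?
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Example entry for an employment-decision system:
record = HighRiskAIUseRecord(
    system_name="cv-ranker-v2",
    purpose="shortlisting job applicants",
    risk_assessment_ref="RA-2024-017",
    input_summary="42 anonymized applications",
    output_summary="top-10 shortlist",
    human_reviewer="hiring-manager@example.com",
    human_overrode_output=True,
)
print(record)
```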

However, some challenges lie ahead, particularly around compliance with the transparency rules and the details of the impending regulations. Organizations may struggle to meet transparency requirements, especially if they lack extensive documentation or robust data management practices. While the law isn't retroactive, it will apply to existing AI systems going forward, necessitating documentation of processes and data use.
EU regulators have up to 18 months from the law’s final passage to finalize specific definitions and rules, presenting additional uncertainties and challenges for compliance. The legislation’s focus on AI system effects rather than the systems themselves could pose difficulties given AI’s rapid evolution and unpredictability. As such, continued regulatory input and guidance will be essential for navigating the complexities of AI governance effectively.
Source: cio.com

 

How AI can drive career growth for mortgage professionals

Artificial Intelligence Reshapes Mortgage Industry Dynamics
The mortgage industry is undergoing a profound transformation driven by the adoption of artificial intelligence (AI). While some employees express concerns about potential job displacement, executives are assuring them that AI will primarily automate routine tasks, allowing for more focus on other areas of their roles.
Generative AI has emerged as a valuable tool for lenders, aiding in tasks such as content creation, marketing material development, and email responses. However, there’s recognition that AI’s output requires human oversight and refinement, especially in critical areas like marketing copy.
Companies are cautious about deploying AI in customer-facing roles because of regulatory uncertainty. Even so, some lenders have begun experimenting with compliant AI chatbot solutions, while others are still evaluating potential applications.
Katherine Campbell, founder of consulting firm Leopard Job, believes AI can enhance employee satisfaction by automating mundane tasks, allowing humans to focus on higher-value activities. She emphasizes that AI’s role is to complement human expertise, not replace it.
For example, Mr. Cooper has integrated AI into fulfillment and due diligence roles but takes a cautious approach in front-office functions. Underwriters at Mr. Cooper work alongside AI in a co-pilot mode, reviewing AI-generated decisions before proceeding.
Executives see AI as an opportunity to enhance productivity rather than replace jobs. For instance, Mr. Cooper has significantly increased its mortgage servicing portfolio while maintaining a similar headcount, leveraging technology to handle a larger volume of loans.
Despite uncertainties, AI is expected to continue its growth trajectory in the mortgage industry. Companies are increasingly leveraging AI for internal functions like staff education and customer interactions. Tools powered by generative and machine learning models are already in use at companies like Blend and Rocket Mortgage, streamlining workflows and providing assistance to loan officers.
Source: nationalmortgagenews.com


Could a better understanding of how infants acquire language help us build smarter A.I. models?

From Baby Talk to Baby A.I.: Exploring the Connection Between Infant Language Acquisition and Artificial Intelligence
The journey from babbling babies to sophisticated artificial intelligence (A.I.) systems may seem worlds apart, but researchers are increasingly finding intriguing parallels between these seemingly disparate domains. Could a deeper understanding of how infants learn language pave the way for more intelligent A.I. models? Let’s delve into this fascinating intersection of neuroscience and machine learning.
Infant language acquisition is a remarkable process that unfolds rapidly during the first few years of life. Babies are born with an innate capacity for language, but they must learn to understand and produce speech through exposure to linguistic input from their caregivers and environment. This process involves complex cognitive abilities, such as pattern recognition, statistical learning, and social interaction.
Similarly, A.I. systems learn from data, albeit in a vastly different manner. Machine learning algorithms process vast amounts of information to identify patterns and make predictions, much like the way infants learn from exposure to language input. However, while A.I. models excel at tasks like language translation and speech recognition, they often struggle with understanding context, ambiguity, and nuance—areas where human language learners excel.
By studying the mechanisms underlying infant language acquisition, researchers hope to uncover insights that could inform the development of more intelligent A.I. systems. One key area of focus is statistical learning, the ability to extract regularities and patterns from the input data. Infants demonstrate remarkable statistical learning abilities, enabling them to discern the structure of their native language from the stream of auditory input.
Researchers believe that incorporating principles of statistical learning into A.I. algorithms could improve their ability to understand and generate natural language. By analyzing large datasets of text and speech, A.I. systems could learn to identify linguistic patterns and relationships, leading to more accurate language processing and generation.
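To make this concrete, here is a minimal sketch of statistical learning in the spirit of classic infant studies: syllable-to-syllable transitional probability tends to be high inside words and lower across word boundaries, which is enough to segment a continuous stream. The toy corpus, its "words," and the threshold are all invented for illustration.

```python
# A minimal sketch of statistical word segmentation: transitional
# probability between syllables is high inside words and lower across
# word boundaries. Corpus, "words," and threshold are invented.

from collections import Counter

# Continuous syllable stream built from three nonsense "words":
# bidaku, padoti, golabu (presented in varying order, no pauses).
stream = ("bi da ku  pa do ti  go la bu  pa do ti  bi da ku  "
          "go la bu  bi da ku  pa do ti  go la bu").split()

pair_counts = Counter(zip(stream, stream[1:]))
syll_counts = Counter(stream[:-1])

def transitional_probability(a: str, b: str) -> float:
    """P(b | a): how often syllable a is followed by syllable b."""
    return pair_counts[(a, b)] / syll_counts[a]

# Posit a word boundary wherever the transition is unexpectedly weak.
THRESHOLD = 0.8
words, current = [], [stream[0]]
for a, b in zip(stream, stream[1:]):
    if transitional_probability(a, b) < THRESHOLD:
        words.append("".join(current))
        current = []
    current.append(b)
words.append("".join(current))

print(words)
# ['bidaku', 'padoti', 'golabu', 'padoti', 'bidaku',
#  'golabu', 'bidaku', 'padoti', 'golabu']
```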
Social interaction also plays a crucial role in infant language development, as babies learn from their caregivers through joint attention, imitation, and feedback. Similarly, A.I. systems could benefit from interactive learning paradigms that involve human interaction and feedback. By engaging in dialogue with users, A.I. agents could refine their language skills and adapt to individual preferences and contexts.
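As a toy illustration of such interactive learning, the sketch below has an agent adjust its preference over response styles based on explicit user feedback. The styles, feedback signal, and update rule are all invented for illustration; production systems use far richer mechanisms, such as reinforcement learning from human feedback.

```python
# A minimal sketch of learning from interaction: the agent nudges its
# preference over response styles after each piece of user feedback,
# loosely analogous to a caregiver's approval shaping a child's speech.
# Styles, feedback, and the update rule are invented for illustration.

import random

styles = {"formal": 1.0, "casual": 1.0, "playful": 1.0}  # initial weights
LEARNING_RATE = 0.2

def choose_style() -> str:
    """Sample a style in proportion to its current weight."""
    names, weights = zip(*styles.items())
    return random.choices(names, weights=weights, k=1)[0]

def learn_from_feedback(style: str, liked: bool) -> None:
    """Nudge the chosen style's weight up or down after feedback."""
    delta = LEARNING_RATE if liked else -LEARNING_RATE
    styles[style] = max(0.1, styles[style] + delta)

# Simulated dialogue turns: this user happens to prefer casual replies.
for _ in range(50):
    style = choose_style()
    learn_from_feedback(style, liked=(style == "casual"))

print(styles)  # 'casual' should now carry the largest weight
```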
Moreover, insights from cognitive neuroscience could inspire novel architectures and algorithms for A.I. models. For example, neuroscientists have identified specialized brain regions involved in language processing, such as Broca’s area and Wernicke’s area. Mimicking these neural circuits in artificial neural networks could lead to more biologically inspired A.I. systems capable of robust language understanding and production.
In summary, the study of infant language acquisition offers valuable insights that could inform the development of more intelligent A.I. models. By understanding the cognitive mechanisms underlying language learning in infants, researchers hope to design A.I. systems that exhibit human-like language abilities, unlocking new possibilities for natural language understanding, communication, and interaction. As we continue to unravel the mysteries of the human mind, we may find that the key to smarter A.I. lies in the babbling of babies.
Source: nytimes.com
