Do Not Fear Artificial Intelligence, Take Care of It

“We must take special care of what we input into AI systems. Otherwise, they will start to fabricate: building one untruth upon another, and at the end of this process, we are likely to end up with an amplified lie. In other words, when using such tools, we humans should stand in truth and ’feed’ the systems with truth,” says Artur Olejniczak, a manager, certified business trainer, and lecturer at the School of Logistics in Poznań.
Dariusz Rostkowski: We connected via a messenger. How can you be sure you’re talking to a human and not to artificial intelligence?
Artur Olejniczak: Of course, theoretically, you could be a machine, or rather it could take your place. But I verified the contact earlier, asked questions, and assumed that I was talking to a human, not an artificial creation.
Yes, I am human. Though I differ from artificial intelligence, which can now exhibit human-like capabilities such as reasoning, learning, planning, and creativity.
Artificial intelligence enables devices equipped with it to perceive their surroundings and react appropriately to changes occurring in them. Consequently, they can solve problems while staying focused on a set goal. AI systems are increasingly adapting their behavior to circumstances and acting more autonomously. OpenAI, the American research lab that opened GPT-3 (Generative Pre-trained Transformer 3) to public conversation, showed the immense potential of current computer systems. Generative artificial intelligence, or rather machine learning algorithms based on neural networks and superfast computers, aggregates the data it has access to. Colloquially speaking, it utilizes publicly available internet resources, that is, all the content we create and place online. So what constitutes today’s big “wow” and determines the significance of AI is simply the ability to aggregate information very quickly and provide answers to questions based on it.
And what if the question is poorly phrased, like being imprecise or ambiguous?
Exactly! The key issue is whether we can question the system appropriately! Misunderstandings happen. Once, while preparing for a lecture, I asked ChatGPT-4, the latest version, to conduct research on lithium-ion batteries. It spat out the information that the best expert in this field is the famous chess player Garry Kasparov. Of course, this was an outright lie; GPT-4 was probably misled because Kasparov used some expression related to lithium-ion batteries in one of his statements.
So, in summary: a huge advantage of AI systems is that they acquire a complete set of information we need in real-time, which we can then freely use. On the other hand, the correctness of the input data is extremely important. It’s commonly known that the internet is currently full of irrelevant, junk, and even false information. During discussions with my students or in business meetings, I often emphasize that we cannot pursue the so-called SISO (shit in, shit out) strategy, meaning we must particularly care about what we input into AI systems. Otherwise, they will start to fabricate: building one untruth upon another, and at the end of this process, we are likely to end up with an amplified lie. In other words: when using such tools, we humans should stand in truth, and “feed” the systems with truth.
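The SISO point can be made concrete with a small sketch: filter obviously junk records out of a corpus before it ever reaches a model. Everything here, the field names, the trust rules, and the sample records, is an invented illustration, not a real pipeline.

```python
# Hypothetical pre-ingestion filter illustrating "shit in, shit out":
# reject records that are empty, unsourced, or already known to be false,
# so the downstream system is fed only verifiable content.

def is_trustworthy(record: dict) -> bool:
    """Return True only for records that pass basic trust checks."""
    text = record.get("text", "").strip()
    if not text:                            # empty content is junk
        return False
    if not record.get("source"):            # no provenance, cannot verify
        return False
    if record.get("flagged_false", False):  # already fact-checked as false
        return False
    return True

def clean_corpus(records: list[dict]) -> list[dict]:
    """Keep only records that pass the trust checks."""
    return [r for r in records if is_trustworthy(r)]

corpus = [
    {"text": "Li-ion cells degrade faster at high temperature.", "source": "journal"},
    {"text": "", "source": "blog"},                                  # dropped: empty
    {"text": "Kasparov is a battery expert.", "source": "forum",
     "flagged_false": True},                                         # dropped: false
]
cleaned = clean_corpus(corpus)
```

The rules themselves are the hard part in practice; the sketch only shows where such a filter sits, before the data is "fed" to the system.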
The problem of properly using artificial intelligence is becoming increasingly pressing. The European Parliament dealt with these issues in June 2023, adopting a position on the so-called Artificial Intelligence Act, the first set of comprehensive rules to manage the risks associated with AI. It identified the greatest opportunities and also the threats resulting from the use of AI. Among the benefits for citizens was an improvement in healthcare. How could this be achieved?
For example, through faster and more efficient data analysis. Nowadays, blood tests are conducted quite differently than before – without descriptions, tedious observations under the microscope, etc. Simply put: the lab technician takes a sample and labels it with a barcode containing our data. Then the blood is placed in the analysis center, where computers do the work. Usually, we can retrieve the test results on the same day via the Internet. Since computers have a vast amount of data on similar tests, norms, etc., they quickly detect deviations from the norm, thereby reducing the risk of error.
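The "deviation from the norm" step such analyzers perform can be sketched roughly as a range check against reference values. The analyte names and ranges below are illustrative assumptions only, not clinical guidance.

```python
# Minimal sketch of an automated deviation-from-norm check for lab results.
# Reference ranges are invented for illustration.

REFERENCE_RANGES = {  # assumed analyte_unit: (low, high)
    "hemoglobin_g_dl": (12.0, 17.5),
    "wbc_10e9_l": (4.0, 11.0),
    "glucose_mg_dl": (70.0, 99.0),
}

def flag_deviations(results: dict) -> dict:
    """Return analytes outside their reference range, with direction."""
    flags = {}
    for analyte, value in results.items():
        low, high = REFERENCE_RANGES[analyte]
        if value < low:
            flags[analyte] = "low"
        elif value > high:
            flags[analyte] = "high"
    return flags

sample = {"hemoglobin_g_dl": 10.9, "wbc_10e9_l": 6.2, "glucose_mg_dl": 113.0}
flags = flag_deviations(sample)
```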
Indeed, that’s convincing. MEPs also write that in the future, products will be more tailored to user needs, and additionally cheaper and more durable. Isn’t this wishful thinking? In recent decades, we have often seen quality decline even as prices rose…
Competition forces various actions on companies, and as we know, almost all are interested in maximizing profit. This is a key point of business activity. The use of AI should impact the competitiveness of products – benefiting consumers in terms of quality and price.
At the same time, I wanted to point out that significant changes in habits will probably occur – the economy of sharing will step in more boldly. We see this in the case of automotive companies, which, instead of maximizing sales or profits from service, are moving towards subscriptions. Soon, we won’t be buying cars to own, but renting them for the duration of use. Think about how much time we drive our car. Two hours a day? And for the rest of the time, this so-called resource is unused, simultaneously generating costs in terms of parking fees, services, insurance, taxes, etc. Thanks to AI algorithms, the sharing system can be optimized, making a car or other tool used almost continuously. Thus, we will be buying access to a car, without owning it.
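The "two hours a day" arithmetic behind this argument can be spelled out in a few lines; the cost figures are invented for illustration.

```python
# Back-of-envelope arithmetic for the utilization argument: a privately
# owned car driven two hours a day sits idle more than 90% of the time,
# while fixed costs accrue regardless. All figures are illustrative.

HOURS_PER_DAY = 24
driven_hours_per_day = 2

utilization = driven_hours_per_day / HOURS_PER_DAY  # share of time in use
idle_share = 1 - utilization

# Hypothetical yearly costs that accrue whether or not the car moves.
fixed_costs = {"parking": 1200, "insurance": 900, "tax": 300, "service": 600}
idle_cost_per_year = sum(fixed_costs.values())
```

A sharing scheme's optimization goal, in these terms, is simply to push `utilization` up so that the fixed costs are spread over far more driven hours.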
Another benefit of AI, mentioned by the European Parliament, is work safety. Really? I would say people are more afraid of losing their jobs…
This is more about robots, about sending them to dangerous places – to protect human life and health. Today in industry or distribution, machines, bots, and drones are used that operate more and more autonomously. In a sense, there’s a dehumanization of these processes. And it’s a good thing. It’s not even that people are naturally lazy. Rather, it’s that we are exhausted by repetitive work and its harmful conditions. Therefore, we are trying, quite successfully, to implement automatic or autonomous systems. This is another step in this industrial revolution – developing machines that work without human intervention. They operate in binary, without unnecessary fluctuations and mood changes, no risk of “off” days, no sick leave, etc.
Yes, automation is necessary. But what about the loss of jobs?
Research by American and European scientists shows that by 2040, 70% of the jobs we know today will disappear. These are very different professions. Today, a robot from the well-known company KUKA can replace a bartender, other machines replace painters, welders, etc. The American army, along with several medical centers, constructed a medical robot that operates on people remotely. It doesn’t put the doctor’s life at risk in an armed conflict, where they would be under fire. They can perform their work in safe conditions. In this field, the Poznań Supercomputing and Networking Center successfully implemented advanced telemedical systems, enabling live transmission of medical procedures from several places simultaneously.
On the other hand, we have, for example, the war in the Gaza Strip, where the latest technological achievements, including AI, are visibly being used to kill rather than to save lives.
It’s true. Often what is an opportunity on one hand is a threat on the other. It’s like an ordinary knife: we can use it to peel potatoes and vegetables and prepare a delicious dinner, but we can also kill a person with it. The same happens with artificial intelligence and algorithms. Let’s stick with the example of war: the conflict in Ukraine is a perfect live testing ground for innovative types of weapons, including those that use AI. It’s probably the first war in which unmanned drones are used on such a large scale. So, in part, it’s a conflict of machines; the side that introduces more effective methods of combat wins.
But won’t there come a time when autonomous machines turn against their creators – humans?
This is a critically important aspect. Machines operate based on code – a program. Code is a record consisting of zeros and ones, denoting false or true. Humans don’t act in such unequivocal extremes. I recently watched a documentary film that showed the dual-use dilemma in practice. A European scientist, along with a research team, wrote an algorithm that enabled the development of drugs based on known chemical compounds, potentially applicable in the treatment of rare diseases.
The team, supported by the algorithm, did fantastic work, with several spectacular successes. The scientist was then asked to reprogram the algorithm to produce substances that would quickly and effectively kill a person. Curious about the results, he did it. When he came to the office the next day, he destroyed the output his system had generated. He didn’t want the results to be used in a deadly way.
Returning to the positive ways of using AI, Members of the European Parliament highlight business benefits for the tourism sector. How is this supposed to work?
The key here is data analysis and tailoring travel agency offers to our needs. For example, if we fear tsunamis or earthquakes, the system will bypass potentially dangerous places for us. Easy multi-criteria analysis will be used. It will also consider our budget and preferences regarding cuisine, climate, etc.
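The "easy multi-criteria analysis" mentioned here could look, in miniature, like a weighted score over destinations combined with a hard safety filter. The weights, criteria, and destination data are all hypothetical.

```python
# Toy multi-criteria ranking of travel offers: filter out destinations that
# fail a hard constraint (seismic risk), then rank the rest by a weighted
# score over budget fit, cuisine, and climate. All data is invented.

WEIGHTS = {"budget_fit": 0.4, "cuisine": 0.3, "climate": 0.3}

destinations = [
    {"name": "A", "budget_fit": 0.9, "cuisine": 0.7, "climate": 0.8, "seismic_risk": False},
    {"name": "B", "budget_fit": 0.8, "cuisine": 0.9, "climate": 0.9, "seismic_risk": True},
    {"name": "C", "budget_fit": 0.6, "cuisine": 0.8, "climate": 0.7, "seismic_risk": False},
]

def rank(options, avoid_seismic=True):
    """Best-first list of destinations passing the hard constraint."""
    safe = [d for d in options if not (avoid_seismic and d["seismic_risk"])]
    return sorted(
        safe,
        key=lambda d: sum(w * d[c] for c, w in WEIGHTS.items()),
        reverse=True,
    )

best = rank(destinations)[0]["name"]
```

Note how destination B scores highest on the soft criteria yet never appears: hard constraints are applied before scoring, which is exactly the "bypass dangerous places" behavior described.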
Another benefit of artificial intelligence is the variety of applications that enable communication in foreign languages. This will greatly help people who do not know those languages. There are apps that let us speak into a smartphone and translate our phrases in real time, then work the other way around for the reply… Sometimes misunderstandings occur, but there are surely more pros than cons.
Politicians also talk about social benefits – including strengthening democracy. But really? Isn’t AI more often on the dark side of power? Doesn’t it create fake videos with untrue statements or spread fake news?
I fully share the concern about the threats, and we already have examples from the elections in the United States or the referendum in the United Kingdom, where the use of artificial intelligence led to manipulation. Therefore, today’s key issue is to separate true information from false. This is not easy with the influx of “news”: statistically, in one day we absorb about 100,000 pieces of information from various sources. For comparison, our grandparents absorbed a similar amount throughout their entire lives. In conclusion: we are physically unable to analyze each of these pieces of information and make the right decision. Hence we face a huge challenge: to catch only the true and reliable ones. Currently, this is not fully achievable.
With the unimaginable amount of data on the internet, how can we sort it?
Data is now the fuel of the economy. Figuratively speaking, it’s more significant than coal or other fuels. Fortunes are made trading data. Data is so valuable because companies make strategic decisions based on it. And we consumers often share it unknowingly, for example by signing up for various loyalty programs, after which we are to some extent controlled and manipulated, succumbing to trends and fashions.
There’s little talk about the impact of AI on our emotional and social lives. Yet it’s algorithms that match us in pairs and find friends for us on popular portals…
Yes. (laughs) I had a situation last week where an acquaintance contacted me after several years, saying that Facebook suggested me as a person to add to her friends. And she concluded that I must have been looking for her, hence the suggestion. But that wasn’t the case at all; the system just suggested me as a friend candidate for some reason. And that’s it.
It’s not an exaggeration to say that artificial intelligence has a huge and growing impact on interpersonal relationships. Real relationships are forming, children are being born…
This is quite a complicated matter. In the case of dating websites, on one hand people are forming relationships, though without those sites it probably would have happened anyway, just in a different way. On the other hand, about 60 percent of profiles on these sites are bots – creations of artificial intelligence. They are activated to give us the feeling that there is someone who thinks warmly and closely about us, while the real goal is to keep us interested enough to pay for the subscription. So, in my opinion, it’s largely business.
In terms of human relationships, the issue isn’t as complicated as forming emotional relationships between humans and machines…
Humanity is a very important issue. It’s about building relationships, hugging, physical contact, conversations – in other words, responding to our needs. If this AI creation fills a gap in someone’s life or satisfies some emotional need… maybe there’s nothing wrong with that. In fact, some people already function in such relationships, like the Japanese who marry humanoid robots. I find it hard to see the positives in these relationships, but for those people there must be some, since they choose such a model. In the end, they can turn off such a machine at any moment. The robot certainly won’t act against their interests, as it doesn’t give negative “human” feedback like “you didn’t take out the trash” or “you didn’t make dinner again”. I think that in these relationships only the desired needs are fulfilled. Robots operate in binary, and some people might like that.
Returning to the advantages of AI listed by experts, there’s often an argument about protection and safety, including crime prevention…
Here again we face the dual-use dilemma. On one hand, algorithms indeed help in tracking and catching criminals. On the other, we must remember that we are constantly monitored. In our surroundings there are about 50 billion sensors, detectors, and devices that continuously track what’s happening: think of cameras monitoring traffic, detectors for pollution or noise, microphones, and so on. We carry around six sensors on average, in smartphones, watches, and the like, which constantly observe us and our behavior. One tool built on this basis, Pegasus, was intended to counter terrorism but was used for nefarious political purposes. When installing another app on your phone, take a look at its access requests (camera, microphone, contacts, location) and ask yourself: why does it need this data?
We can forget about anonymity…
Yes. If information about us, including locations, call records, SMS messages, and so on, is stored for several years by telecommunications companies, and state authorities can access these data with court approval, then there’s no question of anonymity. Currently, every email, messenger message, or social media post is monitored and filtered by machine learning algorithms. They are coded to search, based on keywords, for behaviors that could be dangerous to society.
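A crude sketch of such keyword-based screening is shown below; real systems rely on machine learning models rather than a literal word list, and the watchwords here are invented.

```python
# Toy keyword screen: flag a message for human review if any watchword
# appears as a whole word. Illustrative only; production systems use
# trained classifiers, not a literal list.

WATCHWORDS = {"attack", "bomb", "detonate"}

def needs_review(message: str) -> bool:
    """True if any watchword appears as a whole word in the message."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return bool(words & WATCHWORDS)

flagged = needs_review("They plan to attack at dawn.")
clear = needs_review("Lunch at noon?")
```

Even this toy version shows the obvious weakness: it matches the word, not the intent, which is why such filters generate both false alarms and misses.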
AI significantly changes transport, but there we see both opportunities and threats…
In transportation there’s a shortage of workers, such as drivers. It’s a demanding, monotonous job, a bit like being a sailor. People are away for many weeks, which sometimes leads to disruptions and extended working hours, as recently happened with the blockade at the border with Ukraine. Drivers sat there for 9-10 days in unsuitable conditions because the system was disrupted. In this context, replacing people with machines would be very helpful, especially since few are eager for this type of work. However, the other side of the coin is a mismatch between legislation and the movement of autonomous vehicles. What happens in an accident? Who is responsible: the company that constructed the car, the entity that provided the built-in IT system, or the telecommunications company that operated the car autonomously? Insurers have a huge problem with this.
What about logistics, in which you specialize?
Artificial intelligence will certainly revolutionize many areas of human activity, including logistics. Today, as logisticians, we try to find positive applications. I’m talking about data analysis, concluding, and proper resource management. AI currently helps in the area of appropriate production and distribution planning. Take, for example, a network of stores like Żabka, which carries out over one and a half million deliveries a year to its network. Matching the right vehicle, considering, for example, green zones in cities, is a problem. AI will manage this. It will select the appropriate means of transport to deliver goods to the right place.
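The vehicle-matching problem described here can be sketched as a constrained cheapest-choice selection; the fleet data and the green-zone rule are invented examples, not Żabka's actual system.

```python
# Toy vehicle matching for urban deliveries: pick the cheapest vehicle
# that is allowed into the destination zone and has sufficient range.
# Fleet data and zone rules are invented for illustration.

fleet = [
    {"id": "diesel_van", "cost_per_km": 0.50, "emission_class": "diesel"},
    {"id": "ev_van", "cost_per_km": 0.65, "emission_class": "electric"},
    {"id": "cargo_bike", "cost_per_km": 0.20, "emission_class": "none", "max_km": 8},
]

def pick_vehicle(distance_km: float, green_zone: bool) -> str:
    """Cheapest vehicle satisfying zone and range constraints."""
    candidates = []
    for v in fleet:
        if green_zone and v["emission_class"] == "diesel":
            continue  # diesel banned in green zones (assumed rule)
        if distance_km > v.get("max_km", float("inf")):
            continue  # out of range for this vehicle
        candidates.append(v)
    return min(candidates, key=lambda v: v["cost_per_km"])["id"]

choice_city = pick_vehicle(distance_km=5, green_zone=True)    # short trip, green zone
choice_long = pick_vehicle(distance_km=40, green_zone=False)  # long trip, no zone
```

Real fleet-assignment systems solve this at scale as an optimization problem over thousands of deliveries; the sketch only shows the per-delivery constraint logic.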
AI algorithms are also very helpful in managing global supply chains. If goods are produced in Cambodia, China, Vietnam, or India and need to be delivered to Europe, the issue of redistribution requires many analyses, which are now performed by machine learning algorithms. Such systems greatly assist and streamline this work and make these flows more efficient both in terms of price and environmental impact.
Another issue is the involvement of undemocratic states in business and influencing supply chains. And this is a real threat, as political considerations often outweigh economic ones in such situations.
We may have an even bigger problem with insurers if they obtain detailed data about our health…
One can imagine that if increasingly advanced IT systems start “talking” to each other and exchanging data at a higher level, then… an algorithm will evaluate us. It will calculate how much we can contribute to the economy, assess our creativity, usefulness to society, and how long we will live. And that is terrifying!
To some extent, such actions are already being taken in China: citizens are evaluated in terms of their usefulness to society and eliminated from certain activities. A minor insubordination, such as crossing the street at a red light several times, and we are already under scrutiny. If I commit such an offense two or three times, I will automatically receive a fine. If this behavior continues, I will even be eliminated from certain activities.
At the political and social level, AI can support the enemies of democracy – warn Members of the European Parliament.
Indeed, AI can assist in “producing” and distributing fake news. Data collected in a system can be used tendentiously, and the image formed after its analysis becomes distorted. An algorithm can also be programmed to show only the negative sides of a particular phenomenon. A less educated audience will “buy” it. Besides, it seems to me that with the development of IT tools we are, unfortunately, becoming more and more numbed. I recently wrote an article about the development of systems for locating a person in space. Long ago, it was the spoken word, a story about where things were. There were also the stars and the moon to help find one’s way. Moreover, we noticed and interpreted seasonal changes better. We simply oriented ourselves better in space. Today we have global positioning systems (GPS) and… we get lost. We are convinced that what our smartphone shows us is true and often ignore the proper signals. A huge percentage of people are easily misled.
Many people fear losing their jobs due to artificial intelligence. They worry that machines will take away their ability to earn. Will this really happen and which professions are most at risk?
Certainly those positions that deal with production will be completely automated; similarly distribution centers, warehouses, and so on. Today we have examples of very large global players in this industry who use almost exclusively robots. For example, near Poznań there is a large, traditional distribution center where three to six thousand people work. A few kilometers away is a similar center of the same company, where three thousand robots and about thirty people work. And the machines do the job more efficiently, more effectively, and faster than humans. But from another perspective: robots also need supervision. When a very large retail chain in France decided to automate its distribution center, forklift operators were laid off. However, some of these people moved to a neighboring building, from where they supervised the work of the automated system, reacting in advance to possible failures and the like – after all, machines wear out over time. Therefore, some workers will need to be retrained.
The education system will also evolve, which we began to observe after the announcement of the Covid-19 pandemic. We have already become accustomed to hybrid learning, in which students participate in online lectures and can replay video files whenever they want. Young people themselves say that they don’t want to come to the university according to a schedule; they would prefer to use materials the way they use Netflix and play them at any time, then contact the professor at a convenient moment to discuss particular issues. This means there will be less and less work for teachers and lecturers, especially since there is so much knowledge on the internet at our fingertips.
A side effect of these trends is the increasing alienation and loneliness of people, which will mean that many more psychologists, therapists, and psychiatrists will be needed. These are professions of the future, as are caregiver-assistants for the elderly and spiritual caregivers, such as priests. I wonder, and joke with my partner, that if celibacy is abolished, I will retrain as a clergyman.
Source: holistic.news
The post Do Not Fear Artificial Intelligence, Take Care of It appeared first on HIPTHER Alerts.

EU’s new AI rules: Industry opposed to revealing guarded trade secrets

New regulations in the European Union (EU) are set to compel companies to increase transparency regarding the data they use to train artificial intelligence (AI) systems, potentially unveiling closely guarded industry practices, reports the Times of India.
Since OpenAI, supported by Microsoft, introduced ChatGPT to the public 18 months ago, there has been a surge in public interest and investment in generative AI. This technology enables rapid generation of text, images, and audio content.
As the AI industry expands, concerns have emerged regarding how companies source data for training their models, particularly whether using content from popular books and movies without creators’ consent constitutes a breach of copyright.
The EU’s new AI Act, phased in over the next two years, mandates stricter regulations while allowing time for businesses to adjust to new requirements. Nevertheless, the practical implementation of these rules remains uncertain, notes the report.
Mandating “detailed summaries”
A contentious provision of the AI Act requires organizations deploying general-purpose AI models like ChatGPT to provide “detailed summaries” of their training data. The newly established AI Office plans to release a template for these summaries by early 2025 after consulting stakeholders. However, AI companies oppose disclosing their training data, arguing that it is a trade secret that could unfairly benefit competitors if made public, the report reveals.
In the past year, major tech firms including Google, OpenAI, and Stability AI have faced lawsuits alleging unauthorized use of content for AI training. Despite US President Joe Biden’s executive orders addressing AI security risks, legal challenges regarding copyright remain largely untested, the report adds.
Backlash against OpenAI
Amid heightened scrutiny, tech companies have struck content-licensing deals with media outlets and websites. OpenAI, for instance, has partnered with the Financial Times and The Atlantic, while Google has collaborated with NewsCorp and Reddit.
Despite these efforts, OpenAI drew criticism in March when Chief Technology Officer Mira Murati declined to confirm whether YouTube videos were used to train its video-generating tool, Sora, citing potential violations of company terms and conditions.
Source: business-standard.com
Picsart teams up with Getty to take on Adobe’s ‘commercially-safe’ AI

Picsart and Getty Images are joining forces to develop an AI image generator exclusively trained on Getty’s licensed stock content.
According to Picsart, their AI lab is constructing a bespoke model from the ground up to power this tool. It aims to provide the platform’s paying subscribers with the ability to generate images that come with full commercial rights. This initiative seeks to address persistent concerns about potential copyright violations associated with AI-generated content. The Picsart / Getty Images generator is slated for launch later this year and will be accessible through Picsart’s API services.
This collaboration bears similarities to Adobe’s Firefly AI model, initially introduced as a prompt-based image generation tool within Photoshop last year. Adobe has since expanded its integration across various Creative Cloud applications. Adobe’s model also emphasizes commercial safety by training on stock images from Adobe’s own library, along with openly licensed or out-of-copyright content. However, questions remain about the integrity of the training data and user trust in Adobe’s approach.
Getty Images has previously ventured into commercially-focused AI products through partnerships with Bria AI and Runway, and by teaming up with Nvidia to introduce “Generative AI by Getty Images,” leveraging its extensive catalog of licensed images. Adobe’s widespread integration of the Firefly model into popular applications like Photoshop, Illustrator, Lightroom, and Express may pose a challenge for Picsart’s new offering in terms of attracting creatives away from Adobe’s established ecosystem.
Source: theverge.com
Microsoft to delay release of Recall AI feature on security concerns

On June 13, Microsoft announced that it will postpone the rollout of its AI-powered “Recall” feature, which had been due to ship with new computers next week, over privacy concerns. Instead, the tech giant plans to offer Recall for preview to a smaller group later, following feedback and additional testing.
Recall is designed to track various activities from web browsing to voice chats, compiling a searchable history stored on the user’s computer. This allows users to easily retrieve past actions, even months later.
Originally slated for broad availability on June 18 for Copilot+ PC users, Recall will now undergo a preview phase exclusively within Microsoft’s Windows Insider Program (WIP) in the coming weeks. This decision, as stated in a blog post by the Redmond, Washington-based company, underscores their commitment to ensuring a trusted, secure, and reliable experience for all customers.
Copilot+ PCs, introduced in May, feature advanced AI capabilities aimed at enhancing user interactions and productivity. The WIP, a platform for software testing, enables enthusiasts to preview upcoming Windows operating system features.
Microsoft intends to incorporate feedback from the WIP community before extending the Recall preview to all Copilot+ PC users in the near future.
Following the feature’s announcement, concerns over privacy were swiftly voiced on social media, with some users fearing potential surveillance implications. Elon Musk, prominent technologist and billionaire, likened Recall to a scenario from the dystopian series “Black Mirror,” highlighting societal apprehensions about the impact of advanced technologies.
Source: reuters.com
