
NaaS Technology and CCTEB Form Strategic Partnership to Accelerate Digitalization of Charging Infrastructure Network

On March 11, NaaS Technology (NASDAQ: NAAS), the first US-listed EV charging service company in China, announced a strategic cooperation agreement with China Construction Third Engineering Bureau (CCTEB) in Wuhan. The collaboration will combine NaaS’s strengths in locating and digitally operating energy assets with CCTEB’s construction expertise, with the aim of jointly expanding charging infrastructure networks and improving operational efficiency.
Under the agreement, CCTEB will work with NaaS on projects such as integrated PV-charging-storage systems and energy ports. The two parties will also pursue opportunities in energy asset investment and operation, and collaborate on technology research to further accelerate the digitalization of the charging infrastructure network.
Alex Wu, Co-founder, President and CFO of NaaS, said at the signing ceremony: “As a new energy asset operator, NaaS is committed to improving industrial efficiency through digital technology and artificial intelligence. Our cooperation with CCTEB will enable us to better integrate resources and jointly promote the construction of new energy charging infrastructure, providing more convenient and efficient services for EV users.”
Zheng Cheng, General Manager of Environmental Energy at CCTEB, emphasized the industry significance of the collaboration, saying, “CCTEB has deep technical expertise and rich project experience in the construction and infrastructure fields. In the era of new energy, we see tremendous potential in collaborating with NaaS. Through this partnership, we will explore new models for the construction of charging piles and integrated energy ports, not only improving the coverage and service quality of charging networks but also providing strong support for the development of the new energy industry.”
This strategic partnership leverages the complementary strengths and combined resources of both parties. NaaS will benefit from CCTEB’s construction technology expertise, enhancing its engineering capabilities and the build quality of its charging facilities, while CCTEB will gain experience in the new energy industry and digital asset operation capabilities. Together, they aim to accelerate the deployment of charging networks to meet growing market demand and deliver more convenient and efficient charging services to the industry.


16 top AI firms make new safety commitments at Seoul summit

More than a dozen of the world’s leading artificial intelligence firms made fresh safety commitments at a global summit in Seoul on Tuesday, the British government said in a statement.
The agreement with 16 tech firms — which include ChatGPT-maker OpenAI, Google DeepMind and Anthropic — builds on the consensus reached at the inaugural global AI safety summit at Bletchley Park in Britain last year.
“These commitments ensure the world’s leading AI companies will provide transparency and accountability on their plans to develop safe AI,” UK Prime Minister Rishi Sunak said in a statement released by Britain’s Department for Science, Innovation and Technology.
Under the agreement, the AI firms that have not already shared how they assess the risks of their technology will publish those frameworks, according to the statement.
These will include what risks are “deemed intolerable” and what the firms will do to ensure that these thresholds are not crossed.
“In the most extreme circumstances, the companies have also committed to ‘not develop or deploy a model or system at all’ if mitigations cannot keep risks below the thresholds,” the statement added.
The definition of these thresholds will be decided ahead of the next AI summit, due to be hosted by France in 2025.
The firms that have agreed on the safety rules also include US tech titans Microsoft, Amazon, IBM and Instagram parent Meta; France’s Mistral AI; and Zhipu.ai from China.
In his opening remarks, South Korea’s President Yoon Suk Yeol flagged “growing concerns over potential risks and negative impacts of AI, including fake news through deepfake and the digital divide.”
“Since the digital space is hyper-connected and transcends national borders, we need digital norms at the global level,” he added.
Danger of ‘deepfakes’
The stratospheric success of ChatGPT soon after its 2022 release sparked a gold rush in generative AI, with tech firms around the world pouring billions of dollars into developing their own models.
Generative AI models can produce text, photos, audio and even video from simple prompts, and their proponents have heralded them as a breakthrough that will improve lives and businesses around the world.
But critics, rights activists and governments have warned that they can be misused in a wide variety of situations, including the manipulation of voters through fake news stories or so-called “deepfake” pictures and videos of politicians.
Many have called for international standards to govern the development and use of AI, and have pressed for action at summits such as the two-day gathering in Seoul this week.
In addition to safety, the Seoul summit will discuss how governments can help spur innovation, including in AI research at universities.
Participants will also consider ways to ensure the technology is open to all and can aid in tackling issues such as climate change and poverty.
The Seoul summit comes days after OpenAI confirmed that it had disbanded a team devoted to mitigating the long-term dangers of advanced AI.
“The field of AI safety is quickly evolving and we are particularly glad to endorse the commitments’ emphasis on refining approaches alongside the science,” Anna Makanju, OpenAI’s vice president of global affairs, said in the statement announcing the new commitments on Tuesday.
The two-day summit will be partly virtual, with a mix of closed-door sessions and some open to the public in Seoul.
Source: france24.com


Microsoft unveils new Copilot+ PCs featuring advanced AI capabilities

Microsoft has launched a new type of personal computer designed with AI capabilities, underscoring the company’s commitment to integrating cutting-edge technology across its product lineup.
This new category not only strengthens Microsoft’s own product line-up but also positions the company to compete directly with AI-led advances from tech giants such as Alphabet and Apple. During an event at Microsoft’s campus in Redmond, Washington, CEO Satya Nadella unveiled the Copilot+ PCs. The computers, developed in collaboration with manufacturers such as Acer and Asustek Computer, arrive with Microsoft’s stock price near all-time highs amid Wall Street’s expectations that AI will significantly boost earnings for major tech firms.
These PCs are engineered to manage a greater share of AI tasks locally, reducing reliance on cloud data centres. Priced from $1,000, the devices are slated to begin shipping on June 18. The lineup includes laptops from established manufacturers such as Dell Technologies, HP, and Samsung Electronics, complemented by two models from Microsoft itself.
Technical innovations and product range
The Surface Laptop, Microsoft’s inaugural Copilot+ device, features a Qualcomm Snapdragon X series system-on-chip. This chip includes a built-in AI accelerator capable of delivering up to 45 tera operations per second (TOPS) of computing power. Available in 13.5-inch and 15-inch models, these laptops offer up to 64 gigabytes of RAM and 1 terabyte of flash storage.
Joining the Surface Laptop is the new Copilot+ version of the Surface Pro, a versatile two-in-one PC. This 10.6-inch tablet converts into a laptop with a detachable keyboard, known as the Flex Keyboard, which also functions as a wireless connector and stylus charger for the Surface Pro.
Both the Surface Laptop and Surface Pro are equipped with the Snapdragon X processor, available with up to 32 gigabytes of RAM and one terabyte of storage. The high-end models feature an OLED display, enhancing visual quality significantly.
A distinctive feature of the Copilot+ series is “Recall,” which allows the device to log all user activities—from web browsing to voice chats—and store this data locally. This functionality creates a searchable history that users can access to revisit past activities, even months later.
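In essence, a feature of this kind amounts to a local activity store plus a search index. As a rough, purely illustrative sketch, and not Microsoft’s actual implementation (the table layout and the log_activity/search_history function names are invented for the example), a locally stored, searchable activity log might look like this:

```python
# Toy illustration of a locally stored, searchable activity log -- the general
# idea behind a feature like Recall. NOT Microsoft's implementation; schema and
# function names are invented for this example.
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect("activity_log.db")  # data stays on the local device
conn.execute(
    "CREATE TABLE IF NOT EXISTS activity ("
    "  ts TEXT NOT NULL,"        # ISO-8601 timestamp
    "  source TEXT NOT NULL,"    # e.g. 'browser', 'voice_chat'
    "  content TEXT NOT NULL)"   # text snapshot of what the user saw or said
)

def log_activity(source: str, content: str) -> None:
    """Append one activity record with the current UTC timestamp."""
    conn.execute(
        "INSERT INTO activity (ts, source, content) VALUES (?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), source, content),
    )
    conn.commit()

def search_history(term: str) -> list[tuple[str, str, str]]:
    """Return all logged records whose content mentions the search term."""
    cur = conn.execute(
        "SELECT ts, source, content FROM activity "
        "WHERE content LIKE ? ORDER BY ts DESC",
        (f"%{term}%",),
    )
    return cur.fetchall()

log_activity("browser", "Reading a review of the Snapdragon X Surface Laptop")
log_activity("voice_chat", "Discussed June travel plans with a colleague")
print(search_history("Snapdragon"))  # months later: find what you were looking at
```

The point of the sketch is simply that everything is written to and queried from a database on the device itself, which is what distinguishes this approach from cloud-hosted history features.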
Yusuf Mehdi, head of consumer marketing at Microsoft, announced at the press event that the company anticipates selling 50 million AI PCs in the coming year. He emphasised that the enhanced capabilities of AI assistants, now operating directly on these PCs, will make a compelling case for consumers considering upgrades.
Despite a 15% decline in global PC shipments last year, according to Gartner, Microsoft is optimistic about the potential market share for its Copilot+ PCs.
Analyst Ben Bajarin from Creative Strategies remarked: “People just need to be convinced that the device experience alone justifies this entirely new category of Copilot+ machines.”
Echoing Intel’s 2011 strategy with the “Ultrabook,” which aimed to compete against Apple’s MacBook Air, Microsoft’s introduction of the “Copilot+” category is poised to redefine the slim-form Windows laptop market.
Microsoft also revealed that GPT-4o, OpenAI’s latest model powering ChatGPT, will soon be integrated into Copilot. Additionally, the new series of Surface Pro tablets and Surface Laptops will incorporate Qualcomm chips based on Arm Holdings’ designs, using Microsoft’s Prism technology to run software originally written for Intel and AMD processors on Arm-based systems.
At a recent demonstration, Microsoft showcased how its new devices, running Adobe photo editing software, outperformed an Apple device, just weeks after Apple introduced a new AI-focused chip expected to enhance future laptops.
Challenges and opportunities in the evolving PC market
Facing strong competition in the traditional PC market, Microsoft continues to lead in integrating the latest AI trends into consumer products. This strategic initiative, fueled by its partnership with OpenAI, aims to secure a leadership position in the fiercely competitive AI industry, challenging rivals like Alphabet.
Just last week, both OpenAI and Alphabet’s Google introduced new AI technologies that support real-time, voice-responsive interactions, establishing new benchmarks for AI voice assistants. Meanwhile, Windows PC manufacturers are contending with increased pressure from Apple, whose custom Arm-based chips have set new standards in battery life and performance.
Source: artificialintelligence-news.com


Do Not Fear Artificial Intelligence, Take Care of It

“We must take special care of what we input into AI systems. Otherwise, they will start to fabricate: building one untruth upon another, and at the end of this process, we are likely to end up with an amplified lie. In other words, when using such tools, we humans should stand in truth and ’feed’ the systems with truth,” says Artur Olejniczak, a manager, certified business trainer, and lecturer at the School of Logistics in Poznań.
Dariusz Rostkowski: We connected via a messenger. How can you be sure you’re talking to a human and not to artificial intelligence?
Artur Olejniczak: Of course, theoretically, you could be a machine, or rather it could take your place. But I verified the contact earlier, asked questions, and assumed that I was talking to a human, not an artificial creation.
Yes, I am human. Artificial intelligence, however, is something different: it is a machine’s ability to exhibit human capabilities such as reasoning, learning, planning and creativity.
Artificial intelligence enables devices equipped with it to perceive their surroundings and react appropriately to changes in them. As a result, they can solve problems while staying focused on a set goal. AI systems increasingly adapt their behavior to circumstances and act more autonomously. The American research lab OpenAI, which opened conversations with GPT-3 (Generative Pre-trained Transformer 3) to the public, demonstrated the immense potential of current computer systems. Generative artificial intelligence, or rather machine learning algorithms running on neural networks and very fast computers, can aggregate the data they have access to. Colloquially speaking, they draw on publicly available internet resources, that is, all the content we create and publish online. So today’s big “wow,” and what makes AI significant, is simply the ability to aggregate information very quickly and answer questions based on it.
And what if the question is poorly phrased, like being imprecise or ambiguous?
Exactly! The key issue is whether we can question the system appropriately. Misunderstandings happen. Once, while preparing for a lecture, I asked ChatGPT-4, the latest version, to conduct research on lithium-ion batteries. It spat out the claim that the best expert in this field is the famous chess player Garry Kasparov. That was an outright falsehood; GPT-4 was probably misled because Kasparov had used some expression related to lithium-ion batteries in one of his statements.
So, in summary: a huge advantage of AI systems is that they acquire a complete set of information we need in real-time, which we can then freely use. On the other hand, the correctness of the input data is extremely important. It’s commonly known that the internet is currently full of irrelevant, junk, and even false information. During discussions with my students or in business meetings, I often emphasize that we cannot pursue the so-called SISO (shit in, shit out) strategy, meaning we must particularly care about what we input into AI systems. Otherwise, they will start to fabricate: building one untruth upon another, and at the end of this process, we are likely to end up with an amplified lie. In other words: when using such tools, we humans should stand in truth, and “feed” the systems with truth.
The problem of properly using artificial intelligence is becoming increasingly pressing. The European Parliament dealt with these issues in June 2023, adopting a position on the so-called Artificial Intelligence Act, the first set of comprehensive rules to manage the risks associated with AI. It identified the greatest opportunities and also the threats resulting from the use of AI. Among the benefits for citizens was an improvement in healthcare. How could this be achieved?
For example, through faster and more efficient data analysis. Nowadays blood tests are conducted quite differently than before, without written descriptions or tedious observation under the microscope. Simply put, the lab technician takes a sample and labels it with a barcode containing our data, and the blood is then sent to an analysis center where computers do the work. We can usually retrieve the results the same day via the internet. Because the computers hold a vast amount of data on similar tests, reference norms and so on, they quickly detect deviations from the norm, reducing the risk of error.
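To make the idea concrete, here is a minimal, purely illustrative sketch of the kind of automated check described above: comparing a barcoded sample’s results against stored reference ranges and flagging deviations. The ranges, field names and the flag_deviations helper are invented for the example and are not clinical guidance.

```python
# Minimal sketch: flag lab values that fall outside stored reference ranges.
# Ranges and field names are illustrative only, not clinical guidance.
REFERENCE_RANGES = {
    "hemoglobin_g_dl": (12.0, 17.5),
    "wbc_10e9_l": (4.0, 11.0),
    "platelets_10e9_l": (150.0, 400.0),
}

def flag_deviations(sample_id: str, results: dict[str, float]) -> list[str]:
    """Return a human-readable flag for every value outside its reference range."""
    flags = []
    for test, value in results.items():
        low, high = REFERENCE_RANGES[test]
        if value < low:
            flags.append(f"{sample_id}: {test} LOW ({value} < {low})")
        elif value > high:
            flags.append(f"{sample_id}: {test} HIGH ({value} > {high})")
    return flags

print(flag_deviations("BC-2024-0311", {
    "hemoglobin_g_dl": 10.9,    # below range -> flagged
    "wbc_10e9_l": 6.2,          # within range
    "platelets_10e9_l": 420.0,  # above range -> flagged
}))
```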
Indeed, that’s convincing. MEPs also write that in the future, products will be better tailored to user needs, as well as cheaper and more durable. Isn’t this wishful thinking? In recent decades we have often seen quality decline even as prices rise…
Competition forces companies to act, and as we know, almost all of them are interested in maximizing profit; that is the essence of business activity. The use of AI should improve the competitiveness of products, benefiting consumers in terms of both quality and price.
At the same time, I want to point out that significant changes in habits will probably occur: the sharing economy will step in more boldly. We see this with automotive companies, which, instead of maximizing sales or service profits, are moving towards subscriptions. Soon we won’t be buying cars to own but renting them for the duration of use. Think about how much time we actually drive our car. Two hours a day? For the rest of the time, this so-called resource sits unused while generating costs: parking fees, servicing, insurance, taxes and so on. Thanks to AI algorithms, the sharing system can be optimized so that a car or other tool is used almost continuously. Thus, we will be buying access to a car without owning it.
Another benefit of AI, mentioned by the European Parliament, is work safety. Really? I would say people are more afraid of losing their jobs…
This is more about robots, about sending them to dangerous places – to protect human life and health. Today in industry or distribution, machines, bots, and drones are used that operate more and more autonomously. In a sense, there’s a dehumanization of these processes. And it’s a good thing. It’s not even that people are naturally lazy. Rather, it’s that we are exhausted by repetitive work and its harmful conditions. Therefore, we are trying, quite successfully, to implement automatic or autonomous systems. This is another step in this industrial revolution – developing machines that work without human intervention. They operate in binary, without unnecessary fluctuations and mood changes, no risk of “off” days, no sick leave, etc.
Yes, automation is necessary. But what about the loss of jobs?
Research by American and European scientists shows that by 2040, 70% of the jobs we know today will disappear, across very different professions. Today a robot from the well-known company KUKA can replace a bartender; other machines replace painters, welders and so on. The American army, together with several medical centers, built a medical robot that operates on people remotely, so that in an armed conflict the doctor is not put at risk under fire and can work in safe conditions. In this field, the Poznań Supercomputing and Networking Center has successfully implemented advanced telemedicine systems that enable live transmission of medical procedures from several places simultaneously.
On the other hand, we have, for example, the war in the Gaza Strip, where the latest technological achievements, including AI, are visibly being used to kill rather than to save lives.
It’s true. Often what is an opportunity on one hand is a threat on the other. It’s like an ordinary knife: we can use it to peel potatoes and vegetables and prepare a delicious dinner, but we can also kill a person with it. The same goes for artificial intelligence and algorithms. Let’s stick with the example of war: the conflict in Ukraine is a perfect live testing ground for innovative types of weapons, including those that use AI. It’s probably the first war in which unmanned drones are used on such a large scale. So, in part, it’s a conflict of machines; the side that introduces more effective methods of combat wins.
But won’t there come a time when autonomous machines turn against their creators – humans?
This is a critically important aspect. Machines operate on the basis of code, a program. Code is a record consisting of zeros and ones, denoting false or true. Humans don’t act in such unequivocally binary terms. I recently watched a documentary that vividly illustrated the dual-use dilemma. A European scientist and his research team wrote an algorithm that could design drugs from known chemical compounds, potentially applicable in the treatment of rare diseases.
The team, supported by the algorithm, did fantastic work, with several spectacular successes. The scientist was then asked to reprogram the algorithm to generate substances that would kill a person quickly and effectively. Curious about the results, he did it. When he came to the office the next day, he destroyed the output his system had generated. He did not want the results to be used in a deadly way.
Returning to the positive ways of using AI, Members of the European Parliament highlight business benefits for the tourism sector. How is this supposed to work?
The key here is data analysis and tailoring travel agency offers to our needs. For example, if we fear tsunamis or earthquakes, the system will steer us away from potentially dangerous places. A straightforward multi-criteria analysis is used, which also takes into account our budget and our preferences regarding cuisine, climate and so on.
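A hedged sketch of what such a multi-criteria match might look like: filter out destinations carrying a risk the traveller wants to avoid, stay within budget, then rank the rest by weighted preferences. The offers, the recommend function and the weights are all made up for illustration; a real travel platform would of course use far richer data and models.

```python
# Illustrative multi-criteria filtering and ranking of travel offers.
offers = [
    {"place": "Lisbon", "price": 900,  "cuisine": 8, "climate": 9, "risk": "low"},
    {"place": "Tokyo",  "price": 2100, "cuisine": 9, "climate": 7, "risk": "low"},
    {"place": "Phuket", "price": 1200, "cuisine": 7, "climate": 9, "risk": "tsunami"},
]

def recommend(offers, budget, avoid_risks, weights):
    """Drop offers over budget or carrying an excluded risk, rank the rest by preference."""
    feasible = [
        o for o in offers
        if o["price"] <= budget and o["risk"] not in avoid_risks
    ]
    return sorted(
        feasible,
        key=lambda o: sum(o[criterion] * w for criterion, w in weights.items()),
        reverse=True,
    )

print(recommend(
    offers,
    budget=1500,
    avoid_risks={"tsunami", "earthquake"},
    weights={"cuisine": 0.5, "climate": 0.5},
))
# -> Lisbon ranks first: within budget, no excluded risks, best weighted score
```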
Another benefit of artificial intelligence is the range of applications that enable communication in foreign languages, which will be a significant help to people who do not speak them. There are apps that let us speak into a smartphone and translate our phrases in real time, and then work the other way around for the reply. Misunderstandings sometimes occur, but there are surely more pros than cons.
Politicians also talk about social benefits – including strengthening democracy. But really? Isn’t AI more often on the dark side of power? Doesn’t it create fake videos with untrue statements or spread fake news?
I fully share the concern about these threats, and we already have examples from the elections in the United States and the referendum in the United Kingdom, where the use of artificial intelligence led to manipulation. Today’s key challenge, therefore, is to separate true information from fake. This is not easy given the influx of ‘news’: statistically, in a single day we absorb about 100,000 pieces of information from various sources. For comparison, our grandparents absorbed a similar amount over their entire lives. In short, we are physically unable to analyze each of these pieces of information and make the right decision. Hence we face a huge challenge: to catch only what is true and reliable. Currently, that is not fully achievable.
With the unimaginable amount of data on the internet, how can we sort it?
Data is now the fuel of the economy. Figuratively speaking, it is more significant than coal or other fuels. Fortunes are made by trading data. Data is so valuable because companies make strategic decisions based on it. And we, the consumers, often share it unknowingly, for example by signing up for various loyalty programs, and then we are, to some extent, tracked and manipulated, succumbing to trends and fashions.
There’s little talk about the impact of AI on our emotional and social lives. Yet it’s algorithms that match us into couples and find us friends on popular portals…
Yes. (laughs) I had a situation last week where an acquaintance contacted me after several years, saying that Facebook suggested me as a person to add to her friends. And she concluded that I must have been looking for her, hence the suggestion. But that wasn’t the case at all; the system just suggested me as a friend candidate for some reason. And that’s it.
It’s not an exaggeration to say that artificial intelligence has a huge and growing impact on interpersonal relationships. Real relationships are forming, children are being born…
This is quite a complicated matter. In the case of dating websites, people are indeed forming relationships, though without the sites it probably would have happened anyway, just in a different way. On the other hand, about 60 percent of profiles on these sites are bots, creations of artificial intelligence. They are activated to give us the feeling that someone out there thinks warmly and fondly about us, while the real goal is to keep us interested enough to pay for the app’s subscription. So, in my opinion, it is largely a business.
In terms of human relationships, the issue isn’t as complicated as forming emotional relationships between humans and machines…
Humanity is a very important issue. It’s about building relationships, hugging, physical contact, conversations, in other words, responding to our needs. If this AI creation fills a gap in someone’s life or satisfies some emotional need… maybe there’s nothing wrong with that. In fact, some people already function in such relationships – like the Japanese, who marry humanoid robots. I find it hard to see the positives in these relationships, but for those people, there must be some since they decide on such a model. In the end, they can turn off such a machine at any moment. The robot certainly won’t act against their interests, as it doesn’t receive negative “human” feedback like “you didn’t take out the trash” or “you didn’t make dinner again”. I think that in these relationships, only and exclusively the desired, good needs are fulfilled. Robots operate in binary, and some people might like that.
Returning to the advantages of AI listed by experts, there’s often an argument about protection and safety, including crime prevention…
Here again we face the dilemma of dual use. On the one hand, algorithms indeed help in tracking and catching criminals. On the other hand, we must remember that we are constantly monitored. In our environment there are about 50 billion sensors, detectors and devices that continuously track what is happening: think of cameras monitoring traffic, pollution and noise detectors, microphones and so on. On average we each carry around six sensors in smartphones, watches and other devices, which constantly observe us and our behavior. On this basis a tool intended to counter terrorism (Pegasus) was developed, yet it was used for nefarious political purposes. When installing another app on your phone, take a look at the access it requests (camera, microphone, contacts, location) and ask yourself: why does it need this data?
We can forget about anonymity…
Yes. If information about us, including locations, call records, text messages and so on, is stored for several years by telecommunications companies, and state authorities can access these data with court approval, then there is no question of anonymity. Currently, every email, messenger message or social media post is monitored and filtered by machine learning algorithms, which are coded to search, based on keywords, for behaviors that could be dangerous to society.
AI significantly changes transport, but there we see both opportunities and threats…
In transportation there is a shortage of workers, such as drivers. It is a demanding, monotonous job, a bit like being a sailor: people are away for many weeks, which sometimes leads to disruptions and extended working hours, as happened recently with the blockade at the border with Ukraine, where drivers sat for 9-10 days in unsuitable conditions because the system was disrupted. In this context, replacing people with machines would be very helpful, especially since few are eager for this type of work. The other side of the coin is the mismatch between legislation and the operation of autonomous vehicles. What happens in an accident? Who is responsible: the company that built the car, the supplier of the built-in IT system, or the telecommunications company that operated the car autonomously? Insurers have a huge problem with this.
What about logistics, in which you specialize?
Artificial intelligence will certainly revolutionize many areas of human activity, including logistics. Today, as logisticians, we try to find positive applications: data analysis, drawing conclusions, and proper resource management. AI already helps with production and distribution planning. Take, for example, a chain of stores like Żabka, which makes over one and a half million deliveries a year to its outlets. Matching the right vehicle to each delivery, while accounting for, say, green zones in cities, is a real problem. AI will manage this, selecting the appropriate means of transport to deliver goods to the right place.
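As a rough illustration of the vehicle-matching problem described here, and not any real dispatch system, one could sketch the selection as "cheapest vehicle that can legally enter the zone and carry the load". The fleet data, zone rules, costs and the pick_vehicle helper are invented for the example; production planners use far more elaborate optimization.

```python
# Illustrative vehicle selection for a delivery, respecting low-emission zones.
fleet = [
    {"id": "EV-van-1",     "type": "electric", "capacity_kg": 800,  "cost_per_km": 0.35},
    {"id": "diesel-7.5t",  "type": "diesel",   "capacity_kg": 3000, "cost_per_km": 0.55},
    {"id": "cargo-bike-3", "type": "bike",     "capacity_kg": 120,  "cost_per_km": 0.10},
]
GREEN_ZONE_ALLOWED = {"electric", "bike"}  # diesel excluded from low-emission zones

def pick_vehicle(load_kg: float, distance_km: float, green_zone: bool):
    """Return the cheapest feasible vehicle for a delivery, or None if none fits."""
    candidates = [
        v for v in fleet
        if v["capacity_kg"] >= load_kg
        and (not green_zone or v["type"] in GREEN_ZONE_ALLOWED)
    ]
    if not candidates:
        return None
    return min(candidates, key=lambda v: v["cost_per_km"] * distance_km)

print(pick_vehicle(load_kg=500, distance_km=12, green_zone=True))    # -> electric van
print(pick_vehicle(load_kg=2500, distance_km=40, green_zone=False))  # -> diesel 7.5t
```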
AI algorithms are also very helpful in managing global supply chains. If goods are produced in Cambodia, China, Vietnam, or India and need to be delivered to Europe, the issue of redistribution requires many analyses, which are now performed by machine learning algorithms. Such systems greatly assist and streamline this work and make these flows more efficient both in terms of price and environmental impact.
Another issue is the involvement of undemocratic states in business and influencing supply chains. And this is a real threat, as political considerations often outweigh economic ones in such situations.
We may have an even bigger problem with insurers if they obtain detailed data about our health…
One can imagine that if increasingly advanced IT systems start “talking” to each other and exchanging data at a higher level, then… an algorithm will evaluate us. It will calculate how much we can contribute to the economy, assess our creativity, usefulness to society, and how long we will live. And that is terrifying!
To some extent, such actions are already being taken in China: citizens are evaluated in terms of their usefulness to society and excluded from certain activities. A minor act of insubordination, such as crossing the street at a red light a few times, and we are already under scrutiny. If I commit such an offense two or three times, I will automatically receive a fine. If this behavior continues, I will even be eliminated from the…
At the political and social level, AI can support the enemies of democracy – warn Members of the European Parliament.
Indeed, AI can assist in “producing” and distributing fake news. Data collected in a system can be used tendentiously, so that the picture formed after its analysis becomes distorted. An algorithm can also be programmed to show only the negative sides of a particular phenomenon, and a less educated audience will “buy” it. Besides, it seems to me that with the development of IT tools we are, unfortunately, becoming more and more numbed. I recently wrote an article about the development of systems for locating a person in space. Long ago, it was the spoken word, a story about where things were and what could be found there. There were also the stars and the moon to help people find their way, and we noticed and interpreted the changes of the seasons better. We simply oriented ourselves better in space. Today we have global positioning systems (GPS) and… we get lost. We are convinced that what our smartphone shows us is true and often ignore the right cues. A huge percentage of people are easily misled.
Many people fear losing their jobs due to artificial intelligence. They worry that machines will take away their ability to earn. Will this really happen and which professions are most at risk?
Certainly, positions dealing with production will be completely automated, and similarly distribution centers, warehouses and so on. Today we have examples of very large global players in this industry that use almost exclusively robots. Near Poznań, for example, there is a large, traditional distribution center where three to six thousand people work. A few kilometers away is a similar center belonging to the same company, where three thousand robots and about thirty people work, and the machines do the job more efficiently, more effectively and faster than humans. But from another perspective, robots also need supervision. When a very large retail chain in France decided to automate its distribution center, forklift operators were laid off; however, some of them moved to a neighboring building, from where they supervise the work of the automated system and react in advance to possible failures, since machines do wear out over time. Therefore, some workers will need to be retrained.
The education system will also evolve, a shift we began to observe after the outbreak of the Covid-19 pandemic. We have already become accustomed to hybrid learning, in which students attend online lectures and can replay video recordings whenever they want. Young people themselves say they don’t want to come to the university on a fixed schedule; they would prefer to use materials the way they use Netflix and play them at any time, then contact the professor at a convenient moment to discuss particular issues. This means there will be less and less work for teachers and lecturers, especially since so much knowledge on the internet is already at our fingertips.
A side effect of these trends is the growing alienation and loneliness of people, which will mean that many more psychologists, therapists and psychiatrists will be needed. These are professions of the future, as are caregivers and assistants for the elderly, or spiritual caregivers such as priests. I joke with my partner that if celibacy is abolished, I will retrain as a clergyman.
Source: holistic.news
