
Artificial Intelligence

Huawei Cloud at KubeCon EU 2024: Unleashing the Intelligent Era with Continuous Open Source Innovation


PARIS, March 25, 2024 /PRNewswire/ — At KubeCon + CloudNativeCon Europe 2024, held in Paris on March 21, Dennis Gu, Chief Architect of Huawei Cloud, pointed out, in a keynote speech titled “Cloud Native x AI: Unleashing the Intelligent Era with Continuous Open Source Innovation”, that the integration of cloud native and AI technologies is crucial for driving industry transformation. Huawei Cloud plans to keep innovating open source projects and collaborating with developers to bring about an intelligent era.

AI poses key challenges to the cloud native paradigm.
In recent years, cloud native technologies have revolutionized traditional IT systems and accelerated digital advancements in areas such as the Internet and government services. Cloud native has introduced new possibilities, such as flash sales and agile operations like DevOps, enabled by microservice governance. These changes have had a significant impact on people’s lives. At the same time, the rapid growth and widespread adoption of AI, including large-scale models, has become core to industry intelligence.
According to a 2023 Epoch survey, the compute required for foundation models has been increasing tenfold every 18 months, five times faster than the growth rate predicted by Moore’s Law for general compute. The emergence of this “New Moore’s Law”, driven by AI and the prevalence of large-scale AI models, presents challenges for cloud native technologies. In his speech, Dennis Gu outlined the following key points:
- Low average GPU/NPU utilization drives up the cost of AI training and inference.
- Frequent failures in large model training clusters reduce training efficiency.
- The complex configuration of large-scale models makes AI development demanding.
- Deploying large-scale AI inference carries the risk of unpredictable end-user access delays and raises potential data privacy issues.

Huawei Cloud AI innovation offers developers ideas to tackle these challenges.
The increasing sizes of AI models demand more compute, which creates challenges for cloud native technologies but also creates opportunities for innovation in the industry. Dennis Gu shared stories about Huawei Cloud’s AI innovation, offering developers a reference point to tackle the challenges.
Huawei Cloud used KubeEdge, a cloud native edge computing platform, to create a multi-robot scheduling and management platform. Users issue natural language commands, and the system coordinates multiple robots at the edge to accomplish complex tasks. The system is designed with a three-part architecture (cloud, edge node, and robot) to address challenges such as natural language comprehension, efficient scheduling and management of multiple robots, and cross-type robot access management. It uses large models to interpret natural language commands and performs traffic prediction, task assignment, and route planning. The three-part architecture greatly enhances the flexibility of the robot platform, improving management efficiency by 25%, reducing the time required for system deployment by 30%, and cutting the time needed to deploy new robots from months to days.
One leading content sharing platform in China, with over 100 million monthly active users, relies on homepage recommendations as its primary service. This feature is powered by a model with almost 100 billion parameters. To train this model, the platform uses a training cluster with thousands of compute nodes, including hundreds of parameter servers (ps) and workers for a single training task, creating strong demand for topology-aware scheduling, high performance, and high throughput. Volcano, an open source project, enhances support for AI and machine learning workloads on Kubernetes and offers a range of job management and advanced scheduling policies. Volcano incorporates algorithms such as topology-aware scheduling, bin packing, and Service Level Agreement (SLA)-aware scheduling, resulting in a 20% improvement in overall training performance and a significant reduction in O&M complexity for the platform.
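To illustrate the kind of job spec Volcano schedules, here is a minimal sketch of a distributed training job with separate ps and worker task groups. The job name, queue, image, and replica counts are hypothetical placeholders, not details from the platform described above:

```yaml
# Hypothetical Volcano Job: gang-schedules 1 ps + 4 workers together.
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: demo-training-job
spec:
  schedulerName: volcano   # hand pod scheduling to Volcano
  minAvailable: 5          # gang scheduling: all 5 pods start, or none do
  queue: default
  plugins:
    ssh: []                # inject SSH keys for inter-pod communication
    svc: []                # create a headless service for the job
  tasks:
    - name: ps
      replicas: 1
      template:
        spec:
          containers:
            - name: ps
              image: training-image:latest   # placeholder image
    - name: worker
      replicas: 4
      template:
        spec:
          containers:
            - name: worker
              image: training-image:latest   # placeholder image
```

The `minAvailable` field is what makes the allocation all-or-nothing, which is the property that prevents partially started training jobs from wasting GPUs.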
Serverless AI is at the forefront of cloud native development.
Many enterprises and developers face the challenge of running AI applications efficiently and reliably while minimizing operation costs. Huawei Cloud has developed a solution to this problem by identifying the key requirements of cloud native AI platforms and introducing a new concept called Serverless AI.
During his speech, Dennis Gu explained that Serverless AI is designed to simplify complex training and inference tasks by intelligently recommending parallel policies, making it easier for developers to use. It also includes an adaptive GPU/NPU automatic expansion function that dynamically adjusts resource allocation based on real-time workload changes, ensuring efficient task execution. Additionally, there is a fault-free GPU/NPU cluster in Serverless AI, freeing developers from concerns that hardware faults may interrupt services. Most importantly, Serverless AI is compatible with mainstream AI frameworks, allowing developers to easily integrate their existing AI tools and models.
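The adaptive GPU/NPU expansion described here is, at its core, a utilization-driven control loop. The sketch below is illustrative only, not Huawei Cloud's implementation; it uses the well-known scaling rule popularized by the Kubernetes Horizontal Pod Autoscaler, with an assumed target utilization of 70%:

```python
import math

def desired_replicas(current_replicas, current_utilization,
                     target_utilization=0.7, min_replicas=1, max_replicas=64):
    """Utilization-based scaling rule (same shape as the Kubernetes HPA):
    desired = ceil(current * current_utilization / target_utilization),
    clamped to [min_replicas, max_replicas]."""
    if current_replicas == 0:
        return min_replicas
    desired = math.ceil(current_replicas * current_utilization / target_utilization)
    return max(min_replicas, min(max_replicas, desired))

# A pool of 8 accelerators running hot at 95% utilization scales out:
print(desired_replicas(8, 0.95))  # ceil(8 * 0.95 / 0.7) = 11
# The same pool nearly idle at 20% utilization scales in:
print(desired_replicas(8, 0.20))  # ceil(8 * 0.20 / 0.7) = 3
```

In a real system this decision would be damped (cooldown windows, hysteresis) to avoid thrashing as workload changes.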
Serverless AI is also a significant development for cloud service providers. It delivers multiple benefits, such as improved GPU/NPU utilization, more efficient hybrid workloads for training, inference, and development, and greener computing through better energy efficiency, which reduces electricity costs. Furthermore, Serverless AI enables GPU/NPU sharing among multiple tenants in different spaces or at different times, improving the resource reuse rate. Most significantly, Serverless AI provides guaranteed Quality of Service (QoS) and SLAs for both training and inference tasks, ensuring stable, high-quality service.
Serverless AI uses a flexible resource scheduling layer that is built on a virtualized operating system. This layer encapsulates essential functions of application frameworks into the application resource mediation layer. Dennis Gu presented the reference architecture for Serverless AI. He thinks that this architecture design allows Serverless AI to automatically drive large-scale AI resources. This includes accurately analyzing resource usage patterns, sharing resources from heterogeneous hardware pools, and ensuring fault tolerance during AI training tasks through GPU/NPU virtualization and load live migration. Additionally, multi-dimensional scheduling and adaptive elastic scaling improve resource utilization.
At the sub-forum, technical experts from Huawei Cloud noted that AI and machine learning workloads running on Kubernetes have been steadily increasing. As a result, numerous companies are constructing cloud native AI platforms over multiple Kubernetes clusters that spread across data centers and a diverse range of GPU types. Karmada and Volcano can intelligently schedule GPU workloads across multiple clusters, supporting failover and ensuring consistency and efficiency within and across clusters. They can also balance resource utilization across the entire system and the QoS of workloads with different priorities, addressing the challenges of managing large-scale, heterogeneous GPU environments.
Karmada offers immediate, reliable automatic application management in multi-cloud and hybrid cloud scenarios. An increasing number of users are using Karmada to create adaptable and effective solutions in production environments. Karmada was officially upgraded to the CNCF incubation project in 2023, and the community is looking forward to more partners and developers joining in.
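As a concrete reference, a minimal Karmada PropagationPolicy that splits a Deployment's replicas across two member clusters might look like the following; the workload and cluster names are hypothetical:

```yaml
# Hypothetical policy: propagate one Deployment to two member clusters,
# dividing its replicas evenly between them.
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: inference-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: inference-server        # placeholder workload name
  placement:
    clusterAffinity:
      clusterNames:
        - cluster-a                 # placeholder member clusters
        - cluster-b
    replicaScheduling:
      replicaSchedulingType: Divided
      replicaDivisionPreference: Weighted
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames: [cluster-a]
            weight: 1
          - targetCluster:
              clusterNames: [cluster-b]
            weight: 1
```

With a policy like this, Karmada handles propagation and rebalancing, so a cluster failure shifts replicas to the surviving clusters without manual intervention.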
Volcano Gang Scheduling is a solution for distributed AI training and big data scenarios, addressing the issues of endless waiting and deadlock in distributed training tasks. With task-topology and I/O-aware scheduling, the transmission delay of distributed training tasks is minimized, improving training performance by 31%. Additionally, the minResources setting resolves resource contention between the Spark driver and executors in high-concurrency scenarios, optimizes the degree of parallelism, and improves performance by 39.9%.
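The deadlock that gang scheduling prevents arises when a job is granted some of its pods and then waits indefinitely for the rest while holding those resources. The all-or-nothing admission rule can be sketched as follows (an illustrative model, not Volcano's actual code):

```python
def gang_schedule(free_slots, jobs):
    """Admit a job only if ALL of its pods fit at once (all-or-nothing).

    free_slots: available GPU slots in the cluster.
    jobs: list of (name, min_available) pairs, in queue order.
    Returns (admitted job names, remaining free slots).
    """
    admitted = []
    for name, min_available in jobs:
        if min_available <= free_slots:
            free_slots -= min_available  # reserve the whole gang
            admitted.append(name)
        # else: admit nothing for this job -- no partial allocation,
        # so it cannot hold GPUs while waiting forever for more.
    return admitted, free_slots

# Example: 8 free GPUs; job-a needs 5, job-b needs 6, job-c needs 3.
# job-a takes 5; job-b (6 > 3 remaining) is skipped entirely; job-c takes 3.
print(gang_schedule(8, [("job-a", 5), ("job-b", 6), ("job-c", 3)]))
```

Without the all-or-nothing rule, job-b would grab the 3 remaining GPUs and stall, and job-c could never start: exactly the deadlock described above.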
Dennis Gu believes that the key to improving AI productivity lies in the agility of cloud native technologies and the innovation of heterogeneous AI computing platforms. Huawei Cloud is dedicated to open source innovation and aims to work with industry peers to usher in a new era of intelligence.
Photo – https://mma.prnewswire.com/media/2370741/Dennis_Gu_Chief_Architect_Huawei_Cloud.jpg 

View original content:https://www.prnewswire.co.uk/news-releases/huawei-cloud-at-kubecon-eu-2024-unleashing-the-intelligent-era-with-continuous-open-source-innovation-302098260.html



Cybersecurity veteran Simon Church Joins CyXcel as Chief Strategy Officer


LONDON, May 9, 2024 /PRNewswire/ — CyXcel, a leading cybersecurity business with operations in the UK and North America, announces the appointment of Simon Church as Chief Strategy Officer. A 35-year veteran of the technology industry, Church brings a wealth of cyber expertise and commercial development to the role. Church’s appointment solidifies CyXcel as a pioneering force in combining security, regulatory, and legal expertise for established companies and those expanding into new markets, ensuring protection and recovery of their critical business assets.

As Chief Strategy Officer, Church will spearhead CyXcel’s strategic initiatives to drive innovation, foster partnerships, and accelerate growth opportunities. His appointment underscores CyXcel’s commitment to fortifying its strategic offering and enhancing its position as a leader at the nexus of risk analysis, response management and incident resolution. His vast experience in go-to-market strategies and M&A will be instrumental in driving CyXcel’s growth and expansion initiatives.
Church has held executive leadership positions at market-leading cybersecurity and technology companies such as Maxive Cyber Security (acquired by Thales), Optiv, Vodafone, NTT Security, Verisign, and NetIQ. His experience includes strategic roles in identity management, networking, and managed services, and he brings a strong track record of delivering commercial growth, including leading on commercial acquisitions.
In addition to his role at CyXcel, Church serves as Chair of Xalient, a UK-headquartered converged cyber, identity, and networking managed services company. He is also a Board member and strategic advisor to Redshift, as well as a Board member at beqom. Furthermore, Church serves on the Advisory Board of Glasswall, a UK-based security technology company.
Ed Lewis, CyXcel Co-Founder and Managing Partner, commented:
“Simon’s unparalleled expertise and proven track record will be invaluable as we continue to innovate and deliver pioneering solutions to our clients worldwide. With Simon’s strategic leadership, we are poised to achieve new heights in our mission to safeguard enterprises and navigate the complex geopolitical, regulatory and legal implications of ever-evolving cyber threats.”
Simon Church said:
“I’m thrilled to be joining such an experienced and ambitious team and the CyXcel offer is unlike any I’ve ever seen. CyXcel has already established itself as a trusted partner in empowering organisational awareness and responsiveness to the financial, existential, and strategic complexities of global cyber threats. I look forward to delivering rapid growth around the world by developing and executing initiatives that provide unparalleled protection and value to our clients.”
Photo – https://mma.prnewswire.com/media/2407915/Simon_Church.jpg

View original content:https://www.prnewswire.co.uk/news-releases/cybersecurity-veteran-simon-church-joins-cyxcel-as-chief-strategy-officer-302141008.html



IBM Introduces New Microsoft Copilot Capabilities to Fuel AI-Powered Business Transformation


ARMONK, N.Y., May 9, 2024 /PRNewswire/ — IBM (NYSE: IBM) today announced the availability of IBM Copilot Runway, a new offering from IBM Consulting designed to help enterprises create, customize, deploy and manage copilots including Copilot for Microsoft 365. With the new offering, clients will be able to seamlessly integrate copilots’ generative AI into their organizations in order to enhance productivity and drive business success. IBM Consulting has also formed a dedicated practice of consultants with Microsoft copilot skills, credentials and expertise to guide clients on their AI transformation journey.

As part of the new offering, IBM will work with clients to build custom copilots that can be tailored to fit the needs of specific business scenarios and efficiently deployed to help reduce the time and effort often required for implementations. IBM will initially focus on helping clients across priority use cases, including customer and field service, employee experience, and procurement and finance – as well as specific industries, like financial services, retail and CPG, government, and supply chain. These use cases will include:
- The Procurement and Finance Contract Copilot, which assists specialists in extracting valuable insights from contracts.
- The Customer Service and Field Service Copilot, which gives agents and technicians access to self-service options and a time-saving generative AI search.
- The Employee Experience Copilot, which is designed to enhance employee engagement.

IBM and Microsoft already serve clients across a variety of industries with a range of AI solutions and services. In fact, IBM Consulting worked with Virgin Money to develop and launch Redi, a conversational virtual assistant that helps credit card customers in the Virgin Money credit card app. Redi, powered by Virgin Money’s suite of Microsoft Copilots, is a testament to the power of partnership.
“Our customers tell us how much they enjoy interacting with Redi in the Credit Card app,” said Adam Paice, Head of Digital Proposition, Virgin Money. “Our partnership with IBM has helped us to get the most out of Microsoft Copilot to find a balance between innovation and control.”  
IBM Consulting is also scaling its Microsoft copilot capabilities and capacity across its network of Global Innovation Centers on each continent. For example, these capabilities are being used to co-create solutions with clients at IBM Consulting’s new IBM-Microsoft Experience Zone in Bangalore, India – a first of its kind in our collaboration.
At the Experience Zone, clients from around the world and across industries are working together with IBM Consulting in various technology stations to co-ideate and co-create generative AI-powered solutions, leveraging Microsoft technologies, including Copilot. Later this year, IBM Consulting plans to open additional IBM-Microsoft Experience Zones in Romania, the U.K., and the U.S., where clients in those regions can explore the power of Copilot. In addition, IBM and Microsoft hosted a second client-focused hackathon with 800 participants, including clients and experts, to build solutions leveraging Microsoft Copilot, with winners to be announced this quarter.
“As IBM and Microsoft strengthen our partnership, we’re poised to empower more clients with Microsoft Copilot, supercharging productivity and boosting creativity through the power of generative AI,” said John Granger, Senior Vice President, IBM Consulting. “Our dedicated IBM Consulting Microsoft practice, along with Copilot-focused Experience Zones around the world, help us meet clients where they are and bring them the right generative-AI-enabled solutions for their businesses.”
“Clients need the right partners and technology to scale AI responsibly across the enterprise,” said Dinis Couto, General Manager of Global Partner Solutions for Microsoft. “With IBM’s dedicated group of Microsoft Copilot experts, we’re confident we can help more clients unlock the full potential of generative AI for their businesses.”
IBM purchased Copilot for Microsoft 365 for its practitioners. To continue building on the partnership, IBM has also invested in growing its team of experts and capabilities through acquisitions, like Neudesic and Bluetab.
IBM Consulting practitioners work with a range of leading AI software technologies and multiple models from both IBM and its strategic partners like Microsoft. IBM data and AI consultants typically employ multiple models, each applied to a specific use case. Different models can be optimized for specific tasks, enhancing performance and efficiency.
Microsoft is a trademark of Microsoft Corporation in the United States, other countries, or both.
About IBM
IBM is a leading provider of global hybrid cloud and AI, and consulting expertise. We help clients in more than 175 countries capitalize on insights from their data, streamline business processes, reduce costs and gain the competitive edge in their industries. More than 4,000 government and corporate entities in critical infrastructure areas such as financial services, telecommunications and healthcare rely on IBM’s hybrid cloud platform and Red Hat OpenShift to effect their digital transformations quickly, efficiently and securely. IBM’s breakthrough innovations in AI, quantum computing, industry-specific cloud solutions and consulting deliver open and flexible options to our clients. All of this is backed by IBM’s long-standing commitment to trust, transparency, responsibility, inclusivity and service.
Visit www.ibm.com for more information.
Media Contact:
IBM
Carolyn [email protected]
Logo – https://mma.prnewswire.com/media/2319830/4696227/IBM_LOGO_1.jpg

View original content:https://www.prnewswire.co.uk/news-releases/ibm-introduces-new-microsoft-copilot-capabilities-to-fuel-ai-powered-business-transformation-302141159.html



Inceptio-Powered Autonomous Trucks Surpass 100 Million Kilometers in Safe Commercial Operations


Major Milestone Advances Autonomous Heavy-Duty Truck Commercialization
SHANGHAI, May 9, 2024 /PRNewswire/ — Inceptio Technology (“Inceptio,” or the “Company”), an industry leading developer of autonomous driving technologies for heavy-duty trucks, today announced that heavy-duty trucks powered by the Inceptio Autonomous Driving System and its Truck Navigate-on-Autopilot (T-NOA) capabilities have surpassed the significant milestone of 100 million kilometers in safe commercial operations, reinforcing Inceptio’s global leadership in the commercialization of autonomous trucks.

This achievement underscores how L3 and L2+ autonomous heavy-duty trucks have been successfully deployed across the line-haul logistics sector, including express delivery, less-than-truckload (LTL) transportation, and contract logistics. It also reflects the significant value that autonomous trucks offer logistics operators.
Accelerating the Adoption of Autonomous Trucks Across the Line-Haul Logistics Sector
Inceptio-powered trucks surpassed 50 million kilometers of safe commercial operations in August 2023. Building on this success, Inceptio rapidly expanded the number of compatible truck models and surpassed the 100-million-kilometer mark by the end of April 2024.
Inceptio’s Autonomous Driving System covers 83% of China’s national highways connecting 7 major economic zones. Over the course of the 100 million kilometers, a total of 1,864 drivers safely used Inceptio-powered L3 and L2+ trucks in their daily operations.
Current customers include all the top logistics companies in China, such as ZTO Express (NYSE: ZTO and HKEX: 2057), YTO Express (HKEX: 6123), STO Express (SZSE: 002468), JD Logistics (HKEX: 2618), and SF Express (SZSE: 002352). Inceptio has also established an extensive footprint across the contract logistics segment, including cold chain, automotive, beverages, and fast-moving consumer goods, serving global brands like Budweiser and Nestlé. Inceptio’s autonomous driving technology caters to a diverse user base, from large logistics companies to small fleets and individual operators.
Inceptio has partnered with several leading Chinese truck manufacturers to pre-load mass produced trucks with the Inceptio Autonomous Driving System. These partnerships have expanded the number of trucks Inceptio powers and include popular models from Dongfeng, Sinotruk, Foton and Liuqi that are available in both 4×2 and 6×4 axle configurations to meet the diverse needs of the line-haul logistics sector.
Paving the Way for Greater Commercialization with Safer, More Efficient, and Profitable Operations
Over the course of these 100 million kilometers, Inceptio has demonstrated how its autonomous driving technology and T-NOA capabilities are paving the way for greater commercial deployment across the line-haul logistics sector with safer, more efficient, and more profitable operations.
The majority of the routes large express delivery companies in China use exceed 500 kilometers in length. Two drivers are commonly assigned to each traditional truck on these routes and take shifts driving in order to minimize fatigue and ensure safety when meeting tight shipping schedules. Inceptio’s solution makes driving much less physically and mentally exhausting as it handles more than 90% of the journey. Express delivery companies have been able to significantly reduce the number of drivers per truck and labor costs on these same routes as a result. On routes ranging from 500 to 1,200 kilometers, Inceptio has realized a direct shift from two drivers per truck to one, resulting in a significant 40% to 50% reduction in labor costs. On routes that exceed 1,200 kilometers where an autonomous truck relay model has been deployed, a traditional assignment of 6-8 drivers per three trucks has been reduced to 5. Likewise, a traditional assignment of 8-10 drivers per 4 trucks has been reduced to 6, resulting in a substantial decrease in labor costs and improved driver satisfaction.
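The labor-cost percentages quoted above follow from the driver-count changes, assuming cost scales linearly with headcount (a simplification, since per-driver pay may differ across route types); a quick arithmetic check:

```python
def labor_cost_reduction(drivers_before, drivers_after):
    """Percent reduction in driver labor cost, assuming cost is
    proportional to driver headcount (a simplifying assumption)."""
    return round(100 * (drivers_before - drivers_after) / drivers_before, 1)

# Two drivers per truck reduced to one (500-1,200 km routes):
print(labor_cost_reduction(2, 1))  # 50.0 (the release cites 40-50%)
# Relay model, 6-8 drivers per three trucks reduced to 5:
print(labor_cost_reduction(6, 5))  # 16.7
print(labor_cost_reduction(8, 5))  # 37.5
```

The relay-model savings thus span roughly 17-38% depending on the starting headcount, consistent with the "substantial decrease" the release describes.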
The benefits are equally strong for contract logistics companies, both large and small. Huatai Logistics, for example, a contract logistics company specializing in automotive parts transport on routes that average 1,500 kilometers, has seen its driver-to-truck ratio decrease from two to one by using Inceptio-powered trucks. Combined with a reduction of 3-5 liters in fuel consumption per 100 kilometers, total cost of ownership per kilometer decreased by 7-15%. The stellar safety record and enhanced driving comfort offered by autonomous trucks improved fleet-attendance rates significantly and increased monthly kilometers per truck by as much as 10%.
Some individual operators have also seen increases of 10-20% in monthly kilometers per truck and 2,500-5,500 RMB in monthly net income due to the fundamental improvement of safety and driving comfort offered by Inceptio-powered autonomous trucks. The fuel-saving benefits of autonomous trucks are particularly attractive for individual operators.
Leveraging Data Assets to Enhance Inceptio’s Autonomous Driving Technology
Inceptio leverages its powerful, data-driven R&D system to rapidly iterate and enhance its autonomous driving technology. This system, which incorporates accurate and efficient data capturing, automated cloud processing, advanced scenario mining, and automatic annotation, allows Inceptio to continuously refine its industry-leading T-NOA algorithm in real-time. This focus on real-world data is a key driver of Inceptio’s competitive edge in the autonomous driving technology landscape.
Julian Ma, founder and CEO of Inceptio Technology, commented, “Inceptio’s autonomous driving technology and its T-NOA capabilities are making significant progress in their commercialization, allowing us to rapidly surpass the 100-million-kilometer milestone after hitting 50 million kilometers only eight months ago. The impact our technology is having on the logistics industry is profound. The commercial deployment of Inceptio-powered autonomous trucks across the line-haul logistics sector is exciting, but what’s truly inspiring is the creativity and innovation our customers bring to the table. This user-driven approach is pushing the boundaries of how these autonomous trucks are used, opening up new ways to deploy our technology. The more data we gather, the faster we will be able to enhance our algorithms and improve our full-stack solution. We will continue working closely with our truck OEM partners to offer even greater safety, efficiency, and profitability to logistics customers.”
About Inceptio Technology
Inceptio Technology is an industry leading developer of autonomous driving technologies for heavy-duty trucks. Its flagship technology is the Inceptio Autonomous Driving System, a proprietary full-stack solution. Inceptio partnered with leading OEMs to roll out the industry’s first mass-produced L3 autonomous trucks in late 2021. These trucks are operated nationwide in China by customers across the line-haul logistics sector including express delivery, less-than-truckload (LTL) transportation, and contract logistics. Inceptio is at the cutting edge of developing fully driverless trucks. In 2022 it became the first company in China to receive a public road-testing permit for driverless autonomous heavy-duty trucks.
For more information on Inceptio Technology, visit https://en.inceptio.ai/ 

View original content:https://www.prnewswire.co.uk/news-releases/inceptio-powered-autonomous-trucks-surpass-100-million-kilometers-in-safe-commercial-operations-302141157.html

