IDTechEx Reports on The Age of Artificial Intelligence: AI Chips to 2034

The Age of Artificial Intelligence: AI Chips to 2034
Artificial Intelligence is transforming the world as we know it. From the victory of DeepMind's AlphaGo over Go world champion Lee Sedol in 2016 to the robust predictive abilities of OpenAI's ChatGPT, the complexity of AI training algorithms is growing at a startling pace: the amount of compute necessary to run newly developed training algorithms appears to be doubling roughly every four months. In order to keep pace with this growth, hardware for AI applications is needed that is not just scalable – allowing for longevity as new algorithms are introduced while keeping operational overheads low – but is also able to handle increasingly complex models at a point close to the end-user.
Drawing from the “AI Chips: 2023–2033” and “AI Chips for Edge Applications 2024–2034: Artificial Intelligence at the Edge” reports, IDTechEx predicts that the growth of AI, both for training and inference within the cloud and inference at the edge, is due to continue unabated over the next ten years, as our world and the devices that inhabit it become increasingly automated and interconnected.
The why and what of AI chips
The notion of designing hardware to fulfill a certain function, particularly if that function is to accelerate certain types of computations by taking control of them away from the main (host) processor, is not a new one; the early days of computing saw CPUs (Central Processing Units) paired with mathematical coprocessors, known as Floating-Point Units (FPUs). The purpose was to offload complex floating point mathematical operations from the CPU to this special-purpose chip, as the latter could handle computations more efficiently, thereby freeing the CPU up to focus on other things.
As markets and technology developed, so too did workloads, and so new pieces of hardware were needed to handle these workloads. A particularly noteworthy example of one of these specialized workloads is the production of computer graphics, where the accelerator in question has become something of a household name: the Graphics Processing Unit (GPU).
Just as computer graphics created the need for a different type of chip architecture, the emergence of machine learning has brought about a demand for another type of accelerator, one that is capable of efficiently handling machine learning workloads. Machine learning is the process by which computer programs utilize data to make predictions based on a model and then optimize the model to better fit the data provided, by adjusting the weightings used. Computation, therefore, involves two steps: training and inference.
The first stage of implementing an AI algorithm is the training stage, where data is fed into the model, and the model adjusts its weights until it fits appropriately with the provided data. The second stage is the inference stage, where the trained AI algorithm is executed, and new data (not provided in the training stage) is classified in a manner consistent with the acquired data.
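The two stages described above can be sketched in a few lines of code. The following toy example (all data and hyperparameters are invented for illustration) trains a one-weight linear model by gradient descent and then runs inference on inputs not seen during training:

```python
import random

random.seed(0)
# Hypothetical training data: targets follow y ≈ 3x plus a little noise.
data = [(x, 3.0 * x + random.gauss(0, 0.1))
        for x in (random.uniform(-1, 1) for _ in range(100))]

# Training stage: gradient descent repeatedly adjusts the weight w
# until the model fits the provided data.
w, lr = 0.0, 0.1
for _ in range(200):
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

# Inference stage: the trained weight is applied to new, unseen inputs.
predictions = [w * x for x in (0.5, -0.25)]
```

Real models have billions of weights rather than one, but the structure is the same: an expensive, iterative training loop followed by a comparatively cheap forward pass at inference time.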
Of the two stages, training is the more computationally intense, given that it involves performing the same computation millions of times (training for some leading AI algorithms can take days to complete). As such, training takes place within cloud computing environments (i.e. data centers), where large numbers of chips are used that can perform the type of parallel processing required for efficient algorithm training. CPUs process tasks in a serialized manner, where one execution thread starts once the previous execution thread has finished; in order to minimize latency, large and numerous memory caches are utilized so that most of an execution thread's running time is dedicated to processing. Parallel processing, by comparison, involves multiple calculations occurring simultaneously, with lightweight execution threads overlapped such that latency is effectively masked. Being able to compartmentalize and carry out multiple calculations simultaneously is a major benefit for training AI algorithms, as well as in many instances of inference. By contrast, the inference stage can take place within both cloud and edge computing environments. The aforementioned reports detail the differences between CPU, GPU, Field Programmable Gate Array (FPGA), and Application-Specific Integrated Circuit (ASIC) architectures, and their relative effectiveness in handling machine learning workloads.
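The parallelism described here comes from the fact that each training sample's contribution to a weight update can be computed independently of the others. A minimal sketch of that decomposition follows (illustrative only: Python threads will not actually speed up CPU-bound arithmetic the way a GPU's thousands of hardware threads do, but the work is split up the same way, and the data and weight are invented for the example):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical samples on a perfect line y = 3x, and a current weight guess.
data = [(x / 10.0, 3.0 * (x / 10.0)) for x in range(-10, 11)]
w = 1.0

def partial_grad(sample):
    # One sample's gradient term; it depends on no other sample,
    # so all 21 of these can be evaluated at the same time.
    x, y = sample
    return 2 * (w * x - y) * x

# Farm the independent terms out to a pool, then reduce to one gradient.
with ThreadPoolExecutor(max_workers=4) as pool:
    grad = sum(pool.map(partial_grad, data)) / len(data)
```

An AI accelerator applies exactly this map-then-reduce pattern, but across millions of samples and weights per step, which is why serial CPU execution falls behind.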
Within the cloud computing environment, GPUs currently dominate and are predicted to continue to do so over the next ten-year period, given Nvidia’s dominance in the AI training space. For AI at the edge, ASICs are preferred, given that chips are more commonly designed with specific problems in mind (such as for object detection within security camera systems, for example). As the below graph shows, Digital Signal Processors (DSPs) also account for a significant share of AI coprocessing at the edge, though it should be noted that this large figure is primarily due to Qualcomm’s Hexagon Tensor Processor (which is found in their modern Snapdragon products) being a DSP. Should Qualcomm redesign the HTP such that it strays from being a DSP, then the forecast would heavily skew in favour of ASICs.
AI as a driver for semiconductor manufacture
Chips for AI training are typically manufactured at the most leading-edge nodes (where nodes refer to the transistor technology used in semiconductor chip manufacture), given how computationally intensive the training stage of implementing an AI algorithm is. Intel, Samsung, and TSMC are the only companies that can produce 5 nm node chips. Out of these, TSMC is the furthest along with securing orders for 3 nm chips. TSMC has a global market share for semiconductor production that is currently hovering at around 60%. For the more advanced nodes, this is closer to 90%. Of TSMC’s six 12-inch fabs and six 8-inch fabs, only two are in China, and one is in the USA. The rest are in Taiwan. The semiconductor manufacture part of the global supply chain is therefore heavily concentrated in the APAC region, principally Taiwan.
Such a concentration comes with a great deal of risk should this part of the supply chain be threatened in some way. This is precisely what occurred in 2020, when a number of compounding factors (discussed further in the “AI Chips: 2023–2033” report) led to a global chip shortage. Since then, the largest stakeholders (excluding Taiwan) in the semiconductor value chain (the US, the EU, South Korea, Japan, and China) have sought to reduce their exposure to a manufacturing deficit, should another set of circumstances arise that results in an even more exacerbated chip shortage. This is shown by the government funding announced by these major stakeholders in the wake of the global chip shortage, represented below.
These government initiatives aim to spur additional private investment through the lure of tax breaks and part-funding in the way of grants and loans. While many of the private investments displayed pictorially below were made prior to the announcement of such government initiatives, other additional and/or new private investments have been announced in the wake of them, spurred on as they are by the incentives offered through these initiatives.
A major reason for these government initiatives and additional private spending is the potential of realizing advanced technology, of which AI is a prime example. The manufacture of advanced semiconductor chips fuels national and regional AI capabilities, where the possibility of autonomous detection and analysis of objects, images, and speech is so significant to the efficacy of certain products (such as autonomous vehicles and industrial robots) and to models of national governance and security that the development of AI hardware and software has now become a primary concern for government bodies that wish to be at the forefront of technological innovation and deployment.
Growth of AI chips over the next decade
Revenue generated from the sale of AI chips (including the sale of physical chips and the rental of chips via cloud services) is expected to rise to just shy of US$300 billion by 2034, at a compound annual growth rate of 22% from 2024 to 2034. This revenue figure incorporates the use of chips for the acceleration of machine learning workloads at the edge of the network, for telecom edge, and within data centers in the cloud. As of 2024, chips for inference purposes (both at the edge and within the cloud) comprise 63% of revenue generated, with this share growing to more than two-thirds of the total revenue by 2034.
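As a quick sanity check on these figures, the standard compound-growth formula lets one back out the 2024 baseline implied by a US$300 billion 2034 figure at a 22% CAGR (the baseline is a derived number, not one stated in the article):

```python
# Compound annual growth: future = base * (1 + rate) ** years
rate, years, future = 0.22, 10, 300e9  # figures quoted in the article

# Solve for the implied starting value in 2024.
base = future / (1 + rate) ** years
print(f"Implied 2024 AI chip revenue: ${base / 1e9:.1f} billion")
```

This works out to roughly US$41 billion in 2024, consistent with a market growing about sevenfold over the forecast decade.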
This is in large part due to significant growth at the edge and telecom edge, as AI capabilities are harnessed closer to the end-user. In terms of industry vertical, IT & Telecoms is expected to lead the way for AI chip usage over the next decade, with Banking, Financial Services & Insurance (BFSI) close behind, and Consumer Electronics after that. Of these, the Consumer Electronics vertical is expected to generate the most revenue at the edge, given the further rollout of AI into consumer products for the home. More information regarding the industry vertical breakout can be found in the relevant AI reports.
The “AI Chips for Edge Applications 2024–2034: Artificial Intelligence at the Edge” report gives analysis pertaining to the key drivers for revenue growth in edge AI chips over the forecast period, with deployment within the key industry verticals – consumer electronics, industrial automation, and automotive – reviewed. More generally, the report covers the global edge AI Chips market across six industry verticals, with 10-year granular forecasts in six different categories (such as by geography, by chip architecture, and by application). IDTechEx also offers expert-led data and analysis on these and other related topics through a market intelligence subscription.
This article is from “Technology Innovations Outlook 2024-2034”, a complimentary magazine of analyst-written articles by IDTechEx providing insights into a number of areas of technology innovation, assessing the landscape now and giving you the outlook for the next decade.
 
The post IDTechEx Reports on The Age of Artificial Intelligence: AI Chips to 2034 appeared first on HIPTHER Alerts.

Huawei Launches a Wide Range of Net5.5G Solutions to Inspire New Growth of Carriers

During MWC Barcelona 2024, Leon Wang, President of Huawei's Data Communication Product Line, launched the company's Net5.5G-oriented products and solutions. He noted that new ultra-broadband applications such as immersive terminals and AIGC have brought new opportunities to carriers, and that networks need to continuously evolve towards the future. In 2023, Huawei and industry partners defined the Net5.5G target network architecture and key capabilities such as ubiquitous 10 Gbps access, 400GE converged transport, hyper-converged data centers, and network autonomy and self-optimization. At this event, under the theme “Bringing Net5.5G into Reality, Inspire New Growth”, Huawei launched four solutions tailored to Net5.5G, covering mobile backhaul, enterprise campus, converged transport, and data center networks, to fully support carriers' business success.
Net5.5G Mobile Backhaul: Ultra-broadband is the foundation of ultimate experience. The access ring of mobile backhaul needs to be upgraded to 100GE to meet the traffic requirements of ultra-broadband applications. In addition, experience optimization is required to provide a better mobile experience and improve ARPU. Huawei's base station routers provide comprehensive 25/50/100/400GE access capabilities to achieve network construction with optimal TCO. This solution is compatible with 4G, 5G, and 5G-A transport, protecting investment. Leveraging Network Digital Map, the solution provides industry-leading application-level experience visualization and optimization, achieving zero congestion for VIP services and accelerating user experience monetization.
Net5.5G High-Quality 10 Gbps Campus: Carriers are expanding their scope of services from traditional private lines to enterprise campus networks. This opens up a new blue ocean for digital managed network services, enriches B2B product portfolios, and increases revenue. Huawei has launched the world's first all-scenario Wi-Fi 7 series products, helping carriers improve their competitive edge and avoid homogenized competition through high-density access, video experience assurance, and intelligent O&M with Network Digital Map, accelerating their transformation from ISPs to MSPs.
Net5.5G Converged Transport: By providing more services on one network, carriers can realize new value from private lines and private networks. Huawei provides high-density 400GE covering metro and core to carry numerous 10 Gbps connections. Tenant-level slicing private lines, low-latency path selection, and application-level visualization are used to implement network servitization, accelerate network resource monetization, and increase revenue.
Net5.5G Hyper-Converged DCN: The computing power leasing service can help carriers achieve new growth in addition to IDC equipment room and private cloud services. Huawei uses a high-density 800GE data center network (DCN) to provide efficient computing power and plug-and-play deployment, significantly reducing deployment time. The industry-leading NSLB algorithm increases network throughput from 50% to 98% and improves training efficiency by 20%.
Leon Wang wrapped up the launch with the following parting words: “Huawei will work with industry partners to promote network evolution from conceptualization to deployment, bring Net5.5G into reality, and inspire new growth for carriers.”
MWC Barcelona 2024 will be held from February 26 to February 29 in Barcelona, Spain. During the event, Huawei will showcase its latest products and solutions at stand 1H50 in Fira Gran Via Hall 1.
With the 2024 commercial launch of 5.5G, Huawei is collaborating with operators and partners around the world to pursue exciting new innovation in networks, cloud, and intelligence. Together, we will drive 5G business and foster a thriving industry ecosystem, creating a new era for intelligent digital transformation. For more information, please visit: https://carrier.huawei.com/en/events/mwc2024.

Exclusive Interview: ATFX Chairman Joe Li with CNBC Arabia

ATFX Chairman Joe Li was recently interviewed by CNBC Arabia, one of the leading business and financial news programs in the Middle East, to discuss the brand's local development. Throughout the interview, the host engaged in a comprehensive and professional dialogue that covered the brand's local development progress, service philosophy, and market regulations.
ATFX Chairman Joe Li stated, “ATFX is dedicated to creating a platform that is convenient for investors, allowing them to trade quickly, safely, and efficiently. We prioritize the specific needs of our local clients by providing localized services. Our AT Premier service, launched in the Middle East, is continuously updated to provide high-quality investor education.”
The interview mainly focused on the brand's initiatives and measures in the local market. ATFX was ranked among the top four globally in MT4 trading volume in 2023, demonstrating its advanced financial technology tools, premium services, and extensive investor education content. These achievements continue to enhance ATFX's worldwide impact, attracting the interest of international media outlets.
The interview with CNBC Arabia marks a significant milestone in ATFX's journey towards greater recognition this year. ATFX anticipates receiving further acknowledgement and media coverage as it continues to demonstrate unwavering dedication to excellence in service.

Smart Collaborates with 6D Technologies to Enhance Sales and Distribution Capabilities

6D Technologies, an innovative telecom solutions provider, has announced a partnership with Smart Communications, Inc. (Smart), the wireless unit of PLDT, the Philippines' largest fully integrated telco network. The collaboration aims to revolutionize Smart's sales and distribution operations, offering comprehensive solutions to meet the evolving needs of the industry.
Through this collaboration, 6D Technologies and Smart will streamline key sales and distribution processes, automate inventory management, optimize resource utilization, and enhance overall dealer operational efficiency. By leveraging 6D Technologies’ Ventas, an artificial intelligence (AI)-driven Sales and Distribution Management platform, Smart aims to elevate its operational excellence further and reinforce its leadership position in the telecommunications sector, driving growth and delivering superior services to its customers.
“As part of our transformation journey and our quest to reinforce our leadership, we continue to create collaborations to further lift our operational excellence, making sure that these partnerships will naturally progress and translate to much improved user experience for the customers of Smart,” said John Y. Palanca, First Vice President and Head of Sales and Distribution at Smart.
6D Technologies’ Ventas Dealer Management System (DMS) is designed to boost operational efficiencies and elevate the overall customer journey for the distribution partners of Smart. With this new system, the user experience of Smart’s distribution partners will be significantly transformed, giving them tools to operate more efficiently.
“We will continue to push to enhance our operational efficiencies to ensure that we are able to deliver the highest level of quality of service, particularly to our distribution partners as our customers, who are the North Star of everything that we do,” said Jac N. Bocalan, Vice President and Head of Customer Development Strategies and Support at Smart.
Manish Arora, Executive Director & Chief Revenue Officer of 6D Technologies, expressed his enthusiasm about the collaboration, stating, “We are thrilled to partner with Smart to empower their sales and distribution networks with our innovative DMS solution. By combining our expertise with Smart’s extensive reach and market presence, we aim to redefine industry standards and deliver unparalleled value to customers.”
