

IDTechEx: The Age of Artificial Intelligence – AI Chips to 2034


BOSTON, Jan. 29, 2024 /PRNewswire/ — Artificial intelligence is transforming the world as we know it. From the victory of DeepMind's AlphaGo over Go world champion Lee Sedol in 2016 to the robust predictive abilities of OpenAI’s ChatGPT, the complexity of AI training algorithms is growing at a startling pace: the amount of compute needed to run newly developed training algorithms appears to double roughly every four months. To keep pace with this growth, AI applications need hardware that is not just scalable – allowing for longevity as new algorithms are introduced while keeping operational overheads low – but also able to handle increasingly complex models close to the end user.

 
Drawing from the “AI Chips: 2023–2033” and “AI Chips for Edge Applications 2024–2034: Artificial Intelligence at the Edge” reports, IDTechEx predicts that the growth of AI – both training and inference within the cloud and inference at the edge – will continue unabated over the next ten years, as our world and the devices that inhabit it become increasingly automated and interconnected.
The why and what of AI chips
The notion of designing hardware to fulfill a certain function, particularly if that function is to accelerate certain types of computations by taking control of them away from the main (host) processor, is not a new one; the early days of computing saw CPUs (Central Processing Units) paired with mathematical coprocessors, known as Floating-Point Units (FPUs). The purpose was to offload complex floating point mathematical operations from the CPU to this special-purpose chip, as the latter could handle computations more efficiently, thereby freeing the CPU up to focus on other things.
As markets and technology developed, so too did workloads, and so new pieces of hardware were needed to handle these workloads. A particularly noteworthy example of one of these specialized workloads is the production of computer graphics, where the accelerator in question has become something of a household name: the Graphics Processing Unit (GPU).
Just as computer graphics required a different type of chip architecture, the emergence of machine learning has brought about demand for another type of accelerator, one capable of efficiently handling machine learning workloads. Machine learning is the process by which computer programs use data to make predictions based on a model and then optimize that model to better fit the data by adjusting the weights used. Computation therefore involves two stages: training and inference.
The first stage of implementing an AI algorithm is the training stage, where data is fed into the model and the model adjusts its weights until it fits the provided data appropriately. The second stage is the inference stage, where the trained AI algorithm is executed and new data (not seen during training) is classified in a manner consistent with what the model has learned.
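To make the two stages concrete, the short sketch below trains a one-parameter linear model by gradient descent and then runs inference on new inputs. It is a minimal illustration of the weight-adjustment loop described above, not code from the IDTechEx reports; the synthetic data, learning rate, and iteration count are invented for the example.

```python
import numpy as np

# --- Training stage: fit y ≈ w * x by adjusting the weight w ---
rng = np.random.default_rng(0)
x_train = rng.uniform(0, 10, size=100)
y_train = 3.0 * x_train + rng.normal(0, 0.5, size=100)  # synthetic data, true slope = 3

w = 0.0       # initial weight
lr = 0.01     # learning rate
for _ in range(500):
    y_pred = w * x_train
    grad = 2 * np.mean((y_pred - y_train) * x_train)  # d(MSE)/dw
    w -= lr * grad                                    # adjust the weight to better fit the data

# --- Inference stage: apply the trained weight to new, unseen inputs ---
x_new = np.array([2.0, 7.5])
print("learned weight:", round(w, 3))
print("predictions:", w * x_new)
```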
Of the two stages, training is the more computationally intense, since it involves performing the same computation millions of times (training for some leading AI algorithms can take days to complete). Training therefore takes place within cloud computing environments (i.e. data centers), using large numbers of chips capable of the parallel processing required for efficient algorithm training. CPUs process tasks in a serialized manner, where one execution thread starts once the previous thread has finished; to minimize latency, large and numerous memory caches are used so that most of a thread’s running time is spent on processing. Parallel processing, by comparison, involves multiple calculations occurring simultaneously, with lightweight execution threads overlapped so that latency is effectively masked. Being able to compartmentalize and carry out many calculations at once is a major benefit for training AI algorithms, as well as for many instances of inference. The inference stage, by contrast, can take place within both cloud and edge computing environments. The aforementioned reports detail the differences between CPU, GPU, Field Programmable Gate Array (FPGA), and Application-Specific Integrated Circuit (ASIC) architectures, and their relative effectiveness in handling machine learning workloads.
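The serialized-versus-parallel distinction can be illustrated in a few lines: the same batch of matrix products is computed once in an explicit Python loop (one result after another, as a single execution thread would) and once as a single batched NumPy call that expresses the whole workload as one parallelizable operation. This is an illustrative sketch only, with arbitrary sizes; real training frameworks dispatch such batched operations to GPUs or other accelerators.

```python
import time
import numpy as np

batch, dim = 256, 128
a = np.random.rand(batch, dim, dim)
b = np.random.rand(batch, dim, dim)

# Serialized: one matrix product at a time, like a single execution thread
t0 = time.perf_counter()
serial = [a[i] @ b[i] for i in range(batch)]
t_serial = time.perf_counter() - t0

# Batched: the whole workload expressed as one vectorized, parallelizable operation
t0 = time.perf_counter()
batched = a @ b  # NumPy broadcasts the matrix product across the batch dimension
t_batched = time.perf_counter() - t0

assert np.allclose(serial, batched)
print(f"serialized loop: {t_serial:.3f} s, batched call: {t_batched:.3f} s")
```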
Within the cloud computing environment, GPUs currently dominate and are predicted to continue to do so over the next ten-year period, given Nvidia’s dominance in the AI training space. For AI at the edge, ASICs are preferred, given that chips are more commonly designed with specific problems in mind (such as for object detection within security camera systems, for example). As the below graph shows, Digital Signal Processors (DSPs) also account for a significant share of AI coprocessing at the edge, though it should be noted that this large figure is primarily due to Qualcomm’s Hexagon Tensor Processor (which is found in their modern Snapdragon products) being a DSP. Should Qualcomm redesign the HTP such that it strays from being a DSP, then the forecast would heavily skew in favour of ASICs.
AI as a driver for semiconductor manufacture
Chips for AI training are typically manufactured at the most leading-edge nodes (where nodes refer to the transistor technology used in semiconductor chip manufacture), given how computationally intensive the training stage of implementing an AI algorithm is. Intel, Samsung, and TSMC are the only companies that can produce 5 nm node chips. Out of these, TSMC is the furthest along with securing orders for 3 nm chips. TSMC has a global market share for semiconductor production that is currently hovering at around 60%. For the more advanced nodes, this is closer to 90%. Of TSMC’s six 12-inch fabs and six 8-inch fabs, only two are in China, and one is in the USA. The rest are in Taiwan. The semiconductor manufacture part of the global supply chain is therefore heavily concentrated in the APAC region, principally Taiwan.
Such a concentration comes with a great deal of risk should this part of the supply chain be threatened in some way. This is precisely what occurred in 2020, when a number of contributing factors (discussed further in the “AI Chips: 2023–2033” report) led to a global chip shortage. Since then, the largest stakeholders (excluding Taiwan) in the semiconductor value chain (the US, the EU, South Korea, Japan, and China) have sought to reduce their exposure to a manufacturing deficit, should another set of circumstances arise that results in an even more severe chip shortage. This is shown by the government funding announced by these major stakeholders in the wake of the global chip shortage, represented below.
These government initiatives aim to spur additional private investment through the lure of tax breaks and part-funding in the form of grants and loans. While many of the private investments displayed pictorially below were made prior to the announcement of such government initiatives, further private investments have since been announced in their wake, spurred on by the incentives these initiatives offer.
A major reason for these government initiatives and the additional private spending is the potential for realizing advanced technology, of which AI is a prime example. The manufacture of advanced semiconductor chips fuels national and regional AI capabilities, where autonomous detection and analysis of objects, images, and speech are so significant to the efficacy of certain products (such as autonomous vehicles and industrial robots) and to models of national governance and security that the development of AI hardware and software has become a primary concern for government bodies that wish to be at the forefront of technological innovation and deployment.
Growth of AI chips over the next decade
Revenue generated from the sale of AI chips (including the sale of physical chips and the rental of chips via cloud services) is expected to rise to just shy of US$300 billion by 2034, at a compound annual growth rate of 22% from 2024 to 2034. This figure incorporates the use of chips for the acceleration of machine learning workloads at the edge of the network, at the telecom edge, and within data centers in the cloud. As of 2024, chips for inference purposes (both at the edge and within the cloud) account for 63% of the revenue generated, with this share growing to more than two-thirds of total revenue by 2034.
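As a sanity check on those headline figures, the compound-growth relation end = start × (1 + CAGR)^years can be applied directly; the snippet below backs out the implied 2024 base from the quoted ~US$300 billion endpoint and 22% CAGR. The roughly US$41 billion result is derived arithmetic, not a figure published by IDTechEx.

```python
# Compound annual growth: end = start * (1 + cagr) ** years
end_revenue_bn = 300   # ~US$300 billion by 2034 (quoted above)
cagr = 0.22            # 22% CAGR over 2024-2034 (quoted above)
years = 10

implied_2024_base_bn = end_revenue_bn / (1 + cagr) ** years
print(f"Implied 2024 AI chip revenue: ~US${implied_2024_base_bn:.0f} billion")  # ≈ US$41 billion (derived)
```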
This growing inference share is in large part due to significant growth at the edge and telecom edge, as AI capabilities are harnessed closer to the end user. In terms of industry verticals, IT & Telecoms is expected to lead the way for AI chip usage over the next decade, with Banking, Financial Services & Insurance (BFSI) close behind, followed by Consumer Electronics. Of these, the Consumer Electronics vertical is set to generate the most revenue at the edge, given the further rollout of AI into consumer products for the home. More information on the industry vertical breakout can be found in the relevant AI reports.
For more information regarding key trends and market segmentations with regard to AI chips over the next ten years, please refer to the two reports: “AI Chips: 2023–2033” and “AI Chips for Edge Applications 2024–2034: Artificial Intelligence at the Edge”.
The “AI Chips: 2023–2033” report covers the global AI Chips market across eight industry verticals, with 10-year granular forecasts in seven different categories (such as by geography, by chip architecture, and by application). In addition to the revenue forecasts for AI chips, costs at each stage of the supply chain (design, manufacture, assembly, test & packaging, and operation) are quantified for a leading-edge AI chip. Rigorous calculations are provided, along with a customizable template for customer use, and analyses of comparative costs between leading and trailing edge node chips.
The “AI Chips for Edge Applications 2024–2034: Artificial Intelligence at the Edge” report analyzes the key drivers of revenue growth for edge AI chips over the forecast period, reviewing deployment within the key industry verticals – consumer electronics, industrial automation, and automotive. More generally, the report covers the global edge AI chips market across six industry verticals, with 10-year granular forecasts in six different categories (such as by geography, by chip architecture, and by application).
IDTechEx also offers expert-led data and analysis on these and other related topics through a market intelligence subscription – find out more at www.IDTechEx.com/Subscriptions.
This article is from “Technology Innovations Outlook 2024-2034”, a complimentary magazine of analyst-written articles by IDTechEx providing insights into a number of areas of technology innovation, assessing the landscape now and giving you the outlook for the next decade. You can read the magazine in full at www.IDTechEx.com/Magazine.
About IDTechEx
IDTechEx guides your strategic business decisions through its Research, Subscription and Consultancy products, helping you profit from emerging technologies. For more information, contact [email protected] or visit www.IDTechEx.com.
Images download: https://www.dropbox.com/scl/fo/aas11v4lo6qarxovxnrgu/h?rlkey=akdfjvu3948yi86hua4oe3mwa&dl=0
Media Contact:
Lucy Rogers
Sales and Marketing
[email protected]
+44 (0)1223 812300
Social Media Links:
Twitter: www.twitter.com/IDTechEx
LinkedIn: www.linkedin.com/company/IDTechEx
Photo – https://mma.prnewswire.com/media/2327613/IDTechEx.jpg
Logo – https://mma.prnewswire.com/media/478371/IDTechEx_Logo.jpg
 
 

View original content: https://www.prnewswire.co.uk/news-releases/idtechex-the-age-of-artificial-intelligence—ai-chips-to-2034-302045633.html



ZTE set to bring brilliant 5G-A highlights to MWC 2024, unfolding the intelligent future


BARCELONA, Spain, Feb. 23, 2024 /PRNewswire/ — ZTE Corporation (0763.HK / 000063.SZ), a leading global provider of information and communication technology solutions, is proud to unveil its 5G-A highlights at Mobile World Congress 2024. This initiative aims to unlock the full potential of 5G technology and create infinite value.

Green Infrastructure Pursuing the Bit-Watt Perfect Curve
Industry’s first constant PA efficiency radio platform for optimal performance
The latest radio platform achieves near-constant PA efficiency, ensuring optimal performance at all times. This enables 25-30 percentage points higher PA efficiency and 35% lower RRU power consumption than the industry average.
0 bit 0 watt with industry-first hibernation
The industry’s first hibernation technology has been deployed at scale across over 300,000 AAUs, reducing AAU power consumption to as low as 5 watts when there is no traffic. The technology has also been applied to RRUs, helping achieve power consumption as low as 3 watts when there is no traffic. For small cells, hibernation enables 0 watts for pRRUs when there is no traffic, and a cell can be awakened in seconds when there is any attempt at service access.
5G-A B2C: Seamless 10Gbps+ Experiences
Uni-Radio enables site simplification with 10 times fewer radios
The Uni-Radio (A+P) achieves extremely simplified integration of seven bands into one: with a dual-band AAU, the industry’s unique 5-band UBR, and an FDD full-band antenna, the number of radio units is reduced from 10 to 1.
Industry-unique 12TR RRU goes extremely green, enabling superior energy efficiency
The industry-unique dual-band 12TR RRU evolves on a new platform, achieving near-constant PA efficiency and enabling 35% lower RRU power consumption than the industry average. With the industry’s first RF pooling, the 12TR RRU enables inter-sector output power sharing: for any given sector, 4×120 W can be used as 4×160 W, achieving higher power efficiency.
Industry’s highest-integration FDD Massive MIMO pushes the 4G capacity envelope even further
The tri-band FDD Massive MIMO supports all RATs from 2G to 5G, plus NB-IoT, with an impressive 640 W output power in only 115 L, achieving up to 6 times throughput improvement. Additionally, AI beamforming enhances FDD Massive MIMO performance by 30%, while the industry-unique beam-level DSS optimizes spectrum utilization, achieving an additional 10% data rate gain.
Industry’s largest-bandwidth mmWave AAU empowers the 5G-A 10Gbps+ experience
The industry’s largest-bandwidth (1.6 GHz) mmWave AAU enables a single cell to achieve 28Gbps+ capacity, empowering 10Gbps+ experiences. It serves diverse scenarios, including fixed wireless access (FWA), large-bandwidth mobile backhaul, and UHD live broadcasting, among others.
5G-A B2B: Network-Cloud Integration, Service Commissioning in One Hour
UniEngine: All-in-one-box private 5G solution for core production
UniEngine realizes all-in-one integration of 5G core, 5G RAN, and simplified O&M, while providing powerful computing for effortless deployment of third-party applications. Enhanced deterministic capabilities in the RAN offer much-improved service guarantees, and the simplified O&M helps even less experienced enterprises roll out their private 5G networks while enabling end-to-end service performance monitoring.
Indoor 5G-A CampSite creates new possibilities for TV broadcasting and VR entertainment
The suitcase-sized CampSite can be deployed by one person. It integrates communication and computing, enabling 5G-A network setup within one hour. With the inclusion of MiCell, the industry’s first mmWave distributed micro-cell and IF pooling solution, it achieves 2,500 m² coverage, providing up to 6Gbps downlink or 4Gbps uplink, with ultra-low latency of [email protected]% and high reliability. CampSite facilitates new use cases such as 5G-powered TV broadcasting and wireless, large-scale, backpack-free VR entertainment, among others.
5G-A B2X: Boosting the New Economy
Ultra-reliable 5G-A network and edge-cloud integration realize cost-efficient V2X for smart transportation
ZTE deployed the first 5G-A based vehicle-road-cloud integrated V2X network, empowering autonomous driving and enhancing traffic efficiency. The ultra-reliable network and the edge cloud at the base station achieve [email protected]% latency, enabling L4 autonomous driving for L2 vehicles at speeds of up to 60 km/h.
5G-A NTN realizes universal connectivity through space-ground integration
ZTE has developed the industry’s first NTN ground base station and completed the world’s first 5G IoT-NTN direct-to-cell trial over the S-band, as well as the industry’s first maritime trial and NR-NTN lab and field trials. These efforts aim to provide capabilities for emergency communication, wide-area IoT, and internet services.
Ubiquitous Intelligence: Achieving All-Round High Efficiency
5G-A BBU: the integration of communication and computation for rich AI applications
The 5G-A BBU achieves seamless integration of communication and computation capabilities. Leveraging its computation capability, base-station-native AI enhances RAN intelligence, enabling highly efficient service guarantees to boost traffic, more precise power-saving strategies, and L4 autonomous network O&M. By opening the computation resources to third parties, cost-efficient app deployment at the base station becomes possible, enriching the range of new applications.
AIGC-empowered uSmartNet commercially used during a grand sports event in Asia
Based on AIGC and digital twin technology, ZTE’s uSmartNet achieved the industry’s first smart games assurance during the event. This accomplishment resulted in zero major issues and zero major customer complaints, improving operational efficiency by 15% and reducing workload by 30%.
ABOUT ZTE:
ZTE helps to connect the world with continuous innovation for a better future. The company provides innovative technologies and integrated solutions, and its portfolio spans all series of wireless, wireline, devices and professional telecommunications services. Serving over a quarter of the global population, ZTE is dedicated to creating a digital and intelligent ecosystem, and enabling connectivity and trust everywhere. ZTE is listed on both the Hong Kong and Shenzhen Stock Exchanges. www.zte.com.cn/global
FOLLOW US:
Facebook: www.facebook.com/ZTECorp
Twitter: www.twitter.com/ZTEPress
LinkedIn: www.linkedin.com/company/zte
YouTube: www.youtube.com/@ZTECorporation
MEDIA INQUIRIES:
ZTE Corporation
Communications
Email: [email protected]

View original content: https://www.prnewswire.co.uk/news-releases/zte-set-to-bring-brilliant-5g-a-highlights-to-mwc-2024-unfolding-the-intelligent-future-302069969.html



MIAMI INVESTOR SUMMIT KICKS OFF TODAY WITH GLOBAL LEADERS DISCUSSING HUMANITY’S BIGGEST CHALLENGES


MIAMI, Feb. 23, 2024 /PRNewswire/ — Leading figures from finance, government, business, tech, and innovation are convening to deliberate on a range of pressing economic and public policy issues at the FII PRIORITY Miami summit, a major global investor gathering held today and tomorrow at Miami’s Faena Forum.

The summit brings together investors from around the world with CEOs of major global corporations and policy experts to seek workable, impactful solutions to issues such as the regulation of AI. This year’s theme, ‘On the Edge of a New Frontier,’ reflects the profound changes the world is experiencing as technology transforms society and economies.
Richard Attias, CEO of FII Institute, said, “Today’s conversations reflected a sense of urgency as the world grapples with what to do about AI, political uncertainty, and changes in the shape of the global economy. And we bring these conversations to Miami—the new America—so we can chart a steadfast course towards a prosperous future, fully equipped with the insights and knowledge we need, harnessing the power of collective work and thought.”
In his opening conversation, H.E. Yasir Al-Rumayyan, Public Investment Fund Governor and Future Investment Initiative (FII) Institute Chairman, underscored the role of AI and its key contribution to future economies, saying, “AI is coming in a big way! The impact will be very positive as soon as we have regulations in place to monitor, control, and enable this technology. AI could increase global GDP by 14%.”
Michael Dell, Chairman and CEO of Dell Technologies, addressed the issue of AI regulation, and how CEOs can innovate through periods of extreme volatility and pivot towards future trends, adding “regulations will struggle to match the rapid pace of technological change and may be outdated within a year. We have to ensure that emerging technologies reflect humanity and our system of values and beliefs.”
The summit’s much-anticipated roundtable of global CEOs, entitled the “Board of Changemakers,” discussed trends and the economic outlook. Participating in this year’s roundtable were Pamela Liebman, President and CEO of The Corcoran Group; Pierre Beaudoin, Chairman of the Board of Bombardier Inc; Jenny Johnson, President and CEO of Franklin Templeton; Marcelo Claure, Founder and CEO of Claure Group; and Stephen A. Schwarzman, Co-Founder, Chairman & CEO of The Blackstone Group. The session underscored the impact of digitalization, particularly blockchain and AI, on future investments, advocating for regulation and ethical usage. The members of the board also highlighted AI’s massive role in increasing efficiency and how AI can be harnessed to address climate change.
Mike Pompeo, 70th Secretary of State of the United States and Executive Chairman of Impact Investments, addressed the economic consequences of conflict, saying: “in the economic sphere, the connectivity between the things happening geopolitically is now ever more closely tied to thinking about investments and capital flows. It is impossible to separate geopolitical risk from capital allocation.”
Conclave sessions were held for select senior participants to delve into the concerns relevant to their sectors with full transparency. For example, the conclave on AI convened many leading figures involved in the development of the technology, the governance of AI, technology investment, and regulation, with a number of global initiatives on AI regulation presented.
A special address by former Treasury Secretary Dr. Lawrence H. Summers, Charles W. Eliot University Professor & President Emeritus at Harvard University and board member at OpenAI, highlighted the economic and political context, including prospects for a soft or hard economic landing and likely outcomes of the US election. Dr. Summers was joined by Eric Schmidt, co-founder of Schmidt Futures, who noted that AI is likely to double everyone’s productivity. Underscoring the importance of AI regulation, he added, “There are questions about what happens when computers start to make decisions.”
FII Institute’s Founding Partner is the Public Investment Fund of Saudi Arabia, which, alongside the Vision Partner – the Ministry of Investment of Saudi Arabia – and 26 international strategic partners, is actively involved in the ongoing work of the institute. The FII Institute welcomed two new strategic partners at the summit: ACWA Power and Franklin Templeton. Speakers from strategic partners Franklin Templeton, GFH, HSBC, NEOM, ROSHN, Sanabil, SoftBank, and State Street shared updates on their international projects relevant to the themes of the summit.
Notes to Editor
For media inquiries, please contact:
Phone: +966 53 978 2030
Email: [email protected]
The Future Investment Initiative (FII) Institute is a global non-profit foundation driven by data with an investment arm and one agenda: Impact on Humanity. Global and inclusive, we foster great minds from around the world and turn ideas into real-world solutions in four critical areas: Artificial Intelligence (AI) & Robotics, Education, Healthcare and Sustainability.
The FII PRIORITY Miami summit continues conversations held in Hong Kong and Riyadh in Autumn 2023. It is part of an ongoing global program of summits held throughout the year, with upcoming summits in Rio de Janeiro in June and Asia later in the year.
Photo – https://mma.prnewswire.com/media/2346202/FIII_MIAMI_INVESTOR_SUMMIT.jpg
Logo – https://mma.prnewswire.com/media/1811613/4558144/FII_Institute_Logo.jpg

View original content: https://www.prnewswire.co.uk/news-releases/miami-investor-summit-kicks-off-today-with-global-leaders-discussing-humanitys-biggest-challenges-302069882.html



Securitas issues a MEUR 500 Eurobond


STOCKHOLM, Feb. 23, 2024 /PRNewswire/ — Securitas has today successfully closed a MEUR 500 bond in the Eurobond market with maturity in 2030. The coupon was 3.875 percent including a margin of 115 basis points.

The proceeds will mainly be used to refinance existing debt. 
The joint lead managers were BofA Securities, CIC, Commerzbank, Danske Bank, DNB and UniCredit. 
Further information:
Investors: Micaela Sjökvist, Vice President, Investor Relations, +46 76 116 7443, [email protected]
The following files are available for download:
https://mb.cision.com/Main/1062/3934748/2624675.pdf
Eurobond_Eng_20240223

View original content: https://www.prnewswire.co.uk/news-releases/securitas-issues-a-meur-500-eurobond-302069855.html

