IDTechEx Reports on The Age of Artificial Intelligence: AI Chips to 2034
Artificial Intelligence is transforming the world as we know it. From the victory of DeepMind's AlphaGo over Go world champion Lee Sedol in 2016 to the robust predictive abilities of OpenAI's ChatGPT, the complexity of AI training algorithms is growing at a startling pace: the amount of compute needed to run newly developed training algorithms appears to be doubling roughly every four months. To keep pace with this growth, AI applications need hardware that is not just scalable, allowing for longevity as new algorithms are introduced while keeping operational overheads low, but also able to handle increasingly complex models at a point close to the end-user.
Drawing from the “AI Chips: 2023–2033” and “AI Chips for Edge Applications 2024–2034: Artificial Intelligence at the Edge” reports, IDTechEx predicts that the growth of AI, both for training and inference within the cloud and for inference at the edge, will continue unabated over the next ten years, as our world and the devices that inhabit it become increasingly automated and interconnected.
The why and what of AI chips
The notion of designing hardware to fulfill a certain function, particularly if that function is to accelerate certain types of computations by taking control of them away from the main (host) processor, is not a new one; the early days of computing saw CPUs (Central Processing Units) paired with mathematical coprocessors, known as Floating-Point Units (FPUs). The purpose was to offload complex floating point mathematical operations from the CPU to this special-purpose chip, as the latter could handle computations more efficiently, thereby freeing the CPU up to focus on other things.
As markets and technology developed, so too did workloads, and new types of hardware were needed to handle them. A particularly noteworthy example of one of these specialized workloads is the production of computer graphics, where the accelerator in question has become something of a household name: the Graphics Processing Unit (GPU).
Just as computer graphics called for a different type of chip architecture, the emergence of machine learning has brought about demand for another type of accelerator, one capable of efficiently handling machine learning workloads. Machine learning is the process by which computer programs use data to make predictions based on a model and then optimize that model to better fit the data provided, by adjusting the weightings used. Computation therefore involves two stages: training and inference.
The first stage of implementing an AI algorithm is the training stage, where data is fed into the model and the model adjusts its weights until it fits the provided data appropriately. The second stage is the inference stage, where the trained AI algorithm is executed and new data (not seen during training) is classified in a manner consistent with what the model learned.
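To make the two stages concrete, the following is a minimal sketch, using plain NumPy rather than any particular AI chip or framework, of training a tiny linear model by repeatedly adjusting its weights to fit sample data and then running inference on unseen inputs. The dataset, model, and learning rate are illustrative assumptions, not details from the IDTechEx reports.

```python
import numpy as np

# Toy dataset: inputs x and targets y generated from y = 3x + 1 plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(256, 1))
y = 3 * x + 1 + 0.05 * rng.standard_normal((256, 1))

# --- Training stage: adjust the weights until the model fits the data ---
w, b = np.zeros((1, 1)), np.zeros(1)
lr = 0.1
for step in range(500):
    pred = x @ w + b                  # forward pass
    err = pred - y
    grad_w = x.T @ err / len(x)       # gradient of the mean squared error
    grad_b = err.mean(axis=0)
    w -= lr * grad_w                  # weight update
    b -= lr * grad_b

# --- Inference stage: apply the trained weights to unseen data ---
x_new = np.array([[0.25], [0.9]])
print(x_new @ w + b)                  # predictions for new inputs
```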
Of the two stages, training is the more computationally intense, given that it involves performing the same computation millions of times; training for some leading AI algorithms can take days to complete. As such, training takes place within cloud computing environments (i.e. data centers), where large numbers of chips capable of the parallel processing required for efficient algorithm training are deployed. CPUs process tasks in a serialized manner, where one execution thread starts only once the previous execution thread has finished, and large, numerous memory caches are used to minimize latency so that most of a thread's running time is dedicated to processing. Parallel processing, by comparison, involves multiple calculations occurring simultaneously, with lightweight execution threads overlapped such that latency is effectively masked. Being able to compartmentalize and carry out multiple calculations simultaneously is a major benefit for training AI algorithms, as well as in many instances of inference. By contrast, the inference stage can take place within both cloud and edge computing environments. The aforementioned reports detail the differences between CPU, GPU, Field Programmable Gate Array (FPGA), and Application-Specific Integrated Circuit (ASIC) architectures, and their relative effectiveness in handling machine learning workloads.
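To illustrate the contrast between serialized and parallel-friendly execution described above, the short sketch below expresses the same multiply-accumulate workload in two ways: as explicit Python loops (one operation after another) and as a single batched matrix multiplication of the kind that GPUs and other parallel accelerators split across many simultaneous calculations. The matrix sizes and timings are illustrative only, not a benchmark of any particular chip.

```python
import time
import numpy as np

# A small batch of input vectors and a weight matrix: the dense
# multiply-accumulate work that dominates neural-network training.
A = np.random.rand(64, 128)
W = np.random.rand(128, 128)

def matmul_serial(a, w):
    """One multiply-accumulate at a time, as on a purely serial processor."""
    out = np.zeros((a.shape[0], w.shape[1]))
    for i in range(a.shape[0]):
        for j in range(w.shape[1]):
            acc = 0.0
            for k in range(a.shape[1]):
                acc += a[i, k] * w[k, j]
            out[i, j] = acc
    return out

t0 = time.perf_counter()
serial = matmul_serial(A, W)
t1 = time.perf_counter()
batched = A @ W            # one batched operation that parallel hardware
t2 = time.perf_counter()   # can execute as many simultaneous calculations

assert np.allclose(serial, batched)
print(f"explicit loops: {t1 - t0:.3f} s, batched matmul: {t2 - t1:.5f} s")
```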
Within the cloud computing environment, GPUs currently dominate and are predicted to continue doing so over the next ten years, given Nvidia’s dominance in the AI training space. For AI at the edge, ASICs are preferred, given that chips are more commonly designed with specific problems in mind (such as object detection within security camera systems). IDTechEx’s forecasts show that Digital Signal Processors (DSPs) also account for a significant share of AI coprocessing at the edge, though this large figure is primarily due to Qualcomm’s Hexagon Tensor Processor (found in its modern Snapdragon products) being a DSP. Should Qualcomm redesign the HTP such that it strays from being a DSP, the forecast would skew heavily in favour of ASICs.
AI as a driver for semiconductor manufacture
Chips for AI training are typically manufactured at the leading-edge nodes (where a node refers to the transistor technology used in semiconductor chip manufacture), given how computationally intensive the training stage of implementing an AI algorithm is. Intel, Samsung, and TSMC are the only companies that can produce chips at the 5 nm node, and of these, TSMC is the furthest along in securing orders for 3 nm chips. TSMC’s global market share for semiconductor production currently hovers at around 60%; for the more advanced nodes, it is closer to 90%. Of TSMC’s six 12-inch fabs and six 8-inch fabs, only two are in China and one is in the USA; the rest are in Taiwan. The semiconductor manufacturing part of the global supply chain is therefore heavily concentrated in the APAC region, principally Taiwan.
Such concentration comes with a great deal of risk should this part of the supply chain be threatened in some way. This is precisely what occurred in 2020, when a number of contributing factors (discussed further in the “AI Chips: 2023–2033” report) led to a global chip shortage. Since then, the largest stakeholders in the semiconductor value chain aside from Taiwan (the US, the EU, South Korea, Japan, and China) have sought to reduce their exposure to a manufacturing deficit, should another set of circumstances arise that results in an even more severe chip shortage. This is reflected in the government funding announced by these major stakeholders in the wake of the global chip shortage.
These government initiatives aim to spur additional private investment through the lure of tax breaks and part-funding in the form of grants and loans. While many private investments were made prior to the announcement of such government initiatives, others have been announced in their wake, spurred on by the incentives these initiatives offer.
A major reason for these government initiatives and the additional private spending is the potential to realize advanced technologies, of which AI is one. The manufacture of advanced semiconductor chips fuels national and regional AI capabilities. The possibility of autonomous detection and analysis of objects, images, and speech is so significant to the efficacy of certain products (such as autonomous vehicles and industrial robots), and to models of national governance and security, that the development of AI hardware and software has become a primary concern for government bodies that wish to be at the forefront of technological innovation and deployment.
Growth of AI chips over the next decade
Revenue generated from the sale of AI chips (including the sale of physical chips and the rental of chips via cloud services) is expected to rise to just shy of US$300 billion by 2034, at a compound annual growth rate of 22% from 2024 to 2034. This figure incorporates the use of chips for the acceleration of machine learning workloads at the edge of the network, at the telecom edge, and within data centers in the cloud. As of 2024, chips for inference purposes (both at the edge and within the cloud) account for 63% of the revenue generated, with this share growing to more than two-thirds of total revenue by 2034.
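As a quick back-of-the-envelope check (a sketch using only the figures quoted above, not additional IDTechEx data), the ~US$300 billion 2034 figure and the 22% CAGR imply a 2024 base of roughly US$41 billion, of which the stated 63% inference share would be about US$26 billion:

```python
# Back-of-the-envelope check using only the figures quoted above.
revenue_2034 = 300e9          # "just shy of" US$300 billion by 2034
cagr = 0.22                   # 22% compound annual growth, 2024-2034
years = 10

# Working the CAGR backwards gives the implied 2024 starting point.
revenue_2024 = revenue_2034 / (1 + cagr) ** years
print(f"implied 2024 revenue: ~US${revenue_2024 / 1e9:.0f}bn")          # ~41bn

# Inference chips are stated to account for 63% of 2024 revenue.
print(f"of which inference:   ~US${0.63 * revenue_2024 / 1e9:.0f}bn")   # ~26bn
```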
That growing inference share is in large part due to significant growth at the edge and telecom edge, as AI capabilities are harnessed closer to the end-user. In terms of industry verticals, IT & Telecoms is expected to lead the way for AI chip usage over the next decade, with Banking, Financial Services & Insurance (BFSI) close behind, followed by Consumer Electronics. Of these, the Consumer Electronics vertical is expected to generate the most revenue at the edge, given the further rollout of AI into consumer products for the home. More information regarding the industry vertical breakout can be found in the relevant AI reports.
The “AI Chips for Edge Applications 2024–2034: Artificial Intelligence at the Edge” report analyzes the key drivers of revenue growth in edge AI chips over the forecast period, reviewing deployment within the key industry verticals of consumer electronics, industrial automation, and automotive. More generally, the report covers the global edge AI chips market across six industry verticals, with ten-year granular forecasts in six different categories (such as by geography, by chip architecture, and by application). IDTechEx also offers expert-led data and analysis on these and other related topics through a market intelligence subscription.
This article is from “Technology Innovations Outlook 2024-2034”, a complimentary magazine of analyst-written articles by IDTechEx providing insights into a number of areas of technology innovation, assessing the landscape now and giving you the outlook for the next decade.
Tata Electronics and Tokyo Electron Limited (TEL) Announce Strategic Partnership to Grow Semiconductor Ecosystem in India
Tata Electronics today signed a memorandum of understanding with Tokyo Electron Limited (TEL), a leading global supplier of semiconductor equipment and services. The two companies will collaborate to accelerate semiconductor equipment infrastructure for India’s first Fab being built by Tata Electronics in Dholera, Gujarat, and for its assembly and test facility in Jagiroad, Assam.
Through this partnership, Tata Electronics and TEL will also focus on training Tata Electronics’ workforce on TEL equipment and supporting ongoing improvement and R&D initiatives. This collaboration will leverage the strengths of both companies to establish a robust semiconductor manufacturing ecosystem in India.
As previously announced, Tata Electronics is building India’s first Fab in Dholera, Gujarat, with a total investment of INR 91,000 crores (~US$11bn). In addition, another INR 27,000 crores (~US$3bn) will be invested in a greenfield facility in Jagiroad, Assam, for the assembly and testing of semiconductor chips. Together, these facilities will produce semiconductor chips for applications across automotive, mobile devices, artificial intelligence (AI), and other key segments to serve customers globally. As the construction of these facilities progresses, it is critical to grow partnerships across the entire semiconductor ecosystem, spanning process and design technology, as well as equipment suppliers. With this announcement of the partnership with TEL, Tata Electronics has solidified a critical pillar to achieve its execution targets.
Dr Randhir Thakur, Managing Director & CEO, Tata Electronics, said, “We have a bold vision of becoming a leader in electronics manufacturing by offering integrated solutions across the value chain to our global customers. TEL has a history of working closely with its customers, and its expertise in the semiconductor equipment space will help build a dynamic ecosystem to support the timely execution of bringing up our Fab and advanced packaging factories. We are excited about the customer centricity that TEL brings to this partnership.”
Toshiki Kawai, President & CEO of Tokyo Electron Limited, emphasised, “We are delighted to announce our partnership with Tata Electronics, which brings together our combined expertise and resources to strengthen the semiconductor ecosystem in India significantly. This strategic collaboration spans both front-end fabrication and back-end packaging technologies, highlighting our commitment to delivering exceptional support and value to Tata Electronics. By leveraging our collective strengths, we aim to accelerate development and drive innovation across multiple technology nodes. Together, we are poised to set new benchmarks in the industry, fostering a robust and dynamic semiconductor landscape that will benefit all stakeholders.”
TEL is committed to supporting the Indian semiconductor ecosystem. Both the front-end and back-end product groups will provide resources and technology support to bring advanced TEL products to the Indian market. TEL will also lead this effort by offering diversified products for the MAGIC market (Metaverse, Autonomous Mobility, Green Energy, IoT & Information, Communications). TEL will actively explore opportunities to leverage India’s talent to establish an engineering service in India to support its global product development.
Blackwired Launches ThirdWatch℠, A Paradigm Shift in Cybersecurity
Blackwired, the leading cyber observatory for disruptive cybersecurity technologies, has announced the launch of ThirdWatch℠, a groundbreaking solution to identify direct threats facing an organization and its third parties.
ThirdWatch℠ is a subject-directed monitoring platform that provides a comprehensive 360-degree view in 3D of existential threats that impact organizations and the associated cyber risks posed by their vendors, partners, suppliers, networks, and digital assets. Utilizing a non-invasive, zero-touch technology process, ThirdWatch℠ generates Direct Threat Intelligence while cross-referencing this intelligence with traditional vulnerability assessment data. This integration produces evidence-based scoring and specific solution sets to mitigate all direct cyber threats facing an organization.
Direct Threat Risk Management
This innovative platform establishes a new category: Direct Threat Risk Management. By incorporating all features of Attack Surface Management (ASM) and Third-Party Risk Management (TPRM) tools, ThirdWatch℠ surpasses conventional offerings, delivering enhanced usability, substantial value, and unique Direct Threat Intelligence.
Central to ThirdWatch℠ are its pivotal features, including Direct Threat Intelligence, Direct Threat Mitigation, Third-Party Risk Management, Network Vulnerability Assessment, 3D Visualization, a comprehensive Ecosystem, and Evidence-Based Scoring. The platform is designed with five key configurations: Enterprise, Audit, Managed Service Provider (MSP), Incident Response (IR), and Legal.
“Implementing a ‘defend forward’ mentality is crucial. To combat modern threats, organizations need to pivot to a proactive approach to threat management, seeking out threats and neutralizing them before they escalate into attacks. ThirdWatch℠ empowers organizations to identify and eliminate direct threats that jeopardize core operations, subsidiaries, and affiliates while mitigating risks posed by third parties in their ecosystems.” – Jeremy Samide, CEO & Co-founder, Blackwired.
ThirdWatch℠ ensures compliance with third-party regulatory cybersecurity requirements and helps organizations navigate potential supply chain disruptions caused by cyber events.
Management teams can utilize the platform to monitor their cybersecurity operations’ effectiveness, access cyber threat landscapes swiftly, and provide a digital witness record of pre-event direct threat intelligence.
Blackwired leverages industry and government expertise to deliver innovative solutions that integrate cutting-edge technologies such as artificial intelligence, edge computing, blockchain, and quantum computing. This strategic integration is underpinned by a methodology that starts with the premise of proactive prevention and, when necessary, offense – a notably different approach compared to traditional reactive cybersecurity measures.
The launch of ThirdWatch℠ marks a significant evolution in cybersecurity strategy. Organizations can utilize the platform to enhance cyber resilience while safeguarding their digital assets against a rapidly changing threat landscape. As cyber risks escalate, ThirdWatch℠ offers a proactive defense mechanism designed to help organizations stay one step ahead of malicious threat actors.
On Mission to Upskill 500,000 Students, MongoDB Partners With Ministry of Education’s All India Council for Technical Education
MongoDB, Inc. (NASDAQ: MDB) today announced the expansion of MongoDB for Academia in India, including a new partnership with the All India Council for Technical Education (AICTE), Ministry of Education, Government of India. The AICTE partnership will be supported by SmartBridge’s SmartInternz learning platform to give more than 150,000 Indian students access to virtual internships and the opportunity to gain the skills required to use MongoDB Atlas, the leading multi-cloud developer data platform.
As part of the program’s expansion, MongoDB also announced a new partnership with GeeksforGeeks, a platform for computer science resources in India. The collaboration will make the MongoDB Developer Learning Path available to all of GeeksforGeeks’ 25 million registered users.
Launched in September 2023, the MongoDB for Academia in India program provides student training, curriculum resources for educators, credits to use MongoDB technology at no cost, and certifications to help individuals start careers in the technology industry. The skills and training provided through the program are particularly important, as many Indian organizations struggle to find developers who have the skills to build modern applications and take advantage of emerging technologies like generative AI. According to a report from the National Association of Software and Service Companies, India’s technology sector will demand more than one million engineers with advanced skills in artificial intelligence and other capabilities over the next three years. Overall, the industry body expects there will be a need for around six million digital roles by 2028—and the available talent pool is forecast to be 4.7 million workers. This gap underscores the need for increased collaboration between industry and academia to upskill students and educators in India to meet the demands of the country’s large and growing economy.
“In India, we have a massive opportunity with the current wave of AI and modern technologies that will transform our lives and economy in the coming years. But to take advantage of that opportunity, it is vital our developers have the right skills. We’re excited to partner with MongoDB to help make that possible,” said Dr. Buddha Chandrasekhar, CEO, Anuvadini AI, Ministry of Education, and Chief Coordinating Officer, AICTE, Ministry of Education, Government of India.
To address this challenge, MongoDB for Academia is partnering with the All India Council for Technical Education (AICTE), the Indian government’s authority for the management of technical education, and the edtech platform SmartBridge to launch a virtual internship program through the SmartInternz platform. Aligned with the government’s Skill India Initiative, the program aims to provide full-stack development skills to over 150,000 students. Each internship will include 60 hours of experiential learning, hands-on bootcamps, courses, and project work, as well as simulated corporate environments where students can apply their learned skills, collaborate with peers, and receive mentorship.
“We’ve seen great appetite and interest on our platform for modern database technologies like MongoDB. We want to equip students with knowledge of in-demand technologies so they have skills they need to become the job-ready candidates India’s organizations are looking for,” said Amarender Katkam, Founder and CEO at SmartBridge and SmartInternz.
In the past year, the MongoDB for Academia program has made major strides toward its goal of upskilling 500,000 students. To date, more than 200 partnerships with educational institutions have been established, as well as collaborations with other government and private organizations. Hundreds of educators have been onboarded onto the MongoDB for Academia program, more than 100,000 students have received skills training, and over 450,000 hours of learning have been completed.
“India loves developers and so does MongoDB. I’m so proud of the work our MongoDB for Academia team is doing to empower Indian developers and to support the next generation of tech talent in this country,” said Sachin Chawla, Area Vice President, India at MongoDB.
MongoDB for Academia is also expanding its partnership with GeeksforGeeks, which will see the organizations collaborate on a number of new projects, including the syndication of key full-stack development courses to learners through both online and offline GeeksforGeeks centers across India. The MongoDB Developer Learning Path will also become available to all GeeksforGeeks users and is expected to reach more than 100,000 aspiring developers.
To learn more about MongoDB for Academia, visit mongodb.com/academia.
MongoDB Developer Data Platform
MongoDB Atlas is the leading multi-cloud developer data platform that accelerates and simplifies building with data. MongoDB Atlas provides an integrated set of data and application services in a unified environment to enable developer teams to quickly build with the capabilities, performance, and scale modern applications require.
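As a brief illustration of what building with data on Atlas looks like from a developer's perspective, the sketch below uses the standard PyMongo driver to connect to a cluster, insert a document, and query it back. The connection string, database, and collection names are placeholders assumed for the example, not details from the announcement.

```python
from pymongo import MongoClient

# Placeholder Atlas connection string; substitute a real cluster URI.
uri = "mongodb+srv://<user>:<password>@<cluster>.mongodb.net/"
client = MongoClient(uri)

db = client["academia_demo"]      # hypothetical database name
students = db["students"]         # hypothetical collection name

# Insert a document describing a learner on the program.
students.insert_one({
    "name": "A. Learner",
    "course": "MongoDB Developer Learning Path",
})

# Query it back with a simple filter.
print(students.find_one({"course": "MongoDB Developer Learning Path"}))
```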