Artificial Intelligence

WIMI Hologram Academy: Deep learning hardware technology based on in-memory computing structure

HONG KONG, Aug. 02, 2022 (GLOBE NEWSWIRE) — WIMI Hologram Academy, working in partnership with the Holographic Science Innovation Center, has written a new technical article describing their exploration of deep learning hardware technology based on in-memory computing structure. This article follows below:

The development of Extended Reality (XR, encompassing AR and VR) technology has brought the long-held goal of integrating and interacting between the real world and the virtual world within reach. VR/AR is a simulated environment created by computer software that delivers an immersive experience appearing to be real. Technological advances, especially over the last five years, have unleashed the full potential of VR/AR technology. Scientists from WIMI Hologram Academy of WIMI Hologram Cloud Inc. (NASDAQ: WIMI) discussed in detail the evolution of deep learning hardware from general-purpose processors to in-memory computing structures.

1. VR/AR's demand for artificial intelligence

Nowadays, VR/AR has touched various industries and become an important technology for industrial development, and 3D content (including 3D models, 3D animation, and 3D interaction) is one of its cores. At present, however, 3D content in most fields still requires extensive manual production, and the skill threshold for production personnel is high, so production capacity is low, which is a major bottleneck restricting the development of related industries. Artificial intelligence is expected to automate 3D content production to some extent, replacing some repetitive labor and improving production efficiency. Both VR and AR aim for more natural interaction, which is also one of the goals AI is trying to address. AlphaGo and AlphaZero demonstrate AI's intelligence in certain areas that overlap with VR and AR, and AI is expected to supply the intelligence that VR and AR currently lack.

Deep Learning (DL) is a core subset of artificial intelligence. In recent years, DL has approached human skill levels in tasks such as image classification, speech understanding, video-game playing, and translation between languages. Because of the large amounts of training data and parameters required, modern Deep Neural Networks (DNNs) carry high training costs, limiting DNN-based intelligent solutions for many applications, such as VR/AR. The ever-increasing computing power required by DL has spurred the development of underlying hardware technologies.

In the following, we elaborate on deep learning's reliance on hardware, how deep learning works with different hardware support, and In-Memory Computing (IMC) for DL, pointing out the development direction of high-performance, low-power DL hardware.

2. Hardware dependence of deep learning

A DL model is like a huge self-organizing trial-and-error machine with millions (or even more) of adjustable parameters. After feeding the machine big data and performing tens or hundreds of millions of training iterations, it can find the best parameters and weights for the DL model. Currently, GPU (graphics processing unit) cards are the best hardware solution for DL thanks to their excellent parallel matrix multiplication capabilities and software support. However, their general-purpose flexibility (e.g., game support) makes them less efficient for DL, which is where dedicated DL accelerators, ASICs (Application Specific Integrated Circuits), come in to provide better efficiency and performance. But both GPUs and ASICs are built on the traditional von Neumann (vN) architecture. The time and energy spent transferring data between memory and processor (the so-called von Neumann bottleneck) has become problematic, especially for data-centric applications such as real-time image recognition, natural language processing, and extended reality (XR). To achieve larger acceleration factors and lower power outside the vN architecture, IMC based on non-volatile memory arrays, such as phase-change memory (PCM) and resistive random access memory (RRAM), has been explored. IMC vector-matrix multiplication replaces the expensive, high-power matrix multiplication operations in CPUs/GPUs and avoids moving weights to and from memory. It therefore has great potential to improve DL performance and power consumption.
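The "trial-and-error machine" described above can be illustrated with a minimal sketch: repeated forward passes and weight updates until the parameters fit the data. This is a toy gradient-descent loop on an assumed linear model, not WIMI's actual training setup.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))             # toy input data
true_w = np.array([1.0, -2.0, 0.5, 3.0])  # hidden "best" parameters
y = X @ true_w                            # targets the model must recover

w = np.zeros(4)                           # start from arbitrary weights
lr = 0.1
for _ in range(500):                      # training iterations
    pred = X @ w                          # forward pass: a matrix multiply
    grad = X.T @ (pred - y) / len(X)      # gradient of mean squared error
    w -= lr * grad                        # adjust the parameters

print(np.round(w, 3))                     # converges toward true_w
```

Even in this tiny sketch, every iteration is dominated by matrix multiplications, which is precisely why the hardware discussed below centers on accelerating that one operation.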

3. Hardware for deep learning

A general DL algorithm consists of a series of operations (with neural networks for speech, language, and visual processing). Although matrix multiplication dominates, optimizing performance efficiency while maintaining accuracy requires a core architecture that efficiently supports all auxiliary functions. The central processing unit (CPU) handles complex tasks such as time slicing, complex control flow and branching, and security. In contrast, GPUs do one thing well: they handle billions of repetitive low-level tasks, such as matrix multiplication. GPUs have thousands of arithmetic logic units (ALUs), compared with traditional CPUs, which typically have only four or eight. However, the GPU is still a general-purpose processor that must support millions of different applications and software. For each of its thousands of ALUs, the GPU needs access to registers or shared memory to read and store intermediate results. As the GPU performs more parallel computations on its thousands of ALUs, it also spends proportionally more energy accessing memory, and the complex wiring increases its footprint. To solve these problems, ASICs dedicated to DL are needed; TPUs are one example.

TPUs are matrix processors dedicated to neural network workloads, capable of processing the massive multiplications and additions of neural networks at extremely high speed while consuming less power and occupying less physical space. The key enabler is a dramatic reduction of the vN bottleneck (moving data from memory). Built around the needs of DNNs, the TPU places thousands of multipliers and adders and connects them directly to form a large physical matrix of these operators. For an operation, the TPU first loads the weights from memory into the matrix of multipliers and adders, then loads the data (features) from memory. As each multiplication executes, its result is passed to the next multiplier while being summed, so the output is the sum of all multiplication results between data and parameters. No memory access is required during this entire process of massive computation and data transfer. The TPU's disadvantage is loss of flexibility: it supports only a few specific neural networks.
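The dataflow described above can be sketched in software: weights sit fixed in a grid of multiply-accumulate units, inputs stream through, and partial sums are passed from unit to unit instead of being written back to memory. This is an illustrative simulation of the idea, not real TPU code.

```python
import numpy as np

def systolic_matvec(weights, x):
    """Compute weights @ x one multiply-accumulate at a time per row,
    mimicking partial sums flowing through a physical matrix of operators."""
    rows, cols = weights.shape
    out = np.zeros(rows)
    for i in range(rows):                  # each row of MAC units
        acc = 0.0                          # partial sum passed between units
        for j in range(cols):
            acc += weights[i, j] * x[j]    # multiply, add, pass along
        out[i] = acc                       # result emerges at the row's end
    return out

W = np.array([[1.0, 2.0], [3.0, 4.0]])     # weights preloaded once
x = np.array([10.0, 1.0])                  # features streamed in
print(systolic_matvec(W, x))               # matches W @ x: [12. 34.]
```

The intermediate sums never leave the loop, which mirrors how the TPU's operator matrix avoids memory traffic during the computation itself.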

4. In-memory computation for deep learning

DNN inference and training algorithms mainly involve forward and backward vector-matrix multiplication operations. This operation can be performed by in-memory computation on a 2D crossbar memory array, a structure proposed more than 50 years ago. The weights (G) of a DNN are stored in 1T (transistor)-1R (resistor) or 1T memory cells. By simultaneously applying voltage inputs V on the rows and reading current outputs I from the columns, the analog weighted summation is achieved via Kirchhoff's current law and Ohm's law. In an ideal crossbar memory array, the input-output relationship can be expressed as I = V·G. Vector-matrix multiplication is achieved by mapping the input vector to the input voltages V, the matrix to the conductances G, and the output to the currents I. IMC vector-matrix multiplication replaces the expensive, high-power matrix multiplication in GPUs/TPUs (digital circuits) and avoids moving weights from memory, thus greatly improving the performance and power consumption of DNNs. Demonstrations of accelerated DNN training using the back-propagation algorithm report acceleration factors from 27x to 2140x (relative to CPU) with significant reductions in power consumption and area.
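The crossbar relationship above can be checked numerically: with input voltages V on the rows and cell conductances G storing the weights, Ohm's law gives each cell's current V[i]·G[i,j] and Kirchhoff's current law sums them down each column, yielding I = V·G in a single step. The values here are arbitrary illustrative numbers, assuming ideal devices with no noise.

```python
import numpy as np

V = np.array([0.2, 0.5, 0.1])      # input vector applied as row voltages
G = np.array([[1.0, 0.5],          # weight matrix stored as conductances
              [0.2, 0.8],
              [0.6, 0.3]])

# Column currents: each column j collects sum_i V[i] * G[i, j],
# i.e. the whole vector-matrix product happens inside the array.
I = V @ G
print(I)                            # identical to the digital matmul result
```

Because the weights never move, the only data crossing the array boundary are the input voltages and output currents, which is where the performance and power advantage comes from.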

In addition, PCM devices act as synapses (weights) in DNN inference: the neurons of each layer drive the next layer through the weights Wij and a nonlinear function f(). The input neurons are driven by pixels from consecutive MNIST (a classic machine-learning test data set) images, and the 10 output neurons identify which digit appears. One limitation of IMC DNN acceleration is the imperfection of memory devices. Device features usually considered favorable for storage applications, such as high on/off ratios, digital per-bit storage, or unrelated features (e.g., asymmetric Set and Reset operations), become limitations for accelerated DNN training. Ideal IMC DNN memory cells, coupled with system and CMOS circuit designs that place specific requirements on such ideal resistive devices, could achieve acceleration factors of over 30,000x (relative to the CPU). There are significant benefits to developing and researching IMC for DNNs, but no products are currently on the market. The challenges preventing this include: 1) defects in memory cells (limited cycling endurance, small dynamic range, resistance drift, asymmetric programming); 2) inter-layer data transfer (AD/DA conversion, digital function connection); 3) flexible software and framework support (software-reconfigurable IMC DNNs).
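The memory-cell limitation above can be made concrete with a small sketch: real devices have a limited conductance range and programming noise, so the weights actually stored only approximate the trained ones, and the computed outputs drift accordingly. The noise level and clipping range here are illustrative assumptions, not measured PCM characteristics.

```python
import numpy as np

rng = np.random.default_rng(1)
w_ideal = rng.normal(size=(8, 4))                  # trained weights

g_max = 1.5                                        # limited dynamic range
noise = 0.05 * rng.normal(size=w_ideal.shape)      # programming noise
w_device = np.clip(w_ideal + noise, -g_max, g_max) # what the cells hold

x = rng.normal(size=8)                             # one input vector
err = np.abs(x @ w_ideal - x @ w_device)           # per-output error
print(err.max())                                   # nonzero: accuracy is lost
```

Training algorithms and circuit designs for IMC must tolerate exactly this kind of mismatch, which is why the device defects listed above are limitations for acceleration rather than for storage.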

5. Summary

With the help of advanced algorithms and computing hardware (GPUs), deep learning has pushed artificial intelligence to a new level. Thousands of parallel ALUs make GPUs powerful machines for the matrix multiplications of DNN operations. By sacrificing flexibility, DNN acceleration chips built as ASICs, like TPUs, achieve higher performance and lower power consumption. However, using digital circuits for matrix multiplication has its limits. To achieve higher acceleration factors and lower power consumption, the in-memory computing (IMC) approach to vector-matrix multiplication for DNNs has been proposed. While IMC brings great benefits for DNNs, it also faces many challenges: memory-cell deficiencies, inter-layer data transfer, and supporting software and frameworks must all be overcome before IMC DNNs can become a reality.

Founded in August 2020, WIMI Hologram Academy is dedicated to holographic AI vision exploration, and conducts research on basic science and innovative technologies, driven by human vision. The Holographic Science Innovation Center, in partnership with WIMI Hologram Academy, is committed to exploring the unknown technology of holographic AI vision, attracting, gathering and integrating relevant global resources and superior forces, promoting comprehensive innovation with scientific and technological innovation as the core, and carrying out basic science and innovative technology research.

Contacts

Holographic Science Innovation Center

Email: pr@holo-science.com

GlobeNewswire is one of the world's largest newswire distribution networks, specializing in the delivery of corporate press releases, financial disclosures and multimedia content to the media, investment community, individual investors and the general public.


JupiterOne and watchTowr announce partnership to protect business critical assets with broad exposure management capabilities


SINGAPORE, May 2, 2024 /PRNewswire/ — watchTowr, a leader in external attack surface management (EASM) technology and fuelled by watchTowr Labs, a renowned vulnerability R&D capability, has formed a strategic partnership with JupiterOne. JupiterOne is a leader in cyber asset attack surface management (CAASM) technology. This collaboration enables customers to rapidly prioritize emerging threats within their constantly changing environments, focusing on fixing the most critical risks impacting their business, which enables an end-to-end continuous threat exposure management process (CTEM).

Over 28,000 CVE records were published in 2023; a figure that is expected to increase as attackers shorten the time from known vulnerability to exploit, reducing it from weeks to days. JupiterOne and watchTowr’s integrated solution empowers enterprises to discover their most critical and exploitable vulnerabilities, prioritize them with asset context based on business impact and receive an actionable remediation plan to improve security posture.
This partnership enables a complete continuous threat exposure management program, addressing the full spectrum of cyber risk management. The fully integrated solution provides continuous monitoring and assessment of both internal and external digital assets, allowing for prioritization and effective threat mitigation for a business’s most critical assets. “Our partnership with watchTowr is a game-changer,” said Forte. “Combining our data aggregation with real-time asset discovery and automated security testing allows us to offer a unique, all-encompassing approach to exposure management.”
Benjamin Harris, CEO, watchTowr, said, “While the number of reported vulnerabilities continues to rise, the vulnerabilities that matter – in mission-critical, key systems – have exploded at an alarming rate. This reality, combined with the significant shift in speed by attackers to weaponize vulnerabilities – the ability to validate exploitability and prioritise actions based on real business risk has never been more vital. We’re excited to join forces with JupiterOne to give security teams around the globe this much-needed end-to-end capability.”
About JupiterOne:
JupiterOne is a cybersecurity startup delivering powerful software solutions to companies across all industries, providing deep insights into cyber assets and the relationships between them, empowering security professionals to have true knowledge and ownership of their attack surfaces.
About watchTowr: 
watchTowr is a global cybersecurity technology company, built by former adversaries.
watchTowr’s world-class External Attack Surface Management and Continuous Automated Red Teaming technology is informed by years of experience compromising some of the world’s most targeted organisations and utilised by Fortune 500, financial services and critical infrastructure providers every day.
Photo – https://mma.prnewswire.com/media/2401497/watchTowr_JupiterOne.jpg

View original content:https://www.prnewswire.co.uk/news-releases/jupiterone-and-watchtowr-announce-partnership-to-protect-business-critical-assets-with-broad-exposure-management-capabilities-302132392.html



Clarivate Declares Dividend on Mandatory Convertible Preferred Shares


LONDON, May 1, 2024 /PRNewswire/ — Clarivate Plc (NYSE: CLVT; CLVT PR A) (“Clarivate”), a leading global provider of transformative intelligence, today announced that its board of directors declared a quarterly dividend of $1.3125 per share on its 5.25% Series A Mandatory Convertible Preferred Shares (the “Preferred Shares”), payable in cash on June 3, 2024 to shareholders of record at the close of business on May 15, 2024.

On the mandatory conversion date, which is scheduled to occur on June 3, 2024, each Preferred Share will automatically and mandatorily convert into a number of ordinary shares of Clarivate (and cash in lieu of any fractional ordinary shares) based on the average volume weighted average price (“VWAP”) of Clarivate’s ordinary shares over a 30-trading day period that begins on, and includes, April 18, 2024 and is scheduled to end on, and include, May 30, 2024 (the “valuation period”). If such VWAP is (i) greater than $31.20, then the mandatory conversion rate will be 3.2052 ordinary shares of Clarivate per Preferred Share, (ii) less than or equal to $31.20 but equal to or greater than $26.00, then the mandatory conversion rate will be a number of ordinary shares of Clarivate per Preferred Share equal to $100.00 divided by such VWAP and (iii) less than $26.00, then the mandatory conversion rate will be 3.8462 ordinary shares of Clarivate per Preferred Share. The mandatory conversion rate will be announced following the end of the valuation period. The above description of the terms of the Preferred Shares is not complete and is subject to, and qualified in its entirety by reference to, the “Statement of Rights” for the Preferred Shares, which is filed as Exhibit 3.2 to Clarivate’s annual report on Form 10-K for the fiscal year ended December 31, 2023.
Cautionary Note Regarding Forward-Looking Statements
This communication contains “forward-looking statements” as defined in the Private Securities Litigation Reform Act of 1995. These statements, which express management’s current views concerning future business, events, trends, contingencies, financial performance, or financial condition, appear at various places in this communication and may use words like “aim,” “anticipate,” “assume,” “believe,” “continue,” “could,” “estimate,” “expect,” “forecast,” “future,” “goal,” “intend,” “likely,” “may,” “might,” “plan,” “potential,” “predict,” “project,” “see,” “seek,” “should,” “strategy,” “strive,” “target,” “will,” and “would” and similar expressions, and variations or negatives of these words. Forward-looking statements are neither historical facts nor assurances of future performance. Instead, they are based only on management’s current beliefs, expectations, and assumptions regarding the future of our business, future plans and strategies, projections, anticipated events and trends, the economy, and other future conditions. These forward-looking statements involve a number of risks and uncertainties (some of which are beyond our control) or other assumptions that may cause actual results or performance to be materially different from those expressed or implied by these forward-looking statements. Important factors that could cause our actual results and financial condition to differ materially from those indicated in the forward-looking statements include those factors discussed under the caption “Risk Factors” in our annual report on Form 10-K, along with our other filings with the U.S. Securities and Exchange Commission (“SEC”). However, those factors should not be considered to be a complete statement of all potential risks and uncertainties. Additional risks and uncertainties not known to us or that we currently deem immaterial may also adversely affect our business operations. 
Forward-looking statements are based only on information currently available to our management and speak only as of the date of this communication. We do not assume any obligation to publicly provide revisions or updates to any forward-looking statements, whether as a result of new information, future developments or otherwise, except as otherwise required by securities and other applicable laws. Please consult our public filings with the SEC or on our website at www.clarivate.com.
About Clarivate
Clarivate™ is a leading global provider of transformative intelligence. We offer enriched data, insights & analytics, workflow solutions and expert services in the areas of Academia & Government, Intellectual Property and Life Sciences & Healthcare. For more information, please visit www.clarivate.com.
Logo – https://mma.prnewswire.com/media/1159266/Clarivate_Logo.jpg

View original content:https://www.prnewswire.co.uk/news-releases/clarivate-declares-dividend-on-mandatory-convertible-preferred-shares-302133633.html



CGTN: 3rd CMG Forum in Beijing discusses AI development


BEIJING, May 1, 2024 /PRNewswire/ — Focusing on the development of AI, the third CMG Forum was held on Monday in Beijing.

Li Shulei, a member of the Political Bureau of the Communist Party of China (CPC) Central Committee and the head of the Publicity Department of the CPC Central Committee, attended the opening of the event and delivered a speech.
Guests at the forum stressed the role of media in promoting the innovative application of AI as well as its governance.
Efforts should also be made to boost the development of AI in creating positive, healthy, diverse and high-quality content, so that AI can become a force for good and benefit mankind, they agreed.
They also called on media to accelerate intelligent transformation and help bridge international exchanges and cooperation on the governance of AI to facilitate its healthy, orderly and safe development.
Hosted by China Media Group (CMG), the forum attracted more than 200 participants from international organizations, media, think tanks and multinational companies.
“Innovation and breakthroughs in science and technology not only guide the development and progress of human civilization, but also bring uncertainty to the changing world,” said Shen Haixiong, vice minister of the Publicity Department of the CPC Central Committee and president of CMG. He called for efforts to jointly create valuable and responsible artificial intelligence.
AI technology is affecting every aspect of our lives. Thomas Bach, president of the International Olympic Committee (IOC), stated in a video speech that CMG has always been a partner of the IOC, bringing the charm of the Olympic Games to hundreds of millions of Chinese viewers. He said the IOC invites CMG to work together for the creation of a future with the application of AI in Olympic sports.
“From ancient inventions such as silk, printing and the compass to modern technological advances such as robotics, telecommunications and green technology, China has always been committed to innovation and creation,” said Daren Tang, director general of the World Intellectual Property Organization (WIPO). He said WIPO pays close attention to ensuring a balance between the opportunities and risks of artificial intelligence and is committed to strengthening cooperation to ensure that artificial intelligence is properly used.
https://news.cgtn.com/news/2024-04-30/3rd-CMG-Forum-in-Beijing-discusses-AI-development-1tdDcXvCexG/p.html

View original content:https://www.prnewswire.co.uk/news-releases/cgtn-3rd-cmg-forum-in-beijing-discusses-ai-development-302133410.html

