

Zeki Launches Dataset on Snowflake Marketplace


Human capital intelligence AI datasets enable organisations to identify and predict the individuals and companies most likely to produce the next generation of AI-related innovation
LONDON, July 17, 2024 /PRNewswire/ — Zeki, a UK-founded data company, announced today that select datasets are now available on Snowflake Marketplace. Zeki dataset availability on Snowflake Marketplace will enable joint customers to gather insights not captured by traditional talent search and assessment tools.

Zeki has proven there is a direct correlation between the quality and depth of the scientists, engineers and researchers that a deep tech company employs and the company’s future innovation potential. Leveraging proprietary IP, Zeki data can determine which deep tech companies should be more highly valued ahead of the market.* Zeki does this by identifying and evaluating the specific innovators that will produce the most valuable innovations for an organisation. Zeki evaluates and ranks each individual in its dataset using over 20 unique indicators to develop their Zeki Score.
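The indicators and weights behind the Zeki Score are proprietary and are not disclosed in the release. Purely as a rough illustration of how a composite talent score of this kind can be assembled, the sketch below combines a few hypothetical, pre-normalised indicators into a single weighted value; the field names, weights and 0–100 scaling are assumptions, not Zeki's methodology.

```python
# Illustrative only: a generic weighted composite score, NOT Zeki's proprietary model.
# Indicator names, weights and normalisation are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class Researcher:
    name: str
    publication_impact: float      # e.g. field-normalised citation metric, scaled 0-100
    patent_count: float            # granted patents, capped and scaled 0-100
    peer_network_strength: float   # co-authorship / collaboration measure, scaled 0-100

# Hypothetical weights; a real model would use many more indicators (20+ per the release).
WEIGHTS = {
    "publication_impact": 0.5,
    "patent_count": 0.2,
    "peer_network_strength": 0.3,
}

def composite_score(r: Researcher) -> float:
    """Weighted sum of pre-normalised indicators, returned on a 0-100 scale."""
    return (
        WEIGHTS["publication_impact"] * r.publication_impact
        + WEIGHTS["patent_count"] * r.patent_count
        + WEIGHTS["peer_network_strength"] * r.peer_network_strength
    )

if __name__ == "__main__":
    example = Researcher("A. Researcher", publication_impact=82.0,
                         patent_count=40.0, peer_network_strength=65.0)
    print(f"{example.name}: {composite_score(example):.1f}")
```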
Zeki’s data encompasses over 10 million top scientists, engineers and researchers operating in deep tech domains such as AI, quantum computing, data engineering, semiconductors and health tech across more than 40,000 companies worldwide. The data’s quality and uniformity enable Zeki to forecast future potential using proven, back-tested models. All Zeki data is open access and derived from 30,000 different sources.
Zeki data incorporates more than one billion data points from eight terabytes of data, tracking back 10 years. The novel approach uses advanced data integration to identify, match, disambiguate and verify every individual in Zeki’s datasets. 
The company’s unique data-led approach provides insights that are not captured by traditional talent search and assessment tools. Academically reviewed regression modelling has shown that hiring high-scoring innovators identified by Zeki data statistically lifts innovation at a company.*
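The release does not describe the regression itself. Purely to illustrate the shape of such an analysis, the sketch below fits an ordinary least squares model on synthetic data, relating a hypothetical count of high-scoring hires to a hypothetical innovation-output index; nothing here reproduces Zeki's academically reviewed model or its data.

```python
# A toy OLS regression in the spirit of the claim above, NOT Zeki's reviewed model.
# All data is synthetic; the point is only to show the shape of such an analysis.
import numpy as np

rng = np.random.default_rng(0)
n_companies = 500

# Hypothetical predictor: number of high-scoring innovators hired per company.
high_score_hires = rng.poisson(lam=5, size=n_companies).astype(float)

# Hypothetical outcome: an "innovation output" index with a positive effect plus noise.
innovation_output = 2.0 + 0.8 * high_score_hires + rng.normal(0, 2.0, n_companies)

# Ordinary least squares via least-squares solve: y ~ intercept + hires.
X = np.column_stack([np.ones(n_companies), high_score_hires])
beta, *_ = np.linalg.lstsq(X, innovation_output, rcond=None)

print(f"Estimated intercept: {beta[0]:.2f}")
print(f"Estimated effect of each additional high-scoring hire: {beta[1]:.2f}")
```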
Initial Zeki datasets available on Snowflake Marketplace include AI Talent Flows, AI Talent Flows in the US and AI Talent Flows within Europe. All datasets include sector breakdowns for Finance, Technology and Health. Zeki can provide bespoke data at the individual, company, sectoral or country level.
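Snowflake Marketplace listings are mounted in the consumer's account as shared databases that can be queried with standard SQL. Since the release does not document the schema of the AI Talent Flows datasets, the database, table and column names below are hypothetical placeholders; only the use of the standard snowflake-connector-python library is taken as given.

```python
# Minimal sketch of querying a Snowflake Marketplace share from Python.
# The database/schema/table and column names are HYPOTHETICAL placeholders;
# consult the actual Zeki listing for its real schema.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # your Snowflake account identifier
    user="my_user",
    password="my_password",
    warehouse="ANALYTICS_WH",
)

try:
    cur = conn.cursor()
    # Hypothetical shared database created from the Marketplace listing.
    cur.execute("""
        SELECT sector, destination_country, COUNT(*) AS moves
        FROM ZEKI_AI_TALENT_FLOWS.PUBLIC.TALENT_FLOWS
        WHERE sector IN ('Finance', 'Technology', 'Health')
        GROUP BY sector, destination_country
        ORDER BY moves DESC
        LIMIT 20
    """)
    for row in cur.fetchall():
        print(row)
finally:
    conn.close()
```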
*Learn more about Zeki’s methodology and proprietary IP by downloading Zeki’s AI Companies 2024 Report for free.
About Zeki
Zeki holds the most accurate set of deep tech human capital intelligence data ever created. We leverage Zeki’s proprietary dataset to predict the future innovation potential of deep tech companies. Zeki is a women-led, diverse and global data company with over 30 years of relevant, interdisciplinary experience. Co-founded by Tom Hurd, who served as the most senior Homeland Security Advisor to four British Home Secretaries, Zeki draws on the skills of the intelligence community and applies this unique expertise in new ways. Learn more at www.thezeki.com.
Logo: https://mma.prnewswire.com/media/2463194/Zeki_Logo.jpg

View original content:https://www.prnewswire.co.uk/news-releases/zeki-launches-dataset-on-snowflake-marketplace-302199053.html



LG Opens New Chapter in Connected Home Living With “ThinQ ON” AI Home Hub at IFA 2024


Revolutionizing AI Homes With Affectionate Intelligence for Seamless Connectivity, Custom Comfort and Care
SEOUL, South Korea, Aug. 29, 2024 /PRNewswire/ — LG Electronics (LG) is unveiling the LG ThinQ ON AI home hub featuring Affectionate Intelligence at IFA 2024 in Berlin, Germany, from September 6-10. This innovative hub provides outstanding connectivity and expandability, positioning itself as the centerpiece of a comprehensive smart home ecosystem, where LG’s appliances and “empathetic” AI work in unison to deliver customized comfort, care and convenience.

The AI home hub seamlessly connects with a vast array of appliances and Internet of Things (IoT) devices, making it easy for anyone to configure and enjoy a personalized AI-powered home. Powered by LG’s unique Affectionate Intelligence, the ThinQ ON continuously learns from individual usage patterns and broader usage trends to tailor customers’ daily experiences, accelerating the realization of LG’s “Zero Labor Home” vision.
Compact and lightweight, LG ThinQ ON features a cylindrical form factor and a muted color palette (gray/white) that allows it to blend effortlessly with any space or décor. The device is equipped with an AI speaker that facilitates conversations with LG’s AI voice assistant and lets users listen to their favorite audio content. The ThinQ ON’s advanced capabilities are driven by LG’s high-performance AI chipset, designed with future scalability in mind.
Generative AI-based Home Solutions Tailored to Each Customer’s Preferences
The ThinQ ON allows users to control their AI appliances and living spaces with spoken commands or requests delivered in a natural, conversational manner. This creates an experience akin to conversing with friends or family, adding a more human dimension to user-device interactions.
In addition to comprehending everyday language, LG’s AI home hub can understand the context of conversations and determine user preferences for each connected appliance, IoT device and service, bringing a new level of “care” to the home. What’s more, the ThinQ ON autonomously monitors all elements within the smart home ecosystem, helpfully informing users when a task is completed (e.g., the washer has finished a cycle) or if any issue has been detected.
On top of this, ThinQ ON makes it possible to check or change nearly all appliance settings using only voice commands, and to configure convenience-enhancing routines that automate appliance/IoT operations.
Connectivity, Expandability and Versatility
LG ThinQ ON is Matter-certified* and supports various network connectivity options, including Wi-Fi and Thread, ensuring hassle-free setup and a seamless smart home experience. Compatible with a wide range of LG innovations and a growing number of appliances and IoT devices from other manufacturers, the ThinQ ON enables users to manage their entire smart home ecosystem from one place, both now and in the future.
To further enhance the connectivity of the ThinQ ON, LG acquired the smart home platform company Athom in July of this year. Athom’s flagship smart hub, Homey Pro, can connect to over 50,000 devices. The Homey App Store features around 1,000 applications for connecting and controlling products from various global brands. LG plans to maximize the advantages of the ThinQ ON’s open platform by continuously expanding the range of supported brands and devices.
Enhanced Security and Privacy Protection with LG Shield
The ThinQ ON utilizes LG Shield, the company’s proprietary security system, to protect customer data at all times – from collection and storage to usage. LG Shield encrypts user data and securely stores it in a separate server. Additionally, any required data modifications are made within a secure environment, preventing external parties from tampering with the operational code and ensuring customers’ safety while they enjoy the many benefits of LG’s latest AI-driven innovations.
“An AI voice assistant and hub that serves as the center of the smart home, ThinQ ON brings us significantly closer to realizing our vision of the Zero Labor Home,” said Lyu Jae-cheol, president of LG Electronics Home Appliance & Air Solution Company. “LG aims to make the ‘AI Home Lifestyle’ accessible to everyone, and will continue to deliver customized comfort, care and convenience through its Affectionate Intelligence solutions.”
Visitors to IFA 2024 (September 6-10) can see all of LG’s latest AI Home solutions, including ThinQ ON, at the company’s exhibition booth in Hall 18, Messe Berlin.
* Matter is the globally-adopted smart home connection protocol developed by the Connectivity Standards Alliance (CSA), of which LG Electronics is a member.
About LG Electronics Home Appliance & Air Solution Company
The LG Home Appliance & Air Solution Company is a global leader in home appliances, air solutions and smart home solutions featuring LG ThinQ. The company is creating various solutions with its industry-leading core technologies and is committed to making life better and more sustainable for consumers and the planet by developing thoughtfully designed kitchen appliances, living appliances, HVAC and air purification solutions. Together, these products deliver enhanced convenience, superb performance, efficient operation and sustainable lifestyle solutions. For more news on LG, visit www.LGnewsroom.com.

View original content:https://www.prnewswire.co.uk/news-releases/lg-opens-new-chapter-in-connected-home-living-with-thinq-on-ai-home-hub-at-ifa-2024-302233570.html



Supermicro Previews New Max Performance Intel-based X14 Servers for AI, HPC, and Critical Enterprise Workloads


A Complete Architecture Upgrade that Includes New Performance-Optimized CPUs, Next-Generation GPU support, Upgraded Memory MRDIMMs, 400GbE Networking, New Storage Options Including E1.S and E3.S drives, and Direct-to-Chip Liquid Cooling, based on the Upcoming Intel® Xeon® 6900 Series Processors with P-Cores (formerly codenamed Granite Rapids-AP) Supporting Demanding Workloads
SAN JOSE, Calif., Aug. 28, 2024 /PRNewswire/ — Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, is previewing new, completely re-designed X14 server platforms that will leverage next-generation technologies to maximize performance for compute-intensive workloads and applications. Building on the success of Supermicro’s efficiency-optimized X14 servers that launched in June 2024, the new systems feature significant upgrades across the board, supporting a never-before-seen 256 performance cores (P-cores) in a single node, MRDIMM memory at up to 8800MT/s, and compatibility with next-generation SXM, OAM, and PCIe GPUs. This combination can drastically accelerate AI and compute as well as significantly reduce the time and cost of large-scale AI training, high-performance computing, and complex data analytics tasks. Approved customers can secure early access to complete, full-production systems via Supermicro’s Early Ship Program, or test them remotely with Supermicro JumpStart.

“We continue to add to our already comprehensive Data Center Building Block solutions with these new platforms, which will offer unprecedented performance, and new advanced features,” said Charles Liang, president and CEO of Supermicro. “Supermicro is ready to deliver these high-performance solutions at rack-scale with the industry’s most comprehensive direct-to-chip liquid cooled, total rack integration services, and a global manufacturing capacity of up to 5,000 racks per month including 1,350 liquid cooled racks. With our worldwide manufacturing capabilities, we can deliver fully optimized solutions which accelerate our time-to-delivery like never before, while also reducing TCO.”
Click here for more information.
These new X14 systems feature completely re-designed architectures, including new 10U and multi-node form factors to enable support for next-generation GPUs and higher CPU densities, updated memory slot configurations with 12 memory channels per CPU, and new MRDIMMs which provide up to 37% better memory performance compared to DDR5-6400 DIMMs. In addition, upgraded storage interfaces will support higher drive densities, and more systems will feature liquid cooling integrated directly into the server architecture.
The new additions to the Supermicro X14 family comprise more than ten new systems, several of which are completely new architectures in three distinct, workload-specific categories:
GPU-optimized platforms designed for pure performance and enhanced thermal capacity to support the highest-wattage GPUs. System architectures have been built from the ground up for large-scale AI training, LLMs, generative AI, 3D media, and virtualization applications.
High compute-density multi-nodes including SuperBlade® and the all-new FlexTwin™, which leverage direct-to-chip liquid cooling to significantly increase the number of performance cores in a standard rack compared to previous generations of systems.
Market-proven Hyper rackmounts combining single or dual socket architectures with flexible I/O and storage configurations in traditional form factors to help enterprises and data centers scale up and out as their workloads evolve.
Supermicro X14 performance-optimized systems will support the soon-to-be-released Intel® Xeon® 6900 series processors with P-cores, and will also offer socket compatibility with Intel Xeon 6900 series processors with E-cores in Q1’25. This designed-in flexibility allows systems to be optimized for either performance per core or performance per watt.
“The new Intel Xeon 6900 series processors with P-cores are our most powerful ever, with more cores and exceptional memory bandwidth & I/O to achieve new degrees of performance for AI and compute-intensive workloads,” said Ryan Tabrah, VP and GM of Xeon 6 at Intel. “Our continued partnership with Supermicro will result in some of the industry’s most powerful systems that are ready to meet the ever-heightening demands of modern AI and high-performance computing.”
When configured with Intel Xeon 6900 series processors with P-cores, Supermicro systems support new FP16 instructions on the built-in Intel® AMX accelerator to further enhance AI workload performance. These systems include 12 memory channels per CPU with support for both DDR5-6400 and MRDIMMs up to 8800MT/s, CXL 2.0, and feature more extensive support for high-density, industry-standard EDSFF E1.S and E3.S NVMe drives.
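On Linux hosts, one quick way to confirm that a CPU exposes the AMX tile and FP16 extensions mentioned above is to inspect the feature flags in /proc/cpuinfo. This is a generic sanity check, not Supermicro or Intel tooling; the flag names (amx_tile, amx_bf16, amx_int8, amx_fp16) follow common Linux kernel naming and may differ on older kernels.

```python
# Rough sanity check for Intel AMX feature flags on a Linux host.
# Flag names follow common Linux kernel conventions and may vary by kernel version.
def cpu_flags(path: str = "/proc/cpuinfo") -> set[str]:
    """Return the set of CPU feature flags reported by the first 'flags' line."""
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

if __name__ == "__main__":
    flags = cpu_flags()
    for feature in ("amx_tile", "amx_bf16", "amx_int8", "amx_fp16"):
        status = "yes" if feature in flags else "no"
        print(f"{feature}: {status}")
```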
Supermicro Liquid Cooling Solutions
Complementing this expanded X14 product portfolio are Supermicro’s rack-scale integration and liquid cooling capabilities. With an industry-leading global manufacturing capacity, extensive rack-scale integration and testing facilities, and a comprehensive suite of management software solutions, Supermicro designs, builds, tests, validates, and delivers complete solutions at any scale in a matter of weeks.
In addition, Supermicro offers a complete in-house developed liquid cooling solution including cold plates for CPUs, GPUs and memory, Cooling Distribution Units, Cooling Distribution Manifolds, hoses, connectors, and cooling towers. Liquid cooling can be easily included in rack-level integrations to further increase system efficiency, reduce instances of thermal throttling, and lower both the TCO and Total Cost to Environment (TCE) of data center deployments.
Upcoming Supermicro X14 performance-optimized systems include:
GPU-optimized – The highest performance Supermicro X14 systems designed for large-scale AI training, large language models (LLMs), generative AI and HPC, and supporting eight of the latest-generation SXM5 and SXM6 GPUs. These systems are available in air-cooled or liquid-cooled configurations.
PCIe GPU – Designed for maximum GPU flexibility, supporting up to 10 double-width PCIe 5.0 accelerator cards in a thermally-optimized 5U chassis. These servers are ideal for media, collaborative design, simulation, cloud gaming, and virtualization workloads.
Intel® Gaudi® 3 AI Accelerators – Supermicro also plans to deliver the industry’s first AI server based on the Intel Gaudi 3 accelerator hosted by Intel Xeon 6 processors. The system is expected to increase efficiency and lower the cost of large-scale AI model training and AI inferencing. The system features eight Intel Gaudi 3 accelerators on an OAM universal baseboard, six integrated OSFP ports for cost-effective scale-out networking, and an open platform designed to use a community-based, open-source software stack, requiring no software licensing costs.
SuperBlade® – Supermicro’s X14 6U high-performance, density-optimized, and energy-efficient SuperBlade maximizes rack density, with up to 100 servers and 200 GPUs per rack. Optimized for AI, HPC, and other compute-intensive workloads, each node features air cooling or direct-to-chip liquid cooling to maximize efficiency and achieve the lowest PUE with the best TCO, as well as connectivity via up to four integrated Ethernet switches with 100G uplinks and front I/O supporting a range of flexible networking options up to 400G InfiniBand or 400G Ethernet per node.
FlexTwin™ – The new Supermicro X14 FlexTwin architecture is designed to provide maximum compute power and density in a multi-node configuration with up to 24,576 performance cores in a 48U rack. Optimized for HPC and other compute-intensive workloads, each node is cooled exclusively by direct-to-chip liquid cooling to maximize efficiency and reduce instances of CPU thermal throttling, and offers HPC low-latency front and rear I/O supporting a range of flexible networking options up to 400G per node.
Hyper – X14 Hyper is Supermicro’s flagship rackmount platform designed to deliver the highest performance for demanding AI, HPC, and enterprise applications, with single or dual socket configurations supporting double-width PCIe GPUs for maximum workload acceleration. Both air cooling and direct-to-chip liquid cooling models are available to facilitate the support of top-bin CPUs without thermal limitations and reduce data center cooling costs while also increasing efficiency.
About Super Micro Computer, Inc.
Supermicro (NASDAQ: SMCI) is a global leader in Application-Optimized Total IT Solutions. Founded and operating in San Jose, California, Supermicro is committed to delivering first-to-market innovation for Enterprise, Cloud, AI, and 5G Telco/Edge IT Infrastructure. We are a Total IT Solutions provider with server, AI, storage, IoT, switch systems, software, and support services. Supermicro’s motherboard, power, and chassis design expertise further enables our development and production, driving next-generation innovation from cloud to edge for our global customers. Our products are designed and manufactured in-house (in the US, Taiwan, and the Netherlands), leveraging global operations for scale and efficiency and optimized to improve TCO and reduce environmental impact (Green Computing). The award-winning portfolio of Server Building Block Solutions® allows customers to optimize for their exact workload and application by selecting from a broad family of systems built from our flexible and reusable building blocks that support a comprehensive set of form factors, processors, memory, GPUs, storage, networking, power, and cooling solutions (air-conditioned, free air cooling or liquid cooling).
Supermicro, Server Building Block Solutions, and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc.
All other brands, names, and trademarks are the property of their respective owners.
Photo – https://mma.prnewswire.com/media/2491086/SYS_A22GA_NBRT.jpg
Photo – https://mma.prnewswire.com/media/2491087/SYS_522GA_NRT.jpg
Photo – https://mma.prnewswire.com/media/2491088/SYS_422GA_NBRT_LCC.jpg
Photo – https://mma.prnewswire.com/media/2491089/SYS_222FT_HEA_LCC.jpg
Photo – https://mma.prnewswire.com/media/2491090/SYS_212HA_TN.jpg
Photo – https://mma.prnewswire.com/media/2491091/SBI_612BA_1NE34.jpg
Logo – https://mma.prnewswire.com/media/1443241/Supermicro_Logo.jpg

View original content:https://www.prnewswire.co.uk/news-releases/supermicro-previews-new-max-performance-intel-based-x14-servers-for-ai-hpc-and-critical-enterprise-workloads-302232535.html



Lucinity Mentioned in the 2024 Gartner® Banker’s Guide to AML Tools for Productivity


REYKJAVIK, Iceland, Aug. 28, 2024 /PRNewswire/ — Lucinity has been mentioned in the recent 2024 Gartner Banker’s Guide to AML Tools for Productivity report, released on August 13th, 2024 by Pete Redshaw. As stated by Gartner in this report, “Banks’ biggest opportunity with AML tools lies in greater productivity and efficiency. Bank CIOs should not fall into the trap of focusing exclusively on improved risk scoring at the expense of more effective AML case investigations.” Gartner mentions Lucinity as a tool used for orchestration and business process management and as an AML vendor cited by third-party partners.

Gartner’s report highlights key findings, stating, “Banks’ primary drivers for anti-money-laundering (AML) replacements are increasing the productivity of case investigators and reducing the total cost of ownership (TCO) for AML.” The recommendation by Gartner states, “TCO: Calculate the extent to which a replacement AML system could boost your case investigators’ productivity, accuracy and consistency. Gartner’s view is that raising the productivity for your AML workforce by just a few percentage points, within the case management capability, will likely outweigh all the additional license and usage costs paid to the new AML vendor.” Lucinity’s platform is designed with this in mind as an AI-powered FinCrime operating system that takes investigations from hours to minutes.
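Gartner’s argument, that a few percentage points of investigator productivity can outweigh the cost of a new AML system, is easy to sanity-check with a back-of-envelope calculation. All figures below (team size, fully loaded cost per investigator, license cost, productivity gain) are illustrative assumptions, not numbers from Gartner or Lucinity.

```python
# Illustrative back-of-envelope model of Gartner's productivity argument.
# Every figure here is a hypothetical assumption for the sake of the example.
investigators = 200                 # AML case investigators
cost_per_investigator = 90_000      # fully loaded annual cost, in currency units
productivity_gain = 0.03            # "just a few percentage points"
annual_license_cost = 400_000       # incremental cost of the new AML system

# Value of the investigator capacity freed up by the productivity gain.
capacity_value = investigators * cost_per_investigator * productivity_gain
net_benefit = capacity_value - annual_license_cost

print(f"Capacity value from a {productivity_gain:.0%} gain: {capacity_value:,.0f}")
print(f"Net annual benefit after license cost: {net_benefit:,.0f}")
```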
Gartner also recommends, as part of a holistic approach, to “Source more of the AML suite from a single vendor. This approach will be more convenient (a single point of contact when things need fixing), as well as more effective, as the vendor is likely to be better integrated at a technical level and a process level.” Lucinity addresses this need by offering a unified case manager that seamlessly integrates AML, fraud, and sanctions cases into one comprehensive system.
Gartner observes that “AML vendors are increasingly using machine learning (ML) and generative AI (GenAI) to improve their offerings’ effectiveness (detection rate and accuracy ratio) and efficiency (automation, AI assistants and workflow).” Lucinity’s AI-powered copilot, Luci, is a prime example of this trend, automating routine tasks, guiding investigators through complex workflows, and minimizing errors to expedite investigations. Additionally, the Luci AI Studio enables teams to configure their AI to their specific needs, ensuring compliance with regulatory standards.
Gartner further stresses the need for explainable AI in AML systems, advising readers to “Apply AI methods to augment risk scoring only where it is explainable to regulators.” We believe this aligns with Lucinity’s commitment to transparency, as every AI-driven decision within its platform is meticulously documented, providing a clear audit trail that meets rigorous regulatory requirements.
Guðmundur Kristjánsson, CEO of Lucinity, reflected on the report, stating, “It’s very insightful to see how closely the challenges in the market align with the pain points we address at Lucinity. We believe this alignment drives our commitment to delivering innovative AI solutions that unlock productivity and cost savings for financial institutions.”
GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.
Gartner does not endorse any vendor, product, or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
Media contacts: Celina, [email protected], +354 792 4321
Logo – https://mma.prnewswire.com/media/2208676/4881394/Lucinity_Logo.jpg

View original content:https://www.prnewswire.co.uk/news-releases/lucinity-mentioned-in-the-2024-gartner-bankers-guide-to-aml-tools-for-productivity-302233142.html
