
Artificial Intelligence

The Future of AI Will Take a Different, More General Approach

Santa Clara, CA, Oct. 09, 2021 (GLOBE NEWSWIRE) — The California-based startup ORBAI has developed and patented a design for AGI that can learn more like the human brain by interacting with the world, encoding and storing memories in narratives, dreaming about them to form connections, creating a model of its world, and using that model to predict, plan and function at a human level, in human occupations.

With this technology, ORBAI aims to develop Human AI, with fluent conversational speech on top of this AGI core, to provide AI professional services, all the way from customer service to legal, medical, to financial advice, with these services provided online, inexpensively, for the whole world. The core of the Legal AI has already been tested in litigation, with great success.

Brent Oster, President and CEO of ORBAI, has helped Fortune 500 companies (and startups) looking to adopt ‘AI’, but consistently found that deep learning architectures and tools fell far short of their expectations. He started ORBAI to develop something better for them.

Today, if we browse the Internet for news on AI, we find that AI just accomplished something humans already do, yet far better. Still, it isn’t easy to develop artificial general intelligence (AGI) through human-created algorithms. Do you think AGI may require machines to create their own algorithms? According to you, what is the future of machines that learn to learn?

This is correct. Today, people design deep learning networks by hand, defining the layers and how they connect, but even after a lot of tinkering, they can only get each network to do a specific task: CNNs for image recognition, RNNs for speech recognition, or reinforcement learning for simple problem solving like games or mazes. All of these require a very well-defined and constrained problem, plus labelled data or human input to measure success and train on. This limits the effectiveness and breadth of application of each of these specific methods.

ORBAI has built a toolset called NeuroCAD (https://www.orbai.ai/neurocad-v4-1.htm) that uses a process with genetic algorithms to evolve more powerful, general-purpose spiking neural networks, shaping them to fill in the desired functionality, so yes, the tools are designing the AI. One example is our SNN autoencoder, which can learn to take in any type of 2D or 3D spatial-temporal input, encode it to a latent, compressed format, and also decode it. The cool part is you don’t have to format or label your data; it learns the encoding automatically. This takes the functionality of CNNs, RNNs, LSTMs, and GANs and combines them into one more powerful, general-purpose analog neural network that can do all these tasks. By itself this is very useful, as the output can be clustered, then the clusters labelled, associated with other modalities of input, or used to train a conventional predictor pipeline.
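The encode-to-latent, decode-back idea described above can be illustrated with a deliberately trivial sketch. This is not ORBAI's SNN autoencoder; it is a toy stand-in (plain window averaging instead of a learned mapping) that only shows what "encode to a compressed latent format and decode it" means:

```python
def encode(signal, factor=2):
    """Compress a 1-D signal by averaging non-overlapping windows of size `factor`."""
    return [sum(signal[i:i + factor]) / factor for i in range(0, len(signal), factor)]

def decode(latent, factor=2):
    """Reconstruct (lossily) by repeating each latent value `factor` times."""
    return [v for v in latent for _ in range(factor)]

signal = [1.0, 1.0, 4.0, 4.0, 9.0, 9.0]
latent = encode(signal)      # half the size of the input
restored = decode(latent)    # reconstruction from the latent code
error = sum((a - b) ** 2 for a, b in zip(signal, restored))
```

In a real system the encode and decode mappings would be learned rather than fixed, and the latent vectors could then be clustered and labelled, or fed to a downstream predictor, as described above.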

But this is for designing components. There is a second level to NeuroCAD that allows these components to be assembled and connected into structures, and these composite structures can be evolved to do very general tasks. For example, we may want to build a robot controller, so we put two vision autoencoders for stereo vision, a speech recognition autoencoder for voice commands, and autoencoders for sensors and motion controllers. Then we put an AI decision making core in the middle, that can take in our encoded inputs, store them in memory, learn how sequences of these inputs evolve in time, and store models for what responses are required. Again, all of these autoencoders and components are evolved to their specific area, and how they connect is evolved, as is the decision core in the middle.

To get this to work, we have to take some guesses about how to design this artificial decision core, the brain in the middle, and seed the genetic algorithms with a couple decent designs, so it will process the sensory input, store it and build relationships between the memories, build narratives with inputs and actions with progressively more advanced models that make the robot better able to understand what to do given specific instructions and the state of its world. Once we have an initial guess, we start evolving the components and how they connect to each other and the architecture of the decision making core.

So the short answer is yes, we will have evolutionary genetic algorithms design our AI, from the components, to the way they connect, to how they solve problems, starting with small ‘brains’ and working up, like biological evolution did.
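The evolutionary loop described above (selection, crossover, and mutation over network "genomes") can be sketched in a few lines. This is an illustrative toy, not NeuroCAD: the two-element genome and the fitness function are invented stand-ins for evolved network parameters:

```python
import random

def make_genome():
    # Hypothetical genome: (neuron count, connection density).
    return [random.randint(4, 64), random.random()]

def fitness(genome):
    # Stand-in objective: prefer mid-sized, moderately connected networks.
    neurons, density = genome
    return -abs(neurons - 32) - abs(density - 0.5) * 10

def evolve(pop_size=20, generations=50):
    population = [make_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]          # selection
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # crossover
            if random.random() < 0.3:                            # mutation
                child[0] = max(4, child[0] + random.randint(-4, 4))
                child[1] = min(1.0, max(0.0, child[1] + random.uniform(-0.1, 0.1)))
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
```

The same loop generalizes from tuning one component to evolving how components connect, as in the composite structures described above; only the genome encoding and the fitness measure change.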

For details, see the ORBAI Patents and NVIDIA GTC presentations listed at the bottom of our AGI page: https://www.orbai.ai/artificial-general-intelligence.htm

Many experts, including computer scientists and engineers, predict that artificial general intelligence (AGI) is possible in the near future. But, ORBAI shows us that it is coming even sooner than we likely anticipated. Could you please shed some light on the project and tell us more about the 3D characters?

What is usually meant is superhuman AGI, which is the apex of this process, but there are degrees and flavors of artificial general intelligence along the way.

– Having more general neural nets that can combine the functionality of CNNs, RNNs, RL, and other Gen 2 AI components into one neural net architecture that is more general and more powerful – One year

– Building an artificial intelligence that can take in sensory inputs, form memories and associations between them, plan and make decisions with them, at the level of insect – Two years, a rodent – Three years

– Human-like conversational speech and general-purpose decision making, but trained only in a specific vocation – four years for a first implementation, six years to make it really work. Some vocations like law and medicine have constrained spaces of information and decisions, so they are easier than building a general human

– These vocational AIs can be trained independently, then later migrated to a common architecture and combined to form a multi-skilled AGI. It would not be a general human AI, but it would have superhuman capability in areas of each profession, deeper and wider knowledge, and the ability to model the future, plan, and predict better than humans.

– Perfecting AGI, making a completely conversational, human-level general AI that is indistinguishable from us and can pass a Turing Test will most likely require building a synthetic AGI, that is much more powerful than human, that can then use all that power to emulate or mimic a human being, if that is what we want it to do.

What most people talk about as AGI is actually superhuman artificial general intelligence. But how do we measure “superhuman”? Deep learning AI is already superhuman in some very specific areas, and with advances like those ORBAI is making, it will become superhuman in broader professional areas of analysis, planning, and prediction. We will have better conversational speech, and we might pass the Turing test in 4-6 years, but how can speech become superhuman after that? Mastering 8 languages or more? Hm, this gets a bit muddier. I think superhuman is when AGI can solve most problems and predict into the future far better than us.

We base our AGI curve on Moore’s Law, and unlike current Gen 2 DNN-based AI, we are using analog neural net computers that scale proportionally with existing hardware, evolve to become more efficient, and gain greater capability with time.

So in summary what ORBAI is building is an AGI that can take in and analyze large amounts of arbitrary format and types of input data, build models of how its perceived world works, and make predictions and plan using those models, then apply that to specific fields like law, medicine, finance, enterprise planning, administration, agriculture, and others. Because human speech fits this concept of modelling a bi-modal sequence of events, it will be a feature, with the speech anchored to the rest of the memories and world data to give it context and relevance.

From ordering groceries through Alexa to writing an email with Siri, AI has been transforming many aspects of our lives. According to you, how will ORBAI’s 3D characters help people transform their lives and bring a change?

I have personally used the Alexa, Google and Siri voice interfaces in my home and have done my best to integrate them into my life and make use of them, but I still find them difficult and awkward, always feeling like there is an easier way to do the same task on a mobile screen. I think this is because these voice interfaces are the equivalent of the DOS-era command-line interfaces, where you state a command and then a set of parameters, which have to be properly formatted and correct, like “Alexa, what is the weather in Seattle tomorrow”, and the speech has to be crisply enunciated in an abnormal, staccato pattern. ORBAI did a lot of work in 2019 testing many speech APIs in the home and at conferences with holographic character kiosks, and found that most ordinary people cannot figure out how to talk to them properly, don’t know how to cue the device to listen, and tend to launch into long, rambling monologues, so voice interfaces just don’t work for them.

By creating a more advanced, conversational AI that uses our core technology to parse speech, and understand the flow of human speech, to tie it to memories of real concepts, events, and contexts, we can do a more natural back and forth flow of conversation between the person and AI that is much more relevant and grounded, and the AI can direct the conversation to get specific information from the user. Having a 3D character onscreen is more to get the person to look at the device, speak clearly into the microphone, and so the AI can watch the person speak, pick up facial expressions and even lip-read to augment the speech recognition. The characters also have a cool factor, will make our products recognizable and also make for excellent branding. There are already attorneys that fear Justine Falcon, Legal AI.

Having inexpensive access to professional services like legal, medicine, finance, education, and other information professions online with AI would greatly improve many people’s lives, even if it initially just allows them to do a Q&A session with the AI, which leads them through better defining the issues they are having and then refers them to a professional, like a doctor, providing a concise and professionally written report in the correct language and format. This would reduce office visit time and determine whether coming in is even necessary. Speaking with a lawyer about an issue is difficult for an average person because law is so precisely defined, and its language differs greatly from plain language and concepts, so the AI would act as a bridge to translate between them. For the developing world, extending the AI’s capability to doing medical diagnosis, and actually acting in a professional capacity in other fields, would provide people with their first and only access to these kinds of services, both changing and saving lives. With the advent of more advanced AGI for doing diagnosis, litigation, and financial planning, the sky becomes the limit.

AI has already proven successful at automating a lot of tasks. Still, most laypeople have a universal question: can AI completely replace flesh-and-blood professionals in the future, or will it just act as a support system to ease the pressure of work?

The two hardest professions to replace will be Housekeeper and Handyman, simply because these professions require great manual dexterity, and ability to solve a wide variety of unstructured spatial problems, using various tools to accomplish an almost unlimited variety of tasks, and a robot body that is strong and dexterous enough with enough power to work all day at these tasks.

The simpler professions to automate with AGI will be the information professions, where a large amount of knowledge, mental models built on that knowledge, and a limited scope of actions or outcomes to perform constrain the problem. That is why we picked an AI lawyer, and an AI doctor to be the first candidates for AGI – they are both structured information professions like this.

We have seen how in many situations, AI and automation augment professions. ATMs and online banking reduced the work that bank tellers personally have to do, but mostly offloaded the mundane and repetitive work. Most likely this trend of AI augmenting humans will continue.

We have been told that ORBAI is launching an equity fundraising campaign. Could you please tell us how people can invest in the future and, in turn, what benefits they will be getting?

Yes, ORBAI launched an equity crowdfunding campaign on 24 Sept 2021 on www.startengine.com/orbai. The details of the offering are on our campaign page, but the SEC rules prevent us from communicating any specifics to the public, as it would be solicitation. StartEngine also has a great deal of general information about equity crowdfunding at www.startengine.com.

Media Contact:
ORBAI Technologies, Inc.
Brent Oster
+1 408-963-8671
[email protected]

GlobeNewswire is one of the world's largest newswire distribution networks, specializing in the delivery of corporate press releases, financial disclosures and multimedia content to the media, investment community, individual investors and the general public.


Compact Solutions, Mighty Results: Cervoz Unleashes the Power of Edge Computing


TAIPEI, May 20, 2024 /PRNewswire/ — Cervoz Technology, a leader in industrial-grade storage, memory, and expansion solutions, is powering the evolution of edge computing with its innovative solutions.

In today’s data-rich landscape, businesses rely on insights from IoT devices. With advancements in AI and 5G, data volumes surge, straining traditional cloud computing with bandwidth and latency issues. Edge Computing emerges as a solution, processing data at its source for real-time insights crucial for applications like digital twins, autonomous vehicles, and smart healthcare.
What is Edge Computing?
Edge Computing is a decentralized IT architecture that processes data near its origin, often at the network’s edge. This approach divides tasks from central data centers to edge devices and nodes. It enables faster, deeper insights and enhances responsiveness, improving speed and reducing latency compared to centralized clouds. Typically, an Edge Computing system comprises three layers: device, edge, and cloud.
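As a rough illustration of the three-layer division of labor described above, here is a hypothetical pipeline in which raw device readings are filtered at the edge so that only anomalies travel to the cloud; all names, values, and the threshold are invented for illustration:

```python
def device_layer():
    # Device layer: simulated sensor readings (e.g. temperatures in °C).
    return [21.5, 21.7, 95.0, 21.6, 22.0, 98.3]

def edge_layer(readings, threshold=50.0):
    # Edge layer: filter locally, forwarding only anomalous readings upstream.
    return [r for r in readings if r > threshold]

def cloud_layer(anomalies):
    # Cloud layer: long-term analytics on the much smaller forwarded set.
    return {"count": len(anomalies), "max": max(anomalies)} if anomalies else {}

report = cloud_layer(edge_layer(device_layer()))
```

The point of the sketch is the data-volume reduction: six readings enter at the device layer, but only the two anomalies cross the network to the cloud, which is what cuts bandwidth use and latency.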
The Device Layer
The device layer, situated closest to the physical world, comprises endpoint devices such as sensors, controllers, and cameras, among others. The layer prioritizes data collection and initial processing for real-time response and efficiency. It requires highly reliable, compact, and energy-efficient components. Cervoz’s M.2 2230 (A+E key /B+M key) NVMe SSDs and DDR4 SO-DIMM modules optimize this setup with fast data access and efficient storage in space-saving designs. They operate reliably across a wide temperature range (-40°C to 85°C) and feature additional conformal coating and anti-vibration filling for enhanced performance in harsh environments.
The Edge Layer
The edge layer, composed of edge servers and gateways, serves as a crucial link between device-layer data sources and cloud-based operations. It is designed for extensive data handling, filtering, and analytics, emphasizing quick operations and real-time responses to minimize reliance on remote data centers. For optimal performance, components must manage high processing loads with minimal latency and maximum stability. NVMe SSDs boasting PCIe Gen3 or higher, such as the Cervoz NVMe PCIe Gen3x4 SSD, with compact M.2 form factors, an advanced DRAM cache buffer, and LDPC ECC technology, are perfectly suited for the edge layer. These SSDs enable efficient data handling and error correction during high-speed data transfers, thus enhancing data accuracy and reliability.
The Cloud Layer
The cloud layer requires intensive processing and long-term storage, performing analytics, operating AI and machine learning, and managing extensive data systems across various edge locations. It demands high-performing and reliable components. Cervoz NVMe PCIe Gen4x4 SSDs excel with Read/Write speeds of up to 7,100/6,190 MB/s and 4K random IOPS up to 1,000K, with aluminum heatsinks and thermal throttling for overheating issues. DDR5-5600 DRAM boosts this setup with high data throughput and efficient power use, satisfying rigorous computational needs.
Interlayer Connectivity
Reliable, low-latency network connectivity is crucial as data flows between the device layer, edge, and cloud. Cervoz 10GbE Low-Profile Ethernet Card provides high-speed, stable wired connections in high-bandwidth, low-latency environments. Additionally, the new PCIe Compact Low-Profile Ethernet Card is ideal for narrow edge device enclosures. For areas where wiring is impractical, M.2 Wi-Fi cards offer essential wireless flexibility. These connectivity options ensure a solid foundation for seamless data management across all layers.
Cervoz offers comprehensive solutions to enhance edge computing capabilities, from compact NVMe SSDs and DRAM modules to advanced modular expansion cards, enabling seamless integration, optimized performance, and unmatched reliability in edge computing deployments.
About Cervoz
Based in Taiwan, Cervoz Technology boasts almost twenty years of expertise in developing and providing industrial-grade storage, memory, and expansion solutions across a myriad of global industrial sectors.
Contacts
Sales: [email protected]
Photo – https://mma.prnewswire.com/media/2414604/2024_05_PR_Compact_Solution_1920x1080.jpg

View original content:https://www.prnewswire.co.uk/news-releases/compact-solutions-mighty-results-cervoz-unleashes-the-power-of-edge-computing-302147340.html



Team Exponentia.ai UK attended the AWS Summit in London


Team Exponentia.ai UK celebrates Sanjay Kumar’s insightful session at the AWS Summit in London
LONDON, May 20, 2024 /PRNewswire/ — Sanjay Kumar, Head of Exponentia.ai in the UK & Europe, delivered an impactful presentation on Driving Manufacturing Innovation through AWS Analytics & AI at the recently concluded AWS Summit, held on 24 April at ExCeL, London.

Sanjay shared the stage at Theatre B with Adrian Pinder from DS Smith and Ahmed Raafat from AWS, captivating the audience with insights and experiences from Exponentia.ai’s collaboration with DS Smith.
During the session, Sanjay Kumar highlighted Exponentia.ai’s impressive use cases and case studies, including the Data Factory and Procurement Spend Analytics (PSA) projects undertaken in partnership with DS Smith. These initiatives demonstrated Exponentia.ai’s commitment to leveraging the AWS ecosystem to drive operational efficiency and business growth in the manufacturing sector.
The Exponentia.ai team seized the opportunity to engage with industry peers, partners, and friends, while also exploring the latest innovations showcased at the summit. Their participation in various sessions and visits to innovative booths further enriched their understanding of industry trends and best practices.
Reflecting on the event, Sanjay Kumar expressed his enthusiasm, stating, “The AWS Summit in London provided an excellent platform for us to showcase our expertise in manufacturing analytics and AI. The overwhelming response from attendees reaffirms the value of our solutions in driving innovation and transformation across industries and our commitment to deliver value to our customers through data. We look forward to continuing our collaboration with AWS and our partners to deliver impactful outcomes for our clients.”
The AWS Summit London 2024 marked a significant milestone for Exponentia.ai, reaffirming its commitment to excellence and innovation in the field of AI and analytics. As the team eagerly anticipates future events, they remain dedicated to empowering businesses with transformative technology solutions.
For more information about Exponentia.ai and its innovative offerings, please visit www.exponentia.ai. 
Photo: https://mma.prnewswire.com/media/2415812/Team_Exponentia_ai_UK.jpg
Logo: https://mma.prnewswire.com/media/2415803/Exponentia_ai_Logo.jpg

View original content to download multimedia:https://www.prnewswire.co.uk/news-releases/team-exponentiaai-uk-attended-the-aws-summit-in-london-302148857.html



Clarivate Launches AI-Enhanced Solution to Accelerate Trademark Watching


Providing faster and more accurate answers to critical business questions
LONDON, May 20, 2024 /PRNewswire/ — Clarivate Plc (NYSE: CLVT), a leading global provider of transformative intelligence, launched the first publicly available version of Trademark Watch Analyzer today at the 2024 International Trademark Association Annual Meeting. As the next-generation trademark protection solution enhanced by artificial intelligence (AI) and cloud technology, this solution combines Clarivate global trademark and case law data, in-house IP expertise, and cutting-edge AI technology. It will provide faster and more accurate answers to critical business questions by automating key trademark watching tasks and intelligently prioritizing result sets.

Trademark Watch Analyzer connects users to trademark datasets covering 258 countries and territories across 191 official trademark registers, as well as data from more than 7 million trademark litigation cases. This content is harmonized and connected through AI algorithms which query, connect and mine both datasets, delivering advanced insights in supported watch products. This revolutionizes the way trademark watch results are delivered, so that clients can rank results based on their chance of success or opposition.
According to data from SAEGIS®, a CompuMark™ trademark solution from Clarivate, the number of trademark applications filed globally each year has more than doubled since 2014. With over one million new trademarks added to trademark registers around the world each month, the risk of infringement has never been greater, making vigilance of trademark applications essential to protect and grow strong brands.
Gordon Samson, President, Intellectual Property, Clarivate, said: “As the global business landscape grows more complex, trademark professionals are faced with challenges including more data, less context and shorter deadlines. Our advanced AI-driven solution enables clients to confidently monitor their trademarks anywhere in the world, with global monitoring and automated alerts, while also saving time, costs and critical resources. The launch of the Trademark Watch Analyzer is the latest example of our Think forward™ promise — connecting clients to trusted intelligence to ensure an IP-empowered tomorrow.”
With a more intuitive design and user interface, Trademark Watch Analyzer will enable clients to have a much-improved user experience as they work with their results. The navigation will be based on the same architecture as the Brand Landscape Analyzer launched in 2023, providing clients with a more cohesive experience within the Clarivate product suite.
Find out more about Trademark Watch Analyzer here.
About Clarivate
Clarivate™ is a leading global provider of transformative intelligence. We offer enriched data, insights & analytics, workflow solutions and expert services in the areas of Academia & Government, Intellectual Property and Life Sciences & Healthcare. For more information, please visit www.clarivate.com.
Media Contact: Jack Wan, External Communications Director [email protected]
Logo – https://mma.prnewswire.com/media/1159266/Clarivate_Logo.jpg

View original content:https://www.prnewswire.co.uk/news-releases/clarivate-launches-ai-enhanced-solution-to-accelerate-trademark-watching-302149043.html

