
Beyond GPUs: The Future of AI Computing with Perspectives from Intel, GE Research, Groq, and Rain



Edge AI, also known as edge intelligence, refers to the convergence of two dynamic technological trends: edge computing and artificial intelligence (AI). In today’s tech landscape, the proliferation of mobile devices and advancements in communication technologies have fueled the rise of edge computing, which focuses on processing data at the network edge.

Simultaneously, AI has made remarkable strides in the last half-decade, particularly in deep learning and hardware improvements, enabling powerful AI applications. The combination of these forces results in Edge Intelligence. 

A report by Fortune Business Insights projects the edge AI market to reach USD 107.47 billion by 2029, at a CAGR of 31.7% over 2023-2029. In recent years, edge AI applications and use cases have gained the most traction in the automotive, healthcare, and manufacturing sectors. Edge AI also shows promise for cybersecurity, particularly in safeguarding sensitive data.

The surge in edge AI use cases is also driving adoption of 5G networks and IoT-based edge computing solutions that connect IT and telecom. As demand for edge AI and its applications increases, so do business leaders’ questions about what hardware and infrastructure their companies need to deploy edge AI across the board.

Emerj Senior Editor Matthew DeMello recently spoke with leaders from Rain, Intel, GE Research, and Groq on the ‘AI in Business’ podcast in the special series ‘Beyond GPUs’ to discuss the hardware and infrastructure requirements for edge AI.

This article summarizes the following insights and strategies for edge AI development that these leaders shared during their podcast appearances:

  • Leveraging everyday devices like cell phones for the foundation of future edge AI use cases
  • The difference between domain and algorithmic knowledge in developing edge AI systems with computer vision suited for manufacturing use cases
  • The three main factors driving the expansion of computer vision and AI
  • The growing importance of inference as the end-game of model development

Emerj would like to thank our guests for sharing their knowledge and perspectives on AI adoption. In the episode summaries below, enterprise leaders can find a breakdown of these insights and how best to frame their application to the widest possible range of industries and contexts.

Finding ROI for AI at the Edge – with Gordon Wilson of Rain


Guest: Gordon Wilson, CEO of Rain

Expertise: Artificial Intelligence, machine learning and data analysis

Brief recognition: Gordon Wilson is the CEO and co-founder of Rain. He holds a degree in Statistics and Mathematics from the University of Florida and previously worked in political campaigning and the non-profit sector for several years.

Gordon discusses the concept of continuous deployment and adaptation of machine learning models in various contexts. He highlights how traditional deployment of machine learning models involves:

  • Training a model on a fixed dataset
  • Loading the parameters into the model
  • Deploying the model

However, the model remains static in such cases and doesn’t improve or adapt after deployment.

Gordon emphasizes shifting towards a new paradigm where models can continuously learn and adapt even after deployment. He suggests that this continuous learning and improvement of models presents new possibilities for enhancing user experiences and performance. He encourages business leaders to consider the potential benefits of models that can learn and improve over time.
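
To make the contrast concrete, below is a minimal sketch of the two paradigms, using scikit-learn’s SGDClassifier purely as a stand-in for an adaptable model; the drifting synthetic data and all names are illustrative, not a description of Rain’s actual stack:

```python
# Contrast between one-shot ("train, freeze, deploy") and continual updates.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def make_batch(n=256, drift=0.0):
    """Synthetic two-class data whose decision boundary drifts over time."""
    X = rng.normal(size=(n, 4))
    y = (X[:, 0] + drift * X[:, 1] > 0).astype(int)
    return X, y

# Static paradigm: train once on a fixed dataset, then never update.
X0, y0 = make_batch()
static_model = SGDClassifier(random_state=0).fit(X0, y0)

# Continual paradigm: keep calling partial_fit as new data arrives.
online_model = SGDClassifier(random_state=0)
online_model.partial_fit(X0, y0, classes=np.array([0, 1]))

for step in range(1, 6):
    X, y = make_batch(drift=0.3 * step)   # the world changes after deployment
    print(f"step {step}: static={static_model.score(X, y):.2f}"
          f" online={online_model.score(X, y):.2f}")
    online_model.partial_fit(X, y)        # the online model adapts; the static one cannot
```

On drifting data, the statically deployed model’s accuracy decays while the continually updated one tracks the change, which is the shift in paradigm Gordon describes.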

Gordon gives an example of drawing on not just one cell phone but the entirety of visual information recorded on mobile devices around the world every day as a foundation for future edge AI use cases:

“Every phone today has a camera. Every phone is taking in visual information. There are so many applications and so many use cases that could be enabled if we could embed this intelligence into every mobile phone in the world. Whether it’s about recognizing objects, to helping people navigate their world, recognizing the plants in their yard, or creating extended augmented reality experiences through their phone – even to creating more advanced security protocols. There are billions of devices in the world today that have access to rich visual information, but they can’t act on it. We want to make it possible.”

– Gordon Wilson, CEO and Co-founder of Rain
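
As a rough illustration of the kind of on-device vision workload Wilson describes, the sketch below classifies a single camera frame with a small, mobile-friendly network. It assumes torchvision is available; the image path is hypothetical:

```python
import torch
from torchvision import models, transforms
from PIL import Image

# MobileNetV3-Small is sized for phone-class hardware.
model = models.mobilenet_v3_small(weights=models.MobileNet_V3_Small_Weights.DEFAULT)
model.eval()

# Standard ImageNet preprocessing for the pretrained weights.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

frame = Image.open("camera_frame.jpg")   # hypothetical camera capture
batch = preprocess(frame).unsqueeze(0)

with torch.inference_mode():             # inference only, no gradients
    probs = model(batch).softmax(dim=1)
print("top class index:", int(probs.argmax()))
```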

Looking at the existing compute infrastructure, he points out that GPUs took off and continue to perform well because the GPU is fundamentally a very scalable architecture. He explains that GPUs are built with parallel cores and scale well with Moore’s law.

Not only can many of these cores be integrated onto a single chip, sometimes numbering in the thousands, but many of these chips can also be connected on boards to form larger systems, similar to NVIDIA’s DGX pod.
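
A minimal sketch of that scaling pattern in PyTorch, assuming one or more CUDA devices are visible (DataParallel is used here for brevity; multi-node systems like a DGX pod would typically use DistributedDataParallel):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10))

if torch.cuda.device_count() > 1:
    # Replicate the model across all visible GPUs; each gets a slice of the batch.
    model = nn.DataParallel(model).cuda()
elif torch.cuda.is_available():
    model = model.cuda()

x = torch.randn(4096, 1024, device="cuda" if torch.cuda.is_available() else "cpu")
y = model(x)   # the batch is split, processed in parallel, and re-gathered
print(y.shape)
```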

A Closer Look at Manufacturing Challenges for Computer Vision – with Peter Tu of GE Research


Guest: Peter Tu, Chief Scientist for Artificial Intelligence, GE Global Research

Expertise: Video analytics, computer vision, face expression analytics, articulated motion analysis 

Brief Recognition: Dr. Tu has helped to develop a large number of analytic capabilities, including person detection from fixed and moving platforms, crowd segmentation, multi-view tracking, person reacquisition, face modeling, face expression analysis, face recognition at a distance, face verification from photo IDs and articulated motion analysis. He has over 50 peer-reviewed publications and has filed over 50 US patents.

To effectively harness the power of machine learning at the edge, Peter believes, skilled practitioners are needed to articulate what is being sought and to transform that vision into numerical models.

He points out that while there is a trend toward democratizing machine learning tools and techniques, a common issue is that many people download models and apply them to their data without scrutiny. If the results are not satisfactory, they move on without much introspection. Peter suggests that this approach lacks the depth needed to understand either the process or the results.

Peter then introduces the concept of the “geometry of learning,” a term he’s been exploring with colleagues at DARPA. This concept delves into understanding the underlying structure of data. 

In explaining the approach, Dr. Tu points out that data, especially in domains like images, exist within high-dimensional spaces. He introduces the idea of “manifolds,” which are smooth structures with geometric and topological properties. These manifolds help represent the structure and dependencies within data.
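
A toy illustration of the intuition, with the caveat that PCA is only a linear proxy for the nonlinear manifolds Dr. Tu describes: even small images occupy a 64-dimensional pixel space, yet far fewer directions capture most of their structure.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, _ = load_digits(return_X_y=True)   # 1,797 images, each a 64-dim pixel vector
pca = PCA().fit(X)

# How many principal directions are needed to explain 95% of the variance?
cumulative = np.cumsum(pca.explained_variance_ratio_)
k = int(np.searchsorted(cumulative, 0.95)) + 1
print(f"{k} of {X.shape[1]} dimensions explain 95% of the variance")
```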

Peter further discusses the relationship between domain knowledge and algorithmic knowledge in evaluating the performance and utility of machine learning models, mainly using ROC curves and return on investment (ROI) as measures; a short ROC sketch follows the list below.

He identifies a crossover between two forms of knowledge:

  • “Domain knowledge” or understanding the subject matter or problem domain.
  • “Algorithmic knowledge” or understanding the mathematical and computational methods used in machine learning.
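
A short sketch of the ROC measure referenced above, using scikit-learn on synthetic data; the closing comment marks where domain knowledge re-enters the picture:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]

fpr, tpr, thresholds = roc_curve(y_te, scores)
print("AUC:", roc_auc_score(y_te, scores))
# Domain knowledge enters when choosing an operating point on this curve:
# the threshold that maximizes ROI depends on the cost of each error type.
```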

“There’s a crossover between what I would describe as domain knowledge and algorithmic knowledge if you will. And I think we’re starting to see hybrids of that because, to some extent, there is democratization on the algorithm side. These aren’t inscrutable things that require someone with a Ph.D. who’s devoted their life to understanding certain things. At some level, they are approachable.”

– Peter Tu, Chief Scientist for Artificial Intelligence, GE Global Research

Solutions for AI Hardware Challenges from Infrastructure to Deployment – with Mark Heaps of Groq


Speaker: Mark Heaps, Vice President, Brand & Creative, Groq

Expertise: Art/creative direction, project management, conceptualization, visual storytelling

Brief Recognition: Mark has led projects and teams for some of the top Fortune 500 brands, including Apple, Google, Adobe, Dell, Capital One and E&J Gallo. He is also a published author, award-winning speaker and has been featured at events like Adobe MAX, SXSW, CreativePro Week and others.

Mark starts by addressing a significant problem, particularly acute in North America: hardware supply. Businesses working with existing hardware providers face long waits when trying to acquire the chips, hardware, and systems their businesses need, with lead times ranging from 12 to 18 months.

Mark explains that businesses are exploring various coping alternatives due to these substantial delays. One of these alternatives is the idea of deploying rapidly in the cloud. 

However, he points out that even cloud service providers, often referred to as hyperscalers, face the same hardware acquisition challenges. This means that businesses considering the cloud route might still encounter obstacles in terms of hardware availability:

“So we have a real challenge today with meeting the demand at the rate that the AI explosion is happening, and this is why I think we’re seeing so many institutions developing their own silicon, developing their own chips,” Mark tells the podcast audience. “And it’s why it makes a lot of space for people to build specialized processors, like the language processor that we have, for these very particular types of workloads.”

He later discusses a significant challenge related to AI development, often called “developer velocity.” This challenge pertains to the time it takes for teams of developers to take an AI model from the initial stages to full deployment in production.

Mark points out that some companies address this challenge through AI-generated code or software, an approach that is typically explored to expedite the development process in various areas. 

However, even with such efforts, Mark highlights a key point: getting the latest and most advanced AI models, such as recommendation engines, anomaly detection systems, or large language models, up and running takes a lot of work. 

It typically takes these teams three to six months to fully operationalize these models for their specific needs. Once those models are operationalized, taking the systems to the next level is the real challenge:

“But when you’re ready to deploy and move into production, you have to move into inference. And this is really where you start getting the ROI back on all that investment for training and all that investment for infrastructure. Because now inference is where you’re delivering a service to your customers, to your end users. So that shift is saying, ‘Okay, we now need to move from training to inference.’ That’s a big shift for people – this is where the rubber meets the road.”

– Mark Heaps, Vice President, Brand & Creative, Groq

Inference refers to the process by which the deployed AI models actively engage in tasks like prediction, classification, and decision-making, responding to real-time data and user requests. This stage ensures that the investments made in training and infrastructure begin to yield returns. However, it comes with its set of challenges. One significant challenge lies in deploying neural networks to edge devices, which often have limited computational and memory resources.
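
One common response to those tight edge budgets is to shrink the model before deploying it. Below is a minimal sketch using PyTorch’s dynamic quantization, which stores Linear-layer weights as int8; this is a generic technique for edge inference, not a description of Groq’s hardware or software:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))
model.eval()

# Quantize only the Linear layers; activations stay in floating point.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
with torch.inference_mode():
    print(quantized(x).shape)   # same interface, smaller weights
```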

AI Hardware for Computer Vision – with Adam Burns of Intel


Guest: Adam Burns, Vice President – Network and Edge, Director of EdgeAI Development Tools at Intel

Expertise: Semiconductors and product management

Brief Recognition: Burns is the Vice President of OpenVINO Developer Tools at Intel Corporation, where he leads the strategy, development, and marketing of software tools and solutions for computer vision and artificial intelligence applications. He has a Master of Business Administration degree from Stanford University and a strong background in product management, product development, and market segmentation. 

The Intel Vice President discusses the distinctive challenges in the manufacturing and industrial sectors during his podcast appearance. He notes that issues in this space are often specific to each use case or deployment, necessitating customized solutions. 

The primary goal in manufacturing is to enhance yield and minimize defects, but this objective may differ depending on what is produced. Manufacturers need to adapt data analysis and modeling to their particular use cases. 

Once defects are identified, they investigate environmental and machinery factors contributing to them, facilitating predictive maintenance. Access to quality data and the availability of suitable tools are significant challenges in this sector.
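
As a hedged sketch of the predictive-maintenance pattern just described, the snippet below flags anomalous machine sensor readings with an isolation forest; the sensor features are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Columns: temperature, vibration, spindle speed (all synthetic).
normal = rng.normal(loc=[70.0, 0.2, 1200.0], scale=[2.0, 0.05, 30.0], size=(500, 3))
faulty = rng.normal(loc=[85.0, 0.9, 1050.0], scale=[2.0, 0.05, 30.0], size=(5, 3))

# Fit on healthy readings; -1 marks a reading flagged as anomalous.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = detector.predict(np.vstack([normal[:5], faulty]))
print(flags)
```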

In high-speed manufacturing processes, the main goal is to automate and optimize factory operations. Here, AI often works with humans, with systems providing real-time information to operators, helping them make better decisions and streamline production.

The key theme across these sectors is the interaction between AI and human operators, where AI tools are used to augment human decision-making and improve overall processes and outcomes.

He mentions three main factors driving the expansion of computer vision and AI:

  • The importance of real-time data processing: The ability to process data in real time dramatically enhances the value of AI predictions, especially in applications like manufacturing. For instance, real-time data analysis in manufacturing can distinguish usable from defective materials, allowing for efficient material reuse and preventing errors that could impact subsequent production.
  • The advancement of AI models: When provided with quality data, these models can generate highly reliable outcomes. This increase in model performance and accuracy fosters trust in the technology, making it valuable in various applications, particularly in computer vision.
  • The improvement of tools for customizing AI models: These tools are becoming more accessible and user-friendly, allowing practitioners, such as subject matter experts in factories, to customize AI models to suit specific needs (see the sketch after this list).
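
As a generic illustration of per-use-case customization with less data, the sketch below uses transfer learning in PyTorch (not Intel’s OpenVINO tooling specifically): freeze a pretrained backbone and retrain only a small task-specific head. The defect-class count is hypothetical.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

for param in model.parameters():
    param.requires_grad = False          # keep the pretrained features frozen

num_defect_classes = 3                   # hypothetical factory-specific classes
model.fc = nn.Linear(model.fc.in_features, num_defect_classes)

# Only the new head is trained, so a modest labeled dataset can suffice.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
print(sum(p.numel() for p in model.parameters() if p.requires_grad), "trainable params")
```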

“And then lastly, tools are getting better. We’re getting to the point where I can customize those models per use case, and I can do so with less data. That really allows practitioners to get in there and use their expertise: the subject matter experts in the factory can use their expertise to build these models with new and advanced tools, some from Intel, some from others. That’s really moving the state of the art forward. Real-time responsiveness makes the data and the insights more valuable, you get more accuracy and improvement in models, and the ability to customize them to what I need in my specific use case is improving. And it’s getting much more approachable and easier to use.”

– Adam Burns, Vice President – Network and Edge, Director of EdgeAI Development Tools at Intel
