Machine learning at the edge: AI chip company challenges Nvidia and Qualcomm

by Janice Allen



The current demand for real-time data analytics at the edge marks the beginning of a new era in machine learning (ML): edge intelligence. That need for time-sensitive data is, in turn, fueling a huge AI chip market, as companies look to run ML models at the edge with lower latency and greater power efficiency.

Conventional edge ML platforms consume substantial power, limiting the operational efficiency of smart devices that live at the edge. Those devices are also hardware-centric, which constrains their computing power and leaves them unable to handle diverse AI workloads. They rely on power-inefficient GPU- or CPU-based architectures that are not optimized for embedded edge applications with strict latency requirements.

While industry giants such as Nvidia and Qualcomm offer a wide range of solutions, they usually use a combination of GPU or data center-based architectures and scale them to the embedded edge rather than creating a purpose-built solution from scratch. In addition, most of these solutions are designed for larger customers, making them extremely expensive for smaller companies.

Essentially, the $1 trillion global embedded-edge market relies on legacy technology that limits the pace of innovation.


A new machine learning solution for the edge

ML company Sima AI aims to address these shortcomings with its machine learning system-on-chip (MLSoC) platform, which enables ML deployment and scaling at the edge. Founded in 2018, the California-based company announced today that it has begun shipping the MLSoC platform to customers, with an initial focus on solving computer vision challenges in smart vision, robotics, Industry 4.0, drones, autonomous vehicles, healthcare and the public sector.

The platform uses a software-hardware co-design approach that emphasizes software capabilities to create edge ML solutions that consume minimal power and can handle a variety of ML workloads.

Built on 16nm technology, the MLSoC’s processing system consists of computer vision processors for image pre- and post-processing, coupled with dedicated ML acceleration and powerful application processors. Surrounding the real-time intelligent video processing are memory interfaces, communication interfaces and systems management – all connected via a network-on-chip (NoC). The MLSoC has low operating power and high ML throughput, making it ideal as a standalone edge-based system controller, or to add an ML offload accelerator for processors, ASICs, and other devices.

The software-first approach includes carefully defined intermediate representations (including the TVM Relay IR), along with new compiler optimization techniques. This software architecture enables Sima AI to support a wide variety of frameworks (e.g., TensorFlow, PyTorch and ONNX) and compile more than 120 networks.
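The article does not describe Sima AI's compiler internals, but the general pattern it names, lowering framework models into an intermediate representation (IR) and then running optimization passes over it, can be sketched with a toy example. The `Const`/`Add`/`Mul` node types and the `constant_fold` pass below are hypothetical illustrations, not part of TVM or Sima AI's toolchain.

```python
# Toy illustration (not Sima AI's actual compiler): a tiny expression IR
# plus one optimization pass (constant folding), mirroring the general
# IR-plus-passes pattern used by compilers such as TVM's Relay.
from dataclasses import dataclass


@dataclass(frozen=True)
class Const:
    value: float


@dataclass(frozen=True)
class Add:
    left: object
    right: object


@dataclass(frozen=True)
class Mul:
    left: object
    right: object


def constant_fold(node):
    """One compiler pass: evaluate any subtree whose inputs are all constants."""
    if isinstance(node, Const):
        return node
    left, right = constant_fold(node.left), constant_fold(node.right)
    if isinstance(left, Const) and isinstance(right, Const):
        if isinstance(node, Add):
            return Const(left.value + right.value)
        return Const(left.value * right.value)
    return type(node)(left, right)


# (2 * 3) + 4 folds down to a single constant node.
ir = Add(Mul(Const(2.0), Const(3.0)), Const(4.0))
folded = constant_fold(ir)
print(folded)  # Const(value=10.0)
```

A production compiler applies many such passes (fusion, layout transforms, quantization) before emitting code for the target accelerator, but the shape of each pass, walk the IR and rewrite it, is the same.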

The MLSoC promise – a software-first approach

Many ML startups focus on building pure ML accelerators rather than an SoC that also includes a computer vision processor, application processors, CODECs and external memory interfaces, the components that allow the MLSoC to operate as a standalone solution without a host processor. Other solutions tend to lack network flexibility, performance per watt and push-button ease of use, all of which are needed to make ML effortless for the embedded edge.

Sima AI’s MLSoC platform differs from other existing solutions in that it solves all these areas simultaneously with its software-first approach.

The MLSoC platform is flexible enough to address any computer vision application, using any framework, model, network and sensor at any resolution. “Our ML compiler uses the open-source Tensor Virtual Machine (TVM) framework as its front end, supporting the widest range of ML models and ML frameworks for computer vision,” Krishna Rangasayee, CEO and founder of Sima AI, told VentureBeat in an email interview.

From a performance standpoint, Sima AI claims its MLSoC platform delivers 10x better results than alternatives on key metrics such as frames per second per watt (FPS/W) and latency.

The company’s hardware architecture optimizes data movement and maximizes hardware utilization by precisely planning all computation and data movement in advance, across internal and external memory, to minimize latency.
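The idea of planning everything in advance, static (ahead-of-time) scheduling rather than run-time dispatch, can be sketched with a toy dependency graph. The op names, durations and the `static_schedule` function below are hypothetical and not Sima AI's actual scheduler; the sketch also ignores resource limits, computing only earliest start times from dependencies.

```python
# Toy sketch (not Sima AI's scheduler): assign every op a fixed start time
# before execution begins, so no scheduling decisions are made at run time.
# Each op maps to (duration, list of ops it depends on).
graph = {
    "load_a": (2, []),
    "load_b": (2, []),
    "conv":   (5, ["load_a"]),
    "add":    (1, ["conv", "load_b"]),
    "store":  (2, ["add"]),
}


def static_schedule(graph):
    """Compute an earliest start time for every op, plus the total makespan."""
    start = {}

    def finish(op):
        # An op starts once all of its dependencies have finished.
        if op not in start:
            start[op] = max((finish(dep) for dep in graph[op][1]), default=0)
        return start[op] + graph[op][0]

    makespan = max(finish(op) for op in graph)
    return start, makespan


starts, makespan = static_schedule(graph)
print(starts, makespan)
```

Here `conv` starts at t=2 (after `load_a`), `store` at t=8, and the whole graph finishes at t=10. Because every start time is known before execution, data movement can be overlapped with compute deterministically, which is the latency benefit the article describes.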

Achieving scalability and push-button results

Sima AI provides APIs to generate highly optimized MLSoC code blocks that are automatically scheduled on the heterogeneous compute subsystems. The company has created a set of specialized and generic optimization and scheduling algorithms for the back-end compiler that automatically convert an ML network into highly optimized assembly code that runs on the machine learning accelerator (MLA) block.

For Rangasayee, the next phase of Sima AI’s growth focuses on revenue and on scaling its technical and business teams globally. To date, Sima AI has raised $150 million in funding from top-tier VCs such as Fidelity and Dell Technologies Capital. Aiming to transform the embedded-edge market, the company has also announced partnerships with key industry players such as TSMC, Synopsys, Arm, Allegro, GUC and Arteris.

