Vitis AI in the cloud

Adaptable and Real-Time AI Inference Acceleration in the VMAccel Cloud

Step 1

Fill out the form to sign up for a free 5-hour demo account on the VMAccel® FPGA Cloud.

Step 2

Check your email for login credentials for the VMAccel® FPGA Cloud platform.

Step 3

Enter your login credentials on the VMAccel® FPGA Cloud platform and run the Vitis AI™ demo.

Explore the possibilities of AI inference development in the cloud

AI Model Zoo

The AI Model Zoo offers a rich set of deep-learning models from popular frameworks such as PyTorch, TensorFlow, TensorFlow 2, and Caffe. It provides optimized, retrainable AI models that enable faster deployment, performance acceleration, and productization on all Xilinx platforms.
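
As a minimal sketch, loading a float checkpoint downloaded from the Model Zoo into PyTorch might look like the following; the archive path and file name are illustrative assumptions, not the exact Model Zoo packaging.

    # Minimal sketch: load a float checkpoint from a downloaded Model Zoo
    # archive into PyTorch. The path and file name below are hypothetical.
    import torch
    import torchvision.models as models

    model = models.resnet50()  # architecture matching the zoo entry
    state = torch.load("pt_resnet50/float/resnet50.pth", map_location="cpu")
    model.load_state_dict(state)  # assumes the file is a plain state dict
    model.eval()                  # inference mode before quantizing/compiling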

AI Optimizer

With world-leading model compression technology, the AI Optimizer reduces model complexity by 5x to 50x with minimal impact on accuracy. Deep compression of this kind translates directly into higher inference throughput and a smaller memory footprint.
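
The AI Optimizer itself is a proprietary tool with its own interface, but the underlying idea can be illustrated with stock PyTorch magnitude pruning; the 80% sparsity below (roughly a 5x reduction in nonzero weights) is an arbitrary example, not the tool's API.

    # Generic magnitude pruning with stock PyTorch, standing in for the
    # AI Optimizer flow (the real tool has its own interface).
    import torch
    import torch.nn.utils.prune as prune
    import torchvision.models as models

    model = models.resnet50()
    for module in model.modules():
        if isinstance(module, torch.nn.Conv2d):
            # zero the 80% smallest-magnitude weights (~5x fewer nonzeros)
            prune.l1_unstructured(module, name="weight", amount=0.8)
            prune.remove(module, "weight")  # bake the sparsity into the tensor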

AI Quantizer

The AI Quantizer reduces computing complexity, with little loss of prediction accuracy, by converting 32-bit floating-point weights and activations to fixed-point formats such as INT8. The fixed-point network model requires less memory bandwidth, providing higher speed and better power efficiency than its floating-point counterpart.
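
The arithmetic behind this conversion can be shown with a generic affine INT8 scheme in NumPy; this is illustrative only, as the DPU toolchain's exact scheme may differ (for example, power-of-two scaling).

    # Affine INT8 quantization arithmetic on a toy tensor (illustrative only;
    # the DPU toolchain's exact scheme may differ, e.g. power-of-two scales).
    import numpy as np

    x = np.random.randn(1000).astype(np.float32)  # float32 weights/activations
    scale = (x.max() - x.min()) / 255.0           # spread the range over 256 levels
    zero_point = int(round(-x.min() / scale))     # which code represents 0.0

    q = np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)
    x_hat = (q.astype(np.float32) - zero_point) * scale  # dequantize

    print("max abs error:", np.abs(x - x_hat).max())     # roughly bounded by scale/2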

AI Compiler

The AI Compiler maps the AI model to a highly efficient instruction set and dataflow. It also performs sophisticated optimizations such as layer fusion, instruction scheduling, and on-chip memory reuse.
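
Layer fusion can be made concrete with the classic convolution/batch-norm fold, sketched below in NumPy; this is a generic illustration of the optimization, not the compiler's implementation.

    # Conv + BatchNorm folding, a classic layer-fusion optimization,
    # sketched in NumPy (generic math, not the compiler's implementation).
    import numpy as np

    def fold_conv_bn(w, b, gamma, beta, mean, var, eps=1e-5):
        """Fold BN(conv(x)) into one conv. w: (out_ch, in_ch, kh, kw); rest: (out_ch,)."""
        s = gamma / np.sqrt(var + eps)  # per-channel BN scale
        return w * s[:, None, None, None], (b - mean) * s + beta

    # toy example: 8 output channels, 3 input channels, 3x3 kernels
    w, b = np.random.randn(8, 3, 3, 3), np.random.randn(8)
    gamma, beta = np.ones(8), np.zeros(8)
    mean, var = np.random.randn(8), np.abs(np.random.randn(8)) + 0.1
    w_fused, b_fused = fold_conv_bn(w, b, gamma, beta, mean, var)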

AI Profiler

The AI Profiler allows programmers to perform an in-depth analysis of the efficiency and utilization of an AI inference implementation.
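
The real profiler instruments the DPU hardware, but the kind of per-layer breakdown it produces can be sketched on the host with PyTorch forward hooks; the model and layer types below are arbitrary choices for illustration.

    # Host-side sketch of a per-layer timing breakdown using PyTorch forward
    # hooks; the actual profiler instruments the DPU, not host code.
    import time
    import torch
    import torchvision.models as models

    model = models.resnet18().eval()
    starts, timings = {}, {}

    def pre_hook(name):
        def fn(module, inputs):
            starts[name] = time.perf_counter()  # stamp layer entry
        return fn

    def post_hook(name):
        def fn(module, inputs, output):
            timings[name] = time.perf_counter() - starts[name]  # layer duration
        return fn

    for name, m in model.named_modules():
        if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear)):
            m.register_forward_pre_hook(pre_hook(name))
            m.register_forward_hook(post_hook(name))

    with torch.no_grad():
        model(torch.randn(1, 3, 224, 224))

    for name, t in sorted(timings.items(), key=lambda kv: -kv[1])[:5]:
        print(f"{name}: {t * 1e3:.2f} ms")  # five slowest layers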

AI Library

The Vitis AI Library is a set of high-level libraries and APIs built for efficient AI inference with DPU cores. It is built on the Vitis AI Runtime (VART), with unified APIs, and provides easy-to-use interfaces for deploying AI models on Xilinx platforms.
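
A typical VART Python inference flow, following the pattern in the public Vitis AI examples, is sketched below; "model.xmodel" is a placeholder, and the real input/output dtypes and shapes depend on how the model was quantized and compiled.

    # Sketch of the VART Python flow from the public Vitis AI examples;
    # "model.xmodel" is a placeholder and I/O dtypes depend on the model.
    import numpy as np
    import vart
    import xir

    graph = xir.Graph.deserialize("model.xmodel")
    dpu_subgraphs = [s for s in graph.get_root_subgraph().toposort_child_subgraph()
                     if s.has_attr("device") and s.get_attr("device").upper() == "DPU"]
    runner = vart.Runner.create_runner(dpu_subgraphs[0], "run")

    in_t, out_t = runner.get_input_tensors(), runner.get_output_tensors()
    inputs = [np.zeros(tuple(t.dims), dtype=np.int8) for t in in_t]   # fill with real data
    outputs = [np.zeros(tuple(t.dims), dtype=np.int8) for t in out_t]

    job_id = runner.execute_async(inputs, outputs)  # submit to the DPU
    runner.wait(job_id)                             # block until the job completes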
