ICP Germany has completed its portfolio of AI accelerator cards with the Mustang-M2BM-MX2 card.
In addition to Mini PCIe- and PCIe-based solutions, an M.2 PCIe plug-in card variant is now available.
Thanks to the compact M.2 2280 format (22 × 80 mm), system integrators can build small embedded PC systems with AI functionality for use as deep learning inference systems.
Two Intel® Movidius™ Myriad™ X MA2485 Vision Processing Units (VPUs) provide the AI functionality, each with 16 SHAVE cores.
Each Myriad™ VPU delivers up to one trillion calculations per second. With a maximum power consumption of 8 watts, the Mustang-M2BM-MX2 is particularly suitable for low-power AI applications. In addition, the multi-channel capability allows each VPU to be assigned a different deep learning (DL) topology.
This allows calculations to run in parallel, so that, for example, object recognition and face recognition can be performed simultaneously. Compatibility with the Open Visual Inference and Neural Network Optimisation (OpenVINO™) Toolkit from Intel® enables simple and rapid integration of various AI training models.
The OpenVINO™ Toolkit not only optimises the performance of the training model but also ensures that it scales to the target system. Thanks to this fast and optimised integration, both developers and customers benefit from lower development costs.
The Mustang-M2BM-MX2 is compatible with common Linux distributions such as Ubuntu and CentOS as well as Windows® 10, and supports numerous neural network architectures and topologies, such as AlexNet, GoogLeNet, SqueezeNet and YOLO. In addition to other AI accelerator cards, ICP also offers ready-to-use embedded systems that are likewise equipped with AI functionality.
The products can be seen live at Embedded World in Nuremberg, Hall 1, Stand 201.