Intel Democratizes Deep Learning Application Development with Launch of Movidius Neural Compute Stick

August 07, 2017


Intel launched the Movidius™ Neural Compute Stick, the world's first USB-based deep learning inference kit and self-contained artificial intelligence (AI) accelerator, delivering dedicated deep neural network processing capabilities to a wide range of host devices at the edge.


Designed for product developers, researchers and makers, the Movidius Neural Compute Stick aims to reduce barriers to developing, tuning and deploying AI applications by delivering dedicated high-performance deep-neural network processing in a small form factor.

As more developers adopt advanced machine learning approaches to build innovative applications and solutions, Intel is committed to providing the most comprehensive set of development tools and resources to ensure developers are retooling for an AI-centric digital economy. Whether it is training artificial neural networks on the Intel® Nervana™ cloud, optimizing for emerging workloads such as artificial intelligence, virtual and augmented reality, and automated driving with Intel® Xeon® Scalable processors, or taking AI to the edge with Movidius vision processing unit (VPU) technology, Intel offers a comprehensive AI portfolio of tools, training and deployment options for the next generation of AI-powered products and services.

"The Myriad 2 VPU housed inside the Movidius Neural Compute Stick provides powerful, yet efficient performance – more than 100 gigaflops of performance within a 1W power envelope – to run real-time deep neural networks directly from the device," said Remi El-Ouazzane, vice president and general manager of Movidius, an Intel company. "This enables a wide range of AI applications to be deployed offline."

Machine intelligence development is fundamentally composed of two stages: (1) training an algorithm on large sets of sample data via modern machine learning techniques, and (2) running the algorithm in an end application that needs to interpret real-world data. This second stage is referred to as "inference," and performing inference at the edge – that is, natively inside the device – brings numerous benefits in terms of latency, power consumption and privacy. With the Movidius Neural Compute Stick, developers can:

  • Compile: Automatically convert a trained Caffe-based convolutional neural network (CNN) into an embedded neural network optimized to run on the onboard Movidius Myriad 2 VPU.
  • Tune: Layer-by-layer performance metrics for both industry-standard and custom-designed neural networks enable effective tuning for optimal real-world performance at ultra-low power. Validation scripts allow developers to compare the accuracy of the optimized model on the device to the original PC-based model.
  • Accelerate: Unique to the Movidius Neural Compute Stick, the device can act as a discrete neural network accelerator, adding dedicated deep learning inference capabilities to existing computing platforms for improved performance and power efficiency.
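The two-stage workflow described above – training on sample data, then inference on new data – can be sketched with a toy single-feature logistic-regression "network" in pure Python. The real workflow would use a framework such as Caffe and far larger datasets; the data and names here are invented for the illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Stage 1 -- training: fit a weight and bias on labeled sample data
# by gradient descent on the log-loss.
samples = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]
w, b, lr = 0.0, 0.0, 0.5
for _ in range(200):
    for x, y in samples:
        p = sigmoid(w * x + b)
        w -= lr * (p - y) * x   # gradient of log-loss w.r.t. w
        b -= lr * (p - y)       # gradient of log-loss w.r.t. b

# Stage 2 -- inference: the trained parameters interpret new, unseen
# inputs. On the Neural Compute Stick this is the step offloaded to
# the Myriad 2 VPU rather than run on the host CPU.
def infer(x):
    return 1 if sigmoid(w * x + b) >= 0.5 else 0

print(infer(-1.5), infer(1.5))  # → 0 1
```

In the real product the division of labor is the same: training happens offline on powerful hardware, while only the compact inference step runs on the edge device.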
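The "Tune" bullet mentions validation scripts that compare the accuracy of the optimized on-device model against the original PC-based model. A minimal sketch of that idea, assuming reduced precision stands in for the device-side optimization (the layer, weights and inputs below are invented for the example):

```python
import math
import struct

def to_fp16(x):
    # Round-trip a float through IEEE 754 half precision
    # (the 'e' struct format code), simulating precision loss.
    return struct.unpack('e', struct.pack('e', x))[0]

weights = [0.1234567, -0.7654321, 0.3141592]

def dense(inputs, ws):
    # A single dense layer with tanh activation.
    return math.tanh(sum(i * w for i, w in zip(inputs, ws)))

inputs = [0.5, -0.25, 1.0]
original = dense(inputs, weights)                        # "PC-based" model
optimized = dense([to_fp16(i) for i in inputs],         # "on-device" model
                  [to_fp16(w) for w in weights])

error = abs(original - optimized)
print(f"original={original:.6f} optimized={optimized:.6f} error={error:.2e}")
assert error < 1e-3  # precision loss stays small for this layer
```

A real validation run would do this layer by layer across a full network and report per-layer metrics, but the principle is the same: run identical inputs through both models and bound the divergence.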

Disclaimer: The information contained in each press release posted on this site was factually accurate on the date it was issued. While these press releases and other materials remain on the Company's website, the Company assumes no duty to update the information to reflect subsequent developments. Consequently, readers of the press releases and other materials should not rely upon the information as current or accurate after their issuance dates.