By far the easiest way I've found to learn and use advanced computer vision. This single board can do everything from driving a car to landing a drone.
My experience with OpenMV thus far has been superb — their features “just work” and they’re easy to use too.
OpenMV is a game changer for embedded computer vision and is our go-to platform for TinyML.
Machine Vision with Python
The OpenMV project is about creating low-cost, extensible, Python-powered machine vision modules, and aims to become the “Arduino of Machine Vision”. Our goal is to bring machine vision algorithms closer to makers and hobbyists. We’ve done the difficult and time-consuming algorithm work for you, leaving more time for your creativity!
The OpenMV Cam is like a super powerful Arduino with a camera on board that you program in Python. We make it easy to run machine vision algorithms on what the OpenMV Cam sees so you can track colors, detect faces, and more in seconds, and then control I/O pins in the real world.
You can use the OpenMV Cam to detect faces and find eyes using our built-in Haar Cascade feature detection algorithm. You can also precisely track pupils too.
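A minimal sketch of face detection, modeled on the face-detection example that ships with OpenMV IDE (this runs on the OpenMV Cam itself, not on a PC; the `threshold` and `scale_factor` values are typical example settings, not requirements):

```python
# Face detection on the OpenMV Cam using the built-in Haar cascade.
import sensor, image, time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)  # Haar cascades work on grayscale images
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)           # let the sensor settle after setup

# Load the built-in frontal-face cascade (more stages = fewer false positives).
face_cascade = image.HaarCascade("frontalface", stages=25)

while True:
    img = sensor.snapshot()
    for r in img.find_features(face_cascade, threshold=0.75, scale_factor=1.25):
        img.draw_rectangle(r)           # draw a box around each detected face
```

Swapping in the built-in "eye" cascade on the face region is how the eye-finding examples work.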
The OpenMV Cam can save Grayscale or RGB565 BMP / JPG / PPM / PGM images to an attached μSD Card. You can also save time-lapse photos too.
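Saving a snapshot is a one-liner once the sensor is configured — a device-side sketch (the filename here is just an example; the extension selects the output format):

```python
# Save one snapshot to the attached microSD card.
import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)       # let the sensor settle

img = sensor.snapshot()
img.save("example.jpg")             # .bmp / .jpg / .ppm / .pgm also work
```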
The OpenMV Cam uses less than 200 mA while processing images so you can use the OpenMV Cam like a microcontroller (Arduino) that's attached to your USB port.
You can save Grayscale or RGB565 MJPEG videos and Grayscale or RGB565 GIF images to an attached μSD Card. You can also overlay graphics / text on the video too.
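A short MJPEG-recording sketch, based on the video-recording example bundled with OpenMV IDE (runs on the camera; the frame count and filename are illustrative):

```python
# Record an MJPEG clip to the attached microSD card.
import sensor, time, mjpeg

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

clock = time.clock()
m = mjpeg.Mjpeg("example.mjpeg")
for i in range(200):                 # record ~200 frames
    clock.tick()
    m.add_frame(sensor.snapshot())
m.close(clock.fps())                 # stamp the file with the measured frame rate
```

The `gif` module follows the same open / add_frame / close pattern for GIF recording.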
The OpenMV Cam can track color blobs in Grayscale or RGB565 images. It can do multi-color / multi-blob tracking. Additionally, the OpenMV Cam can detect color codes too.
With the OpenMV Cam you can use machine vision to control I/O pins in the real world. The OpenMV Cam has a SPI bus, I2C bus, Async Serial bus (RX / TX), ADC, DAC, and more.
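A sketch of the vision-to-I/O loop this describes — driving a pin from a detection result (pin "P0" and the LAB color thresholds are example values, not fixed choices):

```python
# Drive an output pin high whenever a colored blob is in view.
import sensor
from pyb import Pin

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

out_pin = Pin("P0", Pin.OUT_PP)      # push-pull digital output

while True:
    img = sensor.snapshot()
    # Example LAB thresholds for a red-ish target.
    blobs = img.find_blobs([(30, 100, 15, 127, 15, 127)])
    out_pin.value(1 if blobs else 0) # pin high while a blob is visible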
Classifying images has never been easier! Using OpenMV IDE you can trivially build a dataset, upload that dataset to Edge Impulse in the cloud, and use Transfer Learning with MobileNet to generate a TensorFlow Lite Convolutional Neural Network (CNN) that will run onboard your OpenMV Cam.
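A rough sketch of running such a model on the camera. This follows the older `tf` module API; the module name, `classify()` signature, and the model filename and labels below are assumptions — check the examples bundled with your firmware version, as the on-device ML API has changed over time:

```python
# Classify frames with a TensorFlow Lite model stored on the microSD card.
import sensor, tf

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

labels = ["background", "target"]        # placeholder labels for illustration
while True:
    img = sensor.snapshot()
    for obj in tf.classify("trained.tflite", img):
        scores = obj.output()            # one confidence score per label
        best = scores.index(max(scores))
        img.draw_string(0, 0, labels[best])
```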
The OpenMV Cam is Expandable
The OpenMV Cam has a standard I/O pin layout so you can stack shields on it like an Arduino. We sell an LCD shield so you can see what the OpenMV Cam sees on-the-go, a prototyping shield so you can create your own custom circuit, a WiFi shield so you can connect to the internet, and a thermal imaging shield so you can see in the dark.
You code in Python
The OpenMV Cam runs the MicroPython operating system, which allows you to program the OpenMV Cam using Python (Python 3 to be precise). Python makes working with machine vision algorithms much easier. For example, the find_blobs() method in the code finds color blobs and returns a list of 8-valued objects representing each color blob found. In Python, iterating through the list of objects returned by find_blobs() and drawing a rectangle around each color blob takes just two lines of code.
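The two-line loop described above, shown in context — a device-side sketch (the LAB thresholds here are example values for a red-ish target):

```python
# Find color blobs and box each one -- the loop itself is two lines.
import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

img = sensor.snapshot()
for blob in img.find_blobs([(30, 100, 15, 127, 15, 127)]):
    img.draw_rectangle(blob.rect())
```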
Finally, you program the OpenMV Cam using the OpenMV IDE, which features a powerful text editor, a frame buffer viewer so you can see what the camera sees, a serial terminal for debugging, and a histogram display that makes color tracking easy.
Standard M12 Lens Mount
The OpenMV Cam uses a standard M12 lens mount so you aren't limited by the 2.8mm lens the OpenMV Cam ships with. We sell a 4X telescopic zoom lens, an ultra-wide-angle 185° fish-eye lens, and an IR cut-filter-less lens for use with IR tracking applications.
Best of all, since we're using a standard M12 lens mount you can buy and attach more exotic M12 lenses to the OpenMV Cam yourself.
We have OpenMV Cams, shields, and lenses for sale. The OpenMV Cam is our main product while the shields and lenses offer additional functionality.
For faster and cheaper shipping please order from our distributors in your area below:
If what you want is out of stock at your nearest distributor then please order directly from us below: