Hi Everyone!
It's finally here! All OpenMV Cams now support ROMFS with the release of OpenMV firmware v4.7.0 and OpenMV IDE v4.7.0!
Glad you asked! We've always embedded large binary files like Haar Cascades and Neural Networks directly in our firmware. But as firmware space has gotten tight over the years, with new features, bug fixes, and so on, we've had to enable and disable assets across different cameras just to make space.
We know that this has caused a lot of frustration.
OpenMV Cams have always had a FAT file system onboard and SD card support. However, any files read from FAT have to be loaded into RAM... which has also always been limited.
ROMFS solves the storage problem by creating a memory-mapped file system, which you edit via OpenMV IDE! Files stored in the new ROMFS partition on your OpenMV Cam can be read just like regular files, but they don't have to be loaded into RAM. This lets you deploy large binary assets like Neural Networks to your OpenMV Cam and execute them in place, directly from flash.
Now, with ROMFS, we've put the external flash on the OpenMV Cam H7 Plus, RT1062, Arduino Portenta, Arduino Nicla, and Arduino Giga to good use. It gives you megabytes of storage that can hold larger ML models without using any RAM.
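Here's a minimal sketch of what reading from ROMFS looks like in a MicroPython script. We're assuming the ROMFS partition shows up at /rom, and the file name below is just a placeholder for illustration:

```python
# Read a few bytes from a file stored on the ROMFS partition.
# NOTE: "/rom" and the file name below are placeholder assumptions --
# check the drive listing in OpenMV IDE for the actual mount point.
import os

print(os.listdir("/rom"))            # List the assets stored in ROMFS.

with open("/rom/model.tflite", "rb") as f:
    header = f.read(16)              # Reads work like any other file...
    print(header)

# ...but unlike files on the FAT file system or SD card, large assets in
# ROMFS don't need to be copied into RAM before the firmware can use them.
```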
Furthermore, on the upcoming OpenMV AE3 and N6 we've really leaned into leveraging ROMFS with 24MB partitions for model storage. Now you can run 10MB+ YOLOV8 models on our new cameras easily!
ROMFS support required quite a few monumental changes under the hood, and it took a long time! But it was very much worth it: with ROMFS, we can now leverage large custom binary assets without any issues.
ML models? No problem. Giant remap tables for lens correction? Coming soon.
The ROMFS file system stores file records in memory-aligned, contiguous chunks in flash. Being contiguous is why files don't have to be read into RAM, whereas FAT stores files in clusters that can be scattered all over the place. ROMFS also lets us align files to 16-byte, 32-byte, and larger address boundaries. This superpower allows us to pass the address of a neural network in ROMFS directly to the NPU hardware accelerators onboard the OpenMV AE3 and OpenMV N6.
Here's what ROMFS allows us to do: now we can run big models onboard OpenMV Cams!
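As a rough sketch of what that looks like in a script, here's how you might load a model straight off ROMFS with the ml module and run it on a frame. The model path and file name are placeholders, and the exact arguments may differ slightly by firmware version:

```python
# Hypothetical sketch: run a TFLite model stored on ROMFS.
# The path below is a placeholder -- use the name of the model you
# deployed to the ROMFS partition via OpenMV IDE.
import sensor
import ml

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)

# Because the model lives in ROMFS, it can be executed in place from
# flash instead of being copied into the limited on-chip RAM first.
model = ml.Model("/rom/yolo_demo.tflite")

while True:
    img = sensor.snapshot()
    outputs = model.predict([img])   # Run inference on the latest frame.
    print(outputs)
```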
We also now integrate a Model Zoo! By the time we release the OpenMV AE3 and N6, we're going to have demo model support for a whole range of models and tasks.
You can check out the models we plan to support via Tools->Machine Vision->Open Model Zoo.
Not only does OpenMV IDE ship with the Model Zoo, but it also integrates ARM's Vela compiler to convert models to run on the OpenMV AE3, and ST's Edge AI tools to convert models to run on the OpenMV N6! Now you can easily bring your own INT8 quantized TensorFlow Lite model and run it onboard with NPU acceleration.
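If you're training your own networks, here's a hedged sketch of one common way to produce an INT8 quantized TensorFlow Lite model with the standard TFLite converter. The saved-model path, input shape, and calibration data below are placeholders for your own training setup:

```python
# Post-training INT8 quantization with the TensorFlow Lite converter.
# "saved_model_dir" and the random calibration data are placeholders for
# your own trained model and a small representative dataset.
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Yield ~100 samples shaped like the model's input for calibration.
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())
```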
And as promised during our Kickstarter, Roboflow is working on enabling YOLOV8 INT8 quantized TensorFlow Lite Model output support, so you can train your own custom models too!
Last but not least, OpenMV IDE also includes MPY Cross, so you can compile your Python code to bytecode and store it on ROMFS or the FAT file system. We only support compiling individual scripts right now, but we'll add support for compiling modules next.
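Once a compiled .mpy lands on the camera, using it is just a normal import. A small sketch, assuming you've compiled a hypothetical helpers.py to helpers.mpy and stored it on the ROMFS partition at /rom (adjust the path to wherever you put it):

```python
# Import a precompiled .mpy module stored on ROMFS.
# "/rom" and "helpers" are placeholder assumptions for this example.
import sys

if "/rom" not in sys.path:
    sys.path.append("/rom")    # Make sure the ROMFS mount is importable.

import helpers                 # Loads helpers.mpy without parsing or
print(helpers.__name__)        # compiling source on the camera at runtime.
```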
We're steadily working towards delivering the OpenMV AE3 and N6 this year. Here's where we are so far:
OpenMV IDE: On the desktop side, OpenMV IDE is feature-complete for the OpenMV AE3 and N6. Not only has it been updated to support ROMFS, the new Model Zoo, and all the new bootloaders, but it's also ready to fully provision and recover the OpenMV AE3 and N6 using vendor tools, so we can manufacture both boards easily and you can always factory reset your OpenMV Cam.
OpenMV AE3: We're sending out our final prototype of the OpenMV AE3 this week, along with ordering long-lead-time parts for mass production. From the looks of things, we're on track to start delivering the OpenMV AE3 by the end of Q3 this year. Everything onboard is pretty much operational; we still have some camera image quality tuning to do, but otherwise we just need to get all our demo models from the Model Zoo working onboard with example scripts.
OpenMV N6: The OpenMV N6 MicroPython port is still in progress. However, we've pretty much verified that all the hardware on the board is working as expected. To support the new multispectral camera, we've also built a new camera system interface that lets you use multiple frame buffers at the same time!
OpenMV Cams were spotted out in the wild at RoboCup Egypt!
Kareem Youssri organized the event at AAST in Alexandria.
Glad to see folks putting the OpenMV Cam H7 Plus to great use!
Team Majorana in Italy also used an OpenMV Cam H7 in the RoboCup Junior Rescue Line competition!
For this competition, their robot had to find colored balls around the stage.
Congrats to the team for their success!
Finally, the team behind "HaptiTemp: A Next-Generation Thermosensitive GelSight-Like Visuotactile Sensor" used the OpenMV Cam H7 Plus to create a new type of sensor.
Looks like it needed some cooling.
Keep sending in how you're using your OpenMV Cams, folks!