YOLOTrack

YOLOTrack is a framework for the real-time localization and classification of objects in optical microscopy images using the single-shot convolutional neural network YOLO (“You Only Look Once”). We adapted the TinyYOLOv2 architecture to localize and classify objects at very low signal-to-noise ratios in images as large as 416 × 416 px at frame rates of up to 100 fps.
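
To give an idea of the network layout, below is a minimal sketch of a TinyYOLOv2-style detector in Keras/TensorFlow: a stack of convolution/pooling blocks reduces the 416 × 416 input to a 13 × 13 grid, and a final 1 × 1 convolution predicts boxes, objectness, and class scores per anchor. The filter counts and the anchor/class numbers here are illustrative assumptions, not the exact YOLOTrack configuration.

```python
# Minimal sketch of a TinyYOLOv2-style detector in Keras (TensorFlow 2).
# Filter counts, N_ANCHORS, and N_CLASSES are illustrative assumptions,
# not the exact YOLOTrack configuration.
import tensorflow as tf
from tensorflow.keras import layers, models

N_ANCHORS = 5    # assumed number of anchor boxes per grid cell
N_CLASSES = 4    # assumed number of particle classes

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    return layers.LeakyReLU(0.1)(x)

inputs = layers.Input(shape=(416, 416, 1))   # single-channel microscopy image
x = inputs
for filters in (16, 32, 64, 128, 256):
    x = conv_block(x, filters)
    x = layers.MaxPooling2D(2)(x)            # five stride-2 pools: 416 -> 13
x = conv_block(x, 512)
x = conv_block(x, 1024)
# One prediction per anchor: (tx, ty, tw, th, objectness, class scores)
outputs = layers.Conv2D(N_ANCHORS * (5 + N_CLASSES), 1)(x)
model = models.Model(inputs, outputs)        # output grid: 13 x 13
model.summary()
```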

The picture below on the left shows the YOLOTrack-1.0 localization and classification of differently shaped microparticles in a darkfield microscopy image. The scripts to train the network in Python/Keras with the TensorFlow backend, as well as the source code to run the model inference on a GPU from Python, C++, or LabVIEW, are available in our GitHub repository: YOLOTrack-1.0.
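
The sketch below shows how such a model could be loaded and its grid predictions decoded into particle positions and classes in Python. The file name yolotrack.h5, the anchor and class counts, and the confidence threshold are assumptions for illustration; the repository scripts define the actual values.

```python
# Illustrative inference sketch; file name, anchor/class counts, and the
# confidence threshold are assumptions, not the repository's actual values.
import numpy as np
import tensorflow as tf

N_ANCHORS, N_CLASSES = 5, 4           # must match the trained model
model = tf.keras.models.load_model("yolotrack.h5", compile=False)

def detect(frame, conf_thresh=0.5):
    """frame: (416, 416) grayscale array scaled to [0, 1]."""
    pred = model.predict(frame[None, :, :, None], verbose=0)[0]
    gh, gw = pred.shape[:2]
    pred = pred.reshape(gh, gw, N_ANCHORS, 5 + N_CLASSES)
    objectness = 1.0 / (1.0 + np.exp(-pred[..., 4]))      # sigmoid
    detections = []
    for gy, gx, a in zip(*np.where(objectness > conf_thresh)):
        tx, ty = pred[gy, gx, a, :2]
        # cell-relative offsets -> image coordinates in pixels
        cx = (gx + 1.0 / (1.0 + np.exp(-tx))) * 416 / gw
        cy = (gy + 1.0 / (1.0 + np.exp(-ty))) * 416 / gh
        cls = int(np.argmax(pred[gy, gx, a, 5:]))
        detections.append((cx, cy, cls, float(objectness[gy, gx, a])))
    return detections
```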

YOLOTrack-1.1 extends YOLOTrack-1.0 to detect oriented bounding boxes. The picture below on the right shows the YOLOTrack-1.1 localization, classification, and orientation detection of elliptical, rod-like, and Janus-type microparticles in a darkfield microscopy image. The scripts and source code are available in our GitHub repository: YOLOTrack-1.1.
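
One common way to add orientation to a single-shot detector is to let each anchor additionally predict the sine and cosine of the box angle, which avoids the wrap-around discontinuity of regressing a raw angle. The sketch below assumes this encoding purely for illustration; the actual YOLOTrack-1.1 output encoding is defined in its repository.

```python
# Sketch of decoding an orientation angle from an extended prediction vector.
# The (sin, cos) angle encoding is an assumption for illustration; see the
# YOLOTrack-1.1 repository for the actual output format.
import numpy as np

def decode_angle(p):
    """p: per-anchor prediction (tx, ty, tw, th, obj, sin, cos, classes...)."""
    s, c = p[5], p[6]
    return np.degrees(np.arctan2(s, c))   # orientation in degrees
```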

See also the publication “Active Particle Feedback Control with a Single-Shot Detection Convolutional Neural Network”, M. Fränzl, F. Cichos, Sci. Rep. 10, 12571 (2020).
