SimuScan: Label-Free Deep Learning for Autonomous AFM

Abstract

Atomic force microscopy (AFM) underpins nanoscale research across materials, energy, and biology. Yet its utility is restricted by low imaging throughput and the requirement for a highly skilled operator. Progress toward autonomous operation is hindered by the scarcity of annotated datasets and the poor generalizability of existing models. To overcome these limitations, we introduce SimuScan, a synthetic data generator tailored to AFM that produces tunable, high-fidelity images of arbitrary morphologies while embedding realistic artifacts such as tip convolution, noise, flattening errors, and debris. SimuScan datasets enable scalable, label-free training of deep learning models including YOLOv8, U-Net, and Mask R-CNN. Integrated with autonomous AFM control, the trained vision system can not only locate, segment, and analyze nanoscale structures across millimeter-scale areas in real time, but also recursively act on them by acquiring new images or adapting scan parameters, achieving robust performance without human labeling or experimental intervention. We validate these autonomous closed-loop capabilities on nanostructures, DNA, and bacterial cells, demonstrating reliable high-resolution imaging with minimal operator input. By coupling synthetic data generation with deep learning and adaptive AFM control, this platform accelerates imaging workflows, broadens accessibility, and creates opportunities for high-throughput applications in nanomaterials discovery, biomedical diagnostics, and wafer-scale inspection.
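
To make the abstract's description of artifact-aware synthetic data generation concrete, the sketch below shows one way such images could be produced: a synthetic height map with hemispherical "nanoparticles", tip broadening approximated by grayscale dilation with a parabolic tip profile, and per-line offsets plus Gaussian noise mimicking flattening errors and sensor noise. This is a minimal illustration assuming NumPy/SciPy, not the authors' SimuScan implementation; all function names and parameters are hypothetical.

```python
# Minimal sketch (not the authors' code) of a SimuScan-style artifact pipeline.
import numpy as np
from scipy.ndimage import grey_dilation

rng = np.random.default_rng(0)

def synthetic_surface(size=256, n_particles=40, radius=6, height=5.0):
    """Place hemispherical 'nanoparticles' on a flat substrate (arbitrary units)."""
    z = np.zeros((size, size))
    yy, xx = np.mgrid[:size, :size]
    for _ in range(n_particles):
        cy, cx = rng.integers(radius, size - radius, 2)
        r2 = (yy - cy) ** 2 + (xx - cx) ** 2
        cap = np.sqrt(np.clip(radius**2 - r2, 0, None)) * (height / radius)
        z = np.maximum(z, cap)  # features sit on top of the substrate
    return z

def tip_convolution(z, tip_radius=4):
    """Approximate tip broadening as grayscale dilation with a parabolic tip."""
    ty, tx = np.mgrid[-tip_radius:tip_radius + 1, -tip_radius:tip_radius + 1]
    tip = -(ty**2 + tx**2) / (2.0 * tip_radius)  # inverted parabola, apex at 0
    return grey_dilation(z, structure=tip)

def add_scan_artifacts(z, sigma=0.05, drift=0.2):
    """Per-scan-line offsets mimic imperfect flattening; Gaussian noise mimics sensor noise."""
    line_offsets = rng.normal(0.0, drift, size=(z.shape[0], 1))
    return z + line_offsets + rng.normal(0.0, sigma, z.shape)

image = add_scan_artifacts(tip_convolution(synthetic_surface()))
```

Because the particle positions and radii are known at generation time, bounding boxes and masks come for free, which is what makes the downstream detector and segmentation training label-free in the sense described above.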