An Automated ML Anomaly Detection Prototype

Abstract

Predictive maintenance (PdM) often fails to progress beyond pilot projects because machine learning-based anomaly detection requires expert knowledge, extensive tuning, and labeled fault data. This paper presents an automated prototype that builds and evaluates multiple anomaly detection models with minimal manual configuration. The prototype automates feature creation, model training, hyperparameter search, and ensemble construction, while allowing domain experts to control how anomaly alerts are triggered and how detected events are reviewed. Developed in a multi-year photovoltaic (PV) solar farm case study, it targets operational anomalies such as sudden drops, underperformance periods, and abnormal drifts, using expert validation and synthetic benchmarks to shape and evaluate anomaly categories. Experiments on the real PV data, a synthetic PV benchmark, and a machine temperature dataset from the Numenta Anomaly Benchmark show that no single model performs best across datasets. Instead, diverse base models and both rule-based and stacked ensembles enable robust configurations tailored to different balances between missed faults and false alarms. Overall, the prototype offers a practical and accessible path toward PdM adoption by lowering technical barriers and providing a flexible anomaly detection approach that can be retrained and transferred across industrial time-series datasets.
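To make the abstract's pipeline description concrete, the following is a minimal sketch of the general idea: rolling-window feature creation over a time series, several diverse anomaly detectors, and a rule-based (majority-vote) ensemble. It assumes scikit-learn and pandas; the specific detectors, window sizes, contamination rates, and voting rule are illustrative assumptions, not the prototype's actual configuration.

    import numpy as np
    import pandas as pd
    from sklearn.ensemble import IsolationForest
    from sklearn.neighbors import LocalOutlierFactor
    from sklearn.svm import OneClassSVM

    # Hypothetical input: a univariate PV power time series indexed by timestamp.
    rng = np.random.default_rng(0)
    ts = pd.Series(rng.normal(100.0, 5.0, 1000),
                   index=pd.date_range("2022-01-01", periods=1000, freq="h"))
    ts.iloc[500:520] -= 40.0  # injected sudden drop, for illustration only

    # Automated feature creation: rolling statistics over the raw signal.
    features = pd.DataFrame({
        "value": ts,
        "roll_mean": ts.rolling(24, min_periods=1).mean(),
        "roll_std": ts.rolling(24, min_periods=1).std().fillna(0.0),
        "diff": ts.diff().fillna(0.0),
    })

    # Diverse base detectors; each labels points as -1 (anomaly) or 1 (normal).
    detectors = {
        "iforest": IsolationForest(contamination=0.02, random_state=0),
        "lof": LocalOutlierFactor(n_neighbors=50, contamination=0.02),
        "ocsvm": OneClassSVM(nu=0.02),
    }

    votes = pd.DataFrame(index=features.index)
    for name, det in detectors.items():
        if name == "lof":
            # LOF (in its default outlier-detection mode) only supports fit_predict.
            votes[name] = det.fit_predict(features)
        else:
            votes[name] = det.fit(features).predict(features)

    # Rule-based ensemble: raise an alert when a majority of detectors agree.
    alerts = (votes == -1).sum(axis=1) >= 2
    print(f"Flagged {alerts.sum()} of {len(alerts)} points as anomalous")

The majority-vote threshold stands in for the configurable alert rules mentioned in the abstract: raising it trades false alarms for more missed faults, and lowering it does the opposite; a stacked ensemble would instead feed the base detectors' scores into a second-level model.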
