Evaluating AlphaEarth Foundation Embeddings for Pixel- and Object-Based Land Cover Classification in Google Earth Engine


Abstract

Foundation models such as AlphaEarth introduce a new paradigm in remote sensing by providing semantically rich, pretrained embeddings that integrate multi-sensor, spatio-temporal, and contextual information. This study evaluates the performance of AlphaEarth embeddings for land-cover classification under both pixel-based and object-based paradigms within the Google Earth Engine (GEE) environment. Sentinel-2 imagery for 2024 was used to map a 1,930-hectare region in Pabbi Tehsil, Khyber Pakhtunkhwa, Pakistan, where rapid urbanization is reshaping traditional land use. Four experimental configurations—Pixel-Based Spectral Indices (PBSI), Pixel-Based AlphaEarth Embeddings (PBAE), Object-Based Spectral Indices (OBSI), and Object-Based AlphaEarth Embeddings (OBAE)—were implemented using a Random Forest classifier. The results show that AlphaEarth embeddings consistently outperformed spectral index–based models, improving overall accuracy by ≈ 5 percentage points and the Kappa coefficient by ≈ 3 points. Object-based approaches enhanced spatial coherence and boundary delineation, particularly for built-up and road classes, while maintaining stable area statistics across pipelines. The findings demonstrate that pretrained embeddings can achieve deep-learning-level accuracy through lightweight, cloud-native workflows, offering an efficient pathway for land-cover mapping and urban-cadastral monitoring in data-scarce regions.
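The core of the pixel-based embedding configuration (PBAE) described above is a Random Forest classifier trained on per-pixel embedding vectors and evaluated with overall accuracy and Cohen's Kappa. The following is a minimal offline sketch of that step using scikit-learn; the synthetic 64-dimensional vectors, class names, and sample counts are illustrative assumptions standing in for AlphaEarth embeddings sampled at labeled training pixels, not the authors' actual data or GEE workflow.

```python
# Illustrative sketch of the PBAE classification step: a Random Forest on
# per-pixel embedding vectors, scored with overall accuracy and Kappa.
# Synthetic 64-D vectors stand in for AlphaEarth embeddings (assumption);
# class labels and sample sizes are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_per_class, dim = 200, 64
classes = ["built-up", "vegetation", "water", "bare-soil", "road"]  # illustrative

# Synthetic "embeddings": each class clustered around its own random centroid.
X = np.vstack([
    rng.normal(loc=rng.normal(size=dim), scale=0.5, size=(n_per_class, dim))
    for _ in classes
])
y = np.repeat(np.arange(len(classes)), n_per_class)

# Stratified train/test split of the labeled pixels.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

# Random Forest classifier, as in all four experimental configurations.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

oa = accuracy_score(y_te, pred)       # overall accuracy
kappa = cohen_kappa_score(y_te, pred) # Cohen's Kappa
print(f"Overall accuracy: {oa:.3f}, Kappa: {kappa:.3f}")
```

In the actual study this step runs cloud-natively in GEE (`ee.Classifier.smileRandomForest` over the embedding bands); the sketch only shows the classification-and-scoring logic in isolation.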