Support

Get started, find answers, and troubleshoot issues.

General Questions

What does LGND do?

LGND generates, hosts, and queries embeddings from large Earth observation models to support wide-ranging analytics.

What problem is LGND solving?

Previous image classification architectures required hundreds of labeled examples and custom models. LGND, built on the new generation of large Earth observation models, achieves the same outcomes faster and at a fraction of the cost.

Historically, image classification models required hundreds or thousands of annotated examples to become expert “recognizers.” Large Earth observation models are smarter, faster learners that can accomplish the same tasks with just a handful of examples.

How accurate is LGND?

LGND unlocks significant accuracy with just a few reference examples. Accuracy depends on many factors: how much training data is provided, how distinct an object is relative to its surroundings, and how variable the object is over space and time. It is rare for a model to work perfectly out of the box. As with other AI tools, LGND’s analytics are refined through user prompting and feedback.

How do I use LGND?

LGND offers an API as well as our LGND Studio app. The API is best for technical users who want to generate and host embeddings and integrate them into existing systems. Our Studio app is an end-to-end service with an interactive user interface for Earth observation analytics.
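As a rough illustration of the API workflow, the sketch below assembles a request body for generating embeddings over an area of interest. Every field name here (`imagery`, `date_range`, `aoi`) is an illustrative assumption, not LGND's documented API; consult the actual API reference for real endpoints and parameters.

```python
import json

# Hypothetical request body for generating embeddings over an area of
# interest. All field names are illustrative assumptions, not LGND's
# documented API.
payload = {
    "imagery": "sentinel-2",
    "date_range": ["2024-06-01", "2024-06-30"],
    "aoi": {  # GeoJSON polygon delimiting the area of interest
        "type": "Polygon",
        "coordinates": [[[-122.5, 37.7], [-122.4, 37.7],
                         [-122.4, 37.8], [-122.5, 37.8],
                         [-122.5, 37.7]]],
    },
}
body = json.dumps(payload)
print(json.loads(body)["imagery"])
```

The area of interest is expressed as standard GeoJSON, which most geospatial tooling can produce directly.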

What analytics does LGND support?

LGND can identify bounding boxes for features of interest anywhere in the world. Embeddings can be used to classify images as well as to model continuous variables such as above-ground biomass.
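As a sketch of how classification over embeddings can work with only a handful of labeled examples, the toy code below assigns a query embedding to the class whose example centroid is most similar (nearest-centroid with cosine similarity). The vectors, class names, and classifier are illustrative only; this is not LGND's internal method.

```python
import math

def centroid(vectors):
    """Average a list of embedding vectors component-wise."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def classify(query, examples_by_class):
    """Assign `query` to the class whose example centroid is most similar."""
    centroids = {label: centroid(vs) for label, vs in examples_by_class.items()}
    return max(centroids, key=lambda label: cosine(query, centroids[label]))

# Toy 3-D "embeddings"; real embeddings have hundreds of dimensions.
examples = {
    "water":  [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]],
    "forest": [[0.1, 0.9, 0.2], [0.0, 0.8, 0.3]],
}
print(classify([0.85, 0.15, 0.05], examples))  # → water
```

Because the heavy lifting happens when the foundation model produces the embeddings, the downstream classifier can stay this simple and still work from only a few reference examples.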

Which large Earth observation model does LGND use?

LGND currently hosts the Clay foundation model. Additional open-source models will be available in the near future. 

What imagery sources are available?

LGND provides easy access to Sentinel-2, Landsat, and NAIP imagery.

Technical Support

What bands were used for pretraining?

The Clay model was trained on 10 bands from Sentinel-2 imagery, 10 bands from Landsat imagery, and all four bands of NAIP.

Which bands can be used for inference?

Band wavelengths are encoded in the model, so it can extrapolate to wavelengths that are within or near the ranges used for pretraining.

How long does it take to train and run a classification model on LGND?

Both training and inference typically take anywhere from a few minutes to a few hours. Training time depends on the number of labels provided; inference time depends on the size of the area of interest.

How frequently can I run a model?

You can run a model as many times as you’d like. If you're studying a phenomenon that changes frequently, you can run your model on each update of imagery. Sentinel-2 offers updates roughly every five days; Landsat offers updates roughly every eight days with Landsat 8 and 9 combined. NAIP imagery updates every other year.

How frequently can I update my model’s results?

Models can be run each time new (cloud-free!) imagery becomes available. 

How large of an area can be analyzed?

LGND can be run on an area of any size. The unit of analysis is a raster tile: a single remotely sensed image (satellite, aerial, or drone) captured at a specific location on Earth and at a specific time.

How large are raster tiles?

Raster tiles, also called chips, are typically 256x256 pixels. A 256x256 Sentinel-2 chip, where each pixel is 10 meters, therefore spans 2.56 km on a side, covering an area of roughly 6.55 km^2.
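The chip arithmetic can be checked directly: a 256-pixel chip at 10 m per pixel spans 2.56 km per side.

```python
pixels = 256          # chip width and height in pixels
resolution_m = 10     # Sentinel-2 pixel size in meters

side_km = pixels * resolution_m / 1000   # 2.56 km per side
area_km2 = side_km ** 2                  # ~6.55 km^2 per chip

print(side_km, round(area_km2, 2))  # → 2.56 6.55
```

The same calculation applies to other sources by swapping in their pixel size (e.g., 30 m for most Landsat bands, 1 m or finer for NAIP).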