Small utilities for drone-mapping workflows alongside OpenDroneMap.
- `footprints.py` — render the ground footprints of a set of drone images as a JPEG, with coverage area in m²/ha. No ML, no dense reconstruction — just EXIF + XMP.
- `deepforest_detect.py` — run DeepForest on an orthomosaic to detect individual tree crowns and export them as a georeferenced GeoJSON.
One-time conda env (Python 3.11, conda-forge geo stack, DeepForest via pip):
```
cd drone-tools
conda env create -f environment.yml
conda activate drone
```

Update the env later with `conda env update -f environment.yml`.
`footprints.py` only uses Pillow, so it also works outside this env (system Python + `pip install Pillow` is enough). `deepforest_detect.py` needs the full env.
Reads every JPEG in a folder, computes each image's ground footprint from GPS + DJI XMP (`RelativeAltitude`, `GimbalYawDegree`) and the camera model, and renders them on a local tangent plane. 15 % alpha fill so overlapping coverage stacks visibly; darker regions = more images covering that spot; white = gap.
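The core geometry can be sketched as follows: a minimal pinhole-model footprint, assuming a perfectly nadir camera over flat ground. `footprint_corners` and its yaw convention are illustrative, not the script's actual function:

```python
import math

def footprint_corners(h_m, focal_mm, sensor_w_mm, sensor_h_mm, yaw_deg):
    """Ground footprint corners (east, north) in metres around the nadir point.

    Assumes a simple pinhole camera pointing straight down at relative
    altitude h_m over flat ground; yaw is compass-style (clockwise from north).
    """
    # Similar triangles: ground extent = altitude * sensor extent / focal length.
    gw = h_m * sensor_w_mm / focal_mm   # footprint width on the ground (m)
    gh = h_m * sensor_h_mm / focal_mm   # footprint height on the ground (m)
    yaw = math.radians(yaw_deg)
    c, s = math.cos(yaw), math.sin(yaw)
    corners = []
    for dx, dy in [(-gw / 2, -gh / 2), (gw / 2, -gh / 2),
                   (gw / 2, gh / 2), (-gw / 2, gh / 2)]:
        # Rotate image-frame offsets into the local east/north tangent plane.
        corners.append((dx * c + dy * s, -dx * s + dy * c))
    return corners
```

At 100 m with a 13.2 × 8.8 mm sensor behind an 8.8 mm lens this gives a 150 × 100 m rectangle; all footprints are then alpha-composited on the tangent plane.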
```
python footprints.py <images-folder> [output.jpg]
```

Example:

```
python footprints.py ~/dev/drone/voo-4/images/
# -> ~/dev/drone/voo-4/images_footprints.jpg (+ .jgw + .prj)
```

Output files (always a JPEG, named `<folder>_footprints.jpg` if no second arg is given):
- `<name>.jpg` — the rendering, with the total covered area drawn in the corner.
- `<name>.jgw` + `<name>.prj` — sidecar world files so QGIS can drop the JPEG onto a basemap in WGS-84.
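For reference, a `.jgw` world file is just six plain-text numbers. A minimal writer, assuming axis-aligned WGS-84 pixels (`write_world_file` is a hypothetical helper, not the script's function):

```python
def write_world_file(path, px_size_deg, top_left_lon, top_left_lat):
    """Write a six-line world file (.jgw) mapping pixels to WGS-84 degrees.

    World files reference the CENTER of the top-left pixel, and the y
    pixel size is negative because image rows run south.
    """
    values = [
        px_size_deg,                      # pixel width (map units / pixel)
        0.0,                              # row rotation (none)
        0.0,                              # column rotation (none)
        -px_size_deg,                     # pixel height, negative
        top_left_lon + px_size_deg / 2,   # x of the top-left pixel center
        top_left_lat - px_size_deg / 2,   # y of the top-left pixel center
    ]
    with open(path, "w") as f:
        f.write("\n".join(f"{v:.10f}" for v in values) + "\n")
```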
- Assumes nadir-pointed shots. Prints a warning if any gimbal pitch deviates more than 15° from -90°.
- Sensor size is looked up by camera model (common DJI sensors are hardcoded in the script); falls back to the 35 mm-equivalent focal length when the model is unknown. Add new sensors to `DJI_SENSORS` at the top of the file if yours is missing.
- Ground-plane elevation is taken from the DJI `RelativeAltitude` tag (altitude above the takeoff point). If that tag is missing, the script falls back to the spread of GPS MSL altitudes, which is less accurate.
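The 35 mm-equivalent fallback works because the crop factor pins down the sensor diagonal. A sketch, assuming a 4:3 sensor and the standard 43.27 mm full-frame diagonal (`sensor_size_from_35mm_equiv` is illustrative, not the script's function):

```python
import math

FULL_FRAME_DIAG_MM = 43.27  # diagonal of a 36 x 24 mm full-frame sensor

def sensor_size_from_35mm_equiv(focal_mm, focal_35mm_equiv, aspect=(4, 3)):
    """Estimate sensor width/height (mm) from the 35 mm-equivalent focal length."""
    crop = focal_35mm_equiv / focal_mm        # crop factor
    diag = FULL_FRAME_DIAG_MM / crop          # sensor diagonal (mm)
    aw, ah = aspect
    unit = diag / math.hypot(aw, ah)          # mm per aspect-ratio unit
    return aw * unit, ah * unit
```

For an 8.8 mm lens with a 24 mm equivalent this recovers roughly a 12.7 × 9.5 mm (1-inch-class) sensor.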
Runs DeepForest's pretrained tree-crown detector on a large GeoTIFF orthomosaic and writes the detections as GeoJSON you can overlay on the ortho in QGIS.
```
python deepforest_detect.py <ortho.tif> [options]
```

Options:

| Flag | Default | Meaning |
|---|---|---|
| `--output-dir DIR` | ortho's folder | where outputs land |
| `--gsd METERS` | 0.1 | target ground sample distance (10 cm). The ortho is resampled to this before inference. |
| `--patch-size PX` | 400 | tile size passed to `predict_tile` (pixels at the resampled GSD). |
| `--patch-overlap FRAC` | 0.25 | overlap between tiles, so NMS can merge boxes on tile edges. |
Example:

```
conda activate drone
python deepforest_detect.py ~/dev/drone/voo-4/odm_orthophoto/odm_orthophoto.tif
```

Outputs, next to the ortho:
- `<stem>_trees.geojson` — one `Polygon` feature per detected crown, with `label`, `score`, and original pixel extents. CRS is inherited from the ortho.
- `<stem>_trees_preview.jpg` — the resampled ortho with orange boxes drawn over detections, for a quick eyeball check.
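Georeferencing the detections amounts to pushing each pixel-space box through the ortho's affine geotransform. A sketch using a GDAL-style six-tuple (`pixel_box_to_geo` is a hypothetical helper; the script may equally use rasterio's `Affine`):

```python
def pixel_box_to_geo(box, gt):
    """Convert a pixel-space box (xmin, ymin, xmax, ymax) to a closed
    GeoJSON-style polygon ring in the ortho's CRS.

    gt is a GDAL-style geotransform:
    (origin_x, px_width, row_rot, origin_y, col_rot, px_height<0).
    """
    xmin, ymin, xmax, ymax = box

    def px2geo(col, row):
        return (gt[0] + col * gt[1] + row * gt[2],
                gt[3] + col * gt[4] + row * gt[5])

    gx0, gy0 = px2geo(xmin, ymin)  # top-left corner
    gx1, gy1 = px2geo(xmax, ymax)  # bottom-right corner
    # Closed ring: first vertex repeated at the end, as GeoJSON requires.
    return [(gx0, gy0), (gx1, gy0), (gx1, gy1), (gx0, gy1), (gx0, gy0)]
```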
The pretrained model was trained on NEON RGB imagery at ~10 cm GSD. Orthos at 2 cm (ODM's default output) give it patches where a single tree fills the entire 400 × 400 patch: it either under-predicts or needs a very large `--patch-size` to compensate. Resampling to the model's native ~10 cm is faster and simpler.
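As a quick sanity check on the resampling, the pixel dimensions simply shrink by the ratio of the GSDs (`resampled_shape` is illustrative, not the script's function):

```python
def resampled_shape(width_px, height_px, src_gsd_m, target_gsd_m):
    """Pixel dimensions after resampling from the source GSD to the target GSD."""
    scale = src_gsd_m / target_gsd_m   # e.g. 0.02 / 0.10 -> 0.2
    return max(1, round(width_px * scale)), max(1, round(height_px * scale))
```

A 10 000 × 8 000 px ortho at 2 cm becomes 2 000 × 1 600 px at 10 cm, so inference touches 1/25 of the pixels.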
- The pretrained NEON model is US temperate forest. On Brazilian vegetation it tends to over-predict on grass/shrubs and miss some crowns in dense closed canopy. For a real project, fine-tune with a few hundred hand-annotated boxes (see DeepForest docs).
- GPU is auto-detected via PyTorch. On an RTX-class GPU a moderate ortho runs in seconds; on CPU, minutes.
```
drone-tools/
├── README.md               this file
├── LICENSE                 MIT
├── environment.yml         conda env (python + rasterio + geopandas + pillow + deepforest)
├── footprints.py           image-footprint renderer
└── deepforest_detect.py    DeepForest tree-crown detector
```
MIT — see LICENSE.