Altametris - RandLA-Net

Welcome to randlanet’s documentation

Note

For complete documentation, please visit the randlanet repository.

Owner: altametris.

Project Description

RandLA-Net: a deep learning model for point cloud segmentation.

Steps

  1. Install the package following the Installation section

  2. Learn how to use it following the Usage section

Contents

Contributors

README

RandLA-Net

RandLA-Net model for point cloud segmentation. Implemented in Python 3.11 and PyTorch 2.4.

Installation

RandLA-Net can be installed directly from Azure Artifacts. See the details in the documentation.

However, some conda dependencies need to be installed to fully use the package:

conda install pytorch==2.4.0 pytorch-cuda=12.1 -c pytorch -c nvidia
conda install -c conda-forge python-pdal=3.4.5

Usage

RandLA-Net can be used for preparing data, training, evaluating, and inference from the command line with minimal configuration. All experiments are tracked using mlflow, either locally on your machine or on Azure ML.

Configuration

The model’s architecture, training parameters, evaluation parameters, and label mapping can be specified in a config.yaml. See altametris.randlanet.config.config.yaml for a reference example.
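The authoritative schema is the altametris.randlanet.config.config.yaml shipped with the package; the fragment below is only a hypothetical illustration of the kinds of fields such a file groups (every key name here is an assumption, not the package's actual schema):

```yaml
# Hypothetical config.yaml sketch -- key names are illustrative only;
# refer to altametris.randlanet.config.config.yaml for the real schema.
model:
  num_layers: 5
  num_neighbors: 16      # k used by the knn gathering step
training:
  batch_size: 4
  max_epochs: 100
  learning_rate: 0.01
evaluation:
  batch_size: 8
label_mapping:           # raw label -> training class id
  0: ignore
  1: ground
  2: vegetation
```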

Data preparation

Point clouds must be transformed into prepared data that is more convenient for training and evaluation. A KDTree and a labels array are created for each dataset so that fast knn can be applied to input clouds. This step uses multi-processing and writes the files with a predefined naming convention: filename_KDTree.pkl and filename_labels.npy.

randlanet-prepare --input-dir ./input --output-dir ./output --skip --num-workers -1

--skip skips point clouds whose prepared data already exists, and --num-workers sets the number of multi-processing workers; -1 uses all CPUs minus 2.
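The actual preparation code ships with the package; as a rough sketch of the convention described above (a pickled KDTree in filename_KDTree.pkl, labels in filename_labels.npy, and -1 meaning "all CPUs minus 2"), one could write something like the following. The helper names and the use of scipy.spatial.cKDTree are assumptions, not the package's internals:

```python
import os
import pickle
from pathlib import Path

import numpy as np
from scipy.spatial import cKDTree  # assumed stand-in for the package's KDTree


def resolve_workers(num_workers: int) -> int:
    """-1 means 'all CPUs minus 2', per the --num-workers convention above."""
    return max(1, os.cpu_count() - 2) if num_workers == -1 else num_workers


def prepare_cloud(points: np.ndarray, labels: np.ndarray,
                  output_dir: Path, stem: str) -> cKDTree:
    """Build a KDTree for fast knn and store it alongside the labels,
    following the filename_KDTree.pkl / filename_labels.npy convention."""
    output_dir.mkdir(parents=True, exist_ok=True)
    tree = cKDTree(points)
    with open(output_dir / f"{stem}_KDTree.pkl", "wb") as f:
        pickle.dump(tree, f)
    np.save(output_dir / f"{stem}_labels.npy", labels)
    return tree


# Tiny usage example: 100 random 3D points with integer labels.
rng = np.random.default_rng(0)
pts = rng.random((100, 3))
lbl = rng.integers(0, 4, size=100)
tree = prepare_cloud(pts, lbl, Path("./prepared"), "tile_0")
# knn query: indices of the 8 nearest neighbors of the first point
_, idx = tree.query(pts[0], k=8)
```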

Model training

randlanet-train --prepared-dir ./data --log-dir ./log --config ./config.yaml --experiment-name randlanet-training --run-name run1 --tags model=v2 env=test

Except --prepared-dir and --config, all arguments are optional.
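The --tags argument takes space-separated key=value pairs (model=v2 env=test above). A minimal sketch of how such pairs map to a tag dictionary, e.g. for mlflow run tagging; the parse_tags helper is hypothetical, not part of the package:

```python
def parse_tags(pairs: list[str]) -> dict[str, str]:
    """Split 'key=value' strings, as passed via --tags model=v2 env=test."""
    tags = {}
    for pair in pairs:
        key, _, value = pair.partition("=")
        tags[key] = value
    return tags


tags = parse_tags(["model=v2", "env=test"])
# e.g. mlflow.set_tags(tags) inside an active mlflow run
```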

Model evaluation

randlanet-evaluate --prepared-dir ./data/prepared --raw-dir ./data/raw --config ./config.yaml --model ./checkpoint.pt --log-dir ./log --experiment-name randlanet-evaluate --run-name run1 --tags model=v2 env=test

Model inference

To run inference on point clouds in las or laz format:

randlanet-predict --input-dir ./data --output-dir ./output --model ./checkpoint.pt --batch-size 16 --device gpu --possibility-threshold 0.5 --prediction-smooth 0.98 --verbose

Except --input-dir, --output-dir, and --model, all other arguments are optional.

  • batch-size: batch size used during inference

  • device: torch device, gpu or cpu

  • possibility-threshold: parameter of the inference loop. Keep at 0.5.

  • prediction-smooth: parameter for merging predictions after the inference loop. Keep at 0.98.

  • verbose: show a progress bar
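A prediction-smooth value close to 1 suggests an exponential moving average over repeated inference passes, which is the form used in the original RandLA-Net reference implementation; the sketch below illustrates that reading and is not this package's code (function and variable names are illustrative):

```python
import numpy as np


def merge_predictions(running_probs: np.ndarray,
                      new_probs: np.ndarray,
                      smooth: float = 0.98) -> np.ndarray:
    """Exponential moving average over inference passes: each new pass
    nudges the running per-point class probabilities by (1 - smooth)."""
    return smooth * running_probs + (1.0 - smooth) * new_probs


# Two points, three classes: running estimate updated by one new pass.
running = np.array([[0.5, 0.5, 0.0],
                    [1.0, 0.0, 0.0]])
new = np.array([[1.0, 0.0, 0.0],
                [1.0, 0.0, 0.0]])
merged = merge_predictions(running, new, smooth=0.98)
```

With smooth=0.98, a single pass moves the running estimate only slightly, which is why many passes are accumulated before points clear the possibility threshold and predictions are finalized.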
