altametris.sara.core.base_detector¶
Base detector class for inference.
Provides a common interface for running inference on ML models with support for:
- Model loading
- Batch and single image prediction
- Device management
- Warmup for performance optimization
Example
>>> class MyDetector(BaseDetector):
...     def _load_model(self, model_path, **kwargs):
...         self.model = torch.load(model_path)
...     def predict(self, source, **kwargs):
...         return self.model(source)
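For reference, a more complete, self-contained sketch of a subclass is shown below. The TorchScript model, the class name, and the preprocessing assumptions are illustrative; only BaseDetector, _load_model, and predict come from this module, and the sketch assumes the base class resolves the device before calling _load_model().

from pathlib import Path
from typing import Any

import torch

from altametris.sara.core.base_detector import BaseDetector


class TorchScriptDetector(BaseDetector):
    """Hypothetical detector wrapping a TorchScript module (illustrative only)."""

    def _load_model(self, model_path: Path, **kwargs: Any) -> None:
        # Load the scripted module onto the resolved device and switch to
        # eval mode; the base class exposes the device via the `device` property.
        self.model = torch.jit.load(str(model_path), map_location=self.device)
        self.model.eval()

    def predict(self, source: Any, **kwargs: Any) -> Any:
        # Minimal inference path: expects `source` to already be a
        # (N, C, H, W) float tensor; a real detector would also accept
        # image paths and handle preprocessing here.
        with torch.no_grad():
            return self.model(source.to(self.device))

Instantiating it then follows the constructor example further down, e.g. TorchScriptDetector(model_path="weights/model.torchscript", device="auto").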
Attributes¶
logger

Classes¶
BaseDetector – Abstract base class for model inference/detection.
Module Contents¶
- altametris.sara.core.base_detector.logger¶
- class altametris.sara.core.base_detector.BaseDetector(model_path: str | pathlib.Path, device: str = 'auto', warmup: bool = False, **kwargs: Any)¶
Bases: abc.ABC

Abstract base class for model inference/detection.
Provides common inference infrastructure:
- Model loading and initialization
- Device management
- Warmup capability
- Prediction interface
- Parameters:
model_path – Path to model weights
device – Device for inference (“cpu”, “cuda”, “mps”, “auto”)
warmup – Whether to run warmup inference on initialization
Example
>>> detector = MyDetector(model_path="weights/best.pt", device="cuda")
>>> results = detector.predict(source="image.jpg")
- model_path¶
- _device¶
- model = None¶
- _is_initialized = False¶
- _resolve_device(device: str) → torch.device¶
Resolve device string to torch.device.
- Parameters:
device – Device string
- Returns:
Resolved torch.device
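The resolution policy itself is not documented on this page; a common "auto" strategy, shown here purely as an assumption, prefers CUDA, then Apple MPS, then CPU:

import torch


def resolve_device_sketch(device: str = "auto") -> torch.device:
    # Illustrative "auto" policy (not necessarily what BaseDetector does):
    # prefer CUDA, fall back to Apple MPS, otherwise use the CPU.
    if device != "auto":
        return torch.device(device)
    if torch.cuda.is_available():
        return torch.device("cuda")
    if getattr(torch.backends, "mps", None) and torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")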
- property device: torch.device¶
Get current device.
- property is_initialized: bool¶
Check if detector is initialized.
- abstract _load_model(model_path: pathlib.Path, **kwargs: Any) → None¶
Load model from path.
- Parameters:
model_path – Path to model file
**kwargs – Additional loading arguments
- Raises:
ModelError – If model cannot be loaded
Note
Must be implemented by subclasses. Should set self.model.
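Inside a hypothetical subclass, an override could look like the sketch below. ModelError is part of the documented contract but its import path is not shown on this page, so the sketch re-raises a RuntimeError instead; the file check and TorchScript loading are assumptions.

from pathlib import Path
from typing import Any

import torch


def _load_model(self, model_path: Path, **kwargs: Any) -> None:
    # Sketch of a subclass override: load the weights and expose them on
    # self.model, as the contract above requires.
    if not Path(model_path).is_file():
        # Real implementations should raise ModelError here (import path
        # not shown on this page).
        raise RuntimeError(f"Model file not found: {model_path}")
    self.model = torch.jit.load(str(model_path), map_location=self.device)
    self.model.eval()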
- abstract predict(source: Any, **kwargs: Any) → Any¶
Run inference on source.
- Parameters:
source – Input source (image path, array, video, etc.)
**kwargs – Inference parameters (conf, iou, etc.)
- Returns:
Prediction results (format depends on detector type)
- Raises:
InferenceError – If prediction fails
Note
Must be implemented by subclasses.
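Inside a hypothetical subclass, a predict() override might thread inference parameters through **kwargs as sketched below; the conf default, the array-only input handling, and the thresholding are assumptions.

from typing import Any

import numpy as np
import torch


def predict(self, source: Any, **kwargs: Any) -> Any:
    # Sketch of a subclass override. Only array-like input is handled here;
    # a real detector would also accept image paths, videos, etc.
    conf = kwargs.get("conf", 0.25)      # confidence threshold (illustrative default)
    batch = torch.as_tensor(np.asarray(source), dtype=torch.float32)
    if batch.ndim == 3:                  # single image (C, H, W) -> add a batch dim
        batch = batch.unsqueeze(0)
    with torch.no_grad():
        outputs = self.model(batch.to(self.device))
    # Keep only predictions whose best score clears the threshold
    # (assumes the model returns a (N, num_classes) score tensor).
    keep = outputs.max(dim=-1).values >= conf
    return outputs[keep]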
- warmup(iterations: int = 3) → None¶
Warmup the model with dummy inference.
Useful for GPU models to pre-allocate memory and compile kernels.
- Parameters:
iterations – Number of warmup iterations
Example
>>> detector.warmup(iterations=5)
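Warmup can also be requested at construction time via the warmup flag (MyDetector below stands in for any concrete subclass):

>>> detector = MyDetector(model_path="weights/best.pt", device="cuda", warmup=True)
>>> detector.warmup(iterations=5)   # or re-run it explicitly later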
- validate_source(source: Any) → None¶
Validate input source.
- Parameters:
source – Input source to validate
- Raises:
InferenceError – If source is invalid
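A sketch of guarding a batch loop with validate_source; InferenceError's import path is not shown on this page, so the sketch catches a broad Exception and skips the offending input.

>>> for source in ["a.jpg", "b.jpg"]:
...     try:
...         detector.validate_source(source)
...     except Exception:   # ideally catch InferenceError specifically
...         continue
...     results = detector.predict(source=source)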
- __call__(source: Any, **kwargs: Any) → Any¶
Callable interface for prediction.
- Parameters:
source – Input source
**kwargs – Inference parameters
- Returns:
Prediction results
Example
>>> detector = MyDetector(model_path="weights/best.pt")
>>> results = detector("image.jpg", conf=0.5)
- __repr__() → str¶
String representation of detector.