vollo_torch

class vollo_torch.Fp8Weights

Weights used inside this context will be quantized to an 8-bit floating-point format, which may help a model fit on the board.
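The description suggests `Fp8Weights` is used as a context manager wrapped around model construction. The sketch below illustrates that assumed usage pattern with a stdlib stand-in stub, since the real `vollo_torch` class and its exact behavior are not shown here:

```python
from contextlib import contextmanager

# Hypothetical stand-in for vollo_torch.Fp8Weights (assumed context-manager
# usage). The real class would mark weights created inside the block for
# 8-bit quantization; this stub only demonstrates the wrapping pattern.
@contextmanager
def Fp8Weights():
    yield

class TinyModel:
    def __init__(self):
        with Fp8Weights():
            # Weights built inside the context would be stored in the
            # 8-bit format, shrinking the model's on-board footprint.
            self.weights = [[0.0] * 16 for _ in range(16)]

model = TinyModel()
print(len(model.weights))  # 16
```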

class vollo_torch.Fp32Activations

Activations and computations inside this context will be performed in 32-bit precision. See https://vollo.myrtle.ai/latest/supported-models.html for the operations that support this.
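By analogy with the class above, `Fp32Activations` presumably wraps the part of a computation that should stay in 32-bit precision. A minimal sketch of that assumed pattern, again using a hypothetical stdlib stand-in rather than the real `vollo_torch` class:

```python
from contextlib import contextmanager

# Hypothetical stand-in for vollo_torch.Fp32Activations (assumed
# context-manager usage). The real class would keep the enclosed
# activations/computations in 32-bit precision on the accelerator;
# this stub only demonstrates where the context would sit.
@contextmanager
def Fp32Activations():
    yield

def forward(x):
    # Computation outside the context runs at the default precision.
    y = x * 2.0
    with Fp32Activations():
        # Operations here would run in fp32 (only for supported ops;
        # see the supported-models page linked above).
        y = y + 1.0
    return y

print(forward(3.0))  # 7.0
```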