
anofox-forecast


Technical depth grading and code quality analysis powered by pmat

Time series forecasting library for Rust.

Provides 50+ forecasting models, 76+ statistical features, automatic model selection, ensemble methods, seasonality decomposition, changepoint detection, anomaly detection, hierarchical reconciliation, and model serialization.

Use Cases

Want to try it out? Use the anofox app for interactive forecasting in the browser.

Need to run this on 10GB of data? Use our DuckDB extension for SQL-native forecasting at scale.

Need to use this in a React Dashboard? Use our npm package for WebAssembly-powered forecasting in the browser.

npm install @sipemu/anofox-forecast
import init, { TimeSeries, AutoForecaster, AutoEnsembleForecaster } from '@sipemu/anofox-forecast';

await init();

const ts = new TimeSeries([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]);
const model = new AutoForecaster();
model.fit(ts);
const forecast = model.predict(5);
console.log(forecast.values);

Features

Forecasting

Analysis & Decomposition

Evaluation & Uncertainty

Data Processing & Pipeline

Persistence & Interoperability

Installation

Add this to your Cargo.toml:

[dependencies]
anofox-forecast = "0.6.0"

Optional Features

[dependencies]
# The lines below are alternatives; pick the one you need.

# Parallel AutoARIMA (4-8x speedup via rayon; opt-in so embedding contexts like DuckDB stay single-threaded)
anofox-forecast = { version = "0.6", features = ["parallel"] }

# Model serialization (save/load to JSON)
anofox-forecast = { version = "0.6", features = ["serde"] }

# Probabilistic postprocessing (conformal, IDR, QRA) is enabled by default; opt out with:
anofox-forecast = { version = "0.6", default-features = false }

| Feature | Default | Description |
|---|---|---|
| postprocess | Yes | Conformal prediction, IDR, QRA, historical simulation |
| parallel | No | Rayon-based parallelism for AutoARIMA, AutoForecast, bootstrap, and cross-validation (not available on WASM) |
| serde | No | JSON and bincode serialization/deserialization for models |

Quick Start

Creating a Time Series

use anofox_forecast::prelude::*;
use chrono::{TimeZone, Utc};

// Create timestamps
let timestamps: Vec<_> = (0..100)
    .map(|i| Utc.with_ymd_and_hms(2024, 1, 1, 0, 0, 0).unwrap() + chrono::Duration::days(i))
    .collect();

// Create values
let values: Vec<f64> = (0..100).map(|i| (i as f64 * 0.1).sin() + 10.0).collect();

// Build the time series
let ts = TimeSeries::builder()
    .timestamps(timestamps)
    .values(values)
    .build()?;

Automatic Model Selection

use anofox_forecast::prelude::*;
use anofox_forecast::models::auto_forecast::AutoForecast;

// Automatically selects the best model across ARIMA, ETS, and Theta
let mut model = AutoForecast::new();
model.fit(&ts)?;

let forecast = model.predict(12)?;
println!("Best model: {}", model.name());

ARIMA Forecasting

use anofox_forecast::prelude::*;
use anofox_forecast::models::arima::ARIMA;

// Create and fit an ARIMA(1,1,1) model
let mut model = ARIMA::new(1, 1, 1);
model.fit(&ts)?;

// Generate forecasts with 95% confidence intervals
let forecast = model.predict_with_intervals(12, 0.95)?;

println!("Point forecasts: {:?}", forecast.primary());
println!("Lower bounds: {:?}", forecast.lower_series(0));
println!("Upper bounds: {:?}", forecast.upper_series(0));

Holt-Winters Forecasting

use anofox_forecast::models::exponential::HoltWinters;

// Create Holt-Winters with additive seasonality (period = 12)
let mut model = HoltWinters::additive(12);
model.fit(&ts)?;

let forecast = model.predict(24)?;

Model Comparison

use anofox_forecast::models::{BoxedForecaster, ModelRegistry};
use anofox_forecast::utils::comparison::{compare_registry, ComparisonConfig};

// Compare all registered models side-by-side
let config = ComparisonConfig::default();
let table = compare_registry(&ts, &config)?;
println!("{}", table);

Feature Extraction

use anofox_forecast::features::{mean, variance, skewness, approximate_entropy};

let values = ts.values();

let m = mean(values);
let v = variance(values);
let s = skewness(values);
let ae = approximate_entropy(values, 2, 0.2)?;

println!("Mean: {}, Variance: {}, Skewness: {}, ApEn: {}", m, v, s, ae);

STL Decomposition

use anofox_forecast::seasonality::Stl;

// Decompose with seasonal period of 12
let stl = Stl::new(12)?;
let decomposition = stl.decompose(&ts)?;

println!("Trend: {:?}", decomposition.trend());
println!("Seasonal: {:?}", decomposition.seasonal());
println!("Remainder: {:?}", decomposition.remainder());

Rolling Features in Regression Models

use anofox_forecast::models::regression::{
    RegressionFeatures, RegressionForecaster, RollingStatKind,
};

// OLS with lag-1, a rolling mean of the last 7 values, and a rolling std.
// Every rolling feature is recomputed at each horizon step using the
// previous predictions — correct recursive multi-step semantics.
let mut model = RegressionForecaster::ols(
    RegressionFeatures::new()
        .no_trend()
        .lags(1)
        .with_rolling_mean(7)?                        // last 7, lag=1
        .with_rolling_std(14)?                        // last 14, lag=1
        .with_ewm_mean(20, 0.3)?                      // EWM window=20, α=0.3
        .no_exog(),
);
model.fit(&ts)?;
let forecast = model.predict(12)?;

// All `RollingStatKind` variants: Mean, Std, Var, Min, Max, Median, Sum,
// EwmMean { alpha }, EwmStd { alpha }. Custom lag via `.with_rolling_lagged(w, lag, kind)`.
// `lag == 0` is rejected at build time to prevent target leakage.
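The recursive multi-step semantics described above can be pictured with a standalone sketch in plain std Rust. This illustrates the idea only, not the crate's internals; `rolling_mean`, `recursive_forecast`, and the naive "model" are made up for the example.

```rust
// Standalone sketch of recursive multi-step forecasting with a rolling-mean
// feature (lag = 1): each step's feature is computed from a history buffer
// that already includes earlier predictions, so no future value leaks in.
fn rolling_mean(history: &[f64], window: usize) -> f64 {
    let tail = &history[history.len().saturating_sub(window)..];
    tail.iter().sum::<f64>() / tail.len() as f64
}

fn recursive_forecast(train: &[f64], horizon: usize, window: usize) -> Vec<f64> {
    let mut buffer = train.to_vec();
    let mut out = Vec::with_capacity(horizon);
    for _ in 0..horizon {
        // Hypothetical "model": predict the rolling mean of the last `window`
        // values. A real model would combine this feature with lags, trend, etc.
        let pred = rolling_mean(&buffer, window);
        buffer.push(pred); // the prediction becomes history for the next step
        out.push(pred);
    }
    out
}

fn main() {
    let train = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0];
    let fc = recursive_forecast(&train, 3, 3);
    println!("{:?}", fc);
}
```

Note how the second forecast already depends on the first: the window slides over `[6.0, 7.0, pred_1]` rather than over training data alone.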

Transform Pipeline

use anofox_forecast::transform::pipeline::{Pipeline, PipelineBuilder};
use anofox_forecast::transform::transforms::{BoxCoxTransform, DifferenceTransform};
use anofox_forecast::models::baseline::Naive;

// Chain transforms around any model — Pipeline itself implements Forecaster
let mut pipeline = Pipeline::builder()
    .transform(BoxCoxTransform::auto())
    .transform(DifferenceTransform::new(1))
    .model(Box::new(Naive::new()))
    .build();

pipeline.fit(&ts)?;
let forecast = pipeline.predict(12)?;

Batch Forecasting (Many Series)

use anofox_forecast::models::exponential::{GlobalAutoETS, GlobalETS, ETSSpec, ModelPool};

// 1000 series, each a Vec<f64> — all same length
let all_series: Vec<Vec<f64>> = load_my_data();

// GlobalAutoETS: select best model per series, shared optimization (28-32x faster)
let mut model = GlobalAutoETS::new(12, ModelPool::Reduced);
model.fit(&all_series).unwrap();
let forecasts = model.predict(12); // Vec<Vec<f64>>, one per series

// GlobalETS: fit a known spec across all series (75-96x faster)
let mut model = GlobalETS::new(ETSSpec::ana(), 12);
model.fit(&all_series).unwrap();
let forecasts = model.predict(12);

Exogenous Regressors

use anofox_forecast::features::FeatureGenerator;

// Generate deterministic regressors from timestamps
let gen = FeatureGenerator::new()
    .fourier(7, 2)       // Weekly Fourier terms
    .day_of_week()        // Day-of-week indicators
    .holiday("promo", promo_dates);

gen.add_to(&mut ts);     // Attach features to TimeSeries

let mut model = ARIMA::new(1, 1, 1);
model.fit(&ts)?;

// Inspect OLS pre-regression coefficients
if let Some(ols) = model.exog_coefficients() {
    println!("Intercept: {:.4}", ols.intercept);
    for (name, coef) in ols.regressor_names.iter().zip(&ols.coefficients) {
        println!("  {}: {:.4}", name, coef);
    }
}

Changepoint Detection

use anofox_forecast::changepoint::{Pelt, CostFunction};

// Automatic penalty selection (recommended)
let result = Pelt::new(CostFunction::L2)
    .min_size(5)
    .auto_detect(&data);
println!("Found {} changepoints at {:?}", result.result.n_changepoints, result.result.changepoints);
println!("Auto-selected penalty: {:.2}", result.penalty);

// Manual penalty
let result = Pelt::new(CostFunction::L2)
    .penalty(10.0)
    .detect(&data);

Sequential Monitoring of Forecast Errors

Online detection of when a fitted model has become inaccurate. Port of the R package changepoint.forecast by Thomas Grundy, based on Fremdt (2014).

use anofox_forecast::models::baseline::Naive;
use anofox_forecast::monitor::{
    monitor_forecaster, Detector, ForecastErrorType, SequentialConfig, SequentialDetector,
};

// Option A: monitor a fitted forecaster's residuals directly
let mut model = Naive::new();
model.fit(&ts)?;

let cfg = SequentialConfig::new(200)        // training window length m
    .detector(Detector::PageCusum)          // recommended default
    .error_type(ForecastErrorType::Both);   // monitor mean AND variance
let detector = monitor_forecaster(&model, cfg)?;

if let Some(tau) = detector.first_detection() {
    println!("Model drifted at observation {}", tau);
}

// Option B: bring your own residual stream and update it online
let cfg = SequentialConfig::new(100).detector(Detector::PageCusum);
let mut detector = SequentialDetector::fit(&residuals, cfg)?;

// Each time a new actual arrives, compute the new error and stream it in.
// State is constant-size; this is bit-equivalent to a fresh fit on the full
// concatenated series.
detector.update(&[new_error])?;
if detector.has_detected() {
    // refit the forecasting model
}
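For intuition, a one-sided Page CUSUM (the recommended detector above) can be sketched in a few lines of plain std Rust. This illustrates the statistic only, not the crate's implementation; the reference value `k` and threshold `h` are made-up constants, not the crate's defaults.

```rust
// Minimal one-sided Page CUSUM on forecast errors: the statistic resets
// toward zero while errors look like the training distribution, and drifts
// upward once the error mean shifts past the reference value `k`.
struct PageCusum {
    stat: f64,
    k: f64,               // reference value (allowance)
    h: f64,               // decision threshold
    detected_at: Option<usize>,
    n: usize,
}

impl PageCusum {
    fn new(k: f64, h: f64) -> Self {
        Self { stat: 0.0, k, h, detected_at: None, n: 0 }
    }

    fn update(&mut self, error: f64) {
        self.n += 1;
        self.stat = (self.stat + error - self.k).max(0.0);
        if self.detected_at.is_none() && self.stat > self.h {
            self.detected_at = Some(self.n);
        }
    }
}

fn main() {
    let mut det = PageCusum::new(0.5, 5.0);
    // In-control errors (mean near 0), then a drift (mean near 2).
    for e in [0.1, -0.2, 0.3, -0.1, 0.2] { det.update(e); }
    for e in [2.1, 1.9, 2.2, 2.0, 1.8] { det.update(e); }
    println!("drift detected at observation {:?}", det.detected_at);
}
```

State is constant-size, which is what makes the streaming `update` in Option B cheap.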

Spectral Analysis

use anofox_forecast::detection::welch_periodogram;

// Welch's periodogram with overlapping windows
let psd = welch_periodogram(&values, 64, 0.5);

// Find dominant period
if let Some((period, power)) = psd.iter().max_by(|a, b| a.1.partial_cmp(&b.1).unwrap()) {
    println!("Dominant period: {}, power: {:.4}", period, power);
}

For comprehensive periodicity detection (ACF, FFT, Autoperiod, CFD-Autoperiod, SAZED), see the fdars crate.

Probabilistic Postprocessing

use anofox_forecast::postprocess::{PostProcessor, PointForecasts, BacktestConfig};

// Historical forecasts and actuals for calibration
let train_forecasts = PointForecasts::from_values(train_f);
let train_actuals = vec![/* ... */];

// Create a conformal predictor with 90% coverage
let processor = PostProcessor::conformal(0.90);

// Backtest with horizon-aware calibration
let config = BacktestConfig::new()
    .initial_window(100)
    .step(10)
    .horizon(7)
    .horizon_aware(true);

let results = processor.backtest(&train_forecasts, &train_actuals, config)?;
println!("Coverage: {:.1}%", results.coverage() * 100.0);

// Train calibrated model and predict
let trained = processor.train(&train_forecasts, &train_actuals)?;
let new_forecasts = PointForecasts::from_values(new_f);
let intervals = processor.predict_intervals(&trained, &new_forecasts)?;

println!("Lower: {:?}", intervals.lower());
println!("Upper: {:?}", intervals.upper());
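Under the hood, split conformal prediction is simple enough to sketch in plain std Rust: take a finite-sample-corrected empirical quantile of absolute calibration errors, then widen every point forecast by it. This is an illustration of the idea, not the crate's code; `conformal_halfwidth` is a hypothetical helper.

```rust
// Split conformal intervals: the half-width is the ceil((n+1)*coverage)-th
// smallest absolute calibration residual, which guarantees marginal coverage
// under exchangeability of the residuals.
fn conformal_halfwidth(forecasts: &[f64], actuals: &[f64], coverage: f64) -> f64 {
    let mut scores: Vec<f64> = forecasts
        .iter()
        .zip(actuals)
        .map(|(f, a)| (f - a).abs())
        .collect();
    scores.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let n = scores.len();
    // Finite-sample corrected quantile rank, clamped to the sample size.
    let rank = ((n as f64 + 1.0) * coverage).ceil() as usize;
    scores[rank.min(n) - 1]
}

fn main() {
    let cal_forecasts = [10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0, 17.0, 18.0];
    let cal_actuals   = [10.5, 10.8, 12.3, 12.9, 14.4, 14.8, 16.1, 17.6, 18.2];
    let q = conformal_halfwidth(&cal_forecasts, &cal_actuals, 0.90);
    let point = 20.0;
    println!("90% interval: [{:.2}, {:.2}]", point - q, point + q);
}
```

The horizon-aware option above refines this by calibrating a separate quantile per forecast step.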

API Reference

Core Types

| Type | Description |
|---|---|
| TimeSeries | Main data structure for univariate/multivariate time series |
| Forecast | Prediction results with optional confidence intervals |
| Forecaster | Trait implemented by all forecasting models (exog_coefficients() for OLS inspection) |
| Pipeline | Composable transform → model chain, itself implements Forecaster |
| FeatureGenerator | Deterministic regressor generation (Fourier, DOW, MOY, quarter, holidays) |
| AccuracyMetrics | Model evaluation metrics (MAE, MSE, RMSE, MAPE, etc.) |

Forecasting Models

| Family | Models |
|---|---|
| Auto Selection | AutoForecast, AutoEnsemble |
| ARIMA | ARIMA, SARIMA, AutoARIMA |
| Exponential Smoothing | SES, Holt, HoltWinters, SeasonalES, ETS, AutoETS (with ModelPool) |
| Theta | Theta, OptimizedTheta, DynamicTheta, AutoTheta |
| Baseline | Naive, Mean, SeasonalNaive, RandomWalkWithDrift, SMA, WindowAverage, SeasonalWindowAverage |
| Intermittent | Croston, TSB, ADIDA, IMAPA |
| Complex Seasonality | TBATS, AutoTBATS, MFLES, MSTLForecaster |
| Volatility | GARCH |
| Multivariate | VAR (Vector Autoregression) |
| State-Space | KalmanFilter, StateSpaceModel (local level, local linear trend) |
| Ensemble | Ensemble (Mean, Median, Weighted MSE, InverseAIC, Stacking, HorizonAdaptive) |
| Regression | RegressionForecaster (OLS, Ridge, ElasticNet, Quantile, WLS, RLS, Tweedie, Poisson, BLS, Dynamic) |
| Hierarchical | HierarchyTree (BottomUp, TopDown, MiddleOut, MinTraceOls, MinTraceShrink, MinTraceVariance, MinTraceStruct) |
| Batch/Global | GlobalETS, GlobalAutoETS, GlobalCroston, GlobalTheta, batch::auto_ets, batch::ets, batch::mfles |
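The simplest of the hierarchical reconciliation strategies, BottomUp, is easy to picture with a standalone sketch. This is plain Rust illustrating the idea, not the crate's HierarchyTree API; `bottom_up` is a hypothetical helper.

```rust
// Bottom-up reconciliation sketch: forecast each leaf series independently,
// then sum the leaf forecasts to obtain every aggregate, so the hierarchy
// adds up by construction (coherent forecasts).
fn bottom_up(leaf_forecasts: &[Vec<f64>]) -> Vec<f64> {
    let horizon = leaf_forecasts[0].len();
    (0..horizon)
        .map(|h| leaf_forecasts.iter().map(|f| f[h]).sum::<f64>())
        .collect()
}

fn main() {
    // Three leaf series (e.g. stores), each with a 2-step forecast.
    let leaves = vec![vec![1.0, 2.0], vec![3.0, 4.0], vec![5.0, 6.0]];
    let total = bottom_up(&leaves);
    println!("aggregate forecast: {:?}", total);
}
```

The MinTrace variants instead solve a weighted least-squares problem across all levels, trading this simplicity for lower reconciliation error.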

Utilities

| Function / Type | Description |
|---|---|
| compare_models() | Compare forecasters on the same data with timing |
| compare_registry() | Compare all registered models at once |
| cross_validate() | Time series cross-validation (parallel with the parallel feature) |
| cross_validate_early_stop() | CV with convergence-based early stopping |
| rolling_forecast() | Walk-forward evaluation with rolling/expanding windows |
| StreamingCVAggregator | Online metric aggregation using Welford's algorithm |
| bootstrap_forecast() | Bootstrap confidence intervals for any model |
| diagnose_residuals() | Unified residual diagnostics (Ljung-Box, DW, Jarque-Bera) |
| ModelDiagnostics | Comprehensive diagnostics: Ljung-Box, Jarque-Bera, Breusch-Pagan |
| IntermittentDiagnostics | Syntetos-Boylan demand classification with model recommendations |
| AidAnalyzer | Automatic Identification of Demand: distribution fitting, anomaly detection |
| rmsse() / wrmsse() | Root Mean Squared Scaled Error and Weighted RMSSE (M5 competition metric) |
| bias() / periods_in_stock() | Signed bias and inventory-focused PIS metric |
| ForecastMetrics::compute() | All 10 metrics in one call (MAE through Theil's U) |
| fit_all_and_compare() | Fit all registry models, rank by holdout accuracy |
| cross_validate_all() | CV all registry models with aggregated metrics |
| ensemble_best_k() | Auto-select top-k models into an ensemble |
| SeasonalComponent / TrendComponent | Composable traits for seasonal/trend components (standalone + features) |
| DummySeasonality | One-hot seasonal encoding for arbitrary seasonal shapes |
| SeasonalDifference | Standalone seasonal differencing with strength/variance features |
| HodrickPrescottFilter | Smooth trend extraction with cycle decomposition |
| PiecewiseLinearTrend | PELT-based piecewise linear trend with per-segment regression |
| PolynomialTrend | Polynomial trend (degree 1-3) with Cholesky solve |
| ExponentialTrend | Log-linear exponential growth/decay trend |
| LogisticTrend | Logistic S-curve trend with auto/fixed capacity |
| TheilSenTrend | Robust Theil-Sen median-slope trend estimator |
| AutoTrend | Automatic best-trend selection via AICc/BIC/holdout |
| AutoSeasonal | Automatic best-seasonal selection via AICc/BIC |
| Recency | Fit on recent data only (Window, Fraction, Full, Auto via PELT) |
| BinnedConformalPredictor | Heteroscedastic prediction intervals binned by predicted magnitude |
| RegressionForecaster | Multi-backend regression: OLS, Ridge, ElasticNet, Quantile, WLS, RLS, Tweedie, Poisson, BLS, Dynamic |
| RegressionBackend | Backend selection enum with convenience constructors (ridge(), quantile(), wls_decay(), etc.) |
| RegressionFeatures | Feature builder for regression models (trend, seasonal, lags, structural, recursive, exog) |
| FeatureSafety | Feature leakage classification: Deterministic, DataDependent, Structural, External |
| StructuralFeature | Trait for forward-filled features during prediction (changepoints, outlier indicators) |
| ChangepointFeature | Structural feature for regime indicators (StepFunctions, RegimeIndex, CumulativeCount) |
| RecursiveFeature | Trait for features recomputed at every horizon step from the rolling history buffer |
| RollingFeature / RollingStatKind | Rolling window statistics (Mean, Std, Var, Min, Max, Median, Sum, EwmMean, EwmStd) as regression features |
| Pipeline / PipelineBuilder | Composable transform → model chains (BoxCox → Difference → Model → inverse) |
| Transform trait | Reversible transforms: DifferenceTransform, SeasonalDifferenceTransform, BoxCoxTransform, ScaleTransform, LogTransform |
| FeatureGenerator | Deterministic feature generation: fourier(), day_of_week(), month_of_year(), quarter(), holiday() |
| OLSResult / exog_coefficients() | Inspect OLS pre-regression coefficients (intercept, betas, regressor names) |
| deseasonalize() / seasonal_adjust() | Remove seasonal component from data or TimeSeries |
| select_features() | Automated feature selection (variance, correlation, top-K) |
| to_json() / from_json() | Serialization for models, Forecast, and TimeSeries (requires serde feature) |
| to_bincode() / from_bincode() | Binary serialization (requires serde feature) |
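The Welford algorithm behind StreamingCVAggregator is worth a short sketch: it keeps a numerically stable running mean and variance in O(1) memory, one observation at a time. This is a plain std Rust illustration of the algorithm, not the crate's type.

```rust
// Welford's online algorithm: update mean and the sum of squared deviations
// (m2) incrementally, avoiding the catastrophic cancellation of the naive
// sum-of-squares formula.
struct Welford {
    n: u64,
    mean: f64,
    m2: f64,
}

impl Welford {
    fn new() -> Self {
        Self { n: 0, mean: 0.0, m2: 0.0 }
    }

    fn update(&mut self, x: f64) {
        self.n += 1;
        let delta = x - self.mean;
        self.mean += delta / self.n as f64;
        self.m2 += delta * (x - self.mean); // uses the *updated* mean
    }

    fn variance(&self) -> f64 {
        if self.n < 2 { 0.0 } else { self.m2 / (self.n - 1) as f64 }
    }
}

fn main() {
    let mut w = Welford::new();
    for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0] {
        w.update(x);
    }
    println!("mean = {}, sample variance = {:.4}", w.mean, w.variance());
}
```

Two such accumulators can also be merged, which is what makes the approach convenient for aggregating metrics across CV folds.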

Feature Categories

| Category | Examples |
|---|---|
| Basic | mean, variance, minimum, maximum, quantile |
| Distribution | skewness, kurtosis, variation_coefficient |
| Autocorrelation | autocorrelation, partial_autocorrelation |
| Entropy | approximate_entropy, sample_entropy, permutation_entropy |
| Complexity | c3, cid_ce, lempel_ziv_complexity |
| Trend | linear_trend, adf_test, ar_coefficient, hp_trend_strength, piecewise_n_segments |
| Seasonality | dummy_seasonal_strength, seasonal_diff_strength, seasonal_diff_variance_reduction |
| Selection | select_features, rank_features |

Postprocessing Types

| Type | Description |
|---|---|
| PostProcessor | Unified API for all postprocessing methods |
| ConformalPredictor | Distribution-free prediction intervals |
| BinnedConformalPredictor | Heteroscedastic intervals, binned by predicted magnitude |
| HistoricalSimulator | Empirical error distribution |
| IDRPredictor | Isotonic Distributional Regression |
| QRAPredictor | Quantile Regression Averaging |

Examples

48 runnable examples covering all major features, each with a companion .md description. See examples/README.md for the full categorized index.

cargo run --example quickstart              # End-to-end forecasting
cargo run --example arima                   # ARIMA family
cargo run --example regression              # 11 regression backends
cargo run --example cross_validation        # Time series CV
cargo run --example postprocess_conformal   # Conformal prediction intervals

Guides

Dependencies

Acknowledgments

The postprocessing module is a Rust port of PostForecasts.jl. The sequential monitoring module (monitor::) is a Rust port of changepoint.forecast by Thomas Grundy (Lancaster University), based on Fremdt (2014). Feature extraction is inspired by tsfresh. Forecasting models are validated against StatsForecast by Nixtla. See THIRDPARTY_NOTICE.md for full attribution and references to the research papers that inspired this implementation.

License

MIT License - see LICENSE for details.
