anofox-forecast
Time series forecasting library for Rust.
Provides 50+ forecasting models, 76+ statistical features, automatic model selection, ensemble methods, seasonality decomposition, changepoint detection, anomaly detection, hierarchical reconciliation, and model serialization.
Use Cases
Want to try it out? Use the anofox app for interactive forecasting in the browser.
Need to run this on 10GB of data? Use our DuckDB extension for SQL-native forecasting at scale.
Need to use this in a React Dashboard? Use our npm package for WebAssembly-powered forecasting in the browser.
npm install @sipemu/anofox-forecast
import init, { TimeSeries, AutoForecaster, AutoEnsembleForecaster } from '@sipemu/anofox-forecast';
await init();
const ts = new TimeSeries([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]);
const model = new AutoForecaster();
model.fit(ts);
const forecast = model.predict(5);
console.log(forecast.values);
Features
Forecasting
- Forecasting Models (50+)
- ARIMA, SARIMA, and AutoARIMA with automatic order selection
- Exponential Smoothing: SES, Holt's Linear, Holt-Winters, SeasonalES
- ETS (Error-Trend-Seasonal) state-space framework with AutoETS and `ModelPool` (Reduced/Complete/DampedTrendOnly/MatchErrorSeasonal)
- Baseline methods: Naive, Seasonal Naive, Random Walk with Drift, SMA, Window Average
- Theta family: Theta, Optimized Theta, Dynamic Theta, AutoTheta
- Intermittent demand: Croston, ADIDA, TSB, IMAPA
- TBATS/AutoTBATS for complex seasonality
- MFLES (Multiple Frequency Locally Estimated Scatterplot) with cached Cholesky Fourier OLS
- MSTL-based forecasting with configurable trend/seasonal methods and pre-regression exogenous support
- GARCH for volatility modeling
- VAR (Vector Autoregression) for multivariate forecasting with Granger causality
- Kalman filter / state-space models (local level, local linear trend, custom)
- Exogenous regressor support across model families with OLS coefficient extraction (`exog_coefficients()`)
- `FeatureGenerator`: deterministic regressor generation (Fourier harmonics, day-of-week, month-of-year, quarter, holiday indicators, cyclical sin/cos encoding, binary calendar indicators)
- `RegressionForecaster`: `anofox-regression` backends as `Forecaster` — 11 regression backends (OLS, Ridge, ElasticNet, Quantile, WLS, RLS, Tweedie, Poisson, BLS, NNLS, Dynamic), configurable trend/seasonal/structural features, recursive multi-step prediction, auto-lag selection (BIC/AIC), differencing and seasonal differencing, rolling-window features (mean/std/var/min/max/median/sum/EWM) via the `RecursiveFeature` trait — recomputed at every horizon step using the rolling history buffer, with built-in `lag >= 1` leakage guard
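Most of the exponential-smoothing family above reduces to one recursive level update. As a plain-Rust illustration of the idea (the function name `ses_forecast` is ours, not the crate's API, and real SES also optimizes alpha), simple exponential smoothing can be sketched as:

```rust
/// Simple exponential smoothing (SES), sketched for illustration only.
fn ses_forecast(y: &[f64], alpha: f64, horizon: usize) -> Vec<f64> {
    // The level is updated recursively: level = alpha * y_t + (1 - alpha) * level.
    let mut level = y[0];
    for &v in &y[1..] {
        level = alpha * v + (1.0 - alpha) * level;
    }
    // SES has no trend or seasonal term, so the forecast is flat at the last level.
    vec![level; horizon]
}

fn main() {
    let y = [10.0, 12.0, 11.0, 13.0, 12.0];
    let f = ses_forecast(&y, 0.5, 3);
    assert_eq!(f, vec![12.0, 12.0, 12.0]);
    println!("{:?}", f);
}
```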
- Automatic Model Selection
- `AutoForecast`: unified selection across ARIMA, ETS, and Theta families (parallel with the `parallel` feature)
- `AutoEnsemble`: automatic ensemble of top-K best models
- Selection by cross-validation error
- Builder API: `AutoForecast::builder().seasonal_period(12).include_arima(true).build()`
- `fit_predict()` convenience method on all models
- Batch / Global Forecasting — process many series with shared computation
- `GlobalETS`: shared smoothing params across N series (75-96x faster for seasonal ETS)
- `GlobalAutoETS`: automatic model selection across N series (28-32x faster)
- `GlobalCroston`: shared α across N intermittent demand series (3-6x faster)
- `GlobalTheta`: shared α for the Standard Theta Method
- `batch::auto_ets()`, `batch::ets()`, `batch::mfles()`: parallel batch convenience functions
- `STL::decompose_batch()`: batch decomposition with parallel support
- Validated on the M5 dataset (30,490 series): identical accuracy, 2x speedup with the Reduced pool
- Ensemble Methods
- Mean, Median, Weighted MSE, InverseAIC, Stacking, HorizonAdaptive combination strategies
- Widest-envelope interval combination for ensemble prediction intervals
- Automatic ensemble construction from model registry
- `ensemble_best_k()`: auto-select top-k models by holdout performance
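As a sketch of the two simplest combination strategies listed above, mean and median combination of per-model forecasts takes only a few lines (helper names here are illustrative, not the crate's Ensemble API):

```rust
// Combine K model forecasts per horizon step by averaging. Illustrative only.
fn ensemble_mean(forecasts: &[Vec<f64>]) -> Vec<f64> {
    let h = forecasts[0].len();
    (0..h)
        .map(|t| forecasts.iter().map(|f| f[t]).sum::<f64>() / forecasts.len() as f64)
        .collect()
}

// Median combination is more robust to a single badly wrong model.
fn ensemble_median(forecasts: &[Vec<f64>]) -> Vec<f64> {
    let h = forecasts[0].len();
    (0..h)
        .map(|t| {
            let mut col: Vec<f64> = forecasts.iter().map(|f| f[t]).collect();
            col.sort_by(|a, b| a.partial_cmp(b).unwrap());
            let n = col.len();
            if n % 2 == 1 { col[n / 2] } else { (col[n / 2 - 1] + col[n / 2]) / 2.0 }
        })
        .collect()
}

fn main() {
    let fs = vec![vec![1.0, 2.0], vec![3.0, 4.0], vec![5.0, 12.0]];
    assert_eq!(ensemble_mean(&fs), vec![3.0, 6.0]);
    // The outlier forecast (12.0 at step 2) barely moves the median.
    assert_eq!(ensemble_median(&fs), vec![3.0, 4.0]);
}
```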
- Hierarchical Forecasting
- `HierarchyTree`: define parent-child structure for grouped series
- Bottom-up, top-down, MiddleOut, MinTrace OLS, and MinTrace Shrink (Ledoit-Wolf) reconciliation
- Scalable MinTrace: `MinTraceVariance` and `MinTraceStruct` with sparse summing matrix — safe for 100k+ series (no N×N covariance)
- Ensures coherent forecasts across hierarchical levels
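Bottom-up reconciliation, the simplest of the strategies above, forecasts the leaves and sums them upward, so parent forecasts equal the sum of their children by construction. A toy sketch (not the crate's `HierarchyTree` API):

```rust
// Bottom-up reconciliation for a one-level hierarchy: the parent ("total")
// forecast is the element-wise sum of its leaf forecasts. Illustrative only.
fn bottom_up(leaf_forecasts: &[Vec<f64>]) -> Vec<f64> {
    let h = leaf_forecasts[0].len();
    (0..h)
        .map(|t| leaf_forecasts.iter().map(|f| f[t]).sum::<f64>())
        .collect()
}

fn main() {
    // Two stores (leaves), 3-step forecasts each.
    let store_a = vec![10.0, 11.0, 12.0];
    let store_b = vec![5.0, 5.0, 6.0];
    let total = bottom_up(&[store_a.clone(), store_b.clone()]);
    assert_eq!(total, vec![15.0, 16.0, 18.0]);
    // Coherence holds by construction: total == sum of leaves at every step.
    for t in 0..3 {
        assert!((total[t] - (store_a[t] + store_b[t])).abs() < 1e-12);
    }
}
```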
Analysis & Decomposition
- Seasonality & Decomposition
- `SeasonalComponent`/`TrendComponent` traits — composable, dual-purpose (standalone + feature extraction)
- STL (Seasonal-Trend decomposition using LOESS) with `StlBuilder` — optimized with running-sum MA and precomputed tricube kernel (2-2.5x faster)
- MSTL (Multiple Seasonal-Trend decomposition) for complex seasonality, with pre-regression exogenous regressor support
- Prophet-style Fourier seasonality (`FourierSeasonality`) with flexible harmonic modeling
- Dummy (one-hot) seasonality (`DummySeasonality`) — captures arbitrary seasonal shapes without smoothness assumptions
- Seasonal differencing (`SeasonalDifference`) — standalone transform with strength/variance-reduction features
- Hodrick-Prescott filter (`HodrickPrescottFilter`) — smooth trend extraction with cycle decomposition
- Christiano-Fitzgerald band-pass filter (`cf_filter`) — asymmetric, preserves full series length
- Baxter-King band-pass filter (`bk_filter`) — symmetric, loses 2k edge observations
- Hamilton filter (`hamilton_filter`) — regression-based trend-cycle decomposition (avoids HP endpoint bias)
- Fractional differencing (`fractional_difference`) — memory-preserving stationarity (Lopez de Prado 2018)
- `find_min_fractional_d()` — automatic minimum d for ADF stationarity
- Piecewise linear trend (`PiecewiseLinearTrend`) — PELT-based changepoint detection + per-segment regression
- Polynomial trend (`PolynomialTrend`) — degree 1-3, Vandermonde + Cholesky solve
- Exponential trend (`ExponentialTrend`) — log-linear regression for growth/decay
- Logistic trend (`LogisticTrend`) — S-curve fitting with auto or fixed capacity
- Theil-Sen trend (`TheilSenTrend`) — robust median-of-pairwise-slopes estimator
- `AutoTrend` — automatic selection of best trend component via AICc/BIC
- `AutoSeasonal` — automatic selection of best seasonal component via AICc/BIC
- `Recency` — fit on recent data only (last N, last X%, full, or Auto via PELT changepoint detection) for trend-aware forecasting
- `TimeSeries::seasonal_strength()` / `trend_strength()` — quick strength assessment
- Convenience: `deseasonalize()`, `detrend()`, `seasonal_adjust()`, `recompose()`
- Time Series Feature Extraction (76+ features)
- Basic statistics (mean, variance, quantiles, energy, etc.)
- Distribution features (skewness, kurtosis, symmetry)
- Autocorrelation and partial autocorrelation
- Entropy features (approximate, sample, permutation, binned, Fourier)
- Complexity measures (C3, CID, Lempel-Ziv)
- Trend analysis and stationarity tests (ADF, KPSS)
- Automated feature selection (variance threshold, correlation filter, top-K)
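As a formula-level illustration of one of the listed features (not the crate's `features` module), lag-k autocorrelation divides the lagged autocovariance by the variance:

```rust
// Lag-k autocorrelation: cov(y_t, y_{t-k}) / var(y), biased (1/n) estimator.
fn autocorrelation(y: &[f64], lag: usize) -> f64 {
    let n = y.len() as f64;
    let mean = y.iter().sum::<f64>() / n;
    let var: f64 = y.iter().map(|v| (v - mean).powi(2)).sum::<f64>() / n;
    let cov: f64 = y[lag..]
        .iter()
        .zip(y) // pairs (y_{t}, y_{t-lag})
        .map(|(a, b)| (a - mean) * (b - mean))
        .sum::<f64>()
        / n;
    cov / var
}

fn main() {
    // A perfectly alternating series: strongly negative at lag 1,
    // strongly positive at lag 2.
    let y = [1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0, -1.0];
    assert!(autocorrelation(&y, 1) < -0.8);
    assert!(autocorrelation(&y, 2) > 0.6);
}
```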
- Spectral Analysis
- Welch's periodogram for reduced variance spectral estimation
- For comprehensive periodicity detection, see fdars
- Changepoint Detection
- PELT algorithm with O(n) average complexity
- `auto_detect()`: automatic penalty selection via CROPS (Haynes et al. 2017) + elbow detection
- `crops()`: explore the penalty landscape (penalty vs n_changepoints curve)
- Builder API: `Pelt::new(CostFunction::L2).min_size(5).penalty(5.0).detect(&data)`
- Multiple cost functions: L1, L2, Normal, Poisson, LinearTrend, MeanVariance, Cusum
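Conceptually, penalized changepoint detection minimizes total segment cost plus a penalty per changepoint; PELT prunes the candidate split points to reach near-linear average time. A brute-force single-changepoint toy with the L2 cost, shown only to make the objective concrete (not the crate's PELT implementation):

```rust
// L2 cost of a segment: sum of squared deviations from the segment mean.
fn l2_cost(seg: &[f64]) -> f64 {
    let mean = seg.iter().sum::<f64>() / seg.len() as f64;
    seg.iter().map(|v| (v - mean).powi(2)).sum()
}

// Try every split point; accept the best one only if its total cost
// (two segment costs + penalty) beats the no-changepoint cost.
fn best_single_changepoint(y: &[f64], penalty: f64) -> Option<usize> {
    let full = l2_cost(y);
    let mut best: Option<(usize, f64)> = None;
    for t in 1..y.len() {
        let cost = l2_cost(&y[..t]) + l2_cost(&y[t..]) + penalty;
        if cost < best.map_or(full, |(_, c)| c) {
            best = Some((t, cost));
        }
    }
    best.map(|(t, _)| t)
}

fn main() {
    // Mean shift from ~0 to ~5 at index 5.
    let y = [0.1, -0.2, 0.0, 0.2, -0.1, 5.1, 4.9, 5.0, 5.2, 4.8];
    assert_eq!(best_single_changepoint(&y, 1.0), Some(5));
    // With a huge penalty, introducing a changepoint is never worth it.
    assert_eq!(best_single_changepoint(&y, 1e9), None);
}
```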
- Sequential Monitoring of Forecast Errors (`monitor::` module)
- Online changepoint detection on a stream of forecast residuals — flags the moment a fitted model becomes inaccurate
- Four CUSUM detectors: `PageCusum` (default), `PageCusum1`, `Cusum`, `Cusum1` (two-sided and one-sided variants)
- Detects mean shifts (raw stream), variance shifts (squared stream), or both in parallel
- Constant-size online state: `SequentialDetector::update(&new_errors)` is bit-equivalent to a fresh fit on the full series — production-safe streaming
- Baked-in 228-entry critical-value table (4 detectors × 19 γ × 3 α) with reproducible Monte-Carlo regeneration
- `Forecaster` trait integration: `monitor_forecaster()` (in-sample residuals, cheap) and `monitor_forecaster_cv()` (rolling-origin CV residuals, calibrated)
- Rust port of changepoint.forecast by Thomas Grundy (Lancaster), based on Fremdt (2014)
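The core of a CUSUM monitor is a constant-size statistic that accumulates drift in the error stream and resets at zero. A toy one-sided variant with a fixed alarm threshold (the crate's detectors instead use the calibrated critical-value table; names here are ours):

```rust
// Toy one-sided CUSUM on forecast errors. Illustrative sketch only.
struct ToyCusum {
    stat: f64,
    drift: f64,     // allowance k: slack subtracted at each step
    threshold: f64, // alarm when the statistic exceeds this
}

impl ToyCusum {
    // O(1) state per update, regardless of how long the stream runs.
    fn update(&mut self, error: f64) -> bool {
        self.stat = (self.stat + error - self.drift).max(0.0);
        self.stat > self.threshold
    }
}

fn main() {
    let mut det = ToyCusum { stat: 0.0, drift: 0.5, threshold: 4.0 };
    // In-control errors: small and zero-mean, no alarm.
    for e in [0.1, -0.2, 0.3, -0.1, 0.0] {
        assert!(!det.update(e));
    }
    // Model drifts: errors jump to ~2, alarm fires within a few observations.
    let alarms: Vec<bool> = [2.0, 2.1, 1.9, 2.2].iter().map(|&e| det.update(e)).collect();
    assert!(alarms[3]);
}
```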
- Anomaly Detection & Outlier Handling
- Statistical methods (IQR, z-score, modified z-score)
- Automatic threshold selection
- `TimeSeries::with_outliers_replaced()` — automatic outlier replacement with local median
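For reference, the modified z-score replaces mean and standard deviation with median and MAD, which keeps the score robust to the very outliers it is hunting. An illustrative sketch of the formula, not the crate's implementation:

```rust
// In-place median of a mutable slice (sorts it).
fn median(v: &mut [f64]) -> f64 {
    v.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let n = v.len();
    if n % 2 == 1 { v[n / 2] } else { (v[n / 2 - 1] + v[n / 2]) / 2.0 }
}

// Indices whose modified z-score 0.6745 * |y - median| / MAD exceeds `threshold`.
fn modified_zscore_outliers(y: &[f64], threshold: f64) -> Vec<usize> {
    let mut sorted = y.to_vec();
    let med = median(&mut sorted);
    let mut devs: Vec<f64> = y.iter().map(|v| (v - med).abs()).collect();
    let mad = median(&mut devs);
    if mad == 0.0 {
        return Vec::new(); // degenerate: more than half the values are identical
    }
    y.iter()
        .enumerate()
        // 0.6745 makes the MAD consistent with the std dev under normality.
        .filter(|(_, v)| 0.6745 * (*v - med).abs() / mad > threshold)
        .map(|(i, _)| i)
        .collect()
}

fn main() {
    let y = [10.0, 10.2, 9.9, 10.1, 50.0, 10.0, 9.8];
    // Only the spike at index 4 is flagged; 3.5 is the conventional cutoff.
    assert_eq!(modified_zscore_outliers(&y, 3.5), vec![4]);
}
```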
Evaluation & Uncertainty
- Model Comparison & Evaluation
- `compare_models()`: side-by-side model evaluation with timing
- `compare_registry()`: compare all registered models at once
- `fit_all_and_compare()`: fit all registry models, rank by holdout accuracy
- `cross_validate_all()`: CV all registry models at once with aggregated metrics
- Accuracy metrics: MAE, MSE, RMSE, MAPE, sMAPE, MASE, WAPE, MDA, Theil's U, RMSSE, WRMSSE, MSIS, coverage, skill scores
- `ForecastMetrics::compute()`: all 10 core metrics in a single call
- Time series cross-validation: backward-anchored folds, n_folds-driven, expanding/rolling windows, gap/purge/embargo
- `rolling_forecast()`: walk-forward evaluation with rolling/expanding windows
- Streaming CV aggregation with early stopping (`cross_validate_early_stop()`)
- `ModelDiagnostics`: Ljung-Box, Jarque-Bera, Breusch-Pagan residual diagnostics
- `IntermittentDiagnostics`: Syntetos-Boylan demand classification (Smooth/Erratic/Intermittent/Lumpy)
- `AidAnalyzer`: Automatic Identification of Demand — distribution fitting, demand type classification, and per-observation anomaly detection (stockouts, lifecycle events, outliers)
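Two of the listed metrics written out as reference formulas (illustrative, not the crate's metrics API): sMAPE normalizes each error by the combined magnitude of actual and forecast, while MASE scales the out-of-sample MAE by the in-sample naive (lag-1) forecast error:

```rust
// Symmetric MAPE in percent: mean of 2|f - a| / (|a| + |f|).
fn smape(actual: &[f64], forecast: &[f64]) -> f64 {
    let n = actual.len() as f64;
    actual
        .iter()
        .zip(forecast)
        .map(|(a, f)| 2.0 * (f - a).abs() / (a.abs() + f.abs()))
        .sum::<f64>()
        / n
        * 100.0
}

// MASE: forecast MAE divided by the in-sample naive one-step MAE.
// Values below 1.0 beat the naive forecast on the training scale.
fn mase(actual: &[f64], forecast: &[f64], train: &[f64]) -> f64 {
    let scale: f64 = train.windows(2).map(|w| (w[1] - w[0]).abs()).sum::<f64>()
        / (train.len() - 1) as f64;
    let mae: f64 = actual.iter().zip(forecast).map(|(a, f)| (a - f).abs()).sum::<f64>()
        / actual.len() as f64;
    mae / scale
}

fn main() {
    let train = [1.0, 2.0, 3.0, 4.0, 5.0]; // naive one-step MAE = 1.0
    assert!((mase(&[6.0, 7.0], &[5.0, 9.0], &train) - 1.5).abs() < 1e-12);
    assert_eq!(smape(&[10.0], &[10.0]), 0.0); // perfect forecast
}
```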
- Probabilistic Postprocessing
- Conformal Prediction: Distribution-free intervals with coverage guarantees
- Per-horizon-step conformal: separate interval widths per forecast step (tighter at h=1, wider at h=12)
- Binned Conformal Prediction: Heteroscedastic intervals — bins residuals by predicted magnitude for wider intervals where uncertainty is larger
- Bootstrap Prediction Intervals: Model-agnostic residual resampling with cumulative error paths (IID and block bootstrap)
- Historical Simulation: Non-parametric empirical error distribution
- Normal Predictor: Gaussian error assumption baseline
- IDR: Isotonic Distributional Regression (state-of-the-art calibration)
- QRA: Quantile Regression Averaging for ensemble combining
- Multi-quantile forecasts: `predict_quantiles()` on Bootstrap and Conformal predictors (e.g., 10th/25th/50th/75th/90th percentiles)
- Backtesting: rolling/expanding window evaluation with horizon-aware calibration
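Split conformal prediction, the base method above, computes absolute residuals on a calibration set and uses their conservative (1 - alpha) empirical quantile as a fixed interval half-width; on the calibration data, coverage of at least 1 - alpha then holds by construction. A minimal sketch with an illustrative function name:

```rust
// Split conformal half-width: conservative (1 - alpha) quantile of the
// absolute calibration residuals. Illustrative, not the crate's API.
fn conformal_halfwidth(cal_actuals: &[f64], cal_forecasts: &[f64], alpha: f64) -> f64 {
    let mut scores: Vec<f64> = cal_actuals
        .iter()
        .zip(cal_forecasts)
        .map(|(a, f)| (a - f).abs())
        .collect();
    scores.sort_by(|a, b| a.partial_cmp(b).unwrap());
    // Finite-sample index: ceil((n + 1)(1 - alpha)) - 1, clamped to the last score.
    let n = scores.len();
    let k = (((n + 1) as f64 * (1.0 - alpha)).ceil() as usize).min(n) - 1;
    scores[k]
}

fn main() {
    let actuals = [10.0, 11.0, 9.0, 12.0, 10.5, 11.5, 9.5, 10.0, 12.5, 9.0];
    let forecasts = [10.2, 10.5, 9.5, 11.0, 10.5, 11.0, 10.0, 10.5, 11.5, 10.0];
    let w = conformal_halfwidth(&actuals, &forecasts, 0.2);
    // Intervals are forecast ± w; check calibration coverage >= 80%.
    let covered = actuals
        .iter()
        .zip(&forecasts)
        .filter(|(a, f)| (*a - *f).abs() <= w)
        .count();
    assert!(covered as f64 / actuals.len() as f64 >= 0.8);
}
```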
- Bootstrap Confidence Intervals
- Residual bootstrap and block bootstrap methods
- Empirical confidence intervals for any model
- Configurable sample size and reproducibility
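The residual bootstrap resamples in-sample residuals around the point forecast and reads interval bounds off the empirical quantiles. A dependency-free sketch using a fixed-seed linear congruential generator in place of a real RNG (illustrative only; the crate uses `rand` and supports block bootstrap and cumulative error paths):

```rust
// Residual-bootstrap interval for a single point forecast. Illustrative only.
fn bootstrap_interval(point: f64, residuals: &[f64], n_boot: usize, alpha: f64) -> (f64, f64) {
    let mut state: u64 = 42; // fixed seed for reproducibility
    let mut samples: Vec<f64> = (0..n_boot)
        .map(|_| {
            // Minimal LCG step (toy RNG, not suitable for real use).
            state = state
                .wrapping_mul(6364136223846793005)
                .wrapping_add(1442695040888963407);
            let idx = (state >> 33) as usize % residuals.len();
            point + residuals[idx] // resample a residual around the point forecast
        })
        .collect();
    samples.sort_by(|a, b| a.partial_cmp(b).unwrap());
    // Empirical alpha/2 and 1 - alpha/2 quantiles.
    let lo = samples[((alpha / 2.0) * n_boot as f64) as usize];
    let hi = samples[(((1.0 - alpha / 2.0) * n_boot as f64) as usize).min(n_boot - 1)];
    (lo, hi)
}

fn main() {
    let residuals = [-1.0, -0.5, 0.0, 0.5, 1.0];
    let (lo, hi) = bootstrap_interval(100.0, &residuals, 1000, 0.1);
    // Bounds can never leave the range spanned by point + residuals.
    assert!(lo >= 99.0 && hi <= 101.0 && lo <= hi);
}
```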
- Forecast Constraints
- `NonNegative`, `LowerBound`, `UpperBound`, `Bounds`, `IntegerRound`, `Custom`
- Convenience methods: `forecast.non_negative()`, `.clamp(lo, hi)`, `.round_to_integer()`
- Constraints apply to point forecasts and prediction intervals
- Forecast Explainability
- `Explainable` trait with `ForecastExplanation` (level, trend, seasonal, residual, named components)
- Implemented for ETS, Theta, and MSTL models
- Components sum to forecast values for verification
Data Processing & Pipeline
- Parallelism
- `compare_models()` / `compare_registry()`: parallel model comparison
- Cross-validation folds run in parallel when the `parallel` feature is enabled
- Bootstrap sampling uses `par_iter` when `parallel` is enabled
- Data Transformations
- `Pipeline`: composable transform chains around any `Forecaster` — `Pipeline::builder().transform(BoxCoxTransform::auto()).transform(DifferenceTransform::new(1)).model(Box::new(Naive::new())).build()`
- `Transform` trait: `DifferenceTransform`, `SeasonalDifferenceTransform`, `BoxCoxTransform`, `ScaleTransform`, `LogTransform`
- Scaling: standardization, min-max, robust scaling
- Box-Cox transformation with automatic lambda selection
- Window functions: rolling mean, std, min, max, median
- Exponential weighted moving averages
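For reference, the Box-Cox transform and its exact inverse can be written directly from the definition (fixed lambda here; the crate can also select lambda automatically), with lambda = 0 degenerating to the log transform:

```rust
// Box-Cox: (y^lambda - 1) / lambda, or ln(y) when lambda == 0.
fn box_cox(y: &[f64], lambda: f64) -> Vec<f64> {
    y.iter()
        .map(|&v| if lambda == 0.0 { v.ln() } else { (v.powf(lambda) - 1.0) / lambda })
        .collect()
}

// Exact inverse, so forecasts can be mapped back to the original scale.
fn inv_box_cox(z: &[f64], lambda: f64) -> Vec<f64> {
    z.iter()
        .map(|&v| if lambda == 0.0 { v.exp() } else { (v * lambda + 1.0).powf(1.0 / lambda) })
        .collect()
}

fn main() {
    let y = [1.0, 2.0, 4.0, 8.0, 16.0];
    // Round-tripping recovers the data for any lambda.
    for &lambda in [0.0, 0.5].iter() {
        let back = inv_box_cox(&box_cox(&y, lambda), lambda);
        for (a, b) in y.iter().zip(&back) {
            assert!((a - b).abs() < 1e-9);
        }
    }
    // lambda = 0 is the log transform: multiplicative growth becomes linear.
    let z = box_cox(&y, 0.0);
    assert!(((z[1] - z[0]) - (z[4] - z[3])).abs() < 1e-12);
}
```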
- Missing Value Imputation
- Policy-based: Drop, Fill, ForwardFill, BackwardFill, FillMean, FillMedian, Interpolate
- Advanced: moving average imputation, seasonal median imputation
- Convenience: forward-backward fill, regressor imputation
- Metadata: missing mask, per-dimension missing counts
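The Interpolate policy fills interior gaps linearly between the nearest observed neighbours. A toy sketch that scans for NaN (the crate exposes a missing mask instead, so treat this as purely illustrative):

```rust
// Linearly interpolate interior NaN runs in place; leading/trailing
// NaN runs have no bounding neighbours and are left untouched.
fn interpolate_nans(y: &mut [f64]) {
    let mut i = 0;
    while i < y.len() {
        if y[i].is_nan() {
            let start = i;
            while i < y.len() && y[i].is_nan() {
                i += 1;
            }
            // Only fill if the run is bounded by observed values on both sides.
            if start > 0 && i < y.len() {
                let (a, b) = (y[start - 1], y[i]);
                let steps = (i - start + 1) as f64;
                for (k, slot) in (start..i).enumerate() {
                    y[slot] = a + (b - a) * (k as f64 + 1.0) / steps;
                }
            }
        } else {
            i += 1;
        }
    }
}

fn main() {
    let mut y = [1.0, f64::NAN, f64::NAN, 4.0, 5.0];
    interpolate_nans(&mut y);
    assert_eq!(y, [1.0, 2.0, 3.0, 4.0, 5.0]);
}
```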
- TimeSeries Temporal Aggregation
- `aggregate(period, method)` — Sum, Mean, Median, First, Last, Min, Max
- `downsample(factor)` — decimation with timestamp preservation
- `upsample(factor, method)` — Linear, ForwardFill, BackwardFill, Zero interpolation
- `sliding_window_aggregate(window, step, method)` — configurable sliding windows
Persistence & Interoperability
- Model Serialization (optional `serde` feature)
- Save/load models to JSON with `to_json()` / `from_json()`
- Binary serialization with `to_bincode()` / `from_bincode()` for compact storage
- File persistence with `save_to_file()` / `load_from_file()`
- Round-trip serialization for all major model families
- Model Warm-Starting
- `ETS::with_initial_states()` — start from pre-fitted level/trend/seasonal states
- `SES::with_alpha()` — use pre-fitted smoothing parameter
- `ARIMA::with_coefficients()` — use pre-fitted AR/MA coefficients
- `Theta::with_theta_value()` — use specified theta parameter
- `Forecaster::fitted_params()` — extract fitted parameters for transfer
Installation
Add this to your Cargo.toml:
[dependencies]
anofox-forecast = "0.6.0"
Optional Features
[dependencies]
# Parallel AutoARIMA (4-8x speedup via rayon, opt-in for embedding contexts like DuckDB)
anofox-forecast = { version = "0.6", features = ["parallel"] }
# Model serialization (save/load to JSON)
anofox-forecast = { version = "0.6", features = ["serde"] }
# Probabilistic postprocessing (conformal, IDR, QRA — enabled by default)
anofox-forecast = { version = "0.6", default-features = false } # to disable
| Feature | Default | Description |
|---|---|---|
| `postprocess` | Yes | Conformal prediction, IDR, QRA, historical simulation |
| `parallel` | No | Rayon-based parallelism for AutoARIMA, AutoForecast, bootstrap, and cross-validation (not available on WASM) |
| `serde` | No | JSON and bincode serialization/deserialization for models |
Quick Start
Creating a Time Series
use anofox_forecast::prelude::*;
use chrono::{TimeZone, Utc};
// Create timestamps
let timestamps: Vec<_> = (0..100)
.map(|i| Utc.with_ymd_and_hms(2024, 1, 1, 0, 0, 0).unwrap() + chrono::Duration::days(i))
.collect();
// Create values
let values: Vec<f64> = (0..100).map(|i| (i as f64 * 0.1).sin() + 10.0).collect();
// Build the time series
let ts = TimeSeries::builder()
.timestamps(timestamps)
.values(values)
.build()?;
Automatic Model Selection
use anofox_forecast::prelude::*;
use anofox_forecast::models::auto_forecast::AutoForecast;
// Automatically selects the best model across ARIMA, ETS, and Theta
let mut model = AutoForecast::new();
model.fit(&ts)?;
let forecast = model.predict(12)?;
println!("Best model: {}", model.name());
ARIMA Forecasting
use anofox_forecast::prelude::*;
use anofox_forecast::models::arima::ARIMA;
// Create and fit an ARIMA(1,1,1) model
let mut model = ARIMA::new(1, 1, 1);
model.fit(&ts)?;
// Generate forecasts with 95% confidence intervals
let forecast = model.predict_with_intervals(12, 0.95)?;
println!("Point forecasts: {:?}", forecast.primary());
println!("Lower bounds: {:?}", forecast.lower_series(0));
println!("Upper bounds: {:?}", forecast.upper_series(0));
Holt-Winters Forecasting
use anofox_forecast::models::exponential::HoltWinters;
// Create Holt-Winters with additive seasonality (period = 12)
let mut model = HoltWinters::additive(12);
model.fit(&ts)?;
let forecast = model.predict(24)?;
Model Comparison
use anofox_forecast::models::{BoxedForecaster, ModelRegistry};
use anofox_forecast::utils::comparison::{compare_registry, ComparisonConfig};
// Compare all registered models side-by-side
let config = ComparisonConfig::default();
let table = compare_registry(&ts, &config)?;
println!("{}", table);
Feature Extraction
use anofox_forecast::features::{mean, variance, skewness, approximate_entropy};
let values = ts.values();
let m = mean(values);
let v = variance(values);
let s = skewness(values);
let ae = approximate_entropy(values, 2, 0.2)?;
println!("Mean: {}, Variance: {}, Skewness: {}, ApEn: {}", m, v, s, ae);
STL Decomposition
use anofox_forecast::seasonality::Stl;
// Decompose with seasonal period of 12
let stl = Stl::new(12)?;
let decomposition = stl.decompose(&ts)?;
println!("Trend: {:?}", decomposition.trend());
println!("Seasonal: {:?}", decomposition.seasonal());
println!("Remainder: {:?}", decomposition.remainder());
Rolling Features in Regression Models
use anofox_forecast::models::regression::{
RegressionFeatures, RegressionForecaster, RollingStatKind,
};
// OLS with lag-1, a rolling mean of the last 7 values, and a rolling std.
// Every rolling feature is recomputed at each horizon step using the
// previous predictions — correct recursive multi-step semantics.
let mut model = RegressionForecaster::ols(
RegressionFeatures::new()
.no_trend()
.lags(1)
.with_rolling_mean(7)? // last 7, lag=1
.with_rolling_std(14)? // last 14, lag=1
.with_ewm_mean(20, 0.3)? // EWM window=20, α=0.3
.no_exog(),
);
model.fit(&ts)?;
let forecast = model.predict(12)?;
// All `RollingStatKind` variants: Mean, Std, Var, Min, Max, Median, Sum,
// EwmMean { alpha }, EwmStd { alpha }. Custom lag via `.with_rolling_lagged(w, lag, kind)`.
// `lag == 0` is rejected at build time to prevent target leakage.
Transform Pipeline
use anofox_forecast::transform::pipeline::{Pipeline, PipelineBuilder};
use anofox_forecast::transform::transforms::{BoxCoxTransform, DifferenceTransform};
use anofox_forecast::models::baseline::Naive;
// Chain transforms around any model — Pipeline itself implements Forecaster
let mut pipeline = Pipeline::builder()
.transform(BoxCoxTransform::auto())
.transform(DifferenceTransform::new(1))
.model(Box::new(Naive::new()))
.build();
pipeline.fit(&ts)?;
let forecast = pipeline.predict(12)?;
Batch Forecasting (Many Series)
use anofox_forecast::models::exponential::{GlobalAutoETS, GlobalETS, ETSSpec, ModelPool};
// 1000 series, each a Vec<f64> — all same length
let all_series: Vec<Vec<f64>> = load_my_data();
// GlobalAutoETS: select best model per series, shared optimization (28-32x faster)
let mut model = GlobalAutoETS::new(12, ModelPool::Reduced);
model.fit(&all_series).unwrap();
let forecasts = model.predict(12); // Vec<Vec<f64>>, one per series
// GlobalETS: fit a known spec across all series (75-96x faster)
let mut model = GlobalETS::new(ETSSpec::ana(), 12);
model.fit(&all_series).unwrap();
let forecasts = model.predict(12);
Exogenous Regressors
use anofox_forecast::features::FeatureGenerator;
// Generate deterministic regressors from timestamps
let gen = FeatureGenerator::new()
.fourier(7, 2) // Weekly Fourier terms
.day_of_week() // Day-of-week indicators
.holiday("promo", promo_dates);
gen.add_to(&mut ts); // Attach features to TimeSeries
let mut model = ARIMA::new(1, 1, 1);
model.fit(&ts)?;
// Inspect OLS pre-regression coefficients
if let Some(ols) = model.exog_coefficients() {
println!("Intercept: {:.4}", ols.intercept);
for (name, coef) in ols.regressor_names.iter().zip(&ols.coefficients) {
println!(" {}: {:.4}", name, coef);
}
}
Changepoint Detection
use anofox_forecast::changepoint::{Pelt, CostFunction};
// Automatic penalty selection (recommended)
let result = Pelt::new(CostFunction::L2)
.min_size(5)
.auto_detect(&data);
println!("Found {} changepoints at {:?}", result.result.n_changepoints, result.result.changepoints);
println!("Auto-selected penalty: {:.2}", result.penalty);
// Manual penalty
let result = Pelt::new(CostFunction::L2)
.penalty(10.0)
.detect(&data);
Sequential Monitoring of Forecast Errors
Online detection of when a fitted model has become inaccurate. Port of the R package changepoint.forecast by Thomas Grundy, based on Fremdt (2014).
use anofox_forecast::models::baseline::Naive;
use anofox_forecast::monitor::{
monitor_forecaster, Detector, ForecastErrorType, SequentialConfig, SequentialDetector,
};
// Option A: monitor a fitted forecaster's residuals directly
let mut model = Naive::new();
model.fit(&ts)?;
let cfg = SequentialConfig::new(200) // training window length m
.detector(Detector::PageCusum) // recommended default
.error_type(ForecastErrorType::Both); // monitor mean AND variance
let detector = monitor_forecaster(&model, cfg)?;
if let Some(tau) = detector.first_detection() {
println!("Model drifted at observation {}", tau);
}
// Option B: bring your own residual stream and update it online
let cfg = SequentialConfig::new(100).detector(Detector::PageCusum);
let mut detector = SequentialDetector::fit(&residuals, cfg)?;
// Each time a new actual arrives, compute the new error and stream it in.
// State is constant-size; this is bit-equivalent to a fresh fit on the full
// concatenated series.
detector.update(&[new_error])?;
if detector.has_detected() {
// refit the forecasting model
}
Spectral Analysis
use anofox_forecast::detection::welch_periodogram;
// Welch's periodogram with overlapping windows
let psd = welch_periodogram(&values, 64, 0.5);
// Find dominant period
if let Some((period, power)) = psd.iter().max_by(|a, b| a.1.partial_cmp(&b.1).unwrap()) {
println!("Dominant period: {}, power: {:.4}", period, power);
}
For comprehensive periodicity detection (ACF, FFT, Autoperiod, CFD-Autoperiod, SAZED), see the fdars crate.
Probabilistic Postprocessing
use anofox_forecast::postprocess::{PostProcessor, PointForecasts, BacktestConfig};
// Historical forecasts and actuals for calibration
let train_forecasts = PointForecasts::from_values(train_f);
let train_actuals = vec![/* ... */];
// Create a conformal predictor with 90% coverage
let processor = PostProcessor::conformal(0.90);
// Backtest with horizon-aware calibration
let config = BacktestConfig::new()
.initial_window(100)
.step(10)
.horizon(7)
.horizon_aware(true);
let results = processor.backtest(&train_forecasts, &train_actuals, config)?;
println!("Coverage: {:.1}%", results.coverage() * 100.0);
// Train calibrated model and predict
let trained = processor.train(&train_forecasts, &train_actuals)?;
let new_forecasts = PointForecasts::from_values(new_f);
let intervals = processor.predict_intervals(&trained, &new_forecasts)?;
println!("Lower: {:?}", intervals.lower());
println!("Upper: {:?}", intervals.upper());
API Reference
Core Types
| Type | Description |
|---|---|
| `TimeSeries` | Main data structure for univariate/multivariate time series |
| `Forecast` | Prediction results with optional confidence intervals |
| `Forecaster` | Trait implemented by all forecasting models (`exog_coefficients()` for OLS inspection) |
| `Pipeline` | Composable transform → model chain, itself implements `Forecaster` |
| `FeatureGenerator` | Deterministic regressor generation (Fourier, DOW, MOY, quarter, holidays) |
| `AccuracyMetrics` | Model evaluation metrics (MAE, MSE, RMSE, MAPE, etc.) |
Forecasting Models
| Family | Models |
|---|---|
| Auto Selection | AutoForecast, AutoEnsemble |
| ARIMA | ARIMA, SARIMA, AutoARIMA |
| Exponential Smoothing | SES, Holt, HoltWinters, SeasonalES, ETS, AutoETS (with ModelPool) |
| Theta | Theta, OptimizedTheta, DynamicTheta, AutoTheta |
| Baseline | Naive, Mean, SeasonalNaive, RandomWalkWithDrift, SMA, WindowAverage, SeasonalWindowAverage |
| Intermittent | Croston, TSB, ADIDA, IMAPA |
| Complex Seasonality | TBATS, AutoTBATS, MFLES, MSTLForecaster |
| Volatility | GARCH |
| Multivariate | VAR (Vector Autoregression) |
| State-Space | KalmanFilter, StateSpaceModel (local level, local linear trend) |
| Ensemble | Ensemble (Mean, Median, Weighted MSE, InverseAIC, Stacking, HorizonAdaptive) |
| Regression | RegressionForecaster (OLS, Ridge, ElasticNet, Quantile, WLS, RLS, Tweedie, Poisson, BLS, Dynamic) |
| Hierarchical | HierarchyTree (BottomUp, TopDown, MiddleOut, MinTraceOls, MinTraceShrink, MinTraceVariance, MinTraceStruct) |
| Batch/Global | GlobalETS, GlobalAutoETS, GlobalCroston, GlobalTheta, batch::auto_ets, batch::ets, batch::mfles |
Utilities
| Function / Type | Description |
|---|---|
| `compare_models()` | Compare forecasters on the same data with timing |
| `compare_registry()` | Compare all registered models at once |
| `cross_validate()` | Time series cross-validation (parallel with `parallel` feature) |
| `cross_validate_early_stop()` | CV with convergence-based early stopping |
| `rolling_forecast()` | Walk-forward evaluation with rolling/expanding windows |
| `StreamingCVAggregator` | Online metric aggregation using Welford's algorithm |
| `bootstrap_forecast()` | Bootstrap confidence intervals for any model |
| `diagnose_residuals()` | Unified residual diagnostics (Ljung-Box, DW, Jarque-Bera) |
| `ModelDiagnostics` | Comprehensive diagnostics: Ljung-Box, Jarque-Bera, Breusch-Pagan |
| `IntermittentDiagnostics` | Syntetos-Boylan demand classification with model recommendations |
| `AidAnalyzer` | Automatic Identification of Demand: distribution fitting, anomaly detection |
| `rmsse()` / `wrmsse()` | Root Mean Squared Scaled Error and Weighted RMSSE (M5 competition metric) |
| `bias()` / `periods_in_stock()` | Signed bias and inventory-focused PIS metric |
| `ForecastMetrics::compute()` | All 10 metrics in one call (MAE through Theil's U) |
| `fit_all_and_compare()` | Fit all registry models, rank by holdout accuracy |
| `cross_validate_all()` | CV all registry models with aggregated metrics |
| `ensemble_best_k()` | Auto-select top-k models into an ensemble |
| `SeasonalComponent` / `TrendComponent` | Composable traits for seasonal/trend components (standalone + features) |
| `DummySeasonality` | One-hot seasonal encoding — arbitrary seasonal shapes |
| `SeasonalDifference` | Standalone seasonal differencing with strength/variance features |
| `HodrickPrescottFilter` | Smooth trend extraction with cycle decomposition |
| `PiecewiseLinearTrend` | PELT-based piecewise linear trend with per-segment regression |
| `PolynomialTrend` | Polynomial trend (degree 1-3) with Cholesky solve |
| `ExponentialTrend` | Log-linear exponential growth/decay trend |
| `LogisticTrend` | Logistic S-curve trend with auto/fixed capacity |
| `TheilSenTrend` | Robust Theil-Sen median-slope trend estimator |
| `AutoTrend` | Automatic best-trend selection via AICc/BIC/holdout |
| `AutoSeasonal` | Automatic best-seasonal selection via AICc/BIC |
| `Recency` | Fit on recent data only (Window, Fraction, Full, Auto via PELT) |
| `BinnedConformalPredictor` | Heteroscedastic prediction intervals binned by predicted magnitude |
| `RegressionForecaster` | Multi-backend regression: OLS, Ridge, ElasticNet, Quantile, WLS, RLS, Tweedie, Poisson, BLS, Dynamic |
| `RegressionBackend` | Backend selection enum with convenience constructors (`ridge()`, `quantile()`, `wls_decay()`, etc.) |
| `RegressionFeatures` | Feature builder for regression models (trend, seasonal, lags, structural, recursive, exog) |
| `FeatureSafety` | Feature leakage classification: Deterministic, DataDependent, Structural, External |
| `StructuralFeature` | Trait for forward-filled features during prediction (changepoints, outlier indicators) |
| `ChangepointFeature` | Structural feature for regime indicators (StepFunctions, RegimeIndex, CumulativeCount) |
| `RecursiveFeature` | Trait for features recomputed at every horizon step from the rolling history buffer |
| `RollingFeature` / `RollingStatKind` | Rolling window statistics (Mean, Std, Var, Min, Max, Median, Sum, EwmMean, EwmStd) as regression features |
| `Pipeline` / `PipelineBuilder` | Composable transform → model chains (BoxCox → Difference → Model → inverse) |
| `Transform` trait | Reversible transforms: DifferenceTransform, SeasonalDifferenceTransform, BoxCoxTransform, ScaleTransform, LogTransform |
| `FeatureGenerator` | Deterministic feature generation: `fourier()`, `day_of_week()`, `month_of_year()`, `quarter()`, `holiday()` |
| `OLSResult` / `exog_coefficients()` | Inspect OLS pre-regression coefficients (intercept, betas, regressor names) |
| `deseasonalize()` / `seasonal_adjust()` | Remove seasonal component from data or TimeSeries |
| `select_features()` | Automated feature selection (variance, correlation, top-K) |
| `to_json()` / `from_json()` | Serialization for models, Forecast, and TimeSeries (requires `serde` feature) |
| `to_bincode()` / `from_bincode()` | Binary serialization (requires `serde` feature) |
Feature Categories
| Category | Examples |
|---|---|
| Basic | mean, variance, minimum, maximum, quantile |
| Distribution | skewness, kurtosis, variation_coefficient |
| Autocorrelation | autocorrelation, partial_autocorrelation |
| Entropy | approximate_entropy, sample_entropy, permutation_entropy |
| Complexity | c3, cid_ce, lempel_ziv_complexity |
| Trend | linear_trend, adf_test, ar_coefficient, hp_trend_strength, piecewise_n_segments |
| Seasonality | dummy_seasonal_strength, seasonal_diff_strength, seasonal_diff_variance_reduction |
| Selection | select_features, rank_features |
Postprocessing Types
| Type | Description |
|---|---|
| `PostProcessor` | Unified API for all postprocessing methods |
| `ConformalPredictor` | Distribution-free prediction intervals |
| `BinnedConformalPredictor` | Heteroscedastic intervals — bins by predicted magnitude |
| `HistoricalSimulator` | Empirical error distribution |
| `IDRPredictor` | Isotonic Distributional Regression |
| `QRAPredictor` | Quantile Regression Averaging |
Examples
48 runnable examples covering all major features, each with a companion .md description. See examples/README.md for the full categorized index.
cargo run --example quickstart # End-to-end forecasting
cargo run --example arima # ARIMA family
cargo run --example regression # 11 regression backends
cargo run --example cross_validation # Time series CV
cargo run --example postprocess_conformal # Conformal prediction intervals
Guides
- Model Selection Guide — Which model to use for your data
- M5 ETS Benchmark — AutoETS Complete vs Reduced pool on 30,490 M5 series
Dependencies
- chrono - Date and time handling
- trueno - Linear algebra operations
- anofox-statistics - Statistical hypothesis tests (DM, MCS, SPA)
- statrs - Statistical distributions and functions
- thiserror - Error handling
- rand - Random number generation
- rustfft - Fast Fourier Transform for spectral analysis
Acknowledgments
The postprocessing module is a Rust port of PostForecasts.jl. The sequential monitoring module (monitor::) is a Rust port of changepoint.forecast by Thomas Grundy (Lancaster University), based on Fremdt (2014). Feature extraction is inspired by tsfresh. Forecasting models are validated against StatsForecast by Nixtla. See THIRDPARTY_NOTICE.md for full attribution and references to the research papers that inspired this implementation.
License
MIT License - see LICENSE for details.