Large time-series models (LTMs) are to time-series data what LLMs are to language. LTMs are on the cusp of delivering state-of-the-art forecasting capability across a wide range of domains, changing the way we analyse time-series data forever. Calda AI researchers Justin Alsing and Benjamin Wandelt (with collaborators from Johns Hopkins, University of Amsterdam and Capital One) took an important step forward in the pursuit of LTMs last week, demonstrating favourable performance scaling with model size, data, and compute. Watch this space for breakthrough foundation models for time-series forecasting!
Large time-series models (LTMs) enjoy power-law scaling behaviour similar to LLMs. We just put out a paper (https://lnkd.in/givY528D) establishing power-law scaling laws for large time-series models as a function of data, compute, and model size. Similar scaling laws for LLMs (from the landmark Kaplan et al. paper https://lnkd.in/g9KHYN9u) have provided key guidance in allocating enormous resources for predictable, and eventually breakthrough, performance gains. The demonstration of similarly favourable scaling behaviour for large time-series models provides both a motivation and a guide in the pursuit of foundation models for time-series forecasting. Foundation models for time series are coming (with enough data and compute). Thanks to Thomas Edwards, James Alvey, Benjamin Wandelt and Nam Nguyen for the hard work!
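For intuition, here is a minimal sketch of what a Kaplan-style power-law scaling fit looks like in practice. The numbers below are made up for illustration, not results from the paper: a power law L(N) = (N_c / N)^alpha appears as a straight line on a log-log plot, so the exponent can be read off as the slope.

```python
import numpy as np

# Hypothetical (model size, held-out loss) pairs -- illustrative only,
# NOT numbers from the paper.
N = np.array([1e5, 1e6, 1e7, 1e8, 1e9])          # parameter counts
loss = np.array([1.80, 1.31, 0.95, 0.69, 0.50])  # held-out test loss

# A power law L(N) = (N_c / N)**alpha is linear in log-log space:
#   log L = alpha * log N_c - alpha * log N
slope, intercept = np.polyfit(np.log(N), np.log(loss), 1)
alpha = -slope                    # the scaling exponent
N_c = np.exp(intercept / alpha)   # the characteristic scale

print(f"alpha ~ {alpha:.3f}, N_c ~ {N_c:.2e}")
```

The same straight-line fit applies with dataset size or training compute on the x-axis in place of model size; it is this predictability that makes scaling laws useful for planning large training runs.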