add arXiv link and citation
README.md CHANGED
@@ -1,12 +1,18 @@
---
license: apache-2.0
datasets:
- Salesforce/GiftEvalPretrain
- autogluon/chronos_datasets
pipeline_tag: time-series-forecasting
paper:
- https://arxiv.org/abs/2511.19841
tags:
- time series
- foundation model
- forecasting
---
# Cisco Time Series Model
The Cisco Time Series Model is a foundation model trained to perform univariate zero-shot forecasting. Its core is a sequence of decoder-only transformer layers. It is based on the [TimesFM 2.0 model](https://huggingface.co/google/timesfm-2.0-500m-pytorch), with multiresolution modifications aimed at efficient use of long context. It expects a multiresolution context (x<sub>c</sub>, x<sub>f</sub>), where the resolution (i.e., the spacing between data points) of x<sub>c</sub> is 60 times that of x<sub>f</sub>. Both x<sub>c</sub> and x<sub>f</sub> can have length up to 512. The input contexts should be aligned “on the right”: e.g., if x<sub>f</sub> consists of the 512 minutes terminating at 11:00 AM on November 11, then x<sub>c</sub> should consist of the 512 hours terminating at the same time. The output is a forecast of 128 points, interpreted at the finer resolution, together with corresponding quantiles for those points.

For convenience, we provide utilities for preparing a multiresolution context from a single-resolution context (with length up to 512 x 60 = 30,720 points) directly.
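To make the expected input concrete, here is a minimal NumPy sketch of how a right-aligned multiresolution pair could be assembled from a single fine-resolution series. This is only an illustration of the data layout, not the packaged utilities: the function name `make_multiresolution_context` and the choice of block means for the coarse series are assumptions, and the repository's own helpers may aggregate differently.

```python
import numpy as np

def make_multiresolution_context(series, ratio=60, max_len=512):
    """Illustrative only: build a right-aligned (coarse, fine) context pair.

    `series` is a 1-D array at the fine resolution (e.g., minutely values).
    Each coarse point is the mean of `ratio` consecutive fine points, so the
    coarse resolution is `ratio` times the fine resolution, and both contexts
    end at the same timestamp.
    """
    series = np.asarray(series, dtype=np.float32)

    # Fine context: the most recent `max_len` points.
    x_f = series[-max_len:]

    # Coarse context: block means over the trailing `n_blocks * ratio` points.
    n_blocks = min(len(series) // ratio, max_len)
    trailing = series[len(series) - n_blocks * ratio:]
    x_c = trailing.reshape(n_blocks, ratio).mean(axis=1)

    return x_c, x_f

# 30,720 minutes (= 512 hours, about 21 days) of synthetic minutely data.
minutely = np.sin(np.arange(512 * 60) * 2 * np.pi / 1440)
x_c, x_f = make_multiresolution_context(minutely)
print(x_c.shape, x_f.shape)  # (512,), (512,)
```

Note that the two 512-point contexts jointly summarize the full 30,720-point window while presenting only 1,024 values to the model, which is one way to read the "efficient use of long context" mentioned above.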
@@ -18,7 +24,7 @@ For convenience, we provide utilities for preparing a multiresolution context fr

Although the Cisco Time Series Model does not conform to the TimesFM architecture, its pre-training began from the weights of TimesFM. The dataset used for the additional training contains over 300B unique datapoints. Slightly more than 50% of the data is derived from metric time series data from internal deployments of the Splunk Observability Cloud, with about 35% at (1-hour, 1-minute) resolution and the remaining 15% at (5-hour, 5-minute) resolution. Additional multiresolution data, comprising about 30% of the training set, was derived from the [GIFT-Eval](https://huggingface.co/datasets/Salesforce/GiftEvalPretrain) pretraining corpus. Another 5% was derived from the [Chronos](https://huggingface.co/datasets/autogluon/chronos_datasets) dataset collection (excluding overlap with the GIFT-Eval test set). The final 15% is synthetic multiresolution data.

**Note:** A PyTorch implementation of the model architecture can be found in our [GitHub repository](https://github.com/splunk/cisco-time-series-model). A more detailed technical report is now available on [arXiv](https://arxiv.org/abs/2511.19841); you can also access a local copy [here](https://github.com/splunk/cisco-time-series-model/blob/main/1.0-preview/technical_report/Cisco-Time-Series-Model-Technical-Report.pdf).

### Example Visualization of Multiresolution Time Series Input to the Model
<figure>
@@ -99,6 +105,20 @@ long_horizon_forecasts = model.forecast(input_series_1, horizon_len=240)
```
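Only the tail of the README's usage example is visible in this diff (the `long_horizon_forecasts = model.forecast(input_series_1, horizon_len=240)` call quoted in the hunk header above). As a purely hypothetical, self-contained sketch of what one might do with such a result, the snippet below plots a forecast horizon after its context window; the arrays standing in for `input_series_1` and `long_horizon_forecasts` are synthetic placeholders, not output from the model, and the model's actual return type may differ.

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholders standing in for the objects in the README's usage example:
# `input_series_1` would be the fine-resolution input context, and
# `long_horizon_forecasts` would be the result of
# model.forecast(input_series_1, horizon_len=240).
rng = np.random.default_rng(0)
input_series_1 = np.sin(np.arange(2048) * 2 * np.pi / 288) + 0.1 * rng.standard_normal(2048)
long_horizon_forecasts = input_series_1[-240:]  # dummy 240-point "forecast"

t_ctx = np.arange(len(input_series_1))
t_fut = np.arange(len(input_series_1), len(input_series_1) + len(long_horizon_forecasts))

plt.plot(t_ctx[-512:], input_series_1[-512:], label="context (fine resolution)")
plt.plot(t_fut, long_horizon_forecasts, label="forecast (horizon_len=240)")
plt.legend()
plt.title("Context window followed by a 240-step forecast")
plt.show()
```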

## Citation
If you find the Cisco Time Series Model useful for your research, please consider citing the associated technical report:
```
@misc{gou2025ciscotimeseriesmodel,
      title={Cisco Time Series Model Technical Report},
      author={Liang Gou and Archit Khare and Praneet Pabolu and Prachi Patel and Joseph Ross and Hercy Shen and Yuhan Song and Jingze Sun and Kristal Curtis and Vedant Dharnidharka and Abhinav Mathur and Hao Yang},
      year={2025},
      eprint={2511.19841},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2511.19841},
}
```

<b>Authored by:</b>
- Liang Gou \*
- Archit Khare \*