Update README.md
[…] TTM-R1 models as they are trained on a larger pretraining dataset. However, the choice of R1 vs. R2 depends on your target data distribution; hence, we request that users
try both the R1 and R2 variants and pick the one that works best for their data.
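A quick way to make that comparison is to score both variants zero-shot on a held-out slice of your own series and keep whichever does better. The sketch below is illustrative rather than part of this model card: it assumes the `tsfm_public` package from the [tsfm](https://github.com/IBM/tsfm) repo, uses stand-in repo IDs for the R1 and R2 cards, and substitutes random data for your series.

```python
# Illustrative zero-shot comparison of the R1 and R2 TTM variants on your data.
# Assumes the 512-context / 96-horizon setting; adjust to the variant you load.
import torch
from tsfm_public.models.tinytimemixer import TinyTimeMixerForPrediction

CONTEXT_LEN, FORECAST_LEN = 512, 96

def zeroshot_mse(repo_id: str, series: torch.Tensor) -> float:
    """Zero-shot MSE of one TTM variant on a (batch, time, channels) tensor."""
    model = TinyTimeMixerForPrediction.from_pretrained(repo_id)
    model.eval()
    past = series[:, :CONTEXT_LEN, :]
    target = series[:, CONTEXT_LEN:CONTEXT_LEN + FORECAST_LEN, :]
    with torch.no_grad():
        forecast = model(past_values=past).prediction_outputs
    return torch.mean((forecast - target) ** 2).item()

series = torch.randn(8, CONTEXT_LEN + FORECAST_LEN, 1)  # stand-in for your data
for repo in ("ibm-granite/granite-timeseries-ttm-r1",   # stand-in R1 repo ID
             "ibm-granite/granite-timeseries-ttm-r2"):  # stand-in R2 repo ID
    print(repo, zeroshot_mse(repo, series))
```

Whichever variant scores lower on your validation window is the one to fine-tune or deploy.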
## Model Description

TTM falls under the category of “focused pre-trained models”, wherein each pre-trained TTM is tailored for a particular forecasting
setting (governed by the context length and forecast length). Instead of building one massive model supporting all forecasting settings,
we opt for the approach of constructing smaller pre-trained models, each focusing on a specific forecasting setting, thereby
yielding more accurate results. Furthermore, this approach ensures that our models remain extremely small and exceptionally fast,
facilitating easy deployment without demanding significant resources.

Hence, in this model card, we plan to release several pre-trained
TTMs that can cater to many common forecasting settings in practice. Additionally, we have released our source code along with
our pretraining scripts that users can utilize to pretrain models on their own. Pretraining TTMs is very easy and fast, taking
only 3-6 hours on 6 A100 GPUs, as opposed to several days or weeks with traditional approaches.
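As an unofficial illustration of that workflow (the released scripts in the [tsfm](https://github.com/IBM/tsfm) repo are the reference), pretraining boils down to instantiating a fresh TTM for one fixed context/forecast setting and training it with the standard Hugging Face `Trainer`. The config fields, window dataset, and random data below are toy-scale stand-ins, not the actual pretraining recipe.

```python
# Unofficial, toy-scale pretraining sketch; see the tsfm repo for the real scripts.
import torch
from torch.utils.data import Dataset
from transformers import Trainer, TrainingArguments
from tsfm_public.models.tinytimemixer import (TinyTimeMixerConfig,
                                              TinyTimeMixerForPrediction)

CTX, HORIZON = 512, 96  # one focused forecasting setting per pre-trained TTM

class WindowDataset(Dataset):
    """Slices one long (time, channels) series into past/future training windows."""
    def __init__(self, series: torch.Tensor):
        self.series = series
    def __len__(self):
        return self.series.shape[0] - CTX - HORIZON + 1
    def __getitem__(self, i):
        return {"past_values": self.series[i:i + CTX],
                "future_values": self.series[i + CTX:i + CTX + HORIZON]}

config = TinyTimeMixerConfig(context_length=CTX, prediction_length=HORIZON,
                             num_input_channels=1)
model = TinyTimeMixerForPrediction(config)  # random init: pretraining from scratch

train_ds = WindowDataset(torch.randn(5_000, 1))  # stand-in pretraining corpus
args = TrainingArguments(output_dir="ttm_pretrain", per_device_train_batch_size=64,
                         num_train_epochs=1, report_to="none")
# The model returns an MSE loss when future_values is supplied, so Trainer works as-is.
Trainer(model=model, args=args, train_dataset=train_ds).train()
```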

Each pre-trained model will be released under a different branch name in this model card. Kindly access the required model using our
getting started [notebook](https://github.com/IBM/tsfm/blob/main/notebooks/hfdemo/ttm_getting_started.ipynb), specifying the branch name.
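Under the hood a branch is just a Hugging Face `revision`, so if you prefer plain code to the notebook, a minimal sketch (with a placeholder repo ID and branch name; use the ones listed under Model Releases below) looks like this:

```python
# Select a pre-trained variant by branch via the standard `revision` argument.
from tsfm_public.models.tinytimemixer import TinyTimeMixerForPrediction

model = TinyTimeMixerForPrediction.from_pretrained(
    "ibm/TTM",              # placeholder: this model card's repo ID
    revision="1024_96_v1",  # placeholder: a branch name from the releases list
)
print(model.config.context_length, model.config.prediction_length)
```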
## Model Releases (along with the branch names where the models are stored)

[…]

impact the model performance.
## Model Details
For more details on TTM architecture and benchmarks, refer to our [paper](https://arxiv.org/pdf/2401.03955.pdf).