# Positional Encoding Benchmark for Time Series Classification
<!-- DESCRIPTION -->
## Description

This repository provides a comprehensive evaluation framework for positional encoding methods in transformer-based time series models, along with implementations and benchmarking results. It analyzes how positional encodings impact Transformer-based models in time series classification, covering both fixed and learnable encoding techniques as well as advanced approaches such as relative positional encoding. Performance is evaluated on a diverse set of datasets from different domains, including human activity recognition, financial data, and EEG recordings.
Our work is available on arXiv: [Positional Encoding in Transformer-Based Time Series Models: A Survey](https://arxiv.org/abs/2502.12370)
## Models
We present a systematic analysis of positional encoding methods evaluated on two transformer architectures:
1. [Multivariate Time Series Transformer Framework (TST)](https://github.com/gzerveas/mvts_transformer)
2. Time Series Transformer with Patch Embedding
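To make the baseline concrete, here is a minimal pure-Python sketch of the classic fixed sinusoidal positional encoding (Vaswani et al., 2017), the standard reference point that learnable and relative methods are compared against. This is an illustrative example, not the repository's implementation (which builds on the architectures above); the function name is our own.

```python
import math

def sinusoidal_encoding(seq_len, d_model):
    """Fixed sinusoidal positional encoding.

    Returns a seq_len x d_model table where position `pos` gets
    sin/cos features at geometrically spaced frequencies:
        pe[pos][2i]   = sin(pos / 10000^(2i / d_model))
        pe[pos][2i+1] = cos(pos / 10000^(2i / d_model))
    """
    pe = [[0.0] * d_model for _ in range(seq_len)]
    for pos in range(seq_len):
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)
            if i + 1 < d_model:  # guard for odd d_model
                pe[pos][i + 1] = math.cos(angle)
    return pe
```

Because the encoding is deterministic, it adds no trainable parameters and extrapolates to sequence lengths unseen during training, which is exactly the trade-off against learnable encodings that the benchmark measures.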
### Positional Encoding Methods
We implement and evaluate eight positional encoding methods:
| Method | Type | Injection Technique | Parameters |
|--------|------|---------------------|------------|
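The "Injection Technique" column above refers to how an encoding enters the model. The most common technique is additive injection: the positional table is simply summed with the input embeddings before the first attention layer (relative methods instead modify the attention scores). A minimal sketch of additive injection, with an illustrative function name of our own:

```python
def add_positional_encoding(x, pe):
    """Additive injection: element-wise sum of the input embeddings `x`
    and a positional encoding `pe` of the same (seq_len, d_model) shape.
    Both are plain nested lists here for illustration."""
    assert len(x) == len(pe) and len(x[0]) == len(pe[0]), "shape mismatch"
    return [
        [xi + pi for xi, pi in zip(row_x, row_p)]
        for row_x, row_p in zip(x, pe)
    ]
```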
Our experimental evaluation encompasses eight distinct positional encoding methods tested across eleven diverse time series datasets using two transformer architectures.
### Key Findings
#### 1. Sequence Length Impact
- **Long sequences** (>100 steps): 5-6% improvement with advanced methods