Expanding the Prediction Capacity in Long Sequence Time-Series Forecasting. Artificial Intelligence Journal, 2023.
Haoyi Zhou, Siyang Xiao, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li. Jump Self-attention: Capturing High-order Statistics in Transformers. NeurIPS, 2022.
Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, Wancai Zhang. Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting. AAAI, 2021.
Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li. Triplet Attention: Rethinking the Similarity in Transformers. KDD, 2021.
Haoyi Zhou, Hao Peng, Jieqi Peng, Shuai Zhang, Jianxin Li. POLLA: Enhancing the Local Structure Awareness in Long Sequence Spatial-temporal Modeling. ACM TIST, 2021.