Firrice Space

2021-arxiv Gated Transformer Networks for Multivariate Time Series Classification

Motivation: In this work, the researchers explored a simple extension of current Transformer Networks with gating, named Gated Transformer Networks (GTN), for multivariate time series classification
2024-06-26
Time Series Classification
#MTSC #transformer

2021-KDD A TRANSFORMER-BASED FRAMEWORK FOR MULTIVARIATE TIME SERIES REPRESENTATION LEARNING

Motivation: In this paper, a transformer-based framework for unsupervised representation learning of multivariate time series was proposed for the first time. Method: The overall pipeline is as follows: As we can see
2024-06-26
Time Series Classification
#MTSC #transformer

2020-IJCAI A New Attention Mechanism to Classify Multivariate Time Series

Motivation: First, the long-range dependencies of time-series sequences are not well captured. Second, the interactions among multiple variables are generally not represented in the features. To address th
2024-06-26
Time Series Classification
#MTSC #attention

2019-NIPS Unsupervised Scalable Representation Learning for Multivariate Time Series

Motivation: In practice, time series have highly variable lengths and sparse labeling; to tackle these issues, an unsupervised scalable representation learning method was proposed in this paper. Method: The backbone module use
2024-06-26
Time Series Classification
#TSC #unsupervised learning

2018-arxiv The UEA multivariate time series classification archive (2018)

Method: It is the first iteration of the MTSC archive, which contains 30 datasets with equal series length per problem. The MTSC datasets of the UEA archive come from three types of sources: HAR (Human
2024-06-26
Time Series Classification
#TSC #survey

2017-ICDM Generating synthetic time series to augment sparse datasets

Motivation: A new synthetic series generation method was proposed, in which the average time series, based on an extension of DBA (DTW Barycenter Averaging), serves as the synthetic sample. Methods: The paper f
2024-06-26
Time Series Classification
#Time Series Generating

2023-arxiv Retrieval-Augmented Generation for Large Language Models-A Survey

Introduction: The paper mainly introduces solutions for optimizing RAG systems; the evaluation frameworks & tools are also covered, as can be seen below: More specifically, three p
2024-06-26
Large Language Model
#survey #RAG

2024-NIPS QLoRA-Efficient Finetuning of Quantized LLMs

Motivation: A new efficient finetuning method based on quantile quantization was proposed, which achieves finetuning a 65B model on a single 48GB GPU while preserving full 16-bit task performance at
2024-06-26
Large Language Model
#LLM-finetune

2021-ICLR LORA LOW-RANK ADAPTATION OF LARGE LANGUAGE MODELS

Motivation: Finetuning the full-parameter model is expensive. Method: Use low-rank matrices A & B to represent the finetuned parameter update; the detailed pipeline is as follows: Results: There are really abundant
2024-06-26
Large Language Model
#LLM-finetune
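
The low-rank idea in the LoRA entry above can be sketched in a few lines. This is an illustrative numpy sketch under my own assumptions (the function name `lora_forward`, the shapes, and the `alpha/r` scaling are chosen for illustration), not the paper's implementation:

```python
import numpy as np

# LoRA sketch: instead of finetuning the full weight W (d x k), freeze W
# and learn a low-rank update (alpha / r) * B @ A, with B (d x r),
# A (r x k), and r << min(d, k), so far fewer parameters are trained.
rng = np.random.default_rng(0)

d, k, r, alpha = 8, 8, 2, 4
W = rng.standard_normal((d, k))          # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01   # trainable, small random init
B = np.zeros((d, r))                     # trainable, zero init => no change at start

def lora_forward(x, W, A, B, alpha, r):
    """Frozen projection x @ W.T plus the scaled low-rank adapter path."""
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

x = rng.standard_normal((3, k))
# With B initialized to zero, the adapted output equals the frozen output,
# so training starts from the pretrained model's behavior.
assert np.allclose(lora_forward(x, W, A, B, alpha, r), x @ W.T)
```

Here only A and B (d*r + r*k = 32 values) would be trained instead of the full d*k = 64 entries of W, which is the source of the memory savings the paper reports.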

2022-ACL/short P-Tuning v2 Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks

Motivation: In the context of NLU tasks, prior work reveals that prompt tuning does not perform well for normal-sized pretrained models and hard sequence labeling tasks (that means difficult tasks, lik
2024-06-26
Large Language Model
#LLM-finetune

