Large Language Models (LLMs) have shown promising capabilities in handling structured data tasks, such as time series forecasting, by leveraging their pretrained knowledge and natural language understanding. Approaches like Time-LLM introduce a reprogramming module to convert time series data into a textual format that can be interpreted by a frozen LLM. However, training such modules often requires significant computational resources and long training times.
In this work, we explore a more lightweight alternative that eliminates the need for a reprogramming module or fine-tuning. Inspired by TabLLM, we use data serialization to convert time series data into natural language prompts and apply prompt engineering strategies to guide the LLM in making predictions or classifications, as sketched below. The goal is to evaluate the extent to which LLMs can perform time series tasks in a zero-shot or few-shot setting, relying solely on serialized input and carefully designed prompts.
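To make the serialization-and-prompting setup concrete, the following minimal Python sketch illustrates one plausible instantiation. The window length, rounding, and prompt wording are illustrative assumptions rather than the exact format used in TabLLM or in this work; the function names (serialize_series, build_forecast_prompt) are hypothetical helpers introduced only for this example.

```python
# A minimal sketch of the serialization + prompting idea described above.
# Formatting choices (decimals, wording, few-shot layout) are illustrative assumptions.

def serialize_series(values, decimals=2):
    """Turn a numeric time series into a plain-text, comma-separated string."""
    return ", ".join(f"{v:.{decimals}f}" for v in values)

def build_forecast_prompt(history, horizon, few_shot_examples=None):
    """Compose a natural-language prompt asking an LLM to forecast the next values.

    few_shot_examples: optional list of (history, future) pairs used as in-context
    demonstrations; omit them for the zero-shot setting.
    """
    parts = []
    if few_shot_examples:
        for hist, future in few_shot_examples:
            parts.append(
                f"Series: {serialize_series(hist)}\n"
                f"Next {len(future)} values: {serialize_series(future)}\n"
            )
    parts.append(
        f"Series: {serialize_series(history)}\n"
        f"Next {horizon} values:"
    )
    return "\n".join(parts)

if __name__ == "__main__":
    history = [21.3, 21.8, 22.4, 23.0, 23.1, 22.7, 22.2, 21.9]
    prompt = build_forecast_prompt(history, horizon=3)
    print(prompt)
    # The resulting string would be sent to a frozen LLM (any chat/completion API);
    # the model's textual reply is then parsed back into numeric predictions.
```

Because the prompt is plain text, no reprogramming module or gradient update is needed; the only design effort lies in the serialization format and the prompt template.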
This approach seeks to offer a resource-efficient solution to time series modeling, making the use of LLMs more accessible in environments with limited computational capacity.