DeepTimeSeries

Paper

GitHub page

Summary

Buildings and their energy systems have unique characteristics: their state and performance change over time, and their surrounding environment is strongly seasonal. Consequently, models for training and forecasting must account for the behavioral variability of the target building and its boundary conditions. Deep learning models that adapt easily to changing circumstances and perform well with limited data are therefore needed to accurately predict and control building energy systems.
However, practical guidance on which deep learning architecture is best suited for predictive control of buildings is hard to come by. Studies often compare tweaked variants of deep learning models, do not always use publicly available data, and conduct comparisons under differing conditions such as materials, geometry, use, energy systems, and microclimates. Information on relatively recent architectures such as transformers and dilated convolutional neural networks is also scarce, as they have rarely been applied to building-related time series forecasting.
To address these challenges, we conducted a benchmark study comparing the performance of six deep learning architectures: the multilayer perceptron (MLP), simple recurrent neural network (RNN), long short-term memory (LSTM) network, gated recurrent unit (GRU), dilated convolutional neural network (DCNN), and transformer. We also developed a data similarity analysis method to examine the effect of data seasonality on forecasting performance. To ensure the reproducibility and accessibility of the benchmark, we used a publicly accessible data generator and the open-source Python library DeepTimeSeries (currently under development, available at https://github.com/BET-lab/DeepTimeSeries).
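To make the setup concrete, here is a minimal PyTorch sketch of a GRU-based multi-step forecaster in the spirit of the recurrent models benchmarked. The class name, layer sizes, and input/output shapes are illustrative assumptions, not the DeepTimeSeries API.

```python
# Illustrative sketch only: a minimal GRU-based multi-step forecaster.
# Names and layer sizes are assumptions, not the DeepTimeSeries API.
import torch
import torch.nn as nn


class GRUForecaster(nn.Module):
    def __init__(self, n_features: int, n_targets: int, horizon: int, hidden: int = 64):
        super().__init__()
        self.horizon = horizon
        self.n_targets = n_targets
        self.gru = nn.GRU(n_features, hidden, num_layers=2, batch_first=True)
        # Map the final hidden state to the full forecast horizon at once.
        self.head = nn.Linear(hidden, horizon * n_targets)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, past_steps, n_features)
        _, h = self.gru(x)      # h: (num_layers, batch, hidden)
        out = self.head(h[-1])  # (batch, horizon * n_targets)
        return out.view(-1, self.horizon, self.n_targets)


# Example: 24 past hours of 8 covariates -> 24 future hours of 16 targets.
model = GRUForecaster(n_features=8, n_targets=16, horizon=24)
y_hat = model(torch.randn(32, 24, 8))  # shape: (32, 24, 16)
```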
In the benchmark, we evaluated the six architectures on forecasting 6 zone temperatures and 10 thermal loads (5 heating and 5 cooling) for the next 24 hours. The transformer outperformed the other models, especially on small training datasets ranging from 0.3 to 0.9 years of data. GRU and RNN ranked second and third, respectively, while the rankings of the remaining architectures varied significantly with training dataset size and forecasted variable. In particular, LSTM, MLP, and DCNN performed poorly on small training datasets.
Ranking of the six deep learning architectures for regions in different climate zones
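To illustrate how such a 24-hour-ahead task is typically framed, the sketch below builds sliding input/output windows from hourly data. The window lengths, feature counts, and function names are assumptions for illustration, not the preprocessing used in the study.

```python
# Illustrative sketch: sliding-window pairs for 24-hour-ahead forecasting.
# Window lengths and array names are assumptions, not the paper's pipeline.
import numpy as np


def make_windows(inputs: np.ndarray, targets: np.ndarray,
                 past: int = 24, horizon: int = 24):
    """inputs: (T, n_features), targets: (T, n_targets), hourly data."""
    X, Y = [], []
    for t in range(past, len(inputs) - horizon + 1):
        X.append(inputs[t - past:t])      # past `past` hours of covariates
        Y.append(targets[t:t + horizon])  # next `horizon` hours of targets
    return np.stack(X), np.stack(Y)


# Example: one year of hourly data, 8 covariates, 16 targets
# (6 zone temperatures + 10 thermal loads).
rng = np.random.default_rng(0)
X, Y = make_windows(rng.normal(size=(8760, 8)), rng.normal(size=(8760, 16)))
print(X.shape, Y.shape)  # (8713, 24, 8) (8713, 24, 16)
```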
Moreover, the data similarity analysis revealed that simply increasing the size of the training dataset does not necessarily improve model performance; what matters is how closely the training data resembles the conditions of the forecast period.
Results of the similarity analysis
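The similarity method itself is described in the paper; as a rough stand-in, the sketch below compares the mean daily profiles of a training set and a forecast period with a simple RMSE distance. The metric and all names here are illustrative assumptions, not the method developed in the study.

```python
# Illustrative stand-in for a data similarity check between a training set
# and a forecast period. NOT the paper's method; metric and names are
# assumptions for illustration only.
import numpy as np


def daily_profile(series: np.ndarray) -> np.ndarray:
    """Average an hourly series (length a multiple of 24) into a 24-hour profile."""
    return series.reshape(-1, 24).mean(axis=0)


def profile_distance(a: np.ndarray, b: np.ndarray) -> float:
    """RMSE between mean daily profiles of two periods (lower = more similar)."""
    return float(np.sqrt(np.mean((daily_profile(a) - daily_profile(b)) ** 2)))


# Example: summer-like vs. winter-like synthetic outdoor temperatures.
hours = np.arange(24 * 30)
summer = 28 + 5 * np.sin(2 * np.pi * hours / 24)
winter = 2 + 3 * np.sin(2 * np.pi * hours / 24)
print(profile_distance(summer, summer[:24 * 7]))  # ~0: same seasonal regime
print(profile_distance(summer, winter))           # large: different season
```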
We hope this study provides useful insights for predictive control of buildings using deep learning models.