Learning Sensible Representations for Time Series

Another line of research I have pursued over the past years is the learning of latent representations for time series. These latent representations can either be mixture coefficients (cf. Sec 2.1) -- in which case time series are represented as multinomial distributions over latent topics -- or intermediate neural network feature maps (as in Sec 2.2 and Sec 2.3) -- in which case time series are represented through the filter activations they trigger.
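As a rough illustration of the first kind of representation (this is a generic sketch, not the actual model from Sec 2.1), suppose each series has been quantized into a bag of symbolic "words"; an EM-style fold-in against fixed topic-word distributions then yields the series' mixture coefficients, i.e. its representation as a point on the topic simplex. All names and dimensions below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each time series is quantized into symbolic "words"
# (e.g. SAX symbols), and a topic model assigns it topic proportions.
n_topics, vocab = 5, 20

# Stand-in for learned topic-word distributions (each row sums to 1).
topics = rng.dirichlet(np.ones(vocab), size=n_topics)

def topic_representation(word_counts, n_iter=50):
    """EM-style estimate of a series' mixture coefficients over topics,
    i.e. its representation as a multinomial distribution."""
    theta = np.full(n_topics, 1.0 / n_topics)  # uniform initialization
    for _ in range(n_iter):
        # E-step: responsibility of each topic for each word
        resp = theta[:, None] * topics                # (n_topics, vocab)
        resp /= resp.sum(axis=0, keepdims=True)
        # M-step: re-estimate the mixture coefficients
        theta = (resp * word_counts).sum(axis=1)
        theta /= theta.sum()
    return theta

counts = rng.integers(0, 5, size=vocab)  # bag-of-words for one series
theta = topic_representation(counts)
print(theta)  # mixture coefficients: a point on the topic simplex
```

Once estimated, `theta` can serve directly as a fixed-length feature vector for downstream tasks such as clustering or classification.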

More specifically, in Sec 2.3 we focus on the task of early classification of time series. In this context, we introduce a method that learns an intermediate representation from which both the decision to trigger classification and the classification itself can be computed.
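The architecture described above can be sketched as follows: a shared encoder maps the observed prefix of a series to a representation, and two heads read from it, one deciding whether to classify now and one producing the class posterior. This is a minimal toy sketch with randomly initialized parameters standing in for learned ones; the encoder, heads, and thresholds are all hypothetical, not the actual model from Sec 2.3.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: d input features, h hidden units, k classes.
d, h, k = 1, 16, 3

W_enc = rng.normal(size=(d, h))   # shared encoder (here: a linear map)
W_cls = rng.normal(size=(h, k))   # classification head
w_trig = rng.normal(size=h)       # trigger head (classify now vs. wait)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def step(prefix):
    """Encode the observed prefix into a shared representation, then
    compute both the class posterior and the probability of triggering
    classification at this point."""
    rep = np.tanh(prefix.mean(axis=0) @ W_enc)   # shared representation
    class_probs = softmax(rep @ W_cls)           # classification head
    p_trigger = 1.0 / (1.0 + np.exp(-(rep @ w_trig)))  # trigger head
    return class_probs, p_trigger

# Simulate an incoming series; stop as soon as the trigger fires.
series = rng.normal(size=(50, d))
for t in range(1, len(series) + 1):
    probs, p_trig = step(series[:t])
    if p_trig > 0.5:
        print(f"classified at t={t} as class {probs.argmax()}")
        break
else:
    print(f"reached the end; predicted class {probs.argmax()}")
```

The point of sharing the representation is that the "when to classify" decision and the "which class" decision are computed from the same features, so the trigger can exploit exactly the evidence the classifier relies on.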