Deep learning models have recently achieved remarkable performance on many classification tasks. The superior performance of deep neural networks relies on large amounts of training data, which must also be balanced across classes to be effective. However, in most real-world applications labeled data are limited and exhibit high imbalance ratios among the classes; the learning process of most classification algorithms is thus adversely affected, resulting in unstable predictions and low performance. Three main categories of approaches address the problem of imbalanced learning: data-level methods, algorithmic-level methods, and hybrid methods that combine the two. Data-generative methods are typically based on Generative Adversarial Networks, which require significant amounts of data, while model-level methods entail extensive domain expertise to craft the learning objectives, making them less accessible to users without such knowledge. Moreover, the vast majority of these approaches are designed for and applied to imaging applications, less often to time series, and rarely to both. To address these issues, we introduce GENDA, a generative neighborhood-based deep autoencoder, which is simple yet effective in its design and can be successfully applied to both image and time-series data. GENDA learns latent representations that rely on the neighboring embedding space of the samples. Extensive experiments conducted on a variety of widely used real datasets demonstrate the efficacy of the proposed method.
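The abstract states only that GENDA generates samples from latent representations informed by the neighboring embedding space; it gives no architectural or algorithmic details. The following is a minimal illustrative sketch of that general idea, not GENDA's actual implementation: a toy linear encoder/decoder stands in for the autoencoder, and synthetic minority samples are produced by interpolating each latent embedding toward one of its nearest latent neighbors before decoding. All weights, dimensions, and the interpolation scheme are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "autoencoder": the abstract gives no architecture details,
# so these random weights are purely illustrative.
W_enc = rng.normal(size=(4, 2))   # 4-D input  -> 2-D latent
W_dec = rng.normal(size=(2, 4))   # 2-D latent -> 4-D reconstruction

def encode(X):
    return X @ W_enc

def decode(Z):
    return Z @ W_dec

def generate_from_neighbors(X_minority, k=2, n_new=5):
    """Hypothetical neighborhood-based generation: interpolate each
    latent embedding with one of its k nearest latent neighbors,
    then decode the result back to input space."""
    Z = encode(X_minority)
    # Pairwise distances in the latent (embedding) space.
    d = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)           # exclude self-neighbors
    neighbors = np.argsort(d, axis=1)[:, :k]
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(Z))          # pick a minority sample
        j = rng.choice(neighbors[i])      # pick one of its neighbors
        alpha = rng.uniform()             # random interpolation weight
        z_new = Z[i] + alpha * (Z[j] - Z[i])
        synthetic.append(decode(z_new))
    return np.stack(synthetic)

X_min = rng.normal(size=(10, 4))          # toy minority-class samples
X_syn = generate_from_neighbors(X_min)
print(X_syn.shape)                        # (5, 4)
```

Generating in the latent rather than the input space is what lets such a method apply uniformly to images and time series: once an encoder maps either modality to embeddings, the neighborhood interpolation step is modality-agnostic.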