Résumé : Time series forecasting deals with the prediction of future values of time-dependent quantities (e.g. stock price, energy load, city traffic) on the basis of their historical observations. In its simplest form, forecasting concerns a single variable (i.e. a univariate problem) and deals with the prediction of a single future value (i.e. one-step-ahead). Existing studies in the literature have focused on extending the univariate time series framework to address multiple future predictions (also called multiple-step-ahead) and multiple variables (multivariate approaches), accounting for their interdependencies. However, most approaches deal either with the multiple-step-ahead aspect or with the multivariate one, rarely with both. Moreover, state-of-the-art multivariate forecasting methods are restricted to low-dimensional problems, linear dependencies among the variables and short forecasting horizons. Recent technological advances (notably the Big Data revolution) are instead shifting the focus to problems characterized by a large number of variables, non-linear dependencies and long forecasting horizons. Such forecasting tasks are increasingly addressed with a representation learning approach, by feeding the data into large-scale deep neural networks and letting the model learn the most suitable data representation for the task at hand. This approach, despite its success and effectiveness, often requires considerable computational power and intensive model calibration, and lacks interpretability of the learned model.

The motivation of this thesis is that the potential of more interpretable approaches to multivariate and multiple-step-ahead tasks has not been sufficiently explored, and that the use of complex neural methods is often neither necessary nor advantageous. In this perspective, we explore two multivariate and multiple-step-ahead forecasting strategies based on dimensionality reduction. The first strategy, called Dynamic Factor Machine Learning, is a machine learning extension of a well-known technique in econometrics: it transforms the original high-dimensional multivariate forecasting problem by first extracting a (small) set of latent variables (also called factors) and forecasting each of them independently in a multiple-step-ahead yet univariate manner. Once the multiple-step-ahead forecasts of the factors are computed, the predictions are transformed back to the original space. The second strategy, called Selective Multivariate to Univariate Reduction through Feature Engineering and Selection, addresses the dimensionality issue in the original space and deals with the combinatorial explosion of possible spatial and temporal dependencies by feature selection. The resulting strategy combines expert-based feature engineering, effective filter-based feature selection and ensembles of simple models, in order to develop a set of computationally inexpensive yet effective models.
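To make the two pipelines concrete, a minimal sketch of the factor-based strategy follows. It assumes, purely for illustration, scikit-learn's PCA as the factor extractor, a direct strategy with one ridge regressor per factor and per horizon as the univariate multiple-step-ahead learner, and synthetic data; these choices are placeholders, not necessarily the components studied in the thesis.

\begin{verbatim}
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
Y = rng.normal(size=(500, 50))        # (T, n) synthetic multivariate series
n_factors, lag, horizon = 3, 10, 5    # latent dimension, autoregressive order, steps ahead

# 1. Reduce the n observed variables to a few latent factors.
pca = PCA(n_components=n_factors)
Z = pca.fit_transform(Y)              # (T, n_factors) factor series

# 2. Forecast each factor independently: one univariate regressor
#    per factor and per forecasting horizon (direct strategy).
def embed(z, lag, h):
    """Pair lagged inputs z[t-lag+1..t] with the target z[t+h]."""
    X, y = [], []
    for t in range(lag - 1, len(z) - h):
        X.append(z[t - lag + 1:t + 1])
        y.append(z[t + h])
    return np.array(X), np.array(y)

Z_hat = np.empty((horizon, n_factors))
for j in range(n_factors):
    for h in range(1, horizon + 1):
        X, y = embed(Z[:, j], lag, h)
        Z_hat[h - 1, j] = Ridge().fit(X, y).predict(Z[-lag:, j].reshape(1, -1))[0]

# 3. Map the factor forecasts back to the original n-dimensional space.
Y_hat = pca.inverse_transform(Z_hat)  # (horizon, n) multivariate forecast
\end{verbatim}

The second strategy can be sketched in the same spirit: engineer a large pool of candidate features (here, lagged values of every variable), rank them with a filter, and average an ensemble of simple models fitted on the selected subset. Mutual information as the filter and the ridge/nearest-neighbour pair as the ensemble are again illustrative assumptions rather than the thesis's actual choices.

\begin{verbatim}
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(1)
Y = rng.normal(size=(500, 50))        # (T, n) synthetic multivariate series
lag, h, k = 5, 3, 20                  # lags per variable, horizon, features kept
T, n = Y.shape

# 1. Feature engineering: lagged values of all variables as candidate predictors.
rows = range(lag - 1, T - h)
X = np.array([Y[t - lag + 1:t + 1].ravel() for t in rows])   # (N, n*lag)
y = Y[[t + h for t in rows], 0]       # target: variable 0, h steps ahead

# 2. Filter-based selection: keep the k features with highest mutual information.
scores = mutual_info_regression(X, y, random_state=0)
top = np.argsort(scores)[-k:]

# 3. Ensemble of simple models on the selected features, averaged.
x_last = Y[-lag:].ravel()[top].reshape(1, -1)   # most recent window, selected features
models = [Ridge(), KNeighborsRegressor(n_neighbors=5)]
y_hat = np.mean([m.fit(X[:, top], y).predict(x_last)[0] for m in models])
\end{verbatim}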
The thesis is structured as follows. After the introduction, we present the fundamentals of time series analysis and a review of the state of the art in multivariate, multiple-step-ahead forecasting. Then, we provide a theoretical description of the two original contributions, along with their positioning in the current scientific literature. The final part of the thesis is devoted to their empirical assessment on several synthetic and real data benchmarks (notably from the domains of finance, traffic and wind forecasting) and discusses their strengths and weaknesses. The experimental results show that the proposed strategies are a promising alternative to state-of-the-art models, overcoming their limitations in terms of problem size (in the case of statistical models) and interpretability (in the case of large-scale black-box machine learning models, such as deep learning techniques). Moreover, the findings show the potential of these strategies for large-scale ($>10^2$ variables and $>10^3$ samples) real forecasting tasks, on which they provide competitive results in terms of both computational efficiency and forecasting accuracy with respect to state-of-the-art and deep learning strategies.