Quantitative information extracted from observational data is a necessary input for evaluating, understanding, and designing political reforms. Extracting such information, however, requires appropriate econometric and statistical methods. We will provide and apply such methods for the analysis of high-dimensional data sets as well as for improved inference in relevant model set-ups. This will facilitate the empirical research on the various dimensions of political reforms conducted in this and other projects of the SFB. Our focus will be on the empirical analysis of macroeconomic policies and of political text data. In doing so, we will contribute to two main pillars of the SFB: the evaluation of reforms and the understanding of reform processes. The empirical analyses will also provide valuable information for the design of (economic) policies.
In high-dimensional set-ups, the number of potential explanatory variables used to describe a variable of interest can be very large relative to the sample size or can even exceed the number of observations. Such problems pose numerous challenges for statistical theory, methods, and implementation. Classical techniques cannot be relied upon: they break down under (almost) perfect multicollinearity, and the need to estimate a large number of parameters inflates prediction inaccuracy. So-called model reduction techniques offer one way to deal with the intrinsically complex nature of high-dimensional problems.
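As a purely illustrative sketch (simulated data; the sample sizes, penalty value, and all names are hypothetical and not part of the project), the following Python snippet shows one such model reduction technique, the Lasso, recovering a sparse model in a setting where the number of candidate regressors exceeds the number of observations, so that ordinary least squares is not even identified:

```python
import numpy as np

# Hypothetical example: p = 50 candidate regressors, n = 30 observations,
# only the first 3 regressors are truly relevant.
rng = np.random.default_rng(0)
n, p = 30, 50
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.1 * rng.standard_normal(n)

def lasso_cd(X, y, lam, n_sweeps=200):
    """Lasso (l1-penalised least squares) via cyclic coordinate descent.

    Minimises 0.5 * ||y - X beta||^2 + lam * ||beta||_1 by repeatedly
    soft-thresholding one coordinate at a time.
    """
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_sweeps):
        for j in range(p):
            # partial residual: remove the contribution of all other regressors
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r
            # soft-thresholding sets small coefficients exactly to zero
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta

beta_hat = lasso_cd(X, y, lam=5.0)
print("selected regressors:", np.flatnonzero(np.abs(beta_hat) > 1e-6))
```

The l1 penalty shrinks most coefficients to exactly zero, so the estimated model is sparse even though p > n; the penalty level lam governs the trade-off between sparsity and shrinkage bias and would in practice be chosen by data-driven methods such as cross-validation.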
High-dimensional frameworks are widespread in economics and political science. Two set-ups are of special interest to us. First, we are interested in applying time-varying parameter models and high-dimensional multiple time series models to short time series data in order to analyse macroeconomic policies such as business cycle and monetary policies. Second, we consider political text data such as party manifestos or speeches, which are used for determining political positions and for assessing the (in)stability of the political lexicon in a dynamic context.
To address these empirical set-ups, we suggest new model reduction techniques and improve on existing ones. In particular, we focus on the so-called Lasso approach for panel and time series data. These methodological advances will improve the analysis of dynamic effects while allowing for heterogeneity and exploiting the panel dimension for a higher degree of efficiency. Furthermore, we suggest bootstrap methods in order to provide appropriate, and usually more accurate, inference tools for the analysis of multiple time series and political text data. The non- and semiparametric nature of our methods generates the flexibility needed to address the aforementioned empirical frameworks.
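To illustrate the general idea of bootstrap-based inference for time series (a minimal sketch with simulated data; the AR(1) design, sample size, and number of replications are hypothetical and much simpler than the multiple time series settings of the project), consider a residual bootstrap confidence interval for an autoregressive coefficient:

```python
import numpy as np

# Simulate a hypothetical AR(1) series: y_t = phi * y_{t-1} + e_t.
rng = np.random.default_rng(1)
T, phi_true = 200, 0.6
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi_true * y[t - 1] + rng.standard_normal()

def ar1_ols(y):
    """OLS estimate of the AR(1) coefficient (no intercept)."""
    return (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

phi_hat = ar1_ols(y)
resid = y[1:] - phi_hat * y[:-1]
resid -= resid.mean()  # centre the residuals before resampling

# Residual bootstrap: rebuild series from resampled residuals,
# re-estimate on each pseudo-series, and take percentile bounds.
boot = []
for _ in range(999):
    e = rng.choice(resid, size=T)
    y_b = np.zeros(T)
    for t in range(1, T):
        y_b[t] = phi_hat * y_b[t - 1] + e[t]
    boot.append(ar1_ols(y_b))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"phi_hat = {phi_hat:.3f}, 95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")
```

The resampling mimics the dependence structure of the data so that the resulting interval reflects the actual sampling variability of the estimator; the project's methods extend this logic to multiple time series, count data, and settings with (conditional) heteroskedasticity, where the i.i.d. residual resampling used here would have to be replaced by suitably adapted schemes.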
Together with projects C2, C4, and C6 (formerly B1) we are part of the political text analysis network that is coordinated within project Z1. The methods and models we suggest will be valuable tools for the empirical analyses conducted in these projects. We will develop our own studies on political text data in close cooperation with them. With projects B5, B7, B8, and B9 we share a common interest in econometric methods as well as in the analysis of economic policies and the evaluation of policy reforms.
While Uta Pigorsch will leave the project, Anne Leucht joins the board of principal investigators for a potential second funding period. Her strong expertise in statistical inference for dependent data will be required, for instance, to establish the validity of cross-validation methods in time series settings and to investigate suitable INAR models for (political) text data. Moreover, the project will strongly benefit from Anne Leucht’s excellent knowledge of bootstrap methods for time series and count data, which can be used to design Markov-model bootstrap procedures for text data and to establish bootstrap-aided inference for VAR models under (conditional) heteroskedasticity.