Central banks' communication strategies shift with the need to be more "persuasive" or to retain more "flexibility". For example, a central bank may wish to stimulate the economy with a clear indication that interest rates will be kept low for an extended period. At the same time, such a commitment can restrict its freedom to respond appropriately to unexpected developments. This trade-off varies with economic conditions, particularly between crisis and normal times. This paper uses natural language processing tools to examine the textual complexity of policy statements from various central banks, deriving not only conventional measures of textual properties such as readability, but also other features, including abstractness, informativeness, and disunity. We find that complexity intensifies during periods of extremely low growth or when economic stimulus is needed. The results also reveal significant geographic variation, with differences driven more by regional context than by language. In addition, a far from negligible share of statements targets households and firms, underscoring the importance of communicating effectively with the general public. By mapping these patterns, the study provides a deeper understanding of how central banks adapt their communication strategies in times of crisis, contributing to the broader literature on central bank communication and credibility.
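As a minimal sketch of the kind of text-based measurement involved, the snippet below computes a standard Flesch reading-ease score for a short policy-style sentence. The syllable counter is a crude heuristic and the example sentence is illustrative, not drawn from the paper's corpus or its exact feature definitions.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count vowel groups, with a floor of one syllable per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Standard Flesch reading ease: higher scores indicate easier text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * len(words) / len(sentences)
            - 84.6 * syllables / len(words))

statement = ("The Committee decided to keep the policy rate unchanged and "
             "expects that economic conditions will warrant exceptionally "
             "low levels of the policy rate for an extended period.")
print(round(flesch_reading_ease(statement), 1))
```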
How do measures derived directly from regulatory texts, especially complexity measures, relate to economic indicators? Can they explain variation that conventional numerical variables cannot? To answer these questions, we first survey methods for extracting information directly from legal texts, covering both existing approaches and newly proposed ones. On the complexity dimension, we find that the new and traditional measures can differ drastically over time and across industries. The new measures can also explain some variation in economic indicators that their traditional counterparts do not capture. Finally, we present methods for extracting textual features beyond complexity, providing a broader view of patterns in legal texts.
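As an illustration of how such incremental explanatory power might be tested, the sketch below compares a regression of an economic outcome on a traditional complexity measure with one that adds a new text-based measure. The dataset and column names (industry_panel.csv, output_growth, traditional_complexity, new_complexity) are hypothetical placeholders, not the paper's actual specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per industry-year with an outcome and two
# text-derived complexity measures.
df = pd.read_csv("industry_panel.csv")

base = smf.ols("output_growth ~ traditional_complexity", data=df).fit()
full = smf.ols("output_growth ~ traditional_complexity + new_complexity", data=df).fit()

# Compare fit and F-test whether the new measure adds explanatory power.
print(base.rsquared_adj, full.rsquared_adj)
print(full.compare_f_test(base))  # (F statistic, p-value, df difference)
```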
One of the main threats to the validity of a regression arises when an independent variable is correlated with the error term, an endogeneity problem that leads to biased estimates. Two-stage least squares (2SLS) is typically used to address this issue, relying on assumptions such as exclusion restrictions and strong instruments. This paper applies a new method proposed by Gautier et al. (2018), known as Sparse Non-Instrumental Variables (SNIV), which uses sparsity to relax the standard 2SLS assumptions about instrument specification and strength. We first adopt the same exclusion restrictions as 2SLS, then relax the assumption about the exact location of the excluded instruments. This marks the first application of SNIV to real-world datasets. The similarities and discrepancies between the two sets of estimates help assess the validity of the assumptions underlying 2SLS. The findings show that, under similar exclusion restrictions, 2SLS and SNIV yield comparable results when instruments are strong, whereas 2SLS produces misleading estimates and standard errors when instruments are weak. Relaxing the exclusion restrictions further sheds light on the behavior of SNIV across datasets that vary in sample size and instrument strength.
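To make the weak-instrument point concrete, the simulation below runs a plain 2SLS on synthetic data with a strong and a weak instrument (the SNIV estimator itself is not reproduced here). The data-generating process and coefficient values are illustrative assumptions, not the paper's datasets.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 2000, 1.0  # sample size and true coefficient on the endogenous regressor

def two_sls(pi: float) -> float:
    z = rng.normal(size=n)                  # instrument
    u = rng.normal(size=n)                  # common shock -> endogeneity
    x = pi * z + u + rng.normal(size=n)     # endogenous regressor; pi sets instrument strength
    y = beta * x + u + rng.normal(size=n)   # outcome
    # Stage 1: project x on the instrument; Stage 2: regress y on the fitted values.
    Z = np.column_stack([np.ones(n), z])
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    X_hat = np.column_stack([np.ones(n), x_hat])
    return np.linalg.lstsq(X_hat, y, rcond=None)[0][1]

print("strong instrument:", round(two_sls(pi=1.0), 3))   # close to the true beta = 1.0
print("weak instrument:  ", round(two_sls(pi=0.02), 3))  # typically far from 1.0
```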