Abstract: This study explores the risk preferences of Large Language Models (LLMs) and how aligning them with human ethical standards influences their economic decision-making. Analyzing 30 LLMs, we uncover a broad range of inherent risk profiles, from risk-averse to risk-seeking. We then examine how different types of AI alignment, a process that steers models toward human values along the dimensions of harmlessness, helpfulness, and honesty, alter these baseline risk preferences. Alignment shifts LLMs significantly toward risk aversion, with models that incorporate all three ethical dimensions exhibiting the most conservative investment behavior. Replicating a prior study that used LLMs to predict corporate investment from company earnings call transcripts, we demonstrate that although some alignment can improve the accuracy of investment forecasts, excessive alignment results in overly cautious predictions. These findings suggest that deploying excessively aligned LLMs in financial decision-making could lead to severe underinvestment. We underline the need for a nuanced approach that balances the degree of ethical alignment against the specific requirements of economic domains when leveraging LLMs within finance.