The RNN-HAR model, integrating Recurrent Neural Networks with the heterogeneous autoregressive (HAR) model, is proposed for Value at Risk (VaR) forecasting. It effectively captures long memory and non-linear dynamics. Empirical analysis from 2000 to 2022 shows RNN-HAR outperforms traditional HAR models in one-step-ahead VaR forecasting across 31 market indices.
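The HAR side of the hybrid is well defined in the literature: it regresses future realized volatility on daily, weekly (5-day), and monthly (22-day) averages of past realized volatility, and RNN-HAR feeds these same lagged averages into a recurrent network instead of a linear regression. A minimal sketch of the feature construction (the helper name and toy series are illustrative; the paper's exact RNN architecture is not specified here):

```python
def har_features(rv, t):
    """Daily, weekly (5-day), and monthly (22-day) realized-volatility averages at day t."""
    daily = rv[t]
    weekly = sum(rv[t - 4:t + 1]) / 5      # mean over the last 5 trading days
    monthly = sum(rv[t - 21:t + 1]) / 22   # mean over the last 22 trading days
    return daily, weekly, monthly

# Toy usage: a constant series makes all three averages equal that constant.
rv = [0.02] * 30
d, w, m = har_features(rv, 29)
```

In the linear HAR model these three averages enter a regression directly; the RNN variant lets the network learn non-linear interactions among them.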
This paper develops a k-generation risk contagion model in a tree-shaped network for cyber insurance pricing. It accounts for contagion location and security level heterogeneity. Using Bayesian network principles, it derives mean and variance of aggregate losses, aiding accurate cyber insurance pricing. Key findings benefit risk managers and insurers.
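Under independence, the mean of the aggregate loss in such a tree has a simple closed form: a node at depth g is compromised only if all g edges on its path transmit, which happens with probability p**g. A minimal sketch with a single contagion probability p per edge (the paper allows heterogeneous security levels, i.e. edge-specific probabilities; the tree and loss figures below are illustrative):

```python
def expected_loss(tree, losses, p, k, node=0, depth=0):
    """Expected aggregate loss over all nodes within k generations of the root.

    Assumes independent transmission with probability p along each edge,
    so a node at depth g is compromised with probability p**g.
    """
    if depth > k:
        return 0.0
    total = (p ** depth) * losses[node]
    for child in tree.get(node, []):
        total += expected_loss(tree, losses, p, k, child, depth + 1)
    return total

# Toy tree: node 0 is the initially attacked root, with two children and one grandchild.
tree = {0: [1, 2], 1: [3]}
losses = {0: 10.0, 1: 5.0, 2: 5.0, 3: 2.0}
ev = expected_loss(tree, losses, p=0.5, k=2)  # 10 + 0.5*5 + 0.5*5 + 0.25*2
```

The variance requires tracking the dependence between nodes on a common path, which is where the Bayesian-network machinery in the paper comes in.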
This paper examines the rise of algorithmic harms from AI, such as privacy erosion and inequality, exacerbated by accountability gaps and algorithmic opacity. It critiques existing legal frameworks in the US, EU, and Japan as insufficient, and proposes refined impact assessments, individual rights, and disclosure duties to enhance AI governance and mitigate harms.
The paper analyzes the EU's Artificial Intelligence Act and its impact on AI regulation in banking and finance. It highlights the Act's potential to enhance governance, address high-risk applications, and the need for better coordination among regulators. Findings suggest challenges remain, including the necessity for adaptive frameworks to ensure ethical AI deployment.
Interdependent economic shocks, modeled through a two-sector approach (banks and insurers), impact the financial system by amplifying initial shocks via feedback mechanisms. Stress tests on UK data show improved profit expectations and reduced tail losses post-COVID-19, with insurers more vulnerable to credit risks and banks to fire sale losses.
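The feedback mechanism can be illustrated with a fixed-point iteration: an initial loss vector s for (banks, insurers) is amplified by a spillover matrix A, where each round sector i absorbs a fraction A[i][j] of sector j's current losses. If the spillovers are not too strong, total losses converge to (I - A)^(-1) s. The numbers below are illustrative, not the paper's UK calibration:

```python
def amplified_losses(s, A, rounds=200):
    """Iterate x <- s + A @ x until (numerical) convergence to (I - A)^(-1) s."""
    x = list(s)
    for _ in range(rounds):
        x = [s[i] + sum(A[i][j] * x[j] for j in range(len(s)))
             for i in range(len(s))]
    return x

s = [1.0, 0.5]                   # initial shock: banks, insurers
A = [[0.0, 0.2], [0.3, 0.0]]     # hypothetical cross-sector spillover intensities
total = amplified_losses(s, A)   # each sector's total loss exceeds its initial shock
```

The gap between `total` and `s` is the amplification due to feedback, which the stress tests in the paper quantify on real balance-sheet data.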
The paper explores credit card fraud detection (CCFD) using machine learning, reviewing algorithms such as K-nearest neighbors, decision trees, random forests, and XGBoost. It compares their performance, finding random forests the most accurate. The study also addresses practical challenges, including imbalanced datasets, data quality, and evolving fraud tactics.
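One common remedy for the class imbalance the paper discusses is random undersampling of the majority (legitimate) class, so the classifier is not trained on an extreme class ratio. A hypothetical sketch (the helper name and toy data are illustrative, not from the paper):

```python
import random

def undersample(X, y, seed=0):
    """Keep all fraud cases (label 1) and an equal-sized random sample of label 0."""
    rng = random.Random(seed)
    fraud = [i for i, label in enumerate(y) if label == 1]
    legit = [i for i, label in enumerate(y) if label == 0]
    keep = fraud + rng.sample(legit, len(fraud))
    return [X[i] for i in keep], [y[i] for i in keep]

# Toy data: 5% fraud rate; the balanced sample is 50/50.
X = [[v] for v in range(100)]
y = [1] * 5 + [0] * 95
Xb, yb = undersample(X, y)
```

The balanced `(Xb, yb)` would then be fed to whichever classifier is being compared, e.g. a random forest.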
This paper emphasizes the need for metrics that assess discriminatory effects in insurance pricing and the trade-offs they entail. It introduces a sensitivity-based measure of proxy discrimination, defines admissible prices and uses the L2-distance to them as the measurement, and proposes local measures for policyholder-specific analysis.
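The L2-distance part of the measure can be sketched on a finite portfolio (the paper defines it on price functions; both price vectors below are hypothetical inputs, with the admissible prices taken as given):

```python
import math

def l2_discrimination(prices, admissible_prices):
    """Root-mean-square gap between actual and admissible prices over a portfolio."""
    n = len(prices)
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(prices, admissible_prices)) / n)

# Toy portfolio of three policyholders.
gap = l2_discrimination([100.0, 110.0, 95.0], [100.0, 105.0, 98.0])
```

A gap of zero would mean the tariff already coincides with the admissible prices; the local measures in the paper instead look at individual terms `(p - a)` per policyholder.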
The paper examines optimal insurance design under $\Lambda$VaR. It finds a truncated stop-loss indemnity to be optimal under the expected value premium principle and provides an explicit expression for the deductible parameter. When $\Lambda'$VaR is used instead, either full insurance or no insurance is optimal. It also addresses model uncertainty, offering solutions for various uncertainty scenarios.
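The expected value premium principle that underlies these results sets the premium at a proportional loading over the expected payout, premium = (1 + theta) * E[I(X)]. A numerical sketch for a plain stop-loss payout (X - d)+ on a discrete loss distribution (the distribution and loading below are illustrative; the paper's closed-form deductible expression is not reproduced here):

```python
def stop_loss_premium(losses, probs, d, theta):
    """(1 + theta) times the expected payout of a stop-loss indemnity (X - d)+."""
    expected_payout = sum(p * max(x - d, 0.0) for x, p in zip(losses, probs))
    return (1 + theta) * expected_payout

# Toy loss distribution: 0 w.p. 0.7, 50 w.p. 0.2, 200 w.p. 0.1.
losses = [0.0, 50.0, 200.0]
probs = [0.7, 0.2, 0.1]
prem = stop_loss_premium(losses, probs, d=40.0, theta=0.2)
```

The optimization in the paper chooses the deductible (and truncation point) so that the insured's $\Lambda$VaR of retained loss plus premium is minimized.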
This paper highlights the risks of assuming a finite mean or variance in statistical models, especially for heavy-tailed datasets such as those in finance. It stresses that infinite-mean models can yield conclusions that differ from, or are opposite to, those obtained under finite-mean assumptions, so classic methods must be applied with caution in finance and insurance.
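The failure mode is easy to demonstrate by simulation: for a Pareto distribution with tail index alpha < 1 the mean is infinite, so sample means do not settle down as the sample grows; the law of large numbers does not apply. A sketch with illustrative parameters:

```python
import random

def pareto_sample_mean(alpha, n, seed):
    """Sample mean of n draws from a Pareto(alpha) distribution on [1, inf)."""
    rng = random.Random(seed)
    # Inverse-transform sampling: X = U**(-1/alpha) with U uniform on (0, 1].
    draws = [(1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]
    return sum(draws) / n

# With alpha = 0.8 < 1 the theoretical mean is infinite: the sample mean
# tends to drift upward with n instead of converging.
means = [pareto_sample_mean(alpha=0.8, n=n, seed=42) for n in (100, 10_000, 1_000_000)]
```

Rerunning with alpha > 1 (finite mean) shows the usual stabilization, which is exactly the contrast the paper warns practitioners to check before applying classic methods.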
The paper introduces a new approach to risk scaling, addressing challenges like limited data and heavy tails in risk assessment. It offers a robust, conservative method for estimating capital reserves, going beyond traditional scaling laws. The proposed framework improves long-term risk estimation, risk transfers, and backtesting performance, with empirical validation.
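The traditional baseline the paper moves beyond is the square-root-of-time scaling law, which converts a one-day risk figure into an h-day one by multiplying by sqrt(h). Shown here only as the classical reference point (the paper's robust alternative is not reproduced; the figures are illustrative):

```python
import math

def sqrt_time_scale(var_1day, horizon_days):
    """Classical square-root-of-time scaling of a one-day VaR to a longer horizon."""
    return var_1day * math.sqrt(horizon_days)

# Toy example: scale a 1-day VaR of 1.5 to a 10-day horizon.
var_10day = sqrt_time_scale(var_1day=1.5, horizon_days=10)
```

This rule is exact only under i.i.d. Gaussian returns; with heavy tails or limited data it can badly misstate long-horizon risk, which motivates the conservative framework the paper proposes.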