A stochastic Gordon‑Loeb model for optimal cybersecurity investment under clustered attacks

This study extends the Gordon‑Loeb model for cybersecurity investment by incorporating a Hawkes process to capture temporally clustered cyberattacks, reflecting the bursts in which real‑world attacks arrive. The investment problem is formulated as a stochastic optimal control problem in which the defender maximizes expected net benefit through adaptive investment policies that respond to observed attack arrivals. Numerical results show that these dynamic strategies outperform both static policies and Poisson‑based models, which overlook clustering, with the largest gains in high‑risk scenarios. The framework helps risk managers tailor responsive cybersecurity strategies. Future work includes empirical calibration, risk‑averse loss modeling, cyber‑insurance integration, and multivariate Hawkes processes for heterogeneous attack types.
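The abstract's central modeling ingredient is the self‑exciting arrival process. The sketch below is a minimal illustration, not the paper's implementation: it simulates attack times from a univariate Hawkes process with exponential kernel via Ogata's thinning algorithm, a standard sampling method, and contrasts its count dispersion with a rate‑matched Poisson benchmark. The parameters `mu`, `alpha`, `beta`, and `horizon` are illustrative assumptions, not values from the paper.

```python
import numpy as np


def intensity(t, times, mu, alpha, beta):
    """Conditional intensity lambda(t) = mu + sum_{t_i <= t} alpha * exp(-beta * (t - t_i))."""
    if not times:
        return mu
    past = np.asarray(times)
    return mu + alpha * np.exp(-beta * (t - past)).sum()


def simulate_hawkes(mu, alpha, beta, horizon, rng):
    """Sample attack times on [0, horizon] via Ogata's thinning algorithm."""
    times, t = [], 0.0
    while True:
        # Between arrivals the exponential kernel only decays, so the
        # intensity evaluated just after t bounds lambda until the next event.
        lam_bar = intensity(t, times, mu, alpha, beta)
        t += rng.exponential(1.0 / lam_bar)
        if t >= horizon:
            return np.array(times)
        # Accept the candidate time with probability lambda(t) / lam_bar.
        if rng.uniform() < intensity(t, times, mu, alpha, beta) / lam_bar:
            times.append(t)


def fano_factor(times, horizon, bin_width):
    """Variance-to-mean ratio of per-bin counts; ~1 for Poisson, >1 under clustering."""
    bins = np.arange(0.0, horizon + bin_width, bin_width)
    counts, _ = np.histogram(times, bins=bins)
    return counts.var() / counts.mean()


rng = np.random.default_rng(seed=7)
mu, alpha, beta, horizon = 0.5, 0.8, 1.2, 1000.0   # branching ratio alpha/beta < 1
hawkes_times = simulate_hawkes(mu, alpha, beta, horizon, rng)

# Rate-matched homogeneous Poisson benchmark: the stationary Hawkes
# rate is mu / (1 - alpha/beta).
rate = mu / (1.0 - alpha / beta)
poisson_times = np.sort(rng.uniform(0.0, horizon, rng.poisson(rate * horizon)))

print(f"Hawkes : {hawkes_times.size} attacks, Fano {fano_factor(hawkes_times, horizon, 10.0):.2f}")
print(f"Poisson: {poisson_times.size} attacks, Fano {fano_factor(poisson_times, horizon, 10.0):.2f}")
```

The overdispersed Hawkes counts (Fano factor well above 1) are the clustering signal that a dynamic policy of the kind described above could exploit, raising investment while the conditional intensity is elevated after an arrival; a Poisson model, whose Fano factor stays near 1, offers no such signal.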