We develop an approach for solving time-consistent risk-sensitive stochastic optimization problems using model-free reinforcement learning (RL). Specifically, we assume agents assess the risk of a sequence of random variables using dynamic convex risk measures. We employ a time-consistent dynamic programming principle to determine the value of a particular policy, and develop policy gradient update rules. We further develop an actor-critic-style algorithm using neural networks to optimize over policies. Finally, we demonstrate the performance and flexibility of our approach by applying it to optimization problems in statistical arbitrage trading and obstacle-avoidance robot control.
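To make the idea of optimizing a convex risk measure with a policy gradient concrete, here is a minimal, hypothetical sketch; it is not the paper's algorithm. It uses conditional value-at-risk (CVaR), one well-known example of a convex risk measure, on a toy two-armed bandit: a score-function ("actor") update nudges the policy away from the arm whose empirical tail risk ("critic" estimate) is larger, even though that arm has the lower mean loss.

```python
# Hypothetical illustration only: risk-sensitive policy gradient on a
# two-armed bandit, minimizing the empirical CVaR of the loss rather
# than its expectation. Names and parameters here are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def cvar(losses, alpha=0.9):
    """Empirical CVaR_alpha: mean of the worst (1 - alpha) fraction of losses."""
    q = np.quantile(losses, alpha)
    return losses[losses >= q].mean()

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sample_loss(arm, n):
    # Arm 0: lower mean loss but a heavy tail; arm 1: higher mean, low variance.
    if arm == 0:
        return rng.normal(0.0, 3.0, n)   # "risky" arm
    return rng.normal(0.5, 0.1, n)       # "safe" arm

theta = np.zeros(2)                      # policy logits
for _ in range(200):
    probs = softmax(theta)
    arm = rng.choice(2, p=probs)
    losses = sample_loss(arm, 100)       # batch of losses under the chosen arm
    risk = cvar(losses)                  # empirical risk of that arm's losses
    grad_logp = np.eye(2)[arm] - probs   # score-function gradient of log pi(arm)
    theta -= 0.05 * risk * grad_logp     # push probability away from high risk

probs = softmax(theta)
print(probs)  # the low-variance arm ends up preferred under CVaR
```

A risk-neutral agent minimizing expected loss would favor arm 0 (mean 0 versus 0.5); under CVaR the policy instead concentrates on the low-variance arm, which is the qualitative behavior a risk-sensitive objective is meant to produce. The paper's setting replaces this toy update with dynamic (time-consistent) risk measures and neural-network function approximation.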