The field of project optimization is moving towards more robust and adaptive methods for handling uncertainty and complexity. Researchers are exploring deep reinforcement learning, graph neural networks, and digital-twin frameworks to improve project scheduling, cost estimation, and schedule control. These approaches have shown promising performance and generalization, yielding policies that remain stable and effective under uncertain project conditions. Noteworthy papers include:
- One that proposes a Double Deep Q-Network (DDQN) approach for maximizing the net present value (NPV) of stochastic projects, outperforming traditional rigid and dynamic baseline strategies (a minimal sketch of the DDQN update appears after this list).
- Another that leverages Graph Neural Networks and Deep Reinforcement Learning to learn an effective task-scheduling policy for the Resource-Constrained Project Scheduling Problem (RCPSP); a sketch of a graph-based scheduling policy also follows the list.
- A study that presents an integrated 4D/5D digital-twin framework for construction cost and schedule control, which automates project-control functions and demonstrates improved estimation accuracy and responsiveness.
- A novel model called HGCN2SP, which uses a hierarchical graph convolutional network for two-stage stochastic programming and delivers high-quality decisions in short computational time.
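
As a rough illustration of the first item, the sketch below shows the core Double DQN update, which decouples action selection (online network) from action evaluation (target network). The state dimension, the action space (which eligible activity to start next), and the reward interpreted as a discounted cash-flow contribution are illustrative assumptions tied to an NPV-style objective, not the paper's actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sizes: the exact state/action encoding is not specified here.
STATE_DIM = 32    # e.g., activity progress, realized durations, elapsed time
N_ACTIONS = 10    # e.g., which eligible activity to start next (or wait)
GAMMA = 0.99      # per-period discount factor, consistent with an NPV objective

class QNetwork(nn.Module):
    """Small MLP mapping a project state to Q-values for each scheduling action."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

online, target = QNetwork(STATE_DIM, N_ACTIONS), QNetwork(STATE_DIM, N_ACTIONS)
target.load_state_dict(online.state_dict())
optimizer = torch.optim.Adam(online.parameters(), lr=1e-4)

def ddqn_update(states, actions, rewards, next_states, done):
    """One Double DQN step: the online net picks the next action,
    the target net evaluates it. `done` is a 0/1 float mask."""
    q_sa = online(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        next_actions = online(next_states).argmax(dim=1, keepdim=True)
        next_q = target(next_states).gather(1, next_actions).squeeze(1)
        targets = rewards + GAMMA * (1.0 - done) * next_q
    loss = F.smooth_l1_loss(q_sa, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```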
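
In the same spirit, a GNN-based policy for the RCPSP can be sketched as follows: activity features are embedded, messages are propagated along the precedence graph, and ineligible activities are masked out before choosing the next one to schedule. The architecture, feature names, and masking scheme are assumptions for illustration only, not the cited paper's method.

```python
import torch
import torch.nn as nn

class GraphPolicy(nn.Module):
    """Illustrative GNN policy: embeds activities of a precedence graph and
    scores which eligible activity to schedule next (placeholder design)."""
    def __init__(self, feat_dim: int, hidden: int = 64, rounds: int = 2):
        super().__init__()
        self.encode = nn.Linear(feat_dim, hidden)
        self.message = nn.ModuleList(nn.Linear(hidden, hidden) for _ in range(rounds))
        self.score = nn.Linear(hidden, 1)

    def forward(self, feats, adj, eligible_mask):
        # feats: [n, feat_dim] activity features (duration, resource demand, ...)
        # adj:   [n, n] row-normalized precedence adjacency
        # eligible_mask: [n] bool, True for activities that may start now
        h = torch.relu(self.encode(feats))
        for layer in self.message:
            h = torch.relu(layer(adj @ h) + h)   # propagate along precedence arcs
        logits = self.score(h).squeeze(-1)
        logits = logits.masked_fill(~eligible_mask, float("-inf"))
        return torch.softmax(logits, dim=0)      # distribution over eligible activities
```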