The field of artificial intelligence is seeing significant advances in applying Large Language Models (LLMs) to complex decision-making and symbolic computation. Researchers are exploring how LLMs can handle tasks such as resource allocation, symbolic integration, and privacy-aware prediction, with a focus on improving transparency, planning efficiency, and civic engagement. Tree-based deep learning models and fine-tuning techniques are also being investigated as ways to strengthen LLM performance on these tasks. Noteworthy papers in this area include:
- A study that leverages Participatory Budgeting to infer preferences and evaluate LLMs' reasoning capabilities, demonstrating the role of prompt design in mechanism design with unstructured inputs (see the prompt-and-aggregation sketch after this list).
- A paper that proposes a fine-tuning approach for LLMs to improve their symbolic regression capabilities, introducing a dedicated dataset and a heuristic metric to quantify form-level consistency (a form-consistency sketch also follows the list).
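
To make the participatory-budgeting setting concrete, the following is a minimal sketch of how an LLM might be prompted to infer preferences from unstructured citizen comments, with the inferred scores aggregated by a simple greedy budget rule. The project names, the 0-5 scoring prompt, and the `query_llm` stand-in are illustrative assumptions, not the study's actual protocol.

```python
# Sketch: prompt design for preference elicitation from free-text comments,
# followed by a greedy allocation of a fixed budget.
# `query_llm` is a hypothetical stand-in for any chat-completion client.
from typing import Callable

PROJECTS = {"bike lanes": 120_000, "library upgrade": 80_000, "park lighting": 40_000}
BUDGET = 150_000


def build_prompt(comment: str) -> str:
    """Turn an unstructured comment into a structured scoring request."""
    names = ", ".join(PROJECTS)
    return (
        "You are assisting a participatory budgeting process.\n"
        f"Projects: {names}.\n"
        f'Citizen comment: "{comment}"\n'
        "Rate the citizen's support for each project from 0 (opposed) to 5 "
        "(strongly supports). Answer as `project: score`, one per line."
    )


def parse_scores(reply: str) -> dict[str, int]:
    """Extract `project: score` lines from the model's reply."""
    scores = {}
    for line in reply.splitlines():
        if ":" in line:
            name, value = line.rsplit(":", 1)
            if name.strip().lower() in PROJECTS and value.strip().isdigit():
                scores[name.strip().lower()] = int(value.strip())
    return scores


def allocate(comments: list[str], query_llm: Callable[[str], str]) -> list[str]:
    """Aggregate inferred support and fund the top projects that fit the budget."""
    totals = {project: 0 for project in PROJECTS}
    for comment in comments:
        for project, score in parse_scores(query_llm(build_prompt(comment))).items():
            totals[project] += score
    funded, remaining = [], BUDGET
    for project in sorted(totals, key=totals.get, reverse=True):
        if PROJECTS[project] <= remaining:
            funded.append(project)
            remaining -= PROJECTS[project]
    return funded
```

The greedy rule here is just one common aggregation choice; the study's point is that the quality of the inferred preferences, and hence the allocation, depends heavily on how the prompt structures the unstructured input.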
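
For the symbolic regression paper, a form-level consistency heuristic can be illustrated by reducing predicted and reference expressions to structural "skeletons" in which every numeric constant is replaced by a placeholder, so a prediction is rewarded for recovering the right functional form even if its fitted constants differ. This is an assumed stand-in for the paper's metric, whose exact definition is not reproduced here.

```python
# Sketch of a form-level consistency check using sympy skeletons.
import sympy

C = sympy.Symbol("C")  # placeholder for any numeric constant


def skeleton(expression: str) -> sympy.Expr:
    """Parse an expression and replace all numeric constants with C."""
    expr = sympy.sympify(expression)
    constants = expr.atoms(sympy.Number)
    return expr.xreplace({c: C for c in constants})


def form_consistent(predicted: str, reference: str) -> bool:
    """True when both expressions share the same structural skeleton."""
    return skeleton(predicted) == skeleton(reference)


# Constants differ but the functional form matches:
print(form_consistent("2.1*sin(x) + 0.3", "1.7*sin(x) - 4.0"))  # True
# Different functional form:
print(form_consistent("2.1*sin(x) + 0.3", "exp(x) + 1.0"))      # False
```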