Abstract: Optimization modeling via mixed-integer linear programming (MILP) is fundamental to industrial planning and scheduling, yet translating natural-language requirements into solver-executable models, and maintaining those models under evolving business rules, remains highly expertise-intensive. While large language models (LLMs) offer promising avenues for automation, existing methods often suffer from low data efficiency, limited solver-level validity, and poor scalability to industrial-scale problems. To address these challenges, we present EvoOpt-LLM, a unified LLM-based framework supporting the full lifecycle of industrial optimization modeling, including automated model construction, dynamic business-constraint injection, and end-to-end variable pruning. Built on a 7B-parameter LLM adapted via parameter-efficient LoRA fine-tuning, EvoOpt-LLM achieves a generation rate of 91% and an executability rate of 65.9% with only 3,000 training samples; the critical performance gains emerge within the first 1,500 samples. The constraint-injection module reliably augments existing MILP models while preserving the original objectives, and the variable-pruning module enhances computational efficiency, achieving an F1 score of ~0.56 on medium-sized LP models with only 400 samples. EvoOpt-LLM demonstrates a practical, data-efficient approach to industrial optimization modeling, reducing reliance on expert intervention while improving adaptability and solver efficiency.




Abstract: Modern power systems are facing a variety of challenges driven by renewable energy integration, which calls for novel dispatch methods such as reinforcement learning (RL). However, the evaluation of these methods, as well as of the RL agents themselves, remains largely underexplored. In this paper, we propose an evaluation approach to analyze the performance of RL agents in a look-ahead economic dispatch scheme. The approach works by scanning multiple operational scenarios: a scenario generation method is developed to produce network scenarios and demand scenarios for evaluation, and network structures are aggregated according to the change rates of power flow. Several metrics are then defined to evaluate the agents' performance from the perspectives of economy and security. In the case study, we use a modified IEEE 30-bus system to illustrate the effectiveness of the proposed evaluation approach, and the simulation results reveal good and rapid adaptation to different scenarios. The comparison between different RL agents is also informative, offering guidance for a better design of learning strategies.