University of Alberta
Abstract: This paper proposes a Semi-Centralized Multi-Agent Reinforcement Learning (SCMARL) approach for irrigation scheduling in spatially variable agricultural fields, where management zones are used to address spatial variability. The SCMARL framework is hierarchical, with a centralized coordinator agent at the top level and decentralized local agents at the second level. The coordinator agent makes daily binary irrigation decisions based on field-wide conditions, which are communicated to the local agents. The local agents then determine appropriate irrigation amounts for their specific management zones based on local conditions. The framework employs a state augmentation approach to handle non-stationarity in the local agents' environments. An extensive evaluation on a large-scale field in Lethbridge, Canada, compares the SCMARL approach with a learning-based multi-agent model predictive control scheduling approach and highlights its superior performance in terms of water conservation and Irrigation Water Use Efficiency (IWUE). Notably, the proposed approach achieved 4.0% savings in irrigation water while improving the IWUE by 6.3%.
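The two-level decision flow described in the abstract can be sketched as follows. This is a minimal illustration with placeholder policies and thresholds; the agents' actual learned policies, state definitions, and action bounds are not taken from the paper.

```python
# Hypothetical sketch of the semi-centralized decision flow: a coordinator
# issues a daily binary irrigation decision from field-wide state, and
# zone-level agents choose amounts only when irrigation is switched "on".
import numpy as np

def coordinator_policy(field_state: np.ndarray) -> int:
    """Binary decision: irrigate today (1) or not (0), from field-wide state."""
    # Placeholder rule: irrigate if mean field soil moisture is below a threshold.
    return int(field_state.mean() < 0.25)

def local_policy(zone_state: np.ndarray, irrigate: int) -> float:
    """Zone-level irrigation amount (mm), conditioned on the coordinator's decision."""
    if not irrigate:
        return 0.0
    # Placeholder: deficit-proportional amount, capped at 20 mm.
    deficit = max(0.0, 0.30 - float(zone_state.mean()))
    return min(20.0, 100.0 * deficit)

rng = np.random.default_rng(0)
zones = [rng.uniform(0.1, 0.3, size=4) for _ in range(3)]  # soil moisture per zone
decision = coordinator_policy(np.concatenate(zones))
amounts = [local_policy(z, decision) for z in zones]
```

The key structural point is that the coordinator's binary decision is broadcast to the local agents, so zone-level amounts are always consistent with the field-wide on/off choice.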
Abstract: Monoclonal antibodies (mAbs) have emerged as indispensable assets in medicine and are currently at the forefront of biopharmaceutical product development. However, the growing market demand and the substantial doses required for mAb clinical treatments necessitate significant progress in their large-scale production. Most industrial mAb production processes rely on batch operations, which result in significant downtime. The shift towards a fully continuous and integrated manufacturing process holds the potential to boost product yield and quality while eliminating the extra expenses associated with storing intermediate products. The integrated continuous mAb production process can be divided into upstream and downstream processes. One crucial aspect that ensures the continuity of the integrated process is the switching of the capture columns, which are typically chromatography columns operated in a fed-batch manner in the downstream process. Due to the discrete nature of the switching operation, advanced process control algorithms such as economic model predictive control (EMPC) are computationally difficult to implement, because an integer nonlinear program (INLP) must be solved online at each sampling time. This paper introduces two computationally efficient approaches for EMPC implementation, namely a sigmoid function approximation approach and a rectified linear unit (ReLU) approximation approach, and also explores the application of deep reinforcement learning (DRL). These three methods are compared to the traditional switching approach, which is based on a 1% product breakthrough rule and involves no optimization.
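The core idea of the two approximation approaches can be sketched as replacing the 0/1 switching indicator with a smooth surrogate, so that the EMPC problem becomes a continuous nonlinear program. The threshold and sharpness values below are illustrative assumptions, not the paper's tuned parameters.

```python
# Minimal sketch: smooth surrogates for the discrete column-switching
# indicator (1 if breakthrough exceeds the threshold, else 0), so that
# gradient-based EMPC solvers can handle it without integer variables.
import numpy as np

def switch_sigmoid(breakthrough: float, threshold: float = 0.01, k: float = 500.0) -> float:
    """Sigmoid approximation: ~0 below threshold, ~1 above, differentiable everywhere."""
    return float(1.0 / (1.0 + np.exp(-k * (breakthrough - threshold))))

def switch_relu(breakthrough: float, threshold: float = 0.01, slope: float = 500.0) -> float:
    """ReLU-based ramp approximation of the same indicator, clipped to [0, 1]."""
    return float(np.clip(slope * (breakthrough - threshold), 0.0, 1.0))
```

Larger `k` (or `slope`) makes the surrogate closer to the true step function at the cost of steeper gradients, which is the usual tightness/conditioning trade-off in such relaxations.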
Abstract: The agricultural sector currently faces significant challenges in water resource conservation and crop yield optimization, primarily due to concerns over freshwater scarcity. Traditional irrigation scheduling methods often prove inadequate for large-scale irrigation systems. To address this issue, this paper proposes a predictive irrigation scheduler that leverages the three paradigms of machine learning to optimize irrigation schedules. The proposed scheduler employs k-means clustering to divide the field into distinct irrigation management zones based on soil hydraulic parameters and topography information. A long short-term memory (LSTM) network is then employed to develop a dynamic model for each management zone, enabling accurate predictions of soil moisture dynamics. Formulated as a mixed-integer model predictive control problem, the scheduler aims to maximize water uptake while minimizing overall water consumption and irrigation costs. To tackle the mixed-integer optimization challenge, the proximal policy optimization (PPO) algorithm is used to train a reinforcement learning agent responsible for making the daily binary irrigation decisions. To evaluate the performance of the proposed scheduler, a 26.4-hectare field in Lethbridge, Canada, was chosen as a case study for the 2015 and 2022 growing seasons. The results demonstrate the superiority of the proposed scheduler over a traditional irrigation scheduling method in terms of water use efficiency and crop yield for both growing seasons. Notably, the proposed scheduler achieved water savings ranging from 6.4% to 22.8%, along with yield increases ranging from 2.3% to 4.3%.
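The zone-delineation step can be sketched as clustering per-location feature vectors. The feature columns, zone count, and synthetic data below are illustrative assumptions; the paper's actual features and preprocessing may differ.

```python
# Minimal NumPy k-means sketch for delineating irrigation management zones
# from per-location soil hydraulic parameters and elevation (standardized).
import numpy as np

def kmeans(X: np.ndarray, k: int, iters: int = 100, seed: int = 0) -> np.ndarray:
    """Plain k-means; returns a zone label for each row of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        # Recompute centers; keep old center if a cluster goes empty.
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels

# Columns: saturated hydraulic conductivity, field capacity, elevation (standardized).
rng = np.random.default_rng(1)
features = rng.normal(size=(200, 3))
zone_labels = kmeans(features, k=4)
```

Each label then indexes a management zone, for which a separate zone-level soil moisture model is identified.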
Abstract: Reinforcement learning (RL) is an area of significant research interest, and safe RL in particular is attracting attention due to its ability to handle the safety-driven constraints that are crucial for real-world applications. This work proposes a novel approach to RL training, called control invariant set (CIS) enhanced RL, which leverages the explicit form of the CIS to improve stability guarantees and sampling efficiency. The robustness of the proposed approach is also investigated in the presence of uncertainty. The approach consists of two learning stages: offline and online. In the offline stage, the CIS is incorporated into the reward design, initial state sampling, and state reset procedures, which improves sampling efficiency during offline training. In the online stage, a Safety Supervisor examines the safety of each action and makes necessary corrections: whenever the predicted next-step state falls outside the CIS, which serves as the stability criterion, the action is corrected and the RL agent is retrained. The stability analysis is conducted for both cases, with and without uncertainty. To evaluate the proposed approach, we apply it to a simulated chemical reactor. The results show a significant improvement in sampling efficiency during offline training and a closed-loop stability guarantee in the online implementation, both with and without uncertainty.
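The online Safety Supervisor check can be sketched as a one-step prediction followed by a set-membership test. Everything below is an illustrative stand-in: the CIS is shown as a simple box, the dynamics as a toy linear model, and the fallback as a fixed backup action, none of which are the paper's actual reactor model or CIS.

```python
# Minimal sketch of the Safety Supervisor: if the one-step prediction under
# the RL action leaves the control invariant set, fall back to a safe backup
# action instead of applying the RL action.
import numpy as np

CIS_LOW, CIS_HIGH = np.array([-1.0, -1.0]), np.array([1.0, 1.0])

def in_cis(x: np.ndarray) -> bool:
    """Membership test for the (illustrative) box-shaped CIS."""
    return bool(np.all(x >= CIS_LOW) and np.all(x <= CIS_HIGH))

def predict_next(x: np.ndarray, u: float) -> np.ndarray:
    """Toy linear model standing in for the process model."""
    A = np.array([[0.9, 0.1], [0.0, 0.8]])
    B = np.array([0.0, 1.0])
    return A @ x + B * u

def supervise(x: np.ndarray, u_rl: float, u_backup: float = 0.0) -> float:
    """Accept the RL action only if the predicted next state stays in the CIS."""
    return u_rl if in_cis(predict_next(x, u_rl)) else u_backup
```

In the full scheme, each supervisor intervention also signals that the policy needs correction, triggering online retraining of the RL agent.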