IEEE Open Journal of Control Systems

Periodical Publishing

La Jolla, CA

Covering the theory, design, optimization, and applications of dynamic systems and control.

About us

The IEEE Open Journal of Control Systems covers the theory, design, optimization, and applications of dynamic systems and control. The field integrates sensing, communication, decision, and actuation components relevant to the analysis, design, and operation of dynamic systems and control. The systems considered include technological, physical, biological, economic, and organizational entities, and combinations thereof.

Website
https://ojcsys.github.io/
Industry
Periodical Publishing
Company size
11-50 employees
Headquarters
La Jolla, CA
Type
Educational
Founded
2021

Updates

  • Now published in OJ-CSYS: "Distributionally Robust Policy and Lyapunov-Certificate Learning," by Kehan Long, Jorge Cortés and Nikolay Atanasov. Link: https://lnkd.in/g5e8hduc This paper introduces a novel approach to designing neural controllers for uncertain control systems with Lyapunov stability guarantees. We develop a distributionally robust formulation of the Lyapunov derivative constraint, which is transformed into deterministic convex constraints that allow for training a neural network-based controller. This method enables the synthesis of neural controllers and Lyapunov certificates that maintain global asymptotic stability with high confidence, even under out-of-distribution model uncertainties. The performance of our approach is compared with uncertainty-agnostic baselines and several reinforcement learning methods in simulated control problems. #lyapunovmethods #neuralnetworks #controlsystems #uncertainty

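Not the paper's distributionally robust neural formulation, but a classical convex analogue of the same idea, for readers who want a feel for it: the sketch below samples model uncertainties and certifies a single quadratic Lyapunov function and linear gain against all of them via an LMI (the toy matrices, sampling scheme, and use of cvxpy are all assumptions).

```python
# Scenario-based stability certification: a classical convex analogue of
# robust Lyapunov-certificate learning (quadratic V, linear controller).
# All matrices below are hypothetical toy data, not from the paper.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, n_scenarios = 2, 1, 20

A_nom = np.array([[0.0, 1.0], [1.0, 0.0]])   # nominal (unstable) dynamics
B = np.array([[0.0], [1.0]])
# Sampled model uncertainties standing in for the distributional draws.
A_samples = [A_nom + 0.1 * rng.standard_normal((n, n)) for _ in range(n_scenarios)]

# Change of variables Q = P^{-1}, Y = K Q turns the bilinear Lyapunov
# decrease condition into an LMI that must hold for every sampled A_i:
#   A_i Q + Q A_i^T + B Y + Y^T B^T < 0,   Q > 0.
Q = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((m, n))
eps = 1e-3
cons = [Q >> eps * np.eye(n)]
for A in A_samples:
    cons.append(A @ Q + Q @ A.T + B @ Y + Y.T @ B.T << -eps * np.eye(n))
cp.Problem(cp.Minimize(0), cons).solve()

P = np.linalg.inv(Q.value)   # Lyapunov certificate V(x) = x^T P x
K = Y.value @ P              # gain u = K x stabilizing all samples
worst = max(np.linalg.eigvals(A + B @ K).real.max() for A in A_samples)
print(f"worst-case closed-loop spectral abscissa: {worst:.3f} (< 0 means stable)")
```
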
  • Now published in OJ-CSYS: "Stable Inverse Reinforcement Learning: Policies From Control Lyapunov Landscapes," by Samuel Tesfazgi, Leonhard Sprandl, Armin Lederer and Sandra Hirche. Link: https://lnkd.in/gnXxGn76 Learning from expert demonstrations to flexibly program an autonomous system with complex behaviors or to predict an agent's behavior is a powerful tool, especially in collaborative control settings. A common method to solve this problem is inverse reinforcement learning (IRL), where the observed agent, e.g., a human demonstrator, is assumed to behave according to the optimization of an intrinsic cost function that reflects its intent and informs its control actions. While the framework is expressive, the inferred control policies generally lack convergence guarantees, which are critical for safe deployment in real-world settings. We therefore propose a novel, stability-certified IRL approach by reformulating the cost function inference problem to learning control Lyapunov functions (CLF) from demonstrations data. #reinforcementlearning #costs #optimalcontrol

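A minimal sketch of the CLF-from-demonstrations idea, with a quadratic candidate standing in for the paper's Lyapunov landscapes and synthetic spiral trajectories standing in for expert data (the dynamics, decrease rate, and cvxpy formulation below are illustrative assumptions).

```python
# Fitting a quadratic Lyapunov-like function V(x) = x^T P x from
# demonstration transitions, as a minimal stand-in for CLF learning.
# The spiral demonstrations below are synthetic, purely illustrative.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
A = np.array([[0.95, 0.20], [-0.20, 0.95]])   # stable "expert" closed loop
demos = []
for _ in range(10):
    x = rng.standard_normal(2)
    for _ in range(30):
        x_next = A @ x
        demos.append((x, x_next))
        x = x_next

# Require V to decrease by a factor (1 - rho) along every observed
# transition; these constraints are linear in P, so the fit is an SDP.
P = cp.Variable((2, 2), symmetric=True)
rho = 0.05
cons = [P >> np.eye(2)]                       # normalization / positivity
for x, x_next in demos:
    cons.append(x_next @ P @ x_next <= (1 - rho) * (x @ P @ x))
cp.Problem(cp.Minimize(cp.trace(P)), cons).solve()
print("learned certificate P =\n", P.value)
```
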
  • Now published in OJ-CSYS: "Learning to Boost the Performance of Stable Nonlinear Systems," by Luca Furieri, Clara Lucía Galimberti and Giancarlo Ferrari-Trecate. Link: https://lnkd.in/gDvdVTCn The growing scale and complexity of safety-critical control systems underscore the need to evolve current control architectures aiming for the unparalleled performances achievable through state-of-the-art optimization and machine learning algorithms. However, maintaining closed-loop stability while boosting the performance of nonlinear control systems using data-driven and deep-learning approaches stands as an important unsolved challenge. In this paper, we tackle the performance-boosting problem with closed-loop stability guarantees. Specifically, we establish a synergy between the Internal Model Control (IMC) principle for nonlinear systems and state-of-the-art unconstrained optimization approaches for learning stable dynamics. #controlsystems #distributedcontrol #optimalcontrol

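The IMC principle at the heart of the paper is easy to illustrate on a toy scalar plant: the operator Q acts on the reference minus the plant/model mismatch, so static tracking survives model error and the loop stays stable whenever plant, model, and Q are stable. All numbers below are hypothetical.

```python
# Internal Model Control on a toy stable scalar plant: the controller sees
# only the reference minus the plant/model mismatch. Numbers are made up.
a, b = 0.9, 0.5              # stable plant  y+ = a*y + b*u
a_m, b_m = 0.88, 0.5         # slightly wrong internal model
q_gain = (1 - a_m) / b_m     # Q = inverse DC gain of the model

y, y_model, r = 0.0, 0.0, 1.0
for _ in range(50):
    mismatch = y - y_model             # only the model error is fed back
    u = q_gain * (r - mismatch)        # u = Q(r - (y - y_hat))
    y = a * y + b * u                  # true plant
    y_model = a_m * y_model + b_m * u  # internal model run in parallel
print(f"output after 50 steps: {y:.3f} (reference {r})")
```

Despite the deliberately wrong model, the output settles at the reference: the mismatch channel absorbs the model error, which is the property the paper exploits when replacing the simple gain Q with a learned stable operator.
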
  • Now published in OJ-CSYS: "Control of Linear-Threshold Brain Networks via Reservoir Computing," by Michael McCreesh and Jorge Cortés. Link: https://lnkd.in/dbBhVTHG The ability of the brain to exhibit specific activity patterns to correspond with particular behaviors is important to be replicated in any computational model. In regards to a control system model, this corresponds with reference tracking. In this paper we consider a linear-threshold model of the brain and utilize a reservoir computing approach to achieve reference tracking. This is illustrated with simulations of selective attention and seizure replication and control. #brainmodeling #reservoircomputing #biologicalcontrolsystems

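A minimal reservoir-computing recipe with threshold-linear units, loosely matching the paper's setting: the recurrent weights are random and fixed, and only a linear readout is trained by ridge regression. This is generic echo-state machinery under assumed toy parameters, not the authors' brain-network model.

```python
# Reservoir computing with threshold-linear (ReLU) units: fixed random
# recurrent weights, only the linear readout is trained (ridge regression).
import numpy as np

rng = np.random.default_rng(2)
N, T = 200, 1000
W = rng.standard_normal((N, N)) / np.sqrt(N)
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()    # echo-state scaling
w_in = rng.standard_normal(N)

t = np.arange(T)
ref = np.sin(2 * np.pi * t / 100)                # reference to track

# Drive the reservoir with the reference and record its states.
states = np.zeros((T, N))
x = np.zeros(N)
for k in range(T):
    x = np.maximum(0.0, W @ x + w_in * ref[k])   # threshold-linear update
    states[k] = x

# Ridge-regression readout: y_k = w_out . x_k should reproduce ref.
lam = 1e-3
w_out = np.linalg.solve(states.T @ states + lam * np.eye(N), states.T @ ref)
print("readout RMS error:", np.sqrt(np.mean((states @ w_out - ref) ** 2)))
```
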
  • Now published in OJ-CSYS: "Hamilton-Jacobi Reachability in Reinforcement Learning: A Survey," by Milan Ganai, Sicun Gao and Sylvia L. Herbert. Link: https://lnkd.in/gkGqnw6b Recent literature has proposed approaches that learn control policies with high performance while maintaining safety guarantees. Synthesizing Hamilton-Jacobi (HJ) reachable sets has become an effective tool for verifying safety and supervising the training of reinforcement learning-based control policies for complex, high-dimensional systems. Previously, HJ reachability was restricted to verifying low-dimensional dynamical systems primarily because the computational complexity of the dynamic programming approach it relied on grows exponentially with the number of system states. In recent years, a litany of proposed methods addresses this limitation by computing the reachability value function simultaneously with learning control policies to scale HJ reachability analysis while still maintaining a reliable estimate of the true reachable set. These HJ reachability approximations are used to improve the safety, and even reward performance, of learned control policies and can solve challenging tasks such as those with dynamic obstacles and/or with lidar-based or vision-based observations. In this survey paper, we review the recent developments in the field of HJ reachability estimation in reinforcement learning that would provide a foundational basis for further research into reliability in high-dimensional systems. #optimization #reinforcementlearning #robotics

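The exponential-cost dynamic programming backup the survey starts from can be shown in one dimension, where it is cheap: iterate V(x) = min(g(x), max_u V(x')) on a grid until the safe set {V >= 0} stops changing. The dynamics, control bounds, and safe set below are toy assumptions.

```python
# Classical HJ-style safety value iteration on a 1-D grid: the dynamic
# programming backup whose exponential scaling motivates the learning-based
# methods the survey covers.
import numpy as np

xs = np.linspace(-2.0, 2.0, 401)                 # state grid
us = np.linspace(-1.0, 1.0, 21)                  # control grid
dt = 0.05
g = 1.0 - np.abs(xs)                             # safe set {g(x) >= 0} = [-1, 1]

V = g.copy()
for _ in range(200):                             # iterate the safety backup
    # V(x) = min( g(x), max_u V(x + f(x,u) dt) ),  f(x,u) = x + u (unstable)
    best = np.full_like(V, -np.inf)
    for u in us:
        x_next = xs + (xs + u) * dt
        best = np.maximum(best, np.interp(x_next, xs, V))
    V_new = np.minimum(g, best)
    if np.max(np.abs(V_new - V)) < 1e-6:
        break
    V = V_new

safe = xs[V >= 0]
print(f"computed safe set roughly [{safe.min():.2f}, {safe.max():.2f}]")
```
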
  • Now published in OJ-CSYS: "Solving Decision-Dependent Games by Learning From Feedback," by Killian Wood, Ahmed S. Zamzam and Emiliano Dall'Anese. Link: https://lnkd.in/gEKd3uj7 This paper tackles the problem of solving stochastic optimization problems with a decision-dependent distribution in the setting of stochastic strongly-monotone games and when the distributional dependence is unknown. A two-stage approach is proposed, which initially involves estimating the distributional dependence on decision variables, and subsequently optimizing over the estimated distributional map. The paper presents guarantees for the approximation of the cost of each agent. Furthermore, a stochastic gradient-based algorithm is developed and analyzed for finding the Nash equilibrium in a distributed fashion. Numerical simulations are provided for a novel electric vehicle charging market formulation using real-world data. #optimization #games #learning #stochasticprocesses

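A toy rendition of the two-stage approach: a linear mean-shift model stands in for the unknown distributional map, and a two-player quadratic game stands in for the market. All parameters below are hypothetical.

```python
# Two-stage approach to a decision-dependent stochastic game (toy version):
# (1) learn how the noise distribution's mean shifts with decisions,
# (2) run stochastic gradient play using the estimated sensitivity.
import numpy as np

rng = np.random.default_rng(3)
theta = np.array([[0.3, 0.1],
                  [0.2, 0.4]])        # true (unknown) distributional map
b = np.array([1.0, -1.0])             # exogenous mean of the noise

def sample_xi(x):
    """xi ~ N(theta @ x + b, 0.1^2 I): the distribution shifts with x."""
    return theta @ x + b + 0.1 * rng.standard_normal(2)

# Stage 1: probe with random decisions and fit the map by least squares.
X = rng.standard_normal((200, 2))
Xi = np.array([sample_xi(x) for x in X])
Xa = np.hstack([X, np.ones((200, 1))])            # design with intercept
coef, *_ = np.linalg.lstsq(Xa, Xi, rcond=None)
theta_hat = coef[:2].T                            # estimated sensitivity

# Stage 2: stochastic gradient play on cost_i(x) = 0.5*x_i^2 + x_i*xi_i.
# Each gradient adds the estimated dependence d E[xi_i]/d x_i = theta_ii.
x, step = np.zeros(2), 0.05
for _ in range(2000):
    xi = sample_xi(x)
    grad = x + xi + np.diag(theta_hat) * x
    x -= step * grad
print("approximate Nash equilibrium:", np.round(x, 3))
```
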
  • Now published in OJ-CSYS: "Risk-Aware Stochastic MPC for Chance-Constrained Linear Systems," by Pouria Tooranjipour, Bahare Kiumarsi and Hamidreza Modares. Link: https://lnkd.in/gJRhumY2 This paper presents a fully risk-aware model predictive control (MPC) framework for chance-constrained discrete-time linear control systems with process noise. Conditional value-at-risk (CVaR) as a popular coherent risk measure is incorporated in both the constraints and the cost function of the MPC framework. This allows the system to navigate the entire spectrum of risk assessments, from worst-case to risk-neutral scenarios, ensuring both constraint satisfaction and performance optimization in stochastic environments. The recursive feasibility and risk-aware exponential stability of the resulting risk-aware MPC are demonstrated through rigorous theoretical analysis by considering the disturbance feedback policy parameterization. In the end, two numerical examples are given to elucidate the efficacy of the proposed method. #modelpredictivecontrol #optimization #robustconstraints #linearsystem

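CVaR itself has a compact sample-based form via the Rockafellar-Uryasev representation. The snippet below (illustrative cost samples, not the paper's MPC problem) sweeps the risk spectrum the abstract mentions, from risk-neutral to near worst-case.

```python
# Sample-based CVaR via the Rockafellar-Uryasev representation:
#   CVaR_a(L) = min_t  t + E[max(L - t, 0)] / (1 - a),
# minimized at t = VaR_a(L). Illustrative cost samples, not the paper's.
import numpy as np

def cvar(losses, alpha):
    var = np.quantile(losses, alpha)                  # VaR_alpha, optimal t
    return var + np.mean(np.maximum(losses - var, 0)) / (1 - alpha)

rng = np.random.default_rng(4)
stage_costs = rng.normal(1.0, 0.5, size=100_000)      # sampled stage cost

# alpha = 0 recovers the risk-neutral mean; alpha -> 1 approaches the
# worst case over the samples.
for alpha in (0.0, 0.5, 0.9, 0.99):
    print(f"CVaR_{alpha:0.2f} = {cvar(stage_costs, alpha):.3f}")
```
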
  • Now published in OJ-CSYS: "Concurrent Learning of Control Policy and Unknown Safety Specifications in Reinforcement Learning," by Lunet Yifru and Ali Baheri. Link: https://lnkd.in/gUNmVtHh This research addresses a critical challenge in reinforcement learning (RL) - ensuring safety in environments where safety constraints are not predefined. We propose an innovative approach that concurrently learns an optimal control policy and identifies unknown safety constraint parameters. Using a bilevel optimization framework, the method integrates Bayesian optimization for learning safety specifications with constrained RL for policy optimization. Human expert feedback is leveraged to iteratively refine the learned constraints. Experiments across multiple case studies demonstrate the approach's effectiveness in substantially reducing constraint violations while maintaining high performance, closely matching results from scenarios with complete prior knowledge of safety constraints. This work represents an important step towards making RL safer and more applicable in real-world settings where safety requirements may not be fully known in advance. #optimization #lyapunov #controlsystems #safety

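A heavily simplified schematic of the bilevel loop: random search stands in for Bayesian optimization, a one-line solver stands in for constrained RL, and the expert is a simulated oracle. Every name and number below is a hypothetical toy, intended only to show the interaction of the two levels.

```python
# Schematic of concurrent policy + constraint learning: an outer search over
# an unknown constraint parameter, refined by simulated expert feedback,
# around an inner "policy optimization" subproblem.
import numpy as np

rng = np.random.default_rng(5)
C_TRUE = 1.3                       # unknown true safety bound: a <= C_TRUE

def inner_policy(c_hat):
    """Inner 'constrained RL': maximize reward -(a-2)^2 s.t. a <= c_hat."""
    return min(2.0, c_hat)

def expert_feedback(action):
    """Human expert labels a rollout safe iff it respects the true bound."""
    return action <= C_TRUE

lo, hi = 0.0, 3.0                  # interval known to contain the bound
for _ in range(20):
    c_hat = rng.uniform(lo, hi)    # candidate constraint parameter
    a = inner_policy(c_hat)
    if expert_feedback(a):
        lo = max(lo, a)            # safe rollout: bound is at least a
    else:
        hi = min(hi, a)            # unsafe rollout: bound is below a
print(f"estimated safety bound in [{lo:.3f}, {hi:.3f}] (true {C_TRUE})")
print("final certified-safe action:", inner_policy(lo))
```
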
  • Now published in OJ-CSYS: "Sorta Solving the OPF by Not Solving the OPF: DAE Control Theory and the Price of Realtime Regulation," by Muhammad Nadeem and Ahmad F. Taha. Link: https://lnkd.in/gbbgmefV This paper presents a new approach to approximate the AC optimal power flow (ACOPF). By eliminating the need to solve the ACOPF every few minutes, the paper showcases how a realtime feedback controller can be utilized in lieu of ACOPF and its variants. By i) forming the grid dynamics as a system of differential-algebraic equations (DAE) that naturally encode the non-convex OPF power flow constraints, ii) utilizing DAE-Lyapunov theory, and iii) designing a feedback controller that captures realtime uncertainty while being uncertainty-unaware, the presented approach demonstrates promises of obtaining solutions that are close to the OPF ones without needing to solve the OPF. 

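A minimal sketch of the DAE-plus-feedback idea: a semi-explicit DAE whose algebraic variable is solved by Newton's method at every step, with a simple proportional law standing in for the paper's controller. The dynamics, constraint, and gains are toy assumptions, not a power-grid model.

```python
# Minimal semi-explicit DAE with feedback:  x' = f(x, z, u),  0 = g(x, z).
# The algebraic variable z (think power-flow-like coupling) is solved by
# Newton's method each step; u is cheap realtime feedback toward a setpoint.
import numpy as np

def g(x, z):
    return z**3 + z - x            # monotone in z: unique solution

def solve_algebraic(x, z0):
    z = z0
    for _ in range(20):            # Newton iterations on 0 = g(x, z)
        z -= g(x, z) / (3 * z**2 + 1)
    return z

x, z, dt = 2.0, 1.0, 0.01
x_ref, k_fb = 0.5, 4.0             # setpoint from a slow OPF-like layer
for _ in range(1000):
    z = solve_algebraic(x, z)      # enforce the algebraic constraint
    u = -k_fb * (x - x_ref)        # realtime feedback instead of re-solving
    x += dt * (-x + z + u)         # x' = f(x, z, u)
print(f"x -> {x:.3f} (setpoint {x_ref}), algebraic residual {g(x, z):.1e}")
```
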
