IEEE Open Journal of Control Systems

Covering the theory, design, optimization, and applications of dynamic systems and control.

About us

The IEEE Open Journal of Control Systems covers the theory, design, optimization, and applications of dynamic systems and control. The field integrates sensing, communication, decision, and actuation components as relevant to the analysis, design, and operation of dynamic systems and control. The systems considered include technological, physical, biological, economic, and organizational entities, and combinations thereof.

Website
https://ojcsys.github.io/
Industry
Periodical Publishing
Company size
11-50 employees
Headquarters
La Jolla, CA
Type
Educational
Founded
2021

Updates

  • Now published in OJ-CSYS: "Pareto-Optimal Event-Based Scheme for Station and Inter-Station Control of Electric and Automated Buses," by Cecilia Pasquale, Simona Sacone, Silvia Siri and Antonella Ferrara. Link: https://lnkd.in/gqC6-5uK This paper considers electric and automated buses required to follow a given line and respect a given timetable in an inter-city road. The main goal of this work is to design a control scheme in order to optimally decide, in real time, the speed profile of the bus along the line, as well as the dwell and charging times at stops. This must be done by accounting for the traffic conditions encountered in the road and by jointly minimizing the deviations from the timetable and the lack of energy in the bus battery compared with a desired level. For the resulting multi-objective optimal control problem a Pareto front analysis is performed in the paper, also considering a real test case. #optimalcontrol #predictivemodels #electricvehicles #openaccess

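    • A toy illustration (not the paper's event-based control scheme): the trade-off described above, timetable deviation versus battery-energy shortfall, can be exposed with a weighted-sum sweep over a two-objective problem. Every constant and the simplified speed/charging model below are invented for illustration.

      import numpy as np

      # Toy stand-in for the bus problem: pick a cruise speed v [m/s] and a charging
      # time c [s] at one stop, trading off timetable deviation against battery-energy
      # shortfall.  All constants are assumptions made for illustration.
      dist, T_sched = 5000.0, 420.0        # link length [m], scheduled travel time [s]
      E0, E_des = 20.0, 25.0               # initial / desired battery energy [kWh]
      k_drive, k_charge = 1.2e-6, 0.01     # drive energy coefficient, charge rate [kWh/s]

      v_grid = np.linspace(8.0, 20.0, 60)  # candidate cruise speeds
      c_grid = np.linspace(0.0, 300.0, 60) # candidate charging times
      V, C = np.meshgrid(v_grid, c_grid)

      J_time = np.abs(dist / V + C - T_sched)            # deviation from the timetable [s]
      E_end = E0 - k_drive * V**2 * dist + k_charge * C  # terminal battery energy [kWh]
      J_energy = np.maximum(E_des - E_end, 0.0)          # energy shortfall vs. desired level [kWh]

      # Weighted-sum scalarization: sweep the weight and keep each minimizer,
      # which traces an approximation of the Pareto front.
      pareto = set()
      for w in np.linspace(0.01, 0.99, 25):
          idx = np.argmin(w * J_time / J_time.max() + (1 - w) * J_energy / J_energy.max())
          pareto.add((float(J_time.flat[idx]), float(J_energy.flat[idx])))

      for jt, je in sorted(pareto):
          print(f"timetable deviation {jt:7.1f} s   energy shortfall {je:5.2f} kWh")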
  • Now published in OJ-CSYS: "Resilient Synchronization of Pulse-Coupled Oscillators Under Stealthy Attacks," by Yugo Iori and Hideaki Ishii. Link: https://lnkd.in/gXh_RKw6 This paper studies a clock synchronization problem for wireless sensor networks employing pulse-based communication when some of the nodes are faulty or even adversarial. The objective is to design resilient distributed algorithms for the nonfaulty nodes to keep the influence of the malicious nodes minimal and to arrive at synchronization in a safe manner. Compared with conventional approaches, our algorithms are more capable in the sense that they are applicable to networks taking noncomplete graph structures. Our approach is to extend the class of mean subsequence reduced (MSR) algorithms from the area of multi-agent consensus. First, we provide a simple detection method to find malicious nodes that transmit pulses irregularly. Then, we demonstrate that in the presence of adversaries avoiding to be detected, the normal nodes can reach synchronization by ignoring suspicious pulses. #resilientsynchronization #oscillators #distributedalgorithms #controlsystems #OpenAccess

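    • A minimal sketch of the MSR idea on scalar clock offsets (a simplification, not the pulse-based protocol of the paper): each normal node drops the f most extreme neighbor values on each side of its own before averaging, so a single erratic node cannot drag the others away. The network, initial offsets and adversarial signal below are assumptions.

      import numpy as np

      rng = np.random.default_rng(0)

      f, n = 1, 6
      malicious = 5                                    # node 5 broadcasts erratic offsets
      offsets = rng.uniform(0.0, 10.0, n)              # initial clock offsets [ms]
      neighbors = {i: [j for j in range(n) if j != i] for i in range(n)}  # complete graph for simplicity

      def msr_filter(own, received, f):
          """Drop the f smallest values below `own` and the f largest above it (assumes f >= 1)."""
          below = sorted(v for v in received if v < own)
          above = sorted(v for v in received if v > own)
          equal = [v for v in received if v == own]
          return below[f:] + equal + above[:-f]

      for step in range(25):
          sent = offsets.copy()
          sent[malicious] = 100.0 * np.sin(step)       # adversarial broadcast
          new = offsets.copy()
          for i in range(n):
              if i == malicious:
                  continue
              kept = msr_filter(offsets[i], [sent[j] for j in neighbors[i]], f)
              new[i] = np.mean(kept + [offsets[i]])    # average the kept values with its own
          offsets = new

      print("normal-node offsets after 25 rounds:", np.round(offsets[:malicious], 3))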
  • Now published in OJ-CSYS: "A Control-Theoretical Zero-Knowledge Proof Scheme for Networked Control Systems," by Camilla Fioravanti, Christoforos N. Hadjicostis and Gabriele Oliva. Link: https://lnkd.in/g5A2cjiz A novel Zero-Knowledge Proof (ZKP) scheme for networked control systems is presented, which allows a controller to demonstrate to a sensor its knowledge of the system’s dynamic model and its ability to control it, without revealing model information. The scheme is further extended by considering the presence of delays and output noise, and a dual scenario is explored in which the sensor demonstrates its model knowledge to the controller. #networkedcontrolsystems #controlapplications #resilientcontrolsystems #OpenAccess

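    • For readers unfamiliar with zero-knowledge proofs, the textbook Schnorr identification protocol below shows the commit-challenge-response structure such schemes rely on. It is a standard discrete-log example with toy parameters, not the paper's control-theoretic construction, where the prover instead holds the plant's dynamic model.

      import secrets

      # Schnorr identification over a toy group: the prover shows knowledge of the
      # discrete log x of y = g^x without revealing x.
      p, q, g = 23, 11, 4          # subgroup of prime order q = 11 inside Z_23^*
      x = 7                        # prover's secret
      y = pow(g, x, p)             # public value known to the verifier

      def prove_round():
          r = secrets.randbelow(q)             # 1. prover commits to t = g^r
          t = pow(g, r, p)
          c = secrets.randbelow(q)             # 2. verifier sends a random challenge
          s = (r + c * x) % q                  # 3. prover answers using the secret
          return pow(g, s, p) == (t * pow(y, c, p)) % p   # 4. verifier checks g^s = t * y^c

      print("all rounds accepted:", all(prove_round() for _ in range(20)))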
  • Now published in OJ-CSYS: "Global Multi-Phase Path Planning Through High-Level Reinforcement Learning," by Babak Salamat, Sebastian-Sven Olzem, Gerhard Elsbacher and Andrea M. Tonello. Link: https://lnkd.in/gNMPcy8R In this paper, we introduce the Global Multi-Phase Path Planning (GMP3) algorithm in planner problems, which computes fast and feasible trajectories in environments with obstacles, considering physical and kinematic constraints. Our approach utilizes a Markov Decision Process (MDP) framework and high-level reinforcement learning techniques to ensure trajectory smoothness, continuity, and compliance with constraints. Through extensive simulations, we demonstrate the algorithm's effectiveness and efficiency across various scenarios. We highlight existing path planning challenges, particularly in integrating dynamic adaptability and computational efficiency. The results validate our method's convergence guarantees using Lyapunov’s stability theorem and underscore its computational advantages. #reinforcementlearning #trajectory #controlsystems #OpenAccess

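    • A generic illustration of the MDP view of path planning (not the GMP3 algorithm itself): value iteration on a small grid with obstacles, followed by a greedy rollout of the resulting value function. Grid size, obstacles and costs below are made up.

      import numpy as np

      rows, cols = 6, 8
      obstacles = {(1, 3), (2, 3), (3, 3), (4, 5)}
      goal = (5, 7)
      actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]     # up, down, left, right
      gamma, step_cost = 0.95, 1.0

      V = np.zeros((rows, cols))
      for _ in range(200):                             # value iteration sweeps
          V_new = V.copy()
          for r in range(rows):
              for c in range(cols):
                  if (r, c) == goal or (r, c) in obstacles:
                      continue
                  best = -np.inf
                  for dr, dc in actions:
                      nr, nc = r + dr, c + dc
                      if not (0 <= nr < rows and 0 <= nc < cols) or (nr, nc) in obstacles:
                          nr, nc = r, c              # blocked moves keep the agent in place
                      best = max(best, -step_cost + gamma * V[nr, nc])
                  V_new[r, c] = best
          V = V_new

      pos, path = (0, 0), [(0, 0)]                     # greedy rollout from the start cell
      while pos != goal and len(path) < rows * cols:
          candidates = []
          for dr, dc in actions:
              nr, nc = pos[0] + dr, pos[1] + dc
              if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in obstacles:
                  candidates.append(((nr, nc), V[nr, nc]))
          pos = max(candidates, key=lambda cand: cand[1])[0]
          path.append(pos)
      print(path)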
  • Now published in OJ-CSYS: "Leveraging the Turnpike Effect for Mean Field Games Numerics," by René A. Carmona and Claire Zeng. Link: https://lnkd.in/g4aaUDrR Recently, a deep-learning algorithm referred to as Deep Galerkin Method (DGM), has gained a lot of attention among those trying to solve numerically Mean Field Games with finite horizon, even if the performance seems to be decreasing significantly with increasing horizon. On the other hand, it has been proven that some specific classes of Mean Field Games enjoy some form of the turnpike property identified over seven decades ago by economists. The gist of this phenomenon is a proof that the solution of an optimal control problem over a long time interval spends most of its time near the stationary solution of the ergodic version of the corresponding infinite horizon optimization problem. After reviewing the implementation of DGM for finite horizon Mean Field Games, we introduce a “turnpike-accelerated” version that incorporates the turnpike estimates in the loss function to be optimized, and we perform a comparative numerical analysis to show the advantages of this accelerated version over the baseline DGM algorithm. #games #stochasticprocesses #convergence #meanfieldgame #turnpikeeffect #OpenAccess

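    • A toy sketch of the turnpike-acceleration idea (not the authors' DGM / mean-field-game setup): for a scalar finite-horizon problem whose infinite-horizon stationary solution is the origin, an extra loss term penalizes distance from that turnpike over the middle of the horizon. The dynamics, horizon, weight lam and optimizer settings below are assumptions.

      import torch

      # Scalar problem  min  integral of (x^2 + u^2) dt  with  dx/dt = u,  x(0) = 3,
      # whose turnpike is x = u = 0.
      T, N = 8.0, 80
      dt, x0, lam = T / N, 3.0, 0.5

      u = torch.zeros(N, requires_grad=True)            # decision variable: control sequence
      opt = torch.optim.Adam([u], lr=0.05)
      mid = slice(N // 4, 3 * N // 4)                    # "middle" portion of the horizon

      for it in range(800):
          opt.zero_grad()
          x, xs = torch.tensor(x0), []
          for k in range(N):                             # explicit-Euler rollout of dx/dt = u
              x = x + dt * u[k]
              xs.append(x)
          xs = torch.stack(xs)
          base_loss = dt * torch.sum(xs**2 + u**2)       # discretized control cost
          turnpike = lam * dt * torch.sum(xs[mid]**2 + u[mid]**2)  # pull the middle toward 0
          (base_loss + turnpike).backward()
          opt.step()

      print(f"cost {base_loss.item():.2f}, mid-horizon |x| ~ {xs[mid].abs().mean().item():.3f}")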
  • Now published in OJ-CSYS: "Distributionally Robust Policy and Lyapunov-Certificate Learning," by Kehan Long, Jorge Cortés and Nikolay Atanasov. Link: https://lnkd.in/g5e8hduc This paper introduces a novel approach to designing neural controllers for uncertain control systems with Lyapunov stability guarantees. We develop a distributionally robust formulation of the Lyapunov derivative constraint, which is transformed into deterministic convex constraints that allow for training a neural network-based controller. This method enables the synthesis of neural controllers and Lyapunov certificates that maintain global asymptotic stability with high confidence, even under out-of-distribution model uncertainties. The performance of our approach is compared with uncertainty-agnostic baselines and several reinforcement learning methods in simulated control problems. #lyapunovmethods #neuralnetworks #controlsystems #uncertainty

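    • A sampled stand-in for the Lyapunov-decrease requirement (the paper derives deterministic convex reformulations of a distributionally robust constraint; the sketch below only checks a fixed candidate policy and certificate against sampled model perturbations). The system matrices, gain K and matrix P are assumptions.

      import numpy as np

      rng = np.random.default_rng(1)

      A_nom = np.array([[0.0, 1.0], [2.0, -1.0]])       # nominal dynamics (open-loop unstable)
      B = np.array([[0.0], [1.0]])
      K = np.array([[-6.0, -3.0]])                      # candidate linear policy u = K x
      P = np.array([[2.0, 0.5], [0.5, 1.0]])            # candidate certificate V(x) = x' P x

      def lyap_derivative(x, A):
          """dV/dt = 2 x' P (A + B K) x along the closed loop with dynamics matrix A."""
          return float(2.0 * x @ P @ (A + B @ K) @ x)

      violations = 0
      for _ in range(2000):
          dA = 0.1 * rng.standard_normal((2, 2))        # sampled model perturbation
          x = rng.standard_normal(2)
          x /= np.linalg.norm(x)                        # test states on the unit circle
          if lyap_derivative(x, A_nom + dA) > -1e-6:
              violations += 1

      print(f"Lyapunov decrease violated in {violations} of 2000 sampled scenarios")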
  • Now published in OJ-CSYS: "Stable Inverse Reinforcement Learning: Policies From Control Lyapunov Landscapes," by Samuel Tesfazgi, Leonhard Sprandl, Armin Lederer and Sandra Hirche. Link: https://lnkd.in/gnXxGn76 Learning from expert demonstrations to flexibly program an autonomous system with complex behaviors or to predict an agent's behavior is a powerful tool, especially in collaborative control settings. A common method to solve this problem is inverse reinforcement learning (IRL), where the observed agent, e.g., a human demonstrator, is assumed to behave according to the optimization of an intrinsic cost function that reflects its intent and informs its control actions. While the framework is expressive, the inferred control policies generally lack convergence guarantees, which are critical for safe deployment in real-world settings. We therefore propose a novel, stability-certified IRL approach by reformulating the cost function inference problem to learning control Lyapunov functions (CLF) from demonstrations data. #reinforcementlearning #costs #optimalcontrol

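    • A simplified sketch of extracting a stability certificate from demonstrations (not the authors' CLF-learning formulation): fit linear closed-loop dynamics to expert rollouts by least squares, recover a quadratic V(x) = x'Px from a discrete Lyapunov equation, and count how often V decreases along the demonstrated steps. The expert model and noise level below are invented.

      import numpy as np
      from scipy.linalg import solve_discrete_lyapunov

      rng = np.random.default_rng(2)

      A_true = np.array([[0.5, 1.0], [0.0, 0.5]])        # hypothetical expert's closed loop

      def demo(x0, steps=30):
          xs = [x0]
          for _ in range(steps):
              xs.append(A_true @ xs[-1] + 0.01 * rng.standard_normal(2))  # noisy expert rollout
          return np.array(xs)

      demos = [demo(3.0 * rng.standard_normal(2)) for _ in range(10)]

      X = np.vstack([d[:-1] for d in demos])             # states x_k
      Y = np.vstack([d[1:] for d in demos])              # successor states x_{k+1}
      A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T     # least-squares dynamics fit

      P = solve_discrete_lyapunov(A_hat.T, np.eye(2))    # solves A_hat' P A_hat - P = -I

      V = lambda z: z @ P @ z
      decreasing = [V(d[k + 1]) < V(d[k]) for d in demos for k in range(len(d) - 1)]
      print("V decreases on", sum(decreasing), "of", len(decreasing), "demonstrated steps")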
  • Now published in OJ-CSYS: "Learning to Boost the Performance of Stable Nonlinear Systems," by Luca Furieri, Clara Lucía Galimberti and Giancarlo Ferrari-Trecate. Link: https://lnkd.in/gDvdVTCn The growing scale and complexity of safety-critical control systems underscore the need to evolve current control architectures aiming for the unparalleled performances achievable through state-of-the-art optimization and machine learning algorithms. However, maintaining closed-loop stability while boosting the performance of nonlinear control systems using data-driven and deep-learning approaches stands as an important unsolved challenge. In this paper, we tackle the performance-boosting problem with closed-loop stability guarantees. Specifically, we establish a synergy between the Internal Model Control (IMC) principle for nonlinear systems and state-of-the-art unconstrained optimization approaches for learning stable dynamics. #controlsystems #distributedcontrol #optimalcontrol

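    • A minimal discrete-time Internal Model Control loop for a scalar linear plant (illustrating the IMC principle the paper builds on, not its nonlinear, learning-based construction): the controller acts on the reference minus the plant/model mismatch, and choosing the static Q to invert the model's DC gain gives offset-free tracking despite the deliberate model error. All numbers below are assumptions.

      a_plant, b_plant = 0.9, 1.0          # true plant:      y+ = a*y + b*u + d
      a_model, b_model = 0.85, 1.0         # internal model (deliberately mismatched)
      q_gain = (1.0 - a_model) / b_model   # static Q = 1 / (model DC gain)

      ref = 1.0
      y, y_model = 0.0, 0.0
      for k in range(60):
          d = 0.2 if k >= 30 else 0.0                  # step disturbance halfway through
          mismatch = y - y_model                       # plant output minus model output
          u = q_gain * (ref - mismatch)                # IMC control law
          y = a_plant * y + b_plant * u + d            # true plant update
          y_model = a_model * y_model + b_model * u    # internal model update
          if k % 10 == 9:
              print(f"k={k+1:2d}  y={y:6.3f}  u={u:6.3f}  mismatch={mismatch:6.3f}")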