Nowadays, constellations of satellites have to deal with heterogeneous and complex observation requests, such as one-shot, video, stereoscopic, and periodic requests. In this paper, we consider the problem of scheduling these requests in order to maximize a measure of global utility. To solve this problem, we propose two Large Neighborhood Search algorithms that exploit problem decompositions. These algorithms explore large neighborhoods respectively based on heuristic search and Constraint Programming. The experiments performed on instances generated from real constellation features and weather data show that the approaches improve the state of the art.
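The destroy-and-repair loop at the heart of Large Neighborhood Search can be sketched on a toy scheduling instance. This is a minimal illustration only, with hypothetical requests, utilities and a pairwise conflict check standing in for the paper's actual constellation model and its heuristic-search/CP neighborhoods:

```python
import random

# Minimal LNS sketch for observation-request scheduling (illustrative toy,
# not the paper's model): repeatedly destroy part of the incumbent schedule
# and greedily repair it, keeping the best schedule found.

def schedule_utility(schedule, utility):
    return sum(utility[r] for r in schedule)

def conflicts(r, schedule, conflict):
    return any((r, s) in conflict or (s, r) in conflict for s in schedule)

def greedy_repair(schedule, requests, utility, conflict):
    # Reinsert requests by decreasing utility, skipping conflicting ones.
    for r in sorted(requests, key=lambda r: -utility[r]):
        if r not in schedule and not conflicts(r, schedule, conflict):
            schedule.append(r)
    return schedule

def lns(requests, utility, conflict, iters=200, destroy_frac=0.3, seed=0):
    rng = random.Random(seed)
    best = greedy_repair([], requests, utility, conflict)
    for _ in range(iters):
        partial = [r for r in best if rng.random() > destroy_frac]  # destroy
        cand = greedy_repair(partial, requests, utility, conflict)  # repair
        if schedule_utility(cand, utility) > schedule_utility(best, utility):
            best = cand
    return best

# Toy instance: three request types, two of which are mutually exclusive.
utility = {"one_shot": 5, "video": 8, "stereo": 4}
conflict = {("video", "stereo")}
best = lns(list(utility), utility, conflict)
```

In the full approaches, the greedy repair is replaced by heuristic search or Constraint Programming over the destroyed neighborhood, which is what allows much larger neighborhoods to be explored.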
Location: large meeting room in the walkway of building R at ONERA Toulouse.
Cyclic Queuing and Forwarding (CQF) is a mechanism defined by IEEE TSN for providing low jitter in a deterministic network. CQF uses a common time cycle and two buffers per node output port: during one cycle, incoming packets are stored in one buffer while packets in the other buffer are being transmitted; at the end of a cycle, the roles of the two buffers are exchanged. CQF provides very simple bounds on latency and jitter, provided that each buffer is empty at the end of its transmission cycle. The cycle start times are determined by a time offset that may differ for every output buffer. A guard band at both ends of each cycle compensates for misalignment and timing inaccuracies. The proper operation of CQF requires that the guard band and the offsets be computed such that nodes are sufficiently time-aligned.
I will present the work done to configure CQF, show the properties of the parameters related to CQF, and explain their behaviour with respect to the network parameters.
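The two-buffer swap described above can be sketched as a toy model of a single output port. This is a hypothetical illustration of the buffering principle only, not an implementation of the IEEE specification (it ignores offsets, guard bands and link scheduling):

```python
from collections import deque

# Minimal sketch of CQF double buffering on one output port (toy model):
# one buffer fills while the other drains, and the roles swap each cycle.

class CqfPort:
    def __init__(self):
        self.buffers = (deque(), deque())
        self.fill = 0  # index of the buffer receiving packets this cycle

    def enqueue(self, pkt):
        # Incoming packets always go to the filling buffer.
        self.buffers[self.fill].append(pkt)

    def end_of_cycle(self):
        # Transmit everything in the other buffer, then swap the roles.
        tx = self.buffers[1 - self.fill]
        sent = list(tx)
        tx.clear()
        self.fill = 1 - self.fill
        return sent

port = CqfPort()
port.enqueue("p1")           # cycle 0: p1 stored in the filling buffer
sent0 = port.end_of_cycle()  # transmits the (still empty) other buffer
port.enqueue("p2")           # cycle 1
sent1 = port.end_of_cycle()  # now transmits p1
```

In this idealized model a packet is always forwarded exactly one cycle after its arrival, which is where the simple latency and jitter bounds come from.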
We study the state-estimation function in a discrete system. This estimation function lightens the decision task (planning, manual operation) in a system subject to various hazards. We present a model of the state-estimation function, together with our constraint- and preference-based language for modelling estimation strategies. We then present various verification problems, including the presence of deadlocks, and different ways of addressing them. Finally, we report on attempts to apply this tool to multi-robot systems and propose a link to work on operations design.
Current validation and verification (V&V) activities in the aerospace industry mostly rely on time-consuming simulation-based tools. These classical Monte Carlo approaches have been widely used for decades to assess the performance of AOCS/GNC systems containing multiple uncertain parameters. They can quantify the probability of sufficiently frequent phenomena, but they may fail to detect rare but critical combinations of parameters. As the complexity of modern space systems increases, this limitation plays an ever-increasing role. In recent years, model-based worst-case analysis methods have reached a good level of maturity. Without the need for simulations, these tools can fully explore the space of all possible combinations of uncertain parameters and provide guaranteed mathematical bounds on robust stability margins and worst-case performance levels. Problematic parameter configurations identified using these methods can be used to guide the final Monte Carlo campaigns, thereby drastically shortening the standard V&V process. A limitation of classical model-based worst-case analysis methods is that they assume the uncertain parameters can take any value within a given range with equal probability. The probability of occurrence of a worst-case parameter combination is thus not measured, and a system design can therefore be rejected based on a very rare and extremely unlikely scenario. Probabilistic μ-analysis combines worst-case information with probabilistic information, attempting to bridge the analysis gap between efficient Monte Carlo simulations and deterministic μ-analysis. This research advances probabilistic μ-analysis to develop new cheap and reliable tools that improve the characterization of rare but nonetheless possible events, thereby tightening the aforementioned V&V analysis gap. The seminar will focus on the recently developed probabilistic gain, phase, disk and delay margin analysis algorithms.
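The gap between Monte Carlo and worst-case analysis can be illustrated with a toy example, entirely hypothetical and unrelated to the seminar's algorithms: a "system" that fails only when two uncertain parameters both fall in a narrow corner of their range, an event plain Monte Carlo sampling is likely to miss while an exhaustive search over the parameter box finds it:

```python
import random

# Toy illustration (not the seminar's methods): instability occurs only for
# a rare combination of two uniformly distributed parameters in [0, 1).

def unstable(a, b):
    return a > 0.99 and b > 0.99  # rare critical corner, probability 1e-4

rng = random.Random(1)
n = 1000
hits = sum(unstable(rng.random(), rng.random()) for _ in range(n))
mc_estimate = hits / n  # almost certainly 0: the rare event is missed

# A worst-case (here: gridded) search over the parameter box does find it.
grid = [i / 100 for i in range(101)]
worst_found = any(unstable(a, b) for a in grid for b in grid)
```

Probabilistic μ-analysis targets exactly this regime: events too rare for affordable sampling, yet too likely to be dismissed by a pure worst-case verdict.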
Flow models are described by the Navier-Stokes partial differential equations. They present two major difficulties: the dimension, since with partial derivatives every point in space represents a state variable, and the nonlinearities arising from the so-called convective term. Merely simulating such models can take from several hours to several days. Through three examples representative of the two classes of fluid flows, amplifiers and resonators, we will see how these difficulties can be addressed.
In this presentation I will discuss the use case of a commercial optical Earth Observing Satellite (EOS) with the on-board capability to produce and execute autonomous plans for observing areas on the surface of the planet.
Modern EOS applications involve multiple acquisition requests with different degrees of priority, and the need to reason about information available on-board only (e.g. visibility of targets, actual satellite state, exact volume of observation data). The need to move certain satellite functions into the space segment stems from the limitations of mission plans generated by ground control stations, which cannot take these variables into account.
During this talk, I will present the proposed hierarchical architecture for this use case, in which planning and execution deal with the arrival of urgent acquisition requests and other relevant information while meeting several operational requirements from the end-users. The architecture leverages ONERA's Architecture for Autonomy (OARA) actors and skillsets, and HDDL 2.1, a new planning formalism for hierarchical temporal planning derived from PDDL.
Complex engineering applications may require dealing with dynamics that cannot be easily modeled and included in a control framework. In addition, an uncertain environment might expose the system to disturbances that are unforeseeable during the control design phase. In this presentation, a solution to control uncertain constrained systems is proposed. The objective is twofold: track a reference signal in the presence of unmodelled dynamics, and enforce the system state and input constraints. Under the assumption that the linear dominant plant dynamics are available, a Model Reference Adaptive Control strategy is employed to handle the unknown system dynamics. The proposed controller is then enhanced with a Robust Command Governor scheme to enforce the system constraints. Unlike other optimization-based methods, such as Model Predictive Control, Reference Governors (RG) and Command Governors (CG) do not act directly on the closed-loop dynamics; instead, they evaluate the desired reference signal and predict the closed-loop states over a predefined horizon. If constraints are not satisfied, RG and CG compute the closest signal to the desired reference and use it as a virtual reference in the closed-loop system. The optimization-based process is then repeated at each time step. Numerical simulations illustrate the methodology applied to a geostationary satellite subject to unmodelled dynamics.
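The predict-and-replace principle behind a Command Governor can be sketched on a scalar toy system. This is an illustrative stand-in, not the seminar's Robust Command Governor: the closed loop, the constraint and the grid search below are all hypothetical choices made for the sake of a minimal example:

```python
# Minimal scalar Command Governor sketch (illustrative toy): closed loop
# x+ = a*x + (1-a)*v, state constraint |x| <= x_max. At each step the
# governor picks the admissible virtual reference v closest to the
# desired reference r, checked against the predicted trajectory.

A, X_MAX, HORIZON = 0.8, 1.0, 30

def predict_ok(x, v):
    # Simulate the closed loop over the horizon and check the constraint.
    for _ in range(HORIZON):
        x = A * x + (1 - A) * v
        if abs(x) > X_MAX:
            return False
    return True

def governor_step(x, r, grid=200):
    # Grid search over candidate references: a crude stand-in for the
    # optimization step; keep the admissible candidate closest to r.
    candidates = [-X_MAX + 2 * X_MAX * i / grid for i in range(grid + 1)]
    feasible = [v for v in candidates if predict_ok(x, v)]
    return min(feasible, key=lambda v: abs(v - r))

x, r = 0.0, 5.0  # the desired reference would violate the constraint
for _ in range(10):
    v = governor_step(x, r)  # virtual reference, recomputed each step
    x = A * x + (1 - A) * v  # closed-loop update
```

The desired reference r = 5 is infeasible, so the governor saturates the virtual reference at the constraint boundary and the state converges toward it without ever violating the constraint, exactly the "closest admissible signal" behaviour described above.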
The Inria/QuaCS research team specializes in formal tools for reasoning about quantum computing. In this talk, I will present the main research directions of this Université Paris-Saclay team: the language aspect (syntactic and graphical), the specification-and-proof aspect, and the computation-model aspect.
Coverage is an important property for understanding the behaviour of a system during testing, quantifying the quality of a set of observations both in terms of input scenarios and responses from the system. Code coverage criteria are thus often part of the requirements for the validation of safety-critical software. In this seminar, we will explore the use of coverage through two different studies. Test Automation for Coverage (TACO) uses test automation and a code coverage criterion to support the timing analysis of a FADEC control system. Then, Safety Analysis using Simulation-based Situation coverage (SASSI) extends the notion of coverage to situations in order to assess the safety of a collaborative industrial robot.