Inverse optimisation reverses the usual direction of optimisation: starting from an observed optimal solution, it seeks to determine the objective function and constraints that produced it. In this blog post, we explore three areas of inverse optimisation: single-stage inverse optimisation problems, discrete-time multi-stage optimisation problems, and inverse calculus of variations. These areas have applications in finance, genetics, robotics, physics, and engineering. By studying the inverse problem, we can gain insight into the underlying principles and constraints behind an observed solution and draw inferences from observed behaviour alone.

Optimisation problems are ubiquitous, from finance to healthcare: the goal is to maximise or minimise an objective function while satisfying constraints. Inverse optimisation works backward, identifying the objective function and constraints that lead to an observed optimal solution. This blog post delves into three specific areas of inverse optimisation: single-stage inverse optimisation problems, discrete-time multi-stage optimisation problems, and inverse calculus of variations.

The “inverse knapsack problem” is an example of a single-stage inverse optimisation problem. We are given the optimal solution of the problem – the set of items that were packed into the knapsack – and we want to determine values and weights for the items under which that packing is optimal. This problem arises in domains such as finance and genetics and can be solved using linear programming or convex optimisation techniques, as in the sketch below. See a video I made on the topic here: https://www.youtube.com/watch?v=YRwqjBH-ZRg
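
To make this concrete, here is a minimal sketch of one way a simple variant could be posed as a linear program (using NumPy and SciPy). It assumes the forward problem is the fractional knapsack, whose optimal strategy takes items in decreasing value-to-weight ratio, so an observed selection constrains the unknown values through linear inequalities. The item weights, the observed selection, and the margin parameter are all invented for illustration, and this is only one of many possible formulations.

```python
# Sketch: recover item values consistent with an observed fractional-knapsack
# solution, assuming the weights and the selected items are known.
import numpy as np
from scipy.optimize import linprog

weights = np.array([4.0, 3.0, 2.0, 5.0])   # known item weights (illustrative)
selected = [0, 2]                           # items observed in the optimal pack
unselected = [1, 3]                         # items left out
n = len(weights)
margin = 0.1                                # enforce a strict ratio gap

# The fractional knapsack picks items by decreasing value-to-weight ratio, so we
# require v_i / w_i >= v_j / w_j + margin for every selected i and unselected j.
# Multiplying by the positive weights and rearranging into A_ub @ v <= b_ub:
#   w_i * v_j - w_j * v_i <= -margin * w_i * w_j
A_ub, b_ub = [], []
for i in selected:
    for j in unselected:
        row = np.zeros(n)
        row[i] = -weights[j]
        row[j] = weights[i]
        A_ub.append(row)
        b_ub.append(-margin * weights[i] * weights[j])

# Minimise the total value to pick one representative of the feasible set.
res = linprog(c=np.ones(n), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(1.0, None)] * n)
print("recovered values:", res.x)
```

Any point in the feasible region explains the observed packing equally well; the objective here simply selects one representative, which is a common way to resolve the inherent non-uniqueness of inverse problems.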

Discrete-time optimal control deals with finding the optimal control inputs for a dynamic system over a finite time horizon, subject to constraints. In inverse discrete-time optimal control, we are given the system dynamics, the optimal control inputs, and the resulting system states, and we want to determine the cost function and constraints that led to that solution. This problem has applications in domains such as robotics and finance, and can be solved using numerical optimisation methods, such as gradient-based optimisation or nonlinear programming; a small sketch follows below.
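
As an illustration, here is a minimal sketch of a bilevel, least-squares approach for a linear-quadratic (LQR) special case: the dynamics and an observed optimal control trajectory are assumed known, the control cost is fixed to the identity, and an unknown diagonal state cost is recovered by matching the controls that a candidate cost would generate. The system matrices, horizon, and "true" weights are invented for illustration, and this is only one of several possible formulations.

```python
# Sketch: inverse discrete-time optimal control for an LQR system, recovering
# the diagonal of the state-cost matrix Q from an observed optimal trajectory.
import numpy as np
from scipy.optimize import minimize

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # known dynamics x_{k+1} = A x_k + B u_k
B = np.array([[0.0], [0.1]])
N, x0 = 30, np.array([1.0, 0.0])         # horizon and initial state (illustrative)

def lqr_trajectory(q_diag):
    """Solve the finite-horizon Riccati recursion for cost diag(q_diag), R = I,
    then roll out the resulting optimal controls from x0."""
    Q, R = np.diag(q_diag), np.eye(1)
    P, gains = Q, []
    for _ in range(N):                    # backward Riccati pass
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    gains = gains[::-1]                   # gains ordered from time 0 to N-1
    x, us = x0, []
    for K in gains:                       # forward rollout of the optimal policy
        u = -K @ x
        us.append(u)
        x = A @ x + B @ u
    return np.array(us)

# Pretend these "true" weights generated the observed behaviour.
observed_u = lqr_trajectory(np.array([5.0, 1.0]))

# Inverse problem: recover the weights by matching the observed controls.
def loss(log_q):
    return np.sum((lqr_trajectory(np.exp(log_q)) - observed_u) ** 2)

res = minimize(loss, x0=np.zeros(2), method="Nelder-Mead")
print("recovered Q diagonal:", np.exp(res.x))
```

Fixing R removes the scale ambiguity of the cost (scaling Q and R together leaves the optimal policy unchanged), and parameterising Q through an exponential keeps the candidate weights positive during the search.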

Inverse calculus of variations addresses the inverse problem of the calculus of variations: given the solution of a variational problem, it seeks to infer the integral functional that the solution extremises. This area of study has applications in a variety of fields, including physics, engineering, and economics. One common example is determining whether a given set of equations of motion arises as the Euler-Lagrange equations of some action functional – in other words, testing whether the dynamics could be explained by Lagrangian mechanics. To answer this, we can check an important set of necessary and sufficient conditions called the Helmholtz conditions: if they are satisfied, such a functional exists.
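
For a single second-order equation written as F(t, q, q', q'') = 0, the Helmholtz conditions reduce to the single requirement ∂F/∂q' = d/dt(∂F/∂q''). The sketch below checks this symbolically for the damped harmonic oscillator, a standard example that fails the test whenever the damping coefficient is non-zero. SymPy is assumed, and the specific equation is chosen purely for illustration.

```python
# Sketch: checking the Helmholtz condition for a single second-order ODE.
import sympy as sp

t = sp.symbols('t')
c, k = sp.symbols('c k', positive=True)   # damping and stiffness coefficients
q = sp.Function('q')(t)
qd, qdd = q.diff(t), q.diff(t, 2)

# Equation of motion written as F(t, q, q', q'') = 0: damped harmonic oscillator.
F = qdd + c * qd + k * q

# For one equation, the Helmholtz conditions reduce to dF/dq' == d/dt(dF/dq'').
residual = sp.simplify(F.diff(qd) - F.diff(qdd).diff(t))
print("Helmholtz residual:", residual)    # prints c: the condition fails unless c = 0
```

A non-zero residual means the equation, in this form, is not the Euler-Lagrange equation of any action functional, so its dynamics cannot be read directly as Lagrangian mechanics without first transforming the equation.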

Inverse optimisation is a fascinating area with wide-ranging applications. By examining the inverse problem, we can better understand the underlying principles and constraints that led to an optimal solution, which makes it useful for drawing inferences from observed solutions or limited information. We explored three areas: single-stage inverse optimisation, discrete-time multi-stage inverse optimisation, and inverse calculus of variations. Each presents unique challenges and opportunities that are central to understanding the broader field. Ongoing research is expected to lead to even more exciting applications in the future.

Michael Nefiodovas
University of Western Australia
