Abstracts
Jean-Pierre CROUZEIX
Revealed Preferences
Abstract: When the preferences of a consumer can be represented by a utility function, his demand correspondence is obtained by maximizing the utility under his budget constraint, given the prices of the goods. In practice, what is known is the demand correspondence obtained from observations, not a hypothetical utility function. Revealed preference theory consists in constructing utility functions from demand correspondences. We present a short state of the art on the question, in the differentiable case as well as in the non-differentiable case. Things are not as simple as they seem, as shown by some pathological examples.
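As a toy numerical illustration of the direct problem (utility to demand), the sketch below maximizes a utility over the budget line and then reads a parameter back off the observed demand. The Cobb-Douglas utility, prices and income are illustrative assumptions of mine, not taken from the abstract.

```python
# Direction "utility -> demand" in a toy setting: demand obtained by
# maximizing a utility over the budget line.  Cobb-Douglas utility,
# prices and income below are illustrative assumptions, not taken
# from the abstract.

def demand_by_maximization(a, p1, p2, m, n=200_000):
    """Brute-force maximization of u(x, y) = x**a * y**(1-a) subject to
    the budget constraint p1*x + p2*y = m, searching x over (0, m/p1)."""
    best_x, best_u = 0.0, -1.0
    for i in range(1, n):
        x = (m / p1) * i / n
        y = (m - p1 * x) / p2
        u = x ** a * y ** (1 - a)
        if u > best_u:
            best_x, best_u = x, u
    return best_x, (m - p1 * best_x) / p2

a, p1, p2, m = 0.3, 2.0, 5.0, 100.0
x_d, y_d = demand_by_maximization(a, p1, p2, m)

# Closed-form Cobb-Douglas demand for comparison: x = a*m/p1, y = (1-a)*m/p2,
# and the exponent a is "revealed" by the observed budget share p1*x/m.
a_revealed = p1 * x_d / m
```

The inverse direction discussed in the talk — recovering a utility from demand data — is here reduced to reading the budget share, which only works because the parametric form was assumed in advance.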
Robert DEVILLE (joint work with Elizabeth STROUSE)
Recurrent points in linear dynamics.
Abstract: Nonlinear phenomena appear in linear dynamics when the underlying Banach space is infinite dimensional. We exhibit several of these phenomena, and we show in particular that on every separable infinite-dimensional Banach space \(X\) one can construct a linear operator \(T\) such that there is no \(x\) in \(X\) for which \(\|T^n(x)\|\) tends to infinity, yet both the set of recurrent points of \(T\) and its complement have nonempty interior.
Jean-Baptiste HIRIART URRUTY
The (classical) mean value theorem: something new from something old
Abstract: The so-called Mean Value Theorem (MVT) or, by another name, the Finite Increment Theorem, is one of my favorite results in Real Analysis. In older times, any mathematician working in Analysis, even among the greatest, produced his own result on the MVT. I myself have written extensively on the subject and continue to be interested in it. What we consider in this presentation is the classical MVT, the one for differentiable functions (real-valued or vector-valued), without entering the world of MVTs for non-differentiable functions, in which we have also happened to wander. The recent periods of lockdown, plus the fact that we could temporarily access databases of sometimes old mathematical articles, gave me the opportunity to revisit dozens, actually hundreds, of references on the subject. I discovered some gems, somewhat forgotten. My presentation will be in the form of questions and answers, a kind of quiz. A few comments will be added on differential calculus and its teaching, as well as on some overly peremptory statements by mathematicians (e.g., J. Dieudonné) about what is possible or important in an MVT.
Alexander IOFFE
Maximum principle for optimal control problems with differential inclusions
Abstract: We shall discuss optimal control problems with dynamics described by differential inclusions with possibly non-convex and unbounded values, and consider the three most popular (and not equivalent) types of minima, namely, the weak minimum, the minimum with respect to the norm topology of \(W^{1,1}\), and the strong minimum. Several sets of corresponding necessary conditions will be presented, with emphasis on the properties of the data that make possible one or another result.
Abderrahim JOURANI
Geometric characterizations of some differentiability concepts of sets
Keywords : Epi-Lipschitzian set, Compactly epi-Lipschitzian set, Clarke Tangent cone, Contingent cone, Clarke subdifferential
In this talk, we will investigate geometric characterizations of two concepts of differentiability:
- Clarke regularity of subanalytic sets;
- Strict Hadamard differentiability of epi-Lipschitzian sets.
In a finite dimensional space \(X\), we will show that, for a closed subanalytic subset \(S\), the Clarke tangential regularity of \(S\) at \(\bar{x} \in S\) is equivalent to the coincidence of the Clarke tangent cone to \(S\) at \(\bar{x}\) with the set \[\mathcal{L}(S, \bar{x}):= \big\{\dot{c}_+(0) \in X: \, c:[0,1]\longrightarrow S\;\;\text{is Lipschitz}, \, c(0)=\bar{x}\big\}.\]
In a Banach space, we will show that, for a set \(S\) that is epi-Lipschitzian at a point \(\bar{x}\) in the boundary of \(S\), the following assertions are equivalent.
Moreover, when \(X\) is finite dimensional, \(Y\) is a Banach space and \(g: X \to Y\) is locally Lipschitz around \(\bar{x}\), we show that \(g\) is strictly Hadamard differentiable at \(\bar{x}\) if and only if \(T(\mathrm{graph}\,g, (\bar{x}, g(\bar{x})))\) is isomorphic to \(X\), if and only if the set-valued mapping \(x\rightrightarrows K(\mathrm{graph}\, g, (x, g(x)))\) is continuous at \(\bar{x}\) and \(K(\mathrm{graph}\, g, (\bar{x}, g(\bar{x})))\) is isomorphic to \(X\), where \(K(A, a)\) denotes the contingent cone to a set \(A\) at \(a \in A\).
Alexander KRUGER
Decoupled infimum and uniform/joint lower semicontinuity
Abstract: We are going to revisit the decoupled infimum approach to optimality conditions and subdifferential calculus developed and discussed in [1-5], as well as some recent developments. Given extended-real-valued functions \(f_1\) and \(f_2\) on a metric space and a subset \(U\), properties of the type \[\inf_U (f_1+f_2) = \liminf_{\substack{d(x_1,x_2)\to 0 \\ x_1,x_2\in U}}\big(f_1 (x_1 )+f_2 (x_2 )\big)\] are of major importance in many areas of analysis and appear (often implicitly) in many publications. The quantity on the right-hand side of the above equality is known as the decoupled (or uniform) infimum, while the property itself is often referred to as uniform lower semicontinuity. The talk is about some extensions of such properties and their consequences.
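A toy numerical comparison of the two sides of the displayed equality (my own example, not from the talk): on \(U=[0,1]\) with the continuous functions \(f_1(x)=x^2\) and \(f_2(x)=(x-1)^2\), the decoupled infimum, approximated by restricting to pairs at distance at most \(\delta\), increases toward \(\inf_U(f_1+f_2)=1/2\) as \(\delta\) shrinks.

```python
# Coupled vs. decoupled infimum on U = [0, 1] for f1(x) = x^2 and
# f2(x) = (x - 1)^2 (an illustrative example, not from the talk).
# Both functions are continuous on the compact set U, so the decoupled
# infimum equals inf_U (f1 + f2) = 1/2, attained at x = 1/2.

def f1(x):
    return x * x

def f2(x):
    return (x - 1.0) ** 2

N = 400
grid = [i / N for i in range(N + 1)]

# Coupled infimum over the grid.
coupled = min(f1(x) + f2(x) for x in grid)

def decoupled(delta):
    """Decoupled infimum approximated over pairs at distance <= delta."""
    return min(f1(x1) + f2(x2)
               for x1 in grid for x2 in grid
               if abs(x1 - x2) <= delta)

# As delta shrinks, the decoupled values increase toward the coupled one.
vals = [decoupled(d) for d in (0.5, 0.1, 0.01)]
```

For discontinuous or non-compactly supported data the two quantities can differ, which is exactly where the uniform lower semicontinuity property becomes a genuine assumption.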
References:
[1] Borwein, J.M., Ioffe, A.: Proximal analysis in smooth spaces. Set-Valued Anal. 4, 1–24 (1996).
[2] Borwein, J.M., Zhu, Q.J.: Viscosity solutions and viscosity subderivatives in smooth Banach spaces with applications to metric regularity. SIAM J. Control Optim. 34(5), 1568–1591 (1996).
[3] Borwein, J.M., Zhu, Q.J.: Techniques of Variational Analysis. Springer, New York (2005).
[4] Lassonde, M.: First-order rules for nonsmooth constrained optimization. Nonlinear Anal. 44(8), 1031–1056 (2001).
[5] Penot, J.-P.: Calculus Without Derivatives. Graduate Texts in Mathematics, vol. 266. Springer, New York (2013).
Juan Enrique MARTINEZ LEGAZ
A mean value theorem for tangentially convex functions
Abstract: The main result is an equality-type mean value theorem for tangentially convex functions, stated in terms of tangential subdifferentials, which generalizes both the classical theorem for differentiable functions and Wegge's theorem for convex functions. The new mean value theorem is then applied, analogously to what is done in the classical case, to characterize, in the tangentially convex setting, Lipschitz functions, functions that are increasing with respect to the ordering induced by a closed convex cone, convexity, and quasiconvexity.
Florent NACRY
Strongly and weakly convex sets: separation and metric properties
Joint works with Samir Adly (Univ. Limoges), Thuong Nguyen (Univ. Perpignan) and Lionel Thibault (Univ. Montpellier)
Abstract: Roughly speaking, a (closed) subset \(C\) of a Hilbert space \(X\) is said to be strongly convex (resp. prox-regular) provided that the farthest-point (resp. nearest-point) multimapping \(\mathrm{Far}_C\) (resp. \(\mathrm{Proj}_C\)) is single-valued on a suitable neighborhood of the set \(C\) and continuous therein.
In this talk, we first establish several ball separation properties between a strongly convex set and a prox-regular set. We then extend to both classes of sets the following fundamental result of convex analysis:
The distance \(d_C(x)\) coincides with the maximum of distances \(d_H(x)\) taken over all hyperplanes \(H\) separating \(C\) and \(x\notin C\) and this maximum is attained for one and only one hyperplane.
We conclude the talk by characterizing the sets whose farthest (resp. usual) distance function is weakly concave (resp. weakly convex).
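The quoted separation result can be checked numerically in a simple case of my choosing (not from the talk): \(C\) the closed unit disk in the plane and \(x=(2,0)\) outside \(C\). Among separating hyperplanes, \(d_H(x)\) is largest for hyperplanes supporting \(C\), so it suffices to scan the tangent lines.

```python
# Numerical check of the separation result for C the closed unit disk in
# R^2 and x = (2, 0) outside C (a toy example, not from the talk).
# Tangent lines to the disk are H_u = {y : <y, u> = 1} with u a unit
# vector; H_u separates C and x exactly when <x, u> > 1, and then
# d_{H_u}(x) = <x, u> - 1.
import math

x = (2.0, 0.0)
d_C = math.hypot(x[0], x[1]) - 1.0     # distance from x to the unit disk

sep = []
for k in range(10_000):
    th = 2.0 * math.pi * k / 10_000
    u = (math.cos(th), math.sin(th))
    s = x[0] * u[0] + x[1] * u[1]
    if s > 1.0:                         # H_u separates C and x
        sep.append(s - 1.0)             # distance from x to H_u

best = max(sep)
# best equals d_C, and the maximum is attained only at u = x/|x| (th = 0),
# matching the uniqueness part of the quoted statement.
```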
Nelly PUSTELNIK
On the strong convexity for the understanding and design of (unfolded) algorithms
Abstract: Being fast, being flexible, handling large-scale data, and relying on a simple architecture are key issues for algorithms to be widely used in applied fields such as image processing. Over the last twenty years, a huge number of proximal algorithms satisfying these constraints have been developed, but identifying the most appropriate one for a specific problem remains a challenging task. One of the simplest tools for comparing algorithmic schemes is the convergence rate, which may come at the price of assumptions such as strong convexity.
Motivated by data processing problems that turn out to be strongly convex, we first establish a regime diagram with respect to Lipschitz and strong convexity constants allowing tight theoretical comparisons between standard first-order proximal schemes (forward-backward, Douglas-Rachford, Peaceman-Rachford). Numerical experiments in the context of signal denoising and texture segmentation illustrate the tightness of these bounds.
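As a hedged sketch of one of the schemes mentioned above, here is a forward-backward (proximal gradient) iteration on a strongly convex toy denoising problem; the problem, data and step size are illustrative choices of mine, not the speaker's.

```python
# Forward-backward iterations on the toy denoising problem
#     min_x  (1/2) * ||x - y||^2 + lam * ||x||_1,
# whose smooth part has strong convexity constant mu = 1 and Lipschitz
# gradient constant L = 1.  Problem, data and step size are illustrative
# choices, not taken from the talk.

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1, applied coordinate-wise."""
    return [max(abs(vi) - t, 0.0) * (1.0 if vi >= 0 else -1.0) for vi in v]

def forward_backward(y, lam, tau=0.5, iters=50):
    x = [0.0] * len(y)
    for _ in range(iters):
        grad = [xi - yi for xi, yi in zip(x, y)]  # gradient of the smooth part
        x = soft_threshold([xi - tau * gi for xi, gi in zip(x, grad)],
                           tau * lam)
        # With mu = L = 1, the gradient step contracts the distance to the
        # minimizer by |1 - tau| and the prox is nonexpansive: linear rate.
    return x

y = [3.0, -0.2, 1.5, 0.05]
lam = 0.5
x_hat = forward_backward(y, lam)

# For this particular problem the minimizer has the closed form
# soft_threshold(y, lam), which lets us check the iterations.
x_star = soft_threshold(y, lam)
```

The strong convexity is what turns the usual \(O(1/k)\) guarantee into the linear rate noted in the comment, which is the kind of regime the talk's diagram compares across schemes.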
In the second part, we will detail how to take advantage of the strong convexity assumption in the design of supervised deep architectures. We explore the possibility of integrating knowledge about proximal algorithms, especially fast procedures, in order to design more efficient and robust neural network architectures. We place ourselves in the classical framework of image denoising and study four unrolled architectures designed from forward-backward iterations in the dual, FISTA in the dual, Chambolle-Pock, and Chambolle-Pock exploiting strong convexity. The performance and stability obtained with each of these networks will be discussed, and a comparison between these architectures and state-of-the-art black-box approaches will also be provided.
Terry ROCKAFELLAR
Variational Analysis of Preference Relations and Utility
Abstract: In microeconomics it’s important to understand when an agent might prefer one vector of goods over another. Economists have agreed on a set of axioms for this, under which preferences can be characterized by utility functions: a vector is better if the utility assigned to it is higher. The axioms make the utility function quasi-concave and perhaps quasi-smooth (in having smooth boundaries to its upper level sets), yet leave it troublingly far from being unique. This is a serious impediment to the use of such preferences in optimization.
In this talk I’ll explain how variational analysis, applied to set-valued preference mappings, is able to identify additional preference axioms, likely acceptable to economists, which guarantee representation by second-order smooth utility functions. Moreover, on arbitrarily large compact convex subsets of the positive orthant of goods vectors, those functions can be taken to be concave, not just quasi-concave, and be unique that way in a minimalist sense, up to affine rescaling (like Celsius versus Fahrenheit).
Claudia Alejandra SAGASTIZABAL
Weak convexity and approximate subdifferentials
Abstract: In “Hunting for a Smaller Convex Subdifferential”, Journal of Global Optimization 10, 305–326 (1997), V. F. Demyanov and V. Jeyakumar describe Jean-Paul Penot as “the most productive hunter of different generalizations of the concept of gradient”. The authors then examine pros and cons of different concepts related to a certain “safari season that started in the Wilderness of Endolandia (the Land of NDO-Nondifferentiable Optimization – a term introduced by M. Balinski)”.
In the talk, we embark on a similar adventure, to define approximate subdifferentials for functions that are weakly convex.
Sylvain SORIN
On no-regret algorithms
Abstract: We study the implications of the “no-regret property” for algorithms in continuous and discrete time for online learning, game theory dynamics and convex optimization. The analysis covers first-order dynamics such as projected gradient, mirror descent and dual averaging.
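A minimal discrete-time illustration of the no-regret property (a toy setup of my choosing, not from the talk): projected online gradient descent on quadratic losses over \([-1,1]\) with step sizes proportional to \(1/\sqrt{t}\), whose average regret against the best fixed action in hindsight stays small.

```python
# Projected online gradient descent on losses f_t(x) = (x - z_t)^2 over
# the interval [-1, 1] -- a toy discrete-time no-regret illustration,
# not taken from the talk.
import math

T = 200
z = [(-1) ** t * 0.5 for t in range(1, T + 1)]    # alternating targets

x, plays = 0.0, []
for t, zt in enumerate(z, start=1):
    plays.append(x)
    grad = 2.0 * (x - zt)                          # gradient of f_t at the play
    x = min(1.0, max(-1.0, x - grad / math.sqrt(t)))  # projected 1/sqrt(t) step

loss = sum((xt - zt) ** 2 for xt, zt in zip(plays, z))

# Best fixed action in hindsight minimizes sum_t (u - z_t)^2, i.e. u = mean(z).
u = sum(z) / T
best = sum((u - zt) ** 2 for zt in z)

avg_regret = (loss - best) / T   # small, and -> 0 as T grows (no regret)
```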
Christiane TAMMER
Optimality conditions in optimization under uncertainty
Keywords: Robust Optimization, Nonlinear Scalarization, Vector Optimization, Set-valued Optimization, Stochastic Optimization, Necessary optimality conditions
Abstract: Most optimization problems involve uncertain data, due to measurement errors, unknown future developments and modeling approximations. Stochastic optimization assumes that the uncertain parameter is probabilistic. Another approach, called robust optimization, assumes that the uncertain parameter belongs to a set known a priori. In this talk, we consider scalar optimization problems under uncertainty with infinite scenario sets. We apply methods from vector optimization in general spaces, set-valued optimization and scalarization techniques to derive necessary optimality conditions for solutions of robust optimization problems.
Lionel THIBAULT
Weak Compactness of Sublevels: Three Consequences of a General Theorem
Abstract: The talk will be concerned with several important consequences of a theorem by Pedro Perez and myself on weak compactness of sublevel sets of extended real-valued functions.
Nadia ZLATEVA and Milen Ivanov
Hadamard Inverse Function Theorem Proved by Variational Analysis
Abstract: We present a proof of the Hadamard Inverse Function Theorem by the methods of Variational Analysis, adapting an idea of I. Ekeland and E. Séré.
Constantin ZALINESCU
Compactly locally uniformly convex functions
Abstract: The Banach space \((X,\left\Vert \cdot\right\Vert )\) is called compactly locally uniformly convex (or rotund) if every sequence \((x_{n})\subset S_{X}\) has a convergent subsequence whenever \(x\in S_{X}\) and \(\left\Vert x_{n} +x\right\Vert \rightarrow 2\); this property was introduced by L. P. Vlasov in 1973. In a natural way, we say that a proper convex function \(f:(X,\left\Vert \cdot\right\Vert )\rightarrow\overline{\mathbb{R}}\) is compactly locally uniformly convex at \(x_{0}\in \operatorname*{dom}f\) if every sequence \((x_{n})\subset\operatorname*{dom}f\) has a convergent subsequence whenever \(\tfrac{1}{2}f(x_{n})+\tfrac{1}{2}f(x_{0})-f\big(\tfrac{1}{2}x_{n}+\tfrac{1}{2}x_{0}\big) \rightarrow 0\). In our talk we present several characterizations of this notion, as well as its relations with other convexity notions for functions.
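As a simple worked example (mine, not from the abstract), consider a Hilbert space and \(f=\left\Vert \cdot\right\Vert^{2}\). Expanding the inner products gives \[\tfrac{1}{2}\left\Vert x_{n}\right\Vert^{2}+\tfrac{1}{2}\left\Vert x_{0}\right\Vert^{2}-\Big\Vert \tfrac{1}{2}x_{n}+\tfrac{1}{2}x_{0}\Big\Vert^{2}=\tfrac{1}{4}\left\Vert x_{n}-x_{0}\right\Vert^{2},\] so the quantity in the definition tends to \(0\) only if \(x_{n}\rightarrow x_{0}\); hence \(\left\Vert \cdot\right\Vert^{2}\) is compactly locally uniformly convex at every point of a Hilbert space.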