The talks on this page are ordered alphabetically by the speakers' surnames. The schedule can be consulted on the schedule page.
The invited talks will take place in the Einstein-Saal of the Berlin-Brandenburg Academy of Sciences and Humanities, and the plenary talk by Felipe Cucker and the talks by Héctor Rodríguez and by Jean-Pierre Bourguignon will take place in room MA004 of the Institut für Mathematik of the Technische Universität Berlin. More information about the location can be found on the venue page.
All talks will probably be recorded on video, but the recordings will be released publicly only with the speaker's permission.
In this talk I will summarize a general idea of the relationship between art and mathematics and will then focus on a few instances of this relationship. As far as time allows, I will be generous with examples.
More information about Felipe Cucker can be found at this link.
An explanation of the background and artistic ideas involved in the production of the work Approximation Theory, to be exhibited at this conference.
We discuss the construction of the Henselization of a local ring from the point of view of “constructive mathematics”, in line with the work of Richman, Coquand, and Lombardi. We focus on the multivariate Hensel lemma and on the representation of algebraic power series, which are usually obtained as consequences of Zariski's Main Theorem (ZMT); neither is totally clear from a “constructive point of view”, although both are crucial results in the theory. Our constructive approach, with concrete statements of the hypotheses and algorithmic arguments, allows us to discover a new result, namely a “local Bezout theorem” for abstract henselian rings.
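For orientation, the basic univariate, simple-zero form of the Hensel property (a standard statement, not specific to the talk) reads: a local ring \((A, \mathfrak{m})\) is henselian when, for every monic \(f \in A[X]\) and every \(a \in A\) with
\[ f(a) \in \mathfrak{m} \quad\text{and}\quad f'(a) \in A^{\times}, \]
there is a unique \(\xi \in A\) with \(f(\xi) = 0\) and \(\xi \equiv a \pmod{\mathfrak{m}}\). The multivariate lemma discussed in the talk replaces \(f\) by a system and the condition on \(f'(a)\) by invertibility of the Jacobian.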
This talk is based on several results obtained together with Henri Lombardi, Thierry Coquand, Paco Castro and Herwig Hauser.
The roots of random polynomials and of random polynomial systems have been extensively studied in the past. While the picture is quite complete in the case of polynomials, the case of systems is much harder. Kostlan, Shub and Smale random polynomial systems were introduced in the nineties, and the expectation of the number of their real roots has been known since then. In this talk we will review some recent results on this problem.
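For reference, the classical Shub-Smale result alluded to: for a Kostlan-Shub-Smale random system \(f = (f_1, \dots, f_n)\) in \(n\) variables with degrees \(d_1, \dots, d_n\), the expected number of real roots is the square root of the Bézout number,
\[ \mathbb{E}\,\#\{x \in \mathbb{R}^n : f(x) = 0\} \;=\; \sqrt{d_1 \cdots d_n}. \]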
...where a gorgeous result by Felipe Cucker is revisited, the solution to a famous problem is shown, and a theorem that is not a theorem is proved to be useful, but not necessarily in that order.
Thanks to major advances in cognitive neuroscience, we are on the brink of a scientific understanding of how the brain achieves consciousness. This talk will describe cognitive neuroscientist Bernard Baars' Global Workspace Model (GWM) of the brain, its implications for understanding consciousness, and a novel computer architecture that it inspires. The major contribution of this talk lies in the precise formal model of a Conscious Turing Machine (CTM), aka a Conscious AI (CAI), to be presented.
This is joint work with Avrim Blum.
In this talk I will present a few instances where it has been possible to get artists and mathematicians to contribute to public events based on interactions between them.
Traditionally, the theory of condition numbers assumes errors in the data that are constrained to be tangential to the input space. In this talk, I want to discuss what happens if we remove this constraint. Moreover, I want to discuss the condition number of the following critical point problem: let \(M\) be a manifold embedded in a real Euclidean vector space. The critical point problem for input \(a\) is to compute the critical points on \(M\) of the distance function to \(a\). It turns out that the condition number of this problem can be expressed in terms of classical objects like principal curvatures and the Weingarten map. This is joint work with Nick Vannieuwenhoven.
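Concretely (standard notation, assumed here rather than taken from the abstract): a point \(x \in M\) is a critical point of the distance function for the input \(a\) precisely when the residual is normal to the manifold,
\[ x \in M \quad\text{and}\quad a - x \;\perp\; T_x M, \]
where \(T_x M\) is the tangent space of \(M\) at \(x\).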
Suppose a reductive group \(G\) acts linearly on a finite dimensional complex vector space \(V\). The corresponding null cone, which may be thought of as the set of “singular objects”, consists of those vectors that cannot be distinguished from the zero vector by means of polynomial invariants. The null cone was introduced by Hilbert in his seminal work on invariant theory around 1900.
Quite surprisingly, the computational problem of testing membership in the null cone turned out to be of relevance for geometric complexity theory, quantum information theory, and other areas. Notably, a thorough study of the null cone for the simultaneous left/right action on tuples of matrices was crucial for finding a deterministic polynomial-time algorithm for verifying algebraic identities.
Despite the algebraic nature of the problem, numerical optimization algorithms seem to be the most efficient general methods for solving the null cone problem. This also applies to the related problem of testing membership in moment polytopes, which, e.g., generalizes Horn's problem on the eigenvalues of sums of matrices.
The long-term goal is to find polynomial-time algorithms for these problems. In the special case of abelian groups, such algorithms are provided by linear programming, e.g., by interior-point methods. Condition numbers are likely to play an important role in this endeavour.
This is joint work with Cole Franks, Ankit Garg, Rafael Oliveira, Michael Walter and Avi Wigderson.
We give an overview of results regarding the global evaluation complexity of local nonconvex optimization, focusing on two antipodal algorithmic cases. In particular, we discuss high-order regularisation methods that use higher-order derivative information and can certify higher-order criticality, and that have better complexity and universal properties. We then consider an inexact, probabilistic setting, where derivatives and even function evaluations may only be sufficiently accurate occasionally, but whose worst-case expected complexity can still be quantified. A story of robust methods emerges, with continuously varying, often sharp and sometimes even optimal, complexity.
In statistical learning, the regression problem refers to the search for a function \(u\) which best explains an output \(y\) from an input \(x\), when the joint distribution of \((x,y)\) is unknown and a finite sample of this distribution is available. This setting was explored in depth in the seminal paper by Cucker and Smale. Active learning refers, in contrast, to the setting where the user has the possibility of choosing the values of the input and querying the output. This is the case in many applications, such as sensing physical systems or building response surfaces from numerical codes. In this talk, we review recent results on active learning based on weighted least-squares, which in particular reveal the existence of optimal sampling strategies.
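As a minimal sketch of the estimator involved (the notation is assumed, not fixed by the abstract): given a user-chosen sampling distribution, samples \(x_1, \dots, x_m\) with observed outputs \(y_i\), and an \(n\)-dimensional approximation space \(V_n\), the weighted least-squares estimate is
\[ u_m \;=\; \arg\min_{v \in V_n} \; \frac{1}{m} \sum_{i=1}^{m} w(x_i)\, \big( y_i - v(x_i) \big)^2, \]
where the weight \(w\) compensates for the bias of the sampling; optimal strategies choose the sampling density and \(w\) jointly.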
(Joint work with Colin McDiarmid and Dieter Mitsche)
Suppose that there is a family of \(n\) random points independently and uniformly distributed in a square of area \(n\). We do not see these points, but learn about them in one of the following two ways:
Suppose first that we are given the corresponding random geometric graph \(G\), where distinct vertices \(u\) and \(v\) are adjacent when their Euclidean distance is at most \(r\). Assume that the threshold distance \(r\) satisfies certain conditions with respect to \(n\). We shall see that the following holds with high probability: given the graph \(G\) (without any geometric information), in polynomial time we can approximately reconstruct the hidden embedding, in the sense that, 'up to symmetries', for each vertex \(v\) we find an embedding with 'displacement' at most \(r\).
Now suppose that, instead of being given the graph \(G\), we are given, for each vertex \(v\), the ordering of the other vertices by increasing Euclidean distance from \(v\). Then, with high probability, in polynomial time we can find an embedding with a much smaller displacement error.
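A minimal sketch of the underlying model in Python (the parameter values are illustrative only, not from the talk):

import numpy as np

rng = np.random.default_rng(0)

n = 1000                      # number of points (illustrative)
r = 2.0                       # threshold distance (illustrative)

# n points independently and uniformly distributed in a square of area n
pts = rng.uniform(0.0, np.sqrt(n), size=(n, 2))

# random geometric graph G: u ~ v iff their Euclidean distance is at most r
dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
adj = (dists <= r) & ~np.eye(n, dtype=bool)

# the second scenario keeps only, for each v, the ordering of the others
# by increasing distance (dropping v itself, which comes first at distance 0)
orderings = np.argsort(dists, axis=1)[:, 1:]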
Descartes' rule of signs for univariate real polynomials is a beautifully simple upper bound for the number of positive real roots. Moreover, it gives the exact number of positive real roots when the polynomial is real rooted, for instance, for characteristic polynomials of symmetric matrices. A general multivariate Descartes rule is certainly more complex and still elusive. I will recall the few known multivariate cases and will present a new optimal Descartes rule for polynomials supported on circuits, obtained in collaboration with Frédéric Bihan and Jens Forsgård.
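For the univariate rule, a minimal illustration in Python (the helper below is hypothetical, not from the talk):

# Descartes' rule: the number of positive real roots (with multiplicity)
# is at most the number of sign changes in the coefficient sequence,
# and equals it up to an even number.
def descartes_bound(coeffs):
    signs = [c for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

# x^3 - 3x + 2 = (x - 1)^2 (x + 2): two sign changes, two positive roots
print(descartes_bound([1, 0, -3, 2]))  # -> 2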
I'll describe an extension of the classical algebraic theory of quadratic forms from fields to a broad class of rings. This extension applies to:
It is achieved by: (i) Suitably extending the notion of matrix isometry of quadratic forms, and (ii) Introducing three axioms expressing simple properties of the representation of ring elements by quadratic forms, known to hold in the field case.
Under these axioms the ring-theoretic approach based on this extended notion of isometry coincides with the formal, abstract approach formulated in terms of reduced special groups. Preordered rings \(\langle A, T \rangle\) satisfying these axioms are called \(T\)-faithfully quadratic. This class includes (among others):
Most of the structural properties involving quadratic forms known to hold for fields extend to \(T\)-faithfully quadratic rings. Joint work with F. Miraglia (Universidade de São Paulo, Brazil).
We refer to collections of points and weights where the points belong to a compact Riemannian manifold \(\mathcal{M}\) and the weights are positive real numbers. We say that such a collection is a perfect quadrature formula if we can compute exactly the integral of any function \(f\) (\(f\) belonging to a space of diffusion polynomials) on the manifold \(\mathcal{M}\) by computing the weighted sum of \(f\) evaluated on the points. We will show that obtaining such sets of points for fixed weights is an extremely challenging problem; nevertheless, their existence can be proved using rather simple tools. During this talk, we will review some recent results on this problem.
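In symbols, the requirement on the points \(x_j \in \mathcal{M}\) and the weights \(w_j > 0\) is exactness,
\[ \sum_{j} w_j\, f(x_j) \;=\; \int_{\mathcal{M}} f \, d\mu \qquad \text{for every } f \text{ in the given space of diffusion polynomials}, \]
where \(\mu\) denotes the Riemannian volume measure on \(\mathcal{M}\).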
Bohemian matrices are families of matrices whose entries come from a fixed discrete set of small integers. A symmetric matrix is a correlation matrix if it has ones on the diagonal and its eigenvalues are nonnegative. We will show that the numbers of Bohemian correlation matrices with entries from \(\{0, 1\}\) are linked with the Bell numbers (OEIS A000110), and we will introduce a new characterisation of correlation matrices in order to analyse what happens with these numbers when the entries come from \(\{-1, 0, 1\}\) and to solve some correlation matrix completion problems arising in risk management and insurance.
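A brute-force sketch of the count in the \(\{0, 1\}\) case (the helper is hypothetical and only feasible for very small \(n\)):

import itertools
import numpy as np

def count_01_correlation_matrices(n):
    """Count symmetric n x n matrices with unit diagonal, off-diagonal
    entries in {0, 1}, and nonnegative eigenvalues."""
    iu = np.triu_indices(n, k=1)
    count = 0
    for bits in itertools.product((0.0, 1.0), repeat=n * (n - 1) // 2):
        A = np.eye(n)
        A[iu] = bits
        A.T[iu] = bits            # symmetrize via the transposed view
        if np.linalg.eigvalsh(A).min() >= -1e-12:
            count += 1
    return count

# matches the Bell numbers 1, 2, 5, 15 (OEIS A000110) for n = 1..4
print([count_01_correlation_matrices(n) for n in range(1, 5)])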
Tropical analogs of the combinatorial Nullstellensatz, of universal testing sets for fewnomials, and of the \(\tau\)-conjecture are provided.
While approximation theory in an interval is thoroughly understood, the real line represents something of a mystery. In this talk we review the state of the art in this area, commencing from the familiar Hermite functions and moving to recent results characterising all orthonormal sets in \(L_2(-\infty,\infty)\) that have a skew-symmetric (or skew-Hermitian) tridiagonal differentiation matrix and whose first \(n\) expansion coefficients can be calculated in \(O(n \log n)\) operations. In particular, we describe the generalised Malmquist–Takenaka system. The talk concludes with a (too!) long list of open problems and challenges.
This talk brings many areas together: discrete geometry, statistics, intersection theory, classical algebraic geometry, and geometric modeling. First, we recall the definition of the adjoint of a polytope given by Warren in 1996 in the context of geometric modeling. He defined this polynomial to generalize barycentric coordinates from simplices to arbitrary polytopes. Secondly, we show how this polynomial appears in statistics. It is the numerator of a generating function over all moments of the uniform probability distribution on a polytope. Thirdly, we prove the conjecture that the adjoint is the unique polynomial of minimal degree which vanishes on the non-faces of a simple polytope. In addition, we see that the adjoint appears as the central piece in Segre classes of monomial schemes. Finally, we observe that adjoints of polytopes are special cases of the classical notion of adjoints of divisors. Since the adjoint of a simple polytope is unique, the corresponding divisors have unique canonical curves. In the case of three-dimensional polytopes, we show that these divisors are either \(K3\)- or elliptic surfaces.
This talk is based on joint works with Kristian Ranestad, Boris Shapiro and Bernd Sturmfels.
In the 1957 movie “The Incredible Shrinking Man”, a businessman is exposed to radioactive dust and begins to shrink while researchers try and fail to stop that process. In this talk we will see what happens if, instead of a human being, we shrink the computation model that the researchers study.
We introduce a notion of local analysis meant to capture the behavior of conic condition numbers around a point. This is inspired by the concept of smoothed analysis introduced some years ago by Spielman and Teng, which takes as cost the supremum of the average over small balls around points. Here we are interested in studying directly the average over small balls around points, without taking the supremum over the whole space.
This is joint work with Felipe Cucker.
Research on Smale's 17th problem succeeded in designing polynomial average-time algorithms for numerically computing one root of a polynomial system. The input size of the polynomial system (with respect to which the complexity is measured) is the number of monomials in the dense representation of the system. This is an enormous number, exponential in both the dimension and the degree of the system.
Now, if the system is given as a black-box evaluation program, what can we say about the complexity of computing one root?
We propose a new approach to the theory of conditioning for numerical analysis problems for which both classical and stochastic perturbation theory fail to predict the observed accuracy of computed solutions. As motivation, we present examples of problems that are discontinuous at a given input and have infinite classical and stochastic condition number, but where the solution is still computed to machine precision without relying on structured algorithms. Stimulated by the failure of classical and stochastic perturbation theory in capturing such phenomena, we define and analyse a weak worst-case and a weak stochastic condition number. This new theory is a more powerful predictor of the accuracy of computations.
We propose a complexity theory for approximate real computations. The aim is to extend the BSS theory to allow for condition numbers and for approximate real computations. This is done in two steps, and I plan to explain the first one (BSS theory with exact computations and condition numbers). This will address recent criticisms of the BSS theory as inadequate for describing fractal objects and other morally easy decision problems.
The input size depends on a condition number, which is not assumed known by the machine. This theory admits deterministic and nondeterministic polynomial time recognizable problems. We prove that \(\mathsf{P}\) is not \(\mathsf{NP}\) in this theory if and only if \(\mathsf{P}\) is not \(\mathsf{NP}\) in the BSS theory over the reals.
If time allows, I will outline the theory with approximate computations. This theory also admits classes \(\mathsf{P}\) and \(\mathsf{NP}\), as well as an \(\mathsf{NP}\)-complete problem. The \(\mathsf{P}\) vs \(\mathsf{NP}\) question in this new theory is related (but not known to be equivalent) to the classical \(\mathsf{P}\) vs \(\mathsf{NP}\) problem.
This is joint work with Mike Shub.
There is a long and fruitful interplay between the fields of logic and computation. One of its examples is the field of descriptive complexity, which is also among the many fields in which Felipe Cucker worked (and works). In this talk we want to review some of the results in this area, with particular emphasis on joint work done many years ago with Felipe Cucker.
What is the complexity of multivariate polynomial computations such as polynomial multiplication, derivation and integration?
In 1-D, interpolation at Chebyshev nodes enjoys an optimal logarithmic Lebesgue constant and allows the computation of polynomial multiplication, derivation and quadrature via the Fast Fourier Transform (FFT). In 2-D, the Padua points are known to be unisolvent (just enough points to represent all bivariate polynomials up to a given order uniquely), and they also enjoy optimal Lebesgue numbers.
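As a small illustration of the 1-D statement, Chebyshev expansion coefficients can be computed from samples via the FFT (a standard trick; the helper below is hypothetical, not from the talk):

import numpy as np

def cheb_coeffs(f, n):
    # sample f at the Chebyshev-Lobatto points x_k = cos(pi*k/n), k = 0..n
    k = np.arange(n + 1)
    v = f(np.cos(np.pi * k / n))
    # an even periodic extension turns the Chebyshev transform into an FFT
    V = np.fft.rfft(np.concatenate([v, v[-2:0:-1]])).real / n
    c = V[:n + 1].copy()
    c[0] /= 2
    c[n] /= 2
    return c

# f(x) = 2x^2 - 1 is T_2(x): coefficients (0, 0, 1, 0, 0)
print(np.round(cheb_coeffs(lambda x: 2 * x**2 - 1, 4), 12))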
We will in this talk discuss another set of unisolvent points for multivariate polynomial interpolation in any dimension, again with optimal Lebesgue numbers, which allow for fast (\(N \log N\)) computations of multiplication, derivation and quadrature via FFT and recursions. The points are based on the theory of Weyl groups and root lattices, an area of mathematics which originates from the representation theory of Lie groups.
It sounds like a perfect computational device, but we will discuss some issues which so far have limited the practical use of these interpolation points.
A classical result of Alan Hoffman from 1952 shows that if a point almost satisfies a system of linear inequalities, then it is near an actual solution to the system of linear inequalities. We provide a novel characterization of the sharpest Hoffman constant associated to this "almost implies near" relationship. Our characterization extends to the more general case when some inequalities are easy to satisfy as in the case of box constraints. Our characterization also yields a procedure to compute the Hoffman constant – a notoriously difficult and largely unexplored computational challenge.
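In symbols (one standard formulation; the notation is assumed here): for a system \(Ax \le b\) with nonempty solution set \(S\), Hoffman's bound states that
\[ \operatorname{dist}(x, S) \;\le\; H(A)\, \big\| (Ax - b)_{+} \big\| \qquad \text{for all } x, \]
where \((\cdot)_{+}\) is the componentwise positive part; the sharpest such \(H(A)\) is the Hoffman constant.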
Given a finite collection of convex sets for each of which an interior point is known, we present an elementary algorithm for computing a point in the intersection, and bound the algorithm’s complexity in terms of a natural condition number.
We begin with a very short story about how Felipe Cucker and Steve Smale helped Maurice meet a Brazilian historian in Barcelona. We then see an efficient reduction of point counting for hypersurfaces over \(\mathbb{Z}/(p^k)\) to point counting over prime fields, generalizing earlier work of Cheng, Dwivedi, Gao, Mittal, Saxena, and Wan. This leads to a new, much simpler proof of the rationality of Igusa zeta functions. The latter zeta functions are related to \(p\)-adic integration (thanks to work of Igusa and Denef) and an unusual class of pseudorandom generators (due to Anshel and Goldfeld). This is joint work with Caleb Robelle and Yuyu Zhu.
The facial structure of convex sets can be surprisingly complex, and unexpected irregularities of the arrangements of faces give rise to badly behaved sets and various counterexamples. I will focus on specific properties that capture irregularities (dimensions of faces, singularity degree, facial exposure and facial dual completeness) and will introduce some constructive conditions that can be used to detect irregular facial arrangements.
This talk is based on joint work with Prof. Levent Tunçel, University of Waterloo.
Using subresultants, we modify a recent real-algebraic proof due to Eisermann of the Fundamental Theorem of Algebra ([FTA]) to obtain the following quantitative information: in order to prove the [FTA] for polynomials of degree \(d\), the Intermediate Value Theorem ([IVT]) is required to hold for real polynomials of degree at most \(d^2\). We also explain that the classical proof due to Laplace requires [IVT] for real polynomials of exponential degree. These quantitative results highlight the difference in nature of these two proofs. See arXiv:1803.04358v3.
This is joint work with Daniel Perrucci.
In part one I will describe the pitchfork bifurcation from an index-centered approach. This is joint work with Pujals and Yang, generalizing the approach of Smale and Rajapakse.
In part two I will discuss a complexity theory of decision problems in exact arithmetic whose input size depends on the condition number as well as the dimension. The polynomial class will only need to output yes for the set and will not need to terminate on the complement. This approach relates to the approaches of Braverman and Yampolsky and of Cucker and Smale. It is joint work with Gregorio Malajovich.
Certain cells in the heart called myocytes, put into a Petri dish, will oscillate individually until they come close together, at which point they start to beat in unison. The mathematics goes back more than 350 years to Huygens and is still trying to give a good picture. I have been working with Indika Rajapakse and Charles Pugh on these things, and we have been led to a fascinating complex of problems and insights into the dynamics of the heartbeat.
In statistical mechanics, the independence polynomial of a graph arises as the partition function of the hard-core lattice gas model. The distribution of the zeros of these polynomials in the thermodynamic limit is expected to be relevant for the study of the model and, in particular, for the determination of its phase transitions. In this talk, I will review the known results on the location of these zeros, with emphasis on the case of rooted regular trees of fixed degree and varying depth. Our main result states that for these graphs, the zero sets of their independence polynomials converge as the depth increases towards the bifurcation measure of a certain family of dynamical systems on the Riemann sphere.
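For reference, the independence polynomial of a graph \(G\) (equivalently, the hard-core partition function at fugacity \(\lambda\)) is
\[ Z_G(\lambda) \;=\; \sum_{\substack{I \subseteq V(G) \\ I \text{ independent}}} \lambda^{|I|}. \]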
This is ongoing work with Juan Rivera-Letelier (Rochester).
In 1999 Felipe Cucker introduced the real global condition number, fondly known as \(\kappa\) (kappa), in his seminal paper "Approximate zeros and condition numbers". This condition number and its variations are widely believed to capture the numerical complexity of problems in real algebraic geometry. In this talk, we show that this is not the case and that a new condition-based approach might open the door to finite expectation algorithms in numerical real algebraic geometry.
We will discuss algebraic and geometric condition numbers in the context of complexity analysis of algorithms for convex optimization. The Goffin-Cheung-Cucker measure of convex cones will come up during the talk.
In a variety of topological problems about sets definable in o-minimal structures over the reals (e.g., semialgebraic or subanalytic sets), approximations of sets by monotone definable families turn out to be useful. In the presence of blow-ups, such a family can exhibit complex behaviour. Hence, it is desirable to classify monotone families into a finite number of standard types. In the talk I will describe a triangulation of an ambient definable set of a family in dimension 3, such that, in each 3-simplex, the family belongs to one of nine types, corresponding to the nine possible lexicographically monotone Boolean functions in three variables.
Joint work with S. Basu and A. Gabrielov.
The extension of the Shub-Smale theory to systems with multiple zeros is proposed in the book “Condition: The Geometry of Numerical Algorithms” as an open problem. We propose such a study restricted to the singular value decomposition problem in the case where there are clusters of singular values.