Talks on this page are ordered alphabetically by the speakers' surnames. The schedule can be consulted on the schedule page.

The talks will take place in the Einstein-Saal of the Berlin-Brandenburg Academy of Sciences and Humanities. More information about the location can be found on the venue page.

All talks will probably be recorded on video, but the recordings will be released publicly only with the speaker's permission.

TBA

Talks are ordered alphabetically with respect to the speakers. Abstracts will be added as they are received.

The roots of random polynomials and of random polynomial systems have been extensively studied in the past. While the picture is quite complete in the case of polynomials, the case of systems is much harder. Kostlan, Shub and Smale random polynomial systems were introduced in the nineties, and the expectation of the number of their real roots has been known since then. In this talk we will review some recent results on this problem.

...where a gorgeous result by Felipe Cucker is revisited, the solution to a famous problem is shown, and a theorem that is not a theorem is proved to be useful, but not necessarily in that order.

Thanks to major advances in cognitive neuroscience, we are on the brink of a scientific understanding of how the brain achieves consciousness. This talk will describe cognitive neuroscientist Bernard Baars' Global Workspace Model (GWM) of the brain, its implications for understanding consciousness, and a novel computer architecture that it inspires. The major contribution of this talk lies in the precise formal model of a Conscious Turing Machine (CTM) aka a Conscious AI (CAI) to be presented.

This is joint work with Avrim Blum.

We give an overview of results regarding the global evaluation complexity of local nonconvex optimization, focusing on two antipodal algorithmic cases. In particular, we discuss high-order regularisation methods that use higher order derivative information and can certify higher order criticality, and that have better complexity and universal properties. We then consider an inexact, probabilistic setting, where derivatives and even function evaluations may only be sufficiently accurate occasionally, but whose worst-case expected complexity can still be quantified. A story of robust methods emerges, with continuously-varying, often sharp and sometimes even optimal, complexity.

It is well known that a surface can be recovered from its two fundamental forms if they satisfy the Gauss and Codazzi-Mainardi compatibility equations on a simply-connected domain, in which case the surface is uniquely determined up to isometric equivalence.

It is less known that in this case the surface becomes a continuous function of its fundamental forms, again up to isometric equivalence, for various topologies, such as the Fréchet topology of continuously differentiable functions, or those corresponding to various Sobolev norms.

In this talk, we will review such continuity results obtained during the past fifteen years, with special emphasis on those that can be derived by means of nonlinear Korn inequalities on a surface. We will also mention potential applications of such results, such as the intrinsic approach to nonlinear shell theory, where the unknowns are the fundamental forms of the deformed middle surface of a shell.

In statistical learning, the regression problem refers to the search for a function \(u\) which best explains an output \(y\) from an input \(x\), when the joint distribution of \((x,y)\) is unknown and a finite sample of this distribution is available. This setting was explored in depth in the seminal paper by Cucker and Smale. Active learning refers, in contrast, to the setting where the user has the possibility of choosing the values of the input and querying the output. This is the case in many applications, such as sensing physical systems or building response surfaces from numerical codes. In this talk, we review recent results on active learning based on weighted least-squares, which in particular reveal the existence of optimal sampling strategies.

(Joint work with Colin McDiarmid and Dieter Mitsche)

Suppose that there is a family of \(n\) random points independently and uniformly distributed in the square with area \(n\). We do not see these points, but learn about them in one of the following two ways:

Suppose first that we are given the corresponding random geometric graph G, where distinct vertices \(u\) and \(v\) are adjacent when their Euclidean distance is at most \(r\). Assume that the threshold distance \(r\) satisfies certain conditions with respect to \(n\). We shall see that the following holds with high probability: Given the graph \(G\) (without any geometric information), in polynomial time we can approximately reconstruct the hidden embedding, in the sense that, 'up to symmetries', for each vertex \(v\) we find an embedding with 'displacement' at most \(r\).

Now suppose that, instead of being given the graph \(G\), we are given, for each vertex \(v\), the ordering of the other vertices by increasing Euclidean distance from \(v\). Then, with high probability, in polynomial time we can find an embedding with a much smaller displacement error.
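As a toy illustration of the model in this abstract (our own sketch, not the speakers' reconstruction algorithm), the following generates \(n\) uniform points in a square of area \(n\) and connects pairs at distance at most \(r\):

```python
import math
import random

# Sketch of the random geometric graph model: n points uniform in a square
# of area n, with an edge between two points at Euclidean distance <= r.
def random_geometric_graph(n, r, seed=0):
    rng = random.Random(seed)
    side = math.sqrt(n)  # square of area n has side sqrt(n)
    pts = [(rng.uniform(0, side), rng.uniform(0, side)) for _ in range(n)]
    edges = {(u, v) for u in range(n) for v in range(u + 1, n)
             if math.dist(pts[u], pts[v]) <= r}
    return pts, edges

pts, edges = random_geometric_graph(50, r=3.0)
# every edge joins points at distance at most r
assert all(math.dist(pts[u], pts[v]) <= 3.0 for u, v in edges)
```

The reconstruction problem of the talk is the converse: given only `edges` (no coordinates), recover `pts` up to symmetries of the square.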

Descartes' rule of signs for univariate real polynomials is a beautifully simple upper bound for the number of positive real roots. Moreover, it gives the exact number of positive real roots when the polynomial is real rooted, for instance, for characteristic polynomials of symmetric matrices. A general multivariate Descartes rule is certainly more complex and still elusive. I will recall the few known multivariate cases and will present a new optimal Descartes rule for polynomials supported on circuits, obtained in collaboration with Frédéric Bihan and Jens Forsgård.
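The univariate rule is simple enough to state in a few lines of code (the polynomial below is our own example, chosen to be real rooted so the bound is attained):

```python
# Descartes' rule of signs: the number of sign changes in the coefficient
# sequence bounds the number of positive real roots, with equality when
# the polynomial is real rooted.
def sign_changes(coeffs):
    """Count sign changes in a coefficient sequence, ignoring zero entries."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# p(x) = (x - 1)(x - 2)(x + 3) = x^3 - 7x + 6: real rooted, with exactly
# two positive roots (1 and 2), matching the two sign changes +,-,+.
assert sign_changes([1, 0, -7, 6]) == 2
```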

Bohemian matrices are families of matrices whose entries come from a fixed discrete set of small integers. A symmetric matrix is a correlation matrix if it has ones on the diagonal and its eigenvalues are nonnegative. We will introduce a new characterization of correlation matrices that will be used to show that the number of Bohemian Correlation Matrices over \(-1\), \(0\) and \(1\) corresponds to OEIS A000110 and to solve some correlation matrix completion problems arising in risk management and insurance.

Tropical analogs of the combinatorial Nullstellensatz, universal testing set for fewnomials and \(\tau\)-conjecture are provided.

The development of stochastic gradient descent type algorithms (SGD) has been a major factor in allowing optimisation methods to be successfully applied to ever larger data analytics and machine learning problems. In applications it is often observed that if SGD is stopped early, rather than being left to complete its calculations to a high-accuracy termination criterion, the resulting solutions are of greater practical relevance. In this talk we will shed some light on this phenomenon, by relating the problem of minimising objective functions in the form of a sum of simple functions to the maximum feasible subsystem problem and perceptron boundedness, the latter being characterised by a condition number. Under this interpretation, early stopping of SGD yields solutions to a robust version of the underlying optimisation problem.
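For readers unfamiliar with the setting, here is a minimal SGD sketch on a sum of simple functions (our own toy example, not the speaker's method): each step samples one term of the sum and follows its gradient.

```python
import random

# Minimal SGD on f(x) = (1/n) * sum_i (x - a_i)^2, whose minimiser is the
# mean of the a_i. Each step samples one simple term and steps along its
# gradient 2*(x - a).
def sgd(data, steps=2000, lr=0.01, seed=0):
    rng = random.Random(seed)
    x = 0.0
    for _ in range(steps):
        a = rng.choice(data)      # sample one term of the sum
        x -= lr * 2 * (x - a)     # gradient step on the sampled term
    return x

data = [1.0, 2.0, 3.0, 6.0]
x = sgd(data)
# x approaches the mean 3.0, up to stochastic fluctuation set by lr
assert abs(x - 3.0) < 0.75
```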

While approximation theory in an interval is thoroughly understood, the real line represents something of a mystery. In this talk we review the state of the art in this area, commencing from the familiar Hermite functions and moving to recent results characterising all orthonormal sets on \(L_2(-\infty,\infty)\) that have a skew-symmetric (or skew-Hermitian) tridiagonal differentiation matrix and such that their first \(n\) expansion coefficients can be calculated in \(O(n \log n)\) operations. In particular, we describe the generalised Malmquist–Takenaka system. The talk concludes with a (too!) long list of open problems and challenges.

In the 1957 movie "The Incredible Shrinking Man", a businessman is exposed to radioactive dust and begins to shrink, while researchers try and fail to stop that process. In this talk we will see what happens if, instead of a human being, we shrink the computation model that the researchers study.

Weak condition numbers, with an application to singular polynomial eigenproblems

We propose a new approach to the theory of conditioning for numerical analysis problems for which both classical and stochastic perturbation theory fail to predict the observed accuracy of computed solutions. As motivation, we present examples of problems that are discontinuous at a given input and have infinite classical and stochastic condition number, but where the solution is still computed to machine precision without relying on structured algorithms. Stimulated by the failure of classical and stochastic perturbation theory in capturing such phenomena, we define and analyse a weak worst-case and a weak stochastic condition number. This new theory is a more powerful predictor of the accuracy of computations.

We propose a complexity theory for approximate real computations. The aim is to extend the BSS theory to allow for condition numbers and for approximate real computations. This is done in two steps, and I plan to explain the first one (BSS theory with exact computations and condition numbers). This will address recent criticisms of the BSS theory as inadequate for describing fractal objects and other morally easy decision problems.

The input size depends on a condition number, which is not assumed known by the machine. This theory admits deterministic and nondeterministic polynomial time recognizable problems. We prove that \(\mathsf{P}\) is not \(\mathsf{NP}\) in this theory if and only if \(\mathsf{P}\) is not \(\mathsf{NP}\) in the BSS theory over the reals.

If time allows, I will outline the theory with approximate computations. This theory also admits classes \(\mathsf{P}\) and \(\mathsf{NP}\) and also an \(\mathsf{NP}\)-complete problem. The \(\mathsf{P}\) vs \(\mathsf{NP}\) question in this new theory is related (but not known to be equivalent) to the classical \(\mathsf{P}\) vs \(\mathsf{NP}\) problem.

This is joint work with Mike Shub.

There is a long and fruitful interplay between the fields of logic and computation; one of its examples is the field of descriptive complexity, which is also among the many fields in which Felipe Cucker worked (and works). In this talk we review some of the results in this area, with particular emphasis on joint work done many years ago with Felipe Cucker.

What is the complexity of multivariate polynomial computations such as polynomial multiplication, differentiation and integration?

In 1D, interpolation in Chebyshev nodes enjoys the optimal logarithmic Lebesgue constant and allows the computation of polynomial multiplication, differentiation and quadrature via the Fast Fourier Transform (FFT). In 2D, Padua points are known to be unisolvent (just enough points to represent all bivariate polynomials up to a given order uniquely), and they also enjoy optimal Lebesgue constants.

In this talk we will discuss another set of unisolvent points for multivariate polynomial interpolation in any dimension, again with optimal Lebesgue constants, which allow for fast (\(N \log N\)) computation of multiplication, differentiation and quadrature via FFT and recursions. The points are based on the theory of Weyl groups and root lattices, an area of mathematics which originates from the representation theory of Lie groups.

It sounds like a perfect computational device, but we will discuss some issues which have so far limited the practical use of these interpolation points.
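To make the 1D starting point concrete, here is a small sketch of Chebyshev interpolation at Chebyshev-Lobatto points (our own example; the naive \(O(N^2)\) cosine transform below stands in for the DCT/FFT version discussed in the talk):

```python
import math

# Chebyshev interpolation at the Chebyshev-Lobatto points x_j = cos(pi*j/N).
# The coefficients a_k of sum_k a_k T_k(x) come from a discrete cosine
# transform of the samples (computed naively here, O(N^2) instead of FFT).
def cheb_coeffs(f, N):
    x = [math.cos(math.pi * j / N) for j in range(N + 1)]
    fx = [f(xj) for xj in x]
    a = []
    for k in range(N + 1):
        s = sum((0.5 if j in (0, N) else 1.0) * fx[j] *
                math.cos(math.pi * j * k / N) for j in range(N + 1))
        s *= 2.0 / N
        if k in (0, N):   # first and last coefficients are halved
            s *= 0.5
        a.append(s)
    return a

def cheb_eval(a, x):
    """Evaluate sum a_k T_k(x) via T_{k+1} = 2x T_k - T_{k-1}."""
    t_prev, t_cur = 1.0, x
    total = a[0] + (a[1] * x if len(a) > 1 else 0.0)
    for k in range(2, len(a)):
        t_prev, t_cur = t_cur, 2 * x * t_cur - t_prev
        total += a[k] * t_cur
    return total

# degree-16 interpolant of exp is accurate to near machine precision
a = cheb_coeffs(math.exp, 16)
assert abs(cheb_eval(a, 0.3) - math.exp(0.3)) < 1e-12
```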

A classical result of Alan Hoffman from 1952 shows that if a point almost satisfies a system of linear inequalities, then it is near an actual solution to the system of linear inequalities. We provide a novel characterization of the sharpest Hoffman constant associated to this "almost implies near" relationship. Our characterization extends to the more general case when some inequalities are easy to satisfy, as in the case of box constraints. Our characterization also yields a procedure to compute the Hoffman constant – a notoriously difficult and largely unexplored computational challenge.
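A toy instance of the "almost implies near" bound (our own example; the talk concerns general systems, where computing the sharpest constant is the hard part): for a single inequality \(a \cdot x \le b\), the distance from \(x\) to the solution set is \((a \cdot x - b)_+ / \|a\|\), so the sharpest Hoffman constant is \(1/\|a\|\).

```python
import math
import random

# Hoffman bound dist(x, P) <= H * ||(Ax - b)_+|| for the single halfspace
# P = {x : 3*x1 + 4*x2 <= 1}, where the sharpest constant is H = 1/||a||.
a, b = (3.0, 4.0), 1.0
H = 1.0 / math.hypot(*a)   # 1/||a|| = 0.2

def residual(x):
    """Constraint violation (a.x - b)_+."""
    return max(a[0] * x[0] + a[1] * x[1] - b, 0.0)

def dist_to_halfspace(x):
    """Euclidean distance from x to P, via orthogonal projection."""
    return residual(x) / math.hypot(*a)

rng = random.Random(1)
for _ in range(1000):
    x = (rng.uniform(-2, 2), rng.uniform(-2, 2))
    # "almost satisfies" (small residual) implies "near a solution"
    assert dist_to_halfspace(x) <= H * residual(x) + 1e-12
```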

We begin with a very short story about how Felipe Cucker and Steve Smale helped Maurice meet a Brazilian historian in Barcelona. We then see an efficient reduction of point counting for hypersurfaces over \(\mathbb{Z}/(p^k)\) to point counting over prime fields, generalizing earlier work of Cheng, Dwivedi, Gao, Mittal, Saxena, and Wan. This leads to a new, much simpler proof of the rationality of Igusa zeta functions. The latter zeta functions are related to \(p\)-adic integration (thanks to work of Igusa and Denef) and an unusual class of pseudorandom generators (due to Anshel and Goldfeld). This is joint work with Yuyu Zhu.

Using subresultants, we modify a recent real-algebraic proof due to Eisermann of the Fundamental Theorem of Algebra ([FTA]) to obtain the following quantitative information: in order to prove the [FTA] for polynomials of degree \(d\), the Intermediate Value Theorem ([IVT]) is required to hold for real polynomials of degree at most \(d^2\). We also explain that the classical proof due to Laplace requires [IVT] for real polynomials of exponential degree. These quantitative results highlight the difference in nature of these two proofs. See arXiv:1803.04358v3.

This is joint work with Daniel Perrucci.

TBA

Certain cells in the heart called myocytes, put into a petri dish, will oscillate individually until they come close together, when they start to beat in unison. The mathematics goes back more than 350 years to Huygens and is still trying to give a good picture. I have been working with Indika Rajapakse and Charles Pugh on these things, and we have been led to a fascinating complex of problems and insights into the dynamics of the heartbeat.

In statistical mechanics, the independence polynomial of a graph arises as the partition function of the hard-core lattice gas model. The distribution of the zeros of these polynomials in the thermodynamic limit is expected to be relevant for the study of the model and, in particular, for the determination of its phase transitions. In this talk, I will review the known results on the location of these zeros, with emphasis on the case of rooted regular trees of fixed degree and varying depth. Our main result states that for these graphs, the zero sets of their independence polynomials converge as the depth increases towards the bifurcation measure of a certain family of dynamical systems on the Riemann sphere.

This is ongoing work with Juan Rivera-Letelier (Rochester).
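For a small graph, the independence polynomial can be computed by brute force (our own illustration of the object in this abstract): the coefficient of \(x^k\) counts the independent sets of size \(k\).

```python
from itertools import combinations

# Independence polynomial I(G;x) = sum_k c_k x^k, where c_k is the number
# of independent sets of size k, computed by brute-force enumeration.
def independence_polynomial(n, edges):
    coeffs = [0] * (n + 1)
    for k in range(n + 1):
        for S in combinations(range(n), k):
            s = set(S)
            if not any(u in s and v in s for u, v in edges):
                coeffs[k] += 1   # S contains no edge: it is independent
    return coeffs

# path on 3 vertices: independent sets {}, {0}, {1}, {2}, {0,2},
# so I(G;x) = 1 + 3x + x^2
assert independence_polynomial(3, [(0, 1), (1, 2)]) == [1, 3, 1, 0]
```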

In 1999 Felipe Cucker introduced the real global condition number, fondly known as \(\kappa\) (kappa), in his seminal paper "Approximate zeros and condition numbers". This condition number and its variations are widely believed to capture the numerical complexity of problems in real algebraic geometry. In this talk, we show that this is not the case and that a new condition-based approach might open the door to finite expectation algorithms in numerical real algebraic geometry.

We will discuss algebraic and geometric condition numbers in the context of complexity analysis of algorithms for convex optimization. The Goffin-Cheung-Cucker measure of convex cones will come up during the talk.

In a variety of topological problems about sets, definable in o-minimal structures over the reals (e.g., semialgebraic or subanalytic sets), approximations of sets by monotone definable families turn out to be useful. In the presence of blow-ups, such a family can exhibit a complex behaviour. Hence, it is desirable to classify monotone families into a finite number of standard types. In the talk I will describe a triangulation of an ambient definable set of a family in dimension 3, such that, in each 3-simplex, the family belongs to one of nine types, corresponding to nine possible lexicographically monotone Boolean functions in three variables.

Joint work with S. Basu and A. Gabrielov.