Tutorial: An introduction to algorithmic randomness
ABSTRACT. In this tutorial, I will provide an introduction to the theory of algorithmic randomness, a branch of computability theory that studies what it means for an individual mathematical object (such as a real number or a sequence of symbols from some alphabet) to exhibit no algorithmically discernible patterns and to be an effectively typical outcome relative to a given probability measure. I will present the three main paradigms for defining algorithmic randomness notions and explain how they relate to each other. Then, I will discuss some recent applications of algorithmic randomness in computable analysis and measure theory, as well as outline some of its applications in philosophy and the foundations of Bayesian learning.
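As an illustration of the first of the three paradigms mentioned above (this example is not part of the abstract), the measure-theoretic paradigm is exemplified by Martin-Löf randomness with respect to the uniform measure on Cantor space:

```latex
% Martin-L\"of randomness (measure-theoretic paradigm), stated for the
% uniform (Lebesgue) measure $\lambda$ on Cantor space $2^{\omega}$.
% Illustrative only; the tutorial covers further paradigms and measures.
A \emph{Martin-L\"of test} is a uniformly c.e.\ sequence
$(U_n)_{n \in \mathbb{N}}$ of open subsets of $2^{\omega}$ with
$\lambda(U_n) \le 2^{-n}$ for all $n$.
A sequence $X \in 2^{\omega}$ \emph{passes} the test $(U_n)_{n \in \mathbb{N}}$
if $X \notin \bigcap_{n} U_n$, and $X$ is \emph{Martin-L\"of random}
if it passes every Martin-L\"of test.
```

The other two standard paradigms, via unpredictability (martingales) and incompressibility (Kolmogorov complexity), characterize the same notion for suitably effective versions of those tools.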
ABSTRACT. In both formal learning theory and modal approaches to belief revision, the phenomenon of convergence to the truth is often studied from a topological perspective. Ideally, we would like our beliefs, or the conjectures of our learning methods, to converge to the truth along every possible data stream (along every possible sequence of observations). However, there are inductive problems that are simply too difficult for this ideal to be attainable. And if convergence to the truth cannot occur everywhere, at least we would like it to happen along the vast majority of possible data streams. Topology provides a common framework for thinking about typicality, so achieving convergence to the truth along a topologically typical, or “large”, collection of data streams is often taken to be the next best epistemic ideal.
Convergence to the truth is also fundamental to Bayesian epistemology. In this setting, however, convergence-to-the-truth results have a rather different flavor: they establish that, under mild assumptions, a Bayesian learner’s beliefs are guaranteed to converge to the truth with probability one—where, crucially, the probability-one qualification is relative to the learner’s subjective prior. So, Bayesian convergence-to-the-truth theorems are best understood as establishing that Bayesian learners enjoy a certain type of internal coherence: they cannot be skeptics about their own ability to be inductively successful in the long run.
The purpose of this talk is to clarify the extent to which these two perspectives on convergence to the truth, the probabilistic one and the topological one, are compatible with each other in the setting of Bayesian learning. This question is especially pressing because of a recent criticism of the Bayesian framework due to Gordon Belot, who argued that Bayesian convergence-to-the-truth results mandate a pernicious epistemic immodesty, in that Bayesian learners are forced to believe that they will be inductively successful even when they are guaranteed to face inductive failure on a co-meager (i.e., a topologically typical) collection of data streams.
My goal is to shed light on when Bayesian convergence to the truth occurs not only on a set of probability one, but also on a co-meager set of data streams. I will show that, by classifying the inductive problems faced by a Bayesian learner (the random variables that a Bayesian learner needs to successfully estimate) using the tools of descriptive set theory and computability theory, one can identify many natural classes of inductive problems for which convergence to the truth indeed happens on a co-meager set. Moreover, I will show that appealing to computability theory allows one to offer a much more fine-grained analysis of the phenomenon of Bayesian convergence to the truth. In particular, we will see that the theories of algorithmic randomness and effective genericity can be used to single out specific co-meager sets of data streams along which successful learning provably occurs for several classes of effective inductive problems.
ABSTRACT. An interaction between dependency relations and quantification underlies numerous phenomena in natural language. Dependencies are responsible for inverting scope, as illustrated by the example 'a day of every month'. The part-whole relation, expressed by the preposition 'of' in this example, introduces a dependency between wholes (months) and their respective parts (days). Quantifying over this dependency yields the inverse scope reading: for every month, there is a different day that belongs to it. Dependencies are also needed for tracking anaphoric reference to quantifier domains, as illustrated by the sentence 'Every farmer who owns a donkey beats it'. By quantifying universally over the dependency between each of the farmers and the donkeys owned by them, we obtain the intended reading that every farmer beats every donkey he owns. In this paper, we show that polyadic quantifiers on dependent types are well-suited for modelling the interaction of dependency relations and quantification in these phenomena. Then we present, in a conceptual way, the mathematics behind the process of polyadic quantification over dependent types. The main new feature is the left strength on the cartesian monad over a basic fibration of a topos. It combines with what we call the right strength into an operation (pile'up) that turns tuples of quantifiers into polyadic ones.
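For reference (this formula is standard and not taken from the abstract), the intended "strong" reading of the donkey sentence described above, on which every farmer beats every donkey he owns, can be written in first-order notation as:

```latex
% Strong reading of ``Every farmer who owns a donkey beats it''
% (standard illustration; predicate names are chosen for readability):
\forall f\,\forall d\,\bigl(\mathrm{farmer}(f) \wedge \mathrm{donkey}(d)
  \wedge \mathrm{owns}(f,d) \;\rightarrow\; \mathrm{beats}(f,d)\bigr)
```

The dependent-type approach in the paper recovers this reading by quantifying universally over the dependency between each farmer and the donkeys owned by that farmer, rather than by such a flat first-order paraphrase.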
ABSTRACT. We construct a conditional logic using selection models in a three-valued setting. When the selection function selects the empty set, the corresponding conditional is neither true nor false. Semantic consequence is defined as in Strict-Tolerant logic. The resulting logic validates Conditional Excluded Middle without assuming that a selection function always chooses a single world, thereby reconciling Stalnaker and Lewis. We give a labelled sequent calculus for the logic and consider some extensions of it. It turns out that the extended logics are strongly connexive, hyperconnexive, and superconnexive, and are almost totally connexive.
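For reference (not part of the abstract), Conditional Excluded Middle is the schema:

```latex
% Conditional Excluded Middle (CEM), which Stalnaker validates via the
% assumption that the selection function returns a single world, and
% which Lewis rejects; here `>' denotes the conditional connective:
(\varphi > \psi) \vee (\varphi > \neg\psi)
```

The abstract's point is that, in the three-valued setting with Strict-Tolerant consequence, this schema can be validated even when the selection function may return sets other than singletons.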
Completeness of Finitely Weighted Kleene Algebra With Tests
ABSTRACT. Building on Ésik and Kuich's completeness result for finitely weighted Kleene algebra, we establish relational and language completeness for finitely weighted Kleene algebra with tests. Like Ésik and Kuich, we assume that the finite semiring of weights is commutative, partially ordered, and zero-bounded, but we additionally assume that it is integral. We argue that finitely weighted Kleene algebra with tests is a natural framework for equational reasoning about weighted programs in cases where an upper bound on admissible weights is assumed.
Syntactic concept lattice models for infinitary action logic
ABSTRACT. We introduce models for infinitary action logic, i.e., the infinitary extension of multiplicative-additive Lambek calculus with the Kleene star, on syntactic concept lattices. This semantics is a variant of language semantics, which is in a sense more natural from the linguistic point of view. Extending the argument of Wurm (2017), we prove completeness for the whole infinitary action logic, while standard language models enjoy completeness only for small fragments of this system. As a corollary, we obtain completeness of infinitary action logic w.r.t. action lattices which are complete in the lattice-theoretic sense.