Math


2023

From Gödel’s Incompleteness Theorem to the completeness of bot religions

D.P. and Temra Pavlovic

Hilbert and Ackermann asked for a method to consistently extend incomplete theories to complete theories. Gödel essentially proved that any theory capable of encoding its own statements and their proofs contains statements that are true but not provable. Hilbert did not accept that Gödel’s construction answered his question, and in his late writings and lectures, Gödel agreed that it did not, since theories can be completed incrementally, by adding axioms to prove ever more true statements, as science normally does, with completeness as the vanishing point. This pragmatic view of validity is familiar not only to scientists who conjecture and test hypotheses but also to real estate agents and other dealers, who conjure claims, albeit invalid, as necessary to close a deal, confident that they will be able to conjure other claims, albeit invalid, sufficient to make the first claims valid. We study the underlying logical process and describe the trajectories leading to testable but unfalsifiable theories to which bots and other automated learners are likely to converge.


2022

Tight limits and completions from Dedekind-MacNeille to Lambek-Isbell

D.P. and Dominic Hughes

While any infimum in a poset can also be computed as a supremum, and vice versa, categorical limits and colimits do not always approximate each other. If I approach a point from below, and you approach it from above, then we will surely meet if we live in a poset, but we may miss each other in a category. Can we characterize the limits and the colimits that approximate each other, and guarantee that we will meet? Such limits and colimits are called tight. Some critically important network applications depend on them. This paper characterizes tight limits and colimits, and describes tight completions, derived by applying the nucleus construction to adjunctions between loose completions. Just as the Dedekind-MacNeille completion of a poset preserves any existing infima and suprema, the tight completion of a category preserves any existing tight limits and colimits and is therefore idempotent.
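For reference, the poset fact behind the opening sentence can be written as a single equation: whenever the supremum on the right exists, the infimum of a subset S is the supremum of its lower bounds,

\[
  \bigwedge S \;=\; \bigvee\,\{\, x \mid x \le s \ \text{for all}\ s \in S \,\},
\]

and dually every supremum is an infimum of upper bounds. It is this mutual approximation of limits from below and from above that may fail in a general category.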


2021

Lambek pregroups are Frobenius spiders in preorders

“Spider” is a nickname for special Frobenius algebras, a fundamental structure from mathematics, physics, and computer science. Pregroups are a fundamental structure from linguistics. Pregroups and spiders have been used together in natural language processing: one for syntax, the other for semantics. It turns out that pregroups themselves can be characterized as pointed spiders in the category of preordered relations, where they naturally arise from grammars. The other way around, preordered spider algebras in general can be characterized as unions of pregroups. This extends the characterization of relational spider algebras as disjoint unions of groups. The compositional framework that emerged with the results suggests new ways to understand and apply the basic structures in machine learning and data analysis.
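To indicate how pregroups enter on the syntactic side, here is a minimal sketch, not taken from the paper, of pregroup parsing by the adjacent cancellations x·xʳ → 1 and xˡ·x → 1; it ignores the partial order on basic types and iterated adjoints, and the word types are the usual textbook illustrations rather than anything assumed in the paper.

```python
# Minimal illustrative sketch of pregroup contraction (hypothetical word types, greedy strategy).
# A word type is a list of (base, adjoint) pairs: adjoint -1 = left adjoint x^l,
# 0 = plain x, +1 = right adjoint x^r.  Cancellations: x . x^r -> 1 and x^l . x -> 1.

def reduces_to(word_types, target):
    """Concatenate the word types and greedily cancel adjacent adjoint pairs."""
    stack = []
    for t in [pair for word in word_types for pair in word]:
        if stack:
            (b, a), (c, d) = stack[-1], t
            if b == c and ((a == 0 and d == 1) or (a == -1 and d == 0)):
                stack.pop()          # cancel  x . x^r   or   x^l . x
                continue
        stack.append(t)
    return stack == target

n, s = "n", "s"
alice = [(n, 0)]                     # noun
likes = [(n, 1), (s, 0), (n, -1)]    # n^r . s . n^l  (transitive verb)
bob   = [(n, 0)]                     # noun
print(reduces_to([alice, likes, bob], [(s, 0)]))   # True: "Alice likes Bob" reduces to s
```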


2020

The nucleus of an adjunction and the Street monad on monads

D.P. and Dominic Hughes

An adjunction is a pair of functors related by a pair of natural transformations, and relating a pair of categories. It displays how a structure, or a concept, projects from each category to the other, and back. Adjunctions are the common denominator of Galois connections, representation theories, spectra, and generalized quantifiers. We call an adjunction nuclear when its categories determine each other. We show that every adjunction can be resolved into a nuclear adjunction. This resolution is idempotent in a strong sense. The nucleus of an adjunction displays its conceptual core, just as the singular value decomposition of an adjoint pair of linear operators displays their canonical bases.

The two composites of an adjoint pair of functors induce a monad and a comonad. Monads and comonads generalize the closure and the interior operators from topology, or modalities from logic, while providing a saturated view of algebraic structures and compositions on one side, and of coalgebraic dynamics and decompositions on the other. They are resolved back into adjunctions over the induced categories of algebras and of coalgebras. The nucleus of an adjunction is an adjunction between the induced categories of algebras and coalgebras. It provides new presentations for both, revealing the meaning of constructing algebras for a comonad and coalgebras for a monad.
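For reference, the standard constructions alluded to in this paragraph: an adjunction F ⊣ G with unit η : Id ⇒ GF and counit ε : FG ⇒ Id induces the monad and the comonad

\[
  T = GF,\quad \mu = G\varepsilon F : TT \Rightarrow T,
  \qquad\qquad
  D = FG,\quad \delta = F\eta G : D \Rightarrow DD,
\]

with η as the unit of T and ε as the counit of D; the Eilenberg-Moore categories of T-algebras and of D-coalgebras carry the adjunctions into which the monad and the comonad are resolved back.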

In his seminal early work, Ross Street described an adjunction between monads and comonads in 2-categories. Lifting the nucleus construction, we show that the resulting Street monad on monads is strongly idempotent, and extracts the nucleus of a monad. A dual treatment achieves the same for comonads. Applying a notable fragment of pure 2-category theory to an acute practical problem of data analysis thus led to a new theoretical result.


2018

Universal Probability-Free Prediction

Vladimir Vovk and D.P.

We construct universal prediction systems in the spirit of Popper’s falsifiability and Kolmogorov complexity and randomness. These prediction systems do not depend on any statistical assumptions (but under the IID assumption they dominate, to within the usual accuracy, conformal prediction). Our constructions give rise to a theory of algorithmic complexity and randomness of time containing analogues of several notions and results of the classical theory of Kolmogorov complexity and randomness.

The COPA 2016 version of the same paper: Universal Probability-Free Conformal Prediction.


2015

Testing randomness by Matching Pennies

In the game of Matching Pennies, Alice and Bob each hold a penny, and at every tick of the clock they simultaneously display the head or the tail sides of their coins. If they both display the same side, then Alice wins Bob’s penny; if they display different sides, then Bob wins Alice’s penny. To avoid giving the opponent a chance to win, both players seem to have nothing else to do but to randomly play heads and tails with equal frequencies. However, while not losing in this game is easy, not missing an opportunity to win is not. Randomizing your own moves can be made easy. Recognizing when the opponent’s moves are not random can be arbitrarily hard.
The notion of randomness is central in game theory, but it is usually taken for granted. The notion of outsmarting is not central in game theory, but it is central in the practice of gaming. We pursue the idea that these two notions can be usefully viewed as two sides of the same coin.
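A toy simulation, purely illustrative and not from the paper, of the easy direction of this asymmetry: uniform randomization secures the value of the game, while even a crude frequency count exploits an opponent who fails to randomize. The strategies and the bias below are hypothetical.

```python
# Illustrative Matching Pennies simulation: Alice wins a penny on a match, loses one otherwise.
import random

def play(alice, bob, rounds=10000, seed=0):
    rng = random.Random(seed)
    score, history = 0, []
    for _ in range(rounds):
        a = alice(history, rng)      # Alice sees Bob's past moves
        b = bob(rng)
        score += 1 if a == b else -1
        history.append(b)
    return score

uniform = lambda history, rng: rng.choice("HT")              # randomizing is easy: expected score 0
biased_bob = lambda rng: "H" if rng.random() < 0.6 else "T"  # an opponent who fails to randomize

def frequency_alice(history, rng):
    # predict the side Bob has shown most often so far
    if not history:
        return rng.choice("HT")
    return "H" if history.count("H") >= len(history) / 2 else "T"

print(play(uniform, biased_bob))           # close to 0
print(play(frequency_alice, biased_bob))   # roughly +0.2 per round, about +2000 here
```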

Smooth coalgebra: testing vector analysis

with Bertfried Fauser

Processes are often viewed as coalgebras, with the structure maps specifying the state transitions. In the simplest case, the state spaces are discrete, and the structure map simply takes each state to the next states. But the coalgebraic view is also quite effective for studying processes over structured state spaces, e.g. measurable or continuous. In the present paper we consider coalgebras over manifolds. This means that the captured processes evolve over state spaces that are not just continuous, but also locally homeomorphic to Banach spaces, and thus carry a differential structure. Both dynamical systems and differential forms arise as coalgebras over such state spaces, for two different endofunctors over manifolds. A duality induced by these two endofunctors provides a formal underpinning for the informal geometric intuitions linking differential forms and dynamical systems in the various practical applications, e.g. in physics. This joint functorial reconstruction of tangent bundles and cotangent bundles uncovers the universal properties and a high level view of these fundamental structures, which are implemented rather intricately in their standard form. The succinct coalgebraic presentation provides unexpected insights even about situations as familiar as Newton’s laws.


2014

Monoidal computer II: Normal complexity by string diagrams

In Monoidal Computer I, we introduced a categorical model of computation where the formal reasoning about computability was supported by the simple and popular diagrammatic language of string diagrams. In the present paper, we refine and extend that model of computation to support a formal complexity theory as well. This formalization brings to the foreground the concept of normal complexity measures, which allow decompositions akin to Kleene’s normal form. Such measures turn out to be just those where evaluating the complexity of a program does not require substantially more resources than evaluating the program itself. The usual time and space complexity are thus normal measures, whereas the average and the randomized complexity measures are not. While the measures that are not normal provide important design-time information about algorithms and support theoretical analyses, normal measures can also be used at run time, as practical tools of computation, e.g. to set the bounds for hypothesis testing, inductive inference and algorithmic learning.
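The decomposition alluded to is Kleene’s normal form theorem: there are a primitive recursive predicate T and a primitive recursive function U such that every partial computable function satisfies

\[
  \varphi_e(x) \;\simeq\; U\big(\mu t.\, T(e, x, t)\big).
\]

Reading the search bound t as a resource, a measure is normal, roughly, when such an evaluation of the complexity of a program costs no more, up to the usual overhead, than running the program itself.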


2013

Bicompletions of distance matrices

In the practice of information extraction, the input data are usually arranged into pattern matrices, and analyzed by the methods of linear algebra and statistics, such as principal component analysis. In some applications, the tacit assumptions of these methods lead to wrong results. The usual reason is that the matrix composition of linear algebra presents information as flowing in waves, whereas it sometimes flows in particles, which seek the shortest paths. This wave-particle duality in computation and information processing was originally observed by Abramsky. In this paper we pursue a particle view of information, formalized in distance spaces, which generalize metric spaces, but are slightly less general than Lawvere’s generalized metric spaces. In this framework, the task of extracting the ‘principal components’ from a given matrix of data boils down to a bicompletion, in the sense of enriched category theory. We describe the bicompletion construction for distance matrices. The practical goal that motivates this research is to develop a method to estimate the hardness of attack constructions in security.
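One concrete way to read the wave/particle contrast, as a gloss rather than the paper’s notation: linear algebra composes matrices by summing over all paths, whereas distance matrices compose by selecting the cheapest ones,

\[
  (A\cdot B)_{ik} \;=\; \sum_j A_{ij}\,B_{jk}
  \qquad\text{versus}\qquad
  (A\circ B)_{ik} \;=\; \min_j \big(A_{ij} + B_{jk}\big),
\]

the latter being the min-plus composition that underlies shortest-path computations and composition in categories enriched over distances.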


2012

Monoidal computer I: Basic computability by string diagrams

We present a new model of computation, described in terms of monoidal categories. It conforms to the Church-Turing Thesis, and captures the same computable functions as the standard models. It provides a succinct categorical interface to most of them, free of their diverse implementation details, using the ideas and structures that have in the meantime emerged from research in the semantics of computation and programming. The salient feature of the language of monoidal categories is that it is supported by a sound and complete graphical formalism, string diagrams, which provide a concrete and intuitive interface for abstract reasoning about computation. The original motivation and the ultimate goal of this effort is to provide a convenient high level programming language for a theory of computational resources, such as one-way functions, and trapdoor functions, by adopting the methods for hiding the low level implementation details that emerged from practice. Only a modest first step towards this ambitious goal is presented.


2008

A new description of orthogonal bases

with Bob Coecke and Jamie Vicary

We show that an orthogonal basis for a finite-dimensional Hilbert space can be equivalently characterised as a commutative dagger-Frobenius monoid in the category FHilb, which has finite-dimensional Hilbert spaces as objects and continuous linear maps as morphisms, and tensor product for the monoidal structure. The basis is normalised exactly when the corresponding commutative dagger-Frobenius monoid is special. Hence orthogonal and orthonormal bases can be axiomatised in terms of composition of operations and tensor product only, without any explicit reference to the underlying vector spaces. This axiomatisation moreover admits an operational interpretation, as the comultiplication copies the basis vectors and the counit uniformly deletes them. That is, we rely on the distinct ability to clone and delete classical data as compared to quantum data to capture basis vectors. For this reason our result has important implications for categorical quantum mechanics.
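Concretely, for an orthonormal basis {|i⟩} of a finite-dimensional Hilbert space, the commutative dagger-Frobenius monoid in question is the standard copying and deleting structure

\[
  \Delta(|i\rangle) = |i\rangle \otimes |i\rangle,
  \qquad
  \varepsilon(|i\rangle) = 1,
  \qquad
  \mu = \Delta^{\dagger} :\ |i\rangle \otimes |j\rangle \mapsto \delta_{ij}\,|i\rangle,
\]

which is special precisely because the basis is normalised.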


1998

Calculus in coinductive form

with Martin Escardo

Coinduction is often seen as a way of implementing infinite objects. Since real numbers are typical infinite objects, it may not come as a surprise that calculus, when presented in a suitable way, is permeated by coinductive reasoning. What is surprising is that mathematical techniques, recently developed in the context of computer science, seem to be shedding a new light on some basic methods of calculus.

We introduce a coinductive formalization of elementary calculus that can be used as a tool for symbolic computation, geared towards computer algebra and theorem proving. So far, we have covered ordinary differential and difference equations, Taylor series, the Laplace transform and the basics of operator calculus.
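A minimal executable sketch of this flavour of reasoning, our illustration rather than the paper’s own formalism: represent an analytic function by the stream of its Taylor coefficients at 0, so that integration prepends the initial value, and the differential equation y' = y with y(0) = 1 defines the coefficients of exp corecursively.

```python
# Illustrative sketch: an analytic function as the stream of its Taylor coefficients at 0.
# Integration prepends the initial value, so y' = y, y(0) = 1 defines exp by corecursion.
from itertools import islice

def integrate(derivative_coeffs, y0):
    """Taylor coefficients of the antiderivative taking the value y0 at 0."""
    yield y0
    for n, a in enumerate(derivative_coeffs):
        yield a / (n + 1)

def exp_stream():
    # y is the integral of y itself; the self-reference is productive, guarded by y0
    yield from integrate(exp_stream(), 1.0)

print(list(islice(exp_stream(), 6)))   # [1.0, 1.0, 0.5, 0.1666..., 0.04166..., 0.00833...] = 1/n!
```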


1996

Maps II: chasing diagrams in categorical proof theory

In categorical proof theory, propositions and proofs are presented as objects and arrows in a category. It thus embodies the strong constructivist paradigms of propositions-as-types and proofs-as-constructions, which lie at the foundation of computational logic. Moreover, in the categorical setting, a third paradigm arises, not available elsewhere: logical-operations-as-adjunctions. It offers an answer to the notorious question of the equality of proofs. Diagram chasing comes about as the algebra of proofs.

On the basis of these ideas, the present paper investigates the proof theory of regular logic: the {∧,∃}-fragment of first-order logic with equality. The corresponding categorical structure is that of a regular fibration. The examples include stable factorisations, sites, and triposes. Regular logic is exactly what is needed to talk about maps, as total and single-valued relations. However, when enriched with proofs-as-arrows, this familiar concept must be supplied with an additional conversion rule, connecting the proof of totality with the proof of single-valuedness. We explain the logical meaning of this rule, and then determine precise conditions under which a regular fibration supports the principle of function comprehension (that each map corresponds to a unique function, viz. arrow in the base), thus lifting a basic theorem from regular categories (that Map(Rel(C)) = C), which was recently relativized to factorisation systems. The obtained results open an interesting possibility of extending the P-set construction from triposes to non-posetal fibrations.
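For reference, the standard relational reading of maps used here: writing ; for composition in diagrammatic order and R° for the converse of a relation R : A → B,

\[
  \mathrm{id}_A \;\subseteq\; R \,; R^{\circ} \quad\text{(totality)},
  \qquad
  R^{\circ} ; R \;\subseteq\; \mathrm{id}_B \quad\text{(single-valuedness)};
\]

the conversion rule discussed above relates the proofs witnessing these two inclusions.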


1995

 

A categorical setting for the 4-Colour Theorem

The 4-Colour Theorem was proved in the late seventies, after more than a century of fruitless efforts. But the proof has provided very little new information about the map colouring itself. While trying to understand this phenomenon, we analyze colouring in terms of universal properties and adjoint functors.

It is well known that the 4-colouring of maps is equivalent to the 3-colouring of the edges of some graphs. We show that every slice of the category of 3-coloured graphs is a topos. The forgetful functor to the category of graphs is cotripleable; every loop-free graph is covered by a 3-coloured one in a universal way. In this context, the 4-Colour Theorem becomes a statement about the existence of a coalgebra structure on graphs.

In a sense, this approach seems complementary to the known combinatorial colouring procedures.

 

On completeness and cocompleteness in and around small categories

The simple connection of completeness and cocompleteness of lattices grows in categories into the Adjoint Functor Theorem. The even simpler connection of completeness and cocompleteness of Boolean algebras is similarly related to Paré’s Theorem for toposes. We explain these relations and then study the fibrational versions of both theorems, for small complete categories. They can be interpreted as definability results in logic with proofs-as-constructions, and transferred into type theory.

Maps I: relative to a factorisation system

Originally, the categorical calculus of relations was developed using the canonical factorisations in regular categories. More recently, relations restricted to proper factorisation systems have been studied by several authors. In the present paper, we consider the general situation, where relations are induced by an arbitrary stable factorisation. This extension of the calculus of relations is necessary for the categorical development of strongly constructivist logic, and of semantics of computation, where non-monic relations come about naturally.

In this setting we analyze the correspondence of maps, i.e. the total, single-valued relations, and functions, as given by the arrows of the underlying category.


1993

Chu I: cofree equivalences, dualities and *-autonomous categories

We study three comonads derived from the comma construction. The induced coalgebras correspond to the three concepts displayed in the title of the paper. The comonad that yields the cofree *-autonomous categories is, in essence, the Chu construction, which has recently awakened much interest in computer science. We describe its couniversal property. It is right adjoint to the inclusion of *-autonomous categories among autonomous categories, with lax structure-preserving morphisms. Moreover, this inclusion turns out to be comonadic: *-autonomous categories are exactly the Chu-coalgebras.
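For orientation, the basic set-based instance of the construction, rather than the comma-based 2-categorical version studied in the paper: for a set K, the category Chu(Set, K) has as objects the triples (A, r, X) with r : A × X → K, and as morphisms (A, r, X) → (B, s, Y) the pairs of functions f : A → B and g : Y → X satisfying

\[
  s(f(a), y) \;=\; r(a, g(y)) \qquad \text{for all } a \in A,\ y \in Y.
\]

The duality simply swaps the two components of an object, which is what makes the construction *-autonomous.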


1992

On the structure of paradoxes

Paradox is a logical phenomenon. Usually, it is produced in type theory, over a type of “truth values”. A formula p is constructed, as a term p : Prop, such that p is equivalent to its negation, whereupon everything can be proved. We describe a general pattern which many constructions of the formula follow: for example, the well known arguments of Cantor, Russell, and Gödel. We generalize their common underlying structure, and prove that Reynolds’ construction of a type isomorphic to its second power type cannot, contrary to his conjecture, be extended to give a fixed point of every variable type derived from exponentiation. In particular, for contravariant types, such a fixed point would lead to a paradox.
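The pattern can be recalled in its Lawvere-style diagonal form, which Cantor’s and Russell’s arguments instantiate: given an “enumeration” e : A → (A → Prop) and any operation n : Prop → Prop, the predicate d(a) = n(e(a)(a)) can be of the form e(a₀) only if n has a fixed point, since then

\[
  e(a_0)(a_0) \;=\; d(a_0) \;=\; n\big(e(a_0)(a_0)\big).
\]

Taking n to be negation yields a proposition equivalent to its own negation, i.e. a paradox.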


1991

Constructions and Predicates

In this paper, Coquand and Huet’s Theory of Constructions is reinterpreted as a type theory of “sets” and “predicates”. Following some set-theoretical intuitions, this popular type system is modified at two points: (1) a simple new operation is added – to represent a constructive version of the comprehension principle; (2) a restriction on contexts is imposed – “sets” must not depend on “proofs” of “predicates”. The resulting theory is called Theory of Predicates. Sufficiently constructive arguments from naive set theory can be directly written down in it. On the other hand, modification (2) is relevant from a computational point of view, since it corresponds to a necessary condition of the modular approach to programming.

Our main result shows that, despite (2), Theory of Predicates is as expressive as Theory of Constructions: the type constructors preempted by (2) are recovered in an isomorphic form using (1). In fact, Theory of Constructions is equivalent to a quotient of Theory of Predicates.


1990

Categorical Interpolation: Descent and the Beck-Chevalley Condition without Direct Images

Fibred categories were introduced by Grothendieck (1959, 1970) as the setting for his theory of descent. The present paper contains (in section 4) a characterisation of the effective descent morphisms under an arbitrary fibred category. This essentially geometric result complements a logical analysis of the Beck-Chevalley property (presented in section 1). This property was crucial in the well-known theorem on sufficient conditions for descent under bifibrations, due to Bénabou and Roubaud (1970) and Beck (unpublished). We describe the notion of interpolant as the common denominator of the concepts of descent and the Beck-Chevalley property.
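For reference, the property in its usual quantifier form, which the paper analyzes in greater generality: given a pullback square with projections p : A ×_C B → A and q : A ×_C B → B of f : A → C and g : B → C, the canonical comparison

\[
  \exists_{p}\, q^{*}\varphi \;\longrightarrow\; f^{*}\, \exists_{g}\,\varphi
\]

is required to be an isomorphism; logically, substitution along f commutes with existential quantification along g.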

 

Predicates and Fibrations

(thesis, 18MB scan)

The commonplace simplification of constructivist logic as a logic without the excluded middle conceals a more authentic idea of constructivism: that proofs are constructive functions, while the quantifiers should be sums and products. An exact realisation of this idea requires a very special mathematical setting. Expressed categorically, the idea is that there is a small category (of “propositions” and “proofs”), with all small sums and products (as “quantifiers”). In ordinary category theory (or over a Grothendieck topos) such a category must be a preorder, and there can be at most one “proof” leading from one “proposition” to another. In Hyland’s Effective Topos, however, a nondegenerate small complete category has recently been discovered. It can be regarded as the first mathematical model of logic with constructive proofs. On the other hand, a significant impulse to the formal development of such logics came from Computer Science, especially through the work of Coquand, Huet and their collaborators on the Calculus of Constructions.

In this thesis, we consider two mathematical formulations of constructivist logic: a type-theoretical one and a category-theoretical one. In the end, the former is completely interpreted in the latter. The purpose of such a connection is to yield a characterisation of type-theoretic structures by categorical properties.