Title: Higher Order Automatic Differentiation of Higher Order Functions

URL Source: https://arxiv.org/html/2101.06757


License: CC BY 4.0. arXiv:2101.06757v8 [cs.PL] 21 Jan 2025

Higher Order Automatic Differentiation of Higher Order Functions

Mathieu Huot (University of Oxford), Sam Staton (University of Oxford), Matthijs Vákár (Utrecht University)

Abstract.

We present semantic correctness proofs of automatic differentiation (AD). We consider a forward-mode AD method on a higher order language with algebraic data types, and we characterise it as the unique structure preserving macro given a choice of derivatives for basic operations. We describe a rich semantics for differentiable programming, based on diffeological spaces. We show that it interprets our language, and we phrase what it means for the AD method to be correct with respect to this semantics. We show that our characterisation of AD gives rise to an elegant semantic proof of its correctness based on a gluing construction on diffeological spaces. We explain how this is, in essence, a logical relations argument. Throughout, we show how the analysis extends to AD methods for computing higher order derivatives using a Taylor approximation.

Key words and phrases: automatic differentiation, software correctness, denotational semantics. The authors contributed equally to this work.

1.Introduction

Automatic differentiation (AD), loosely speaking, is the process of taking a program describing a function, and constructing the derivative of that function by applying the chain rule across the program code. As gradients play a central role in many aspects of machine learning, so too do automatic differentiation systems such as TensorFlow [AAB+16], PyTorch [PGC+17] or Stan [CHB+15].

(Figure 1 shows a commuting square: automatic differentiation maps programs to programs, denotational semantics maps programs to differential geometry, and mathematical differentiation acts on differential geometry.)

Figure 1.Overview of semantics/correctness of AD.

Differentiation has a well-developed mathematical theory in terms of differential geometry. The aim of this paper is to formalize this connection between differential geometry and the syntactic operations of AD, particularly for AD methods that calculate higher order derivatives. In this way we achieve two things: (1) a compositional, denotational understanding of differentiable programming and AD; (2) an explanation of the correctness of AD.

This intuitive correspondence (summarized in Fig. 1) is in fact rather complicated. In this paper, we focus on resolving the following problem: higher order functions play a key role in programming, and yet they have no counterpart in traditional differential geometry. Moreover, we resolve this problem while retaining the compositionality of denotational semantics.

1.0.1.Higher order functions and differentiation.

A major application of higher order functions is to support disciplined code reuse. The need for code reuse is particularly acute in machine learning. For example, a multi-layer neural network might be built of millions of near-identical neurons, as follows.

neuronₙ : (𝐫𝐞𝐚𝐥ⁿ ∗ (𝐫𝐞𝐚𝐥ⁿ ∗ 𝐫𝐞𝐚𝐥)) → 𝐫𝐞𝐚𝐥
neuronₙ ≝ 𝜆⟨𝑥, ⟨𝑤, 𝑏⟩⟩. 𝜍(𝑤 ⋅ 𝑥 + 𝑏)

layerₙ : ((𝜏₁ ∗ 𝑃) → 𝜏₂) → (𝜏₁ ∗ 𝑃ⁿ) → 𝜏₂ⁿ
layerₙ ≝ 𝜆𝑓. 𝜆⟨𝑥, ⟨𝑝₁, …, 𝑝ₙ⟩⟩. ⟨𝑓⟨𝑥, 𝑝₁⟩, …, 𝑓⟨𝑥, 𝑝ₙ⟩⟩

comp : (((𝜏₁ ∗ 𝑃) → 𝜏₂) ∗ ((𝜏₂ ∗ 𝑄) → 𝜏₃)) → (𝜏₁ ∗ (𝑃 ∗ 𝑄)) → 𝜏₃
comp ≝ 𝜆⟨𝑓, 𝑔⟩. 𝜆⟨𝑥, ⟨𝑝, 𝑞⟩⟩. 𝑔⟨𝑓⟨𝑥, 𝑝⟩, 𝑞⟩

(Here 𝜍(𝑥) ≝ 1/(1 + 𝑒^(−𝑥)) is the sigmoid function, whose graph rises smoothly from 0 to 1.) We can use these functions to build a network as follows (see also Fig. 2):

comp⟨layerₘ(neuronₖ), comp⟨layerₙ(neuronₘ), neuronₙ⟩⟩ : (𝐫𝐞𝐚𝐥ᵏ ∗ 𝑃) → 𝐫𝐞𝐚𝐥   (1)

Figure 2.The network in (1) with 𝑘 inputs and two hidden layers.

Here 𝑃 ≅ 𝐫𝐞𝐚𝐥ᵖ with 𝑝 = 𝑚(𝑘 + 1) + 𝑛(𝑚 + 1) + 𝑛 + 1. This program (1) describes a smooth (infinitely differentiable) function. The goal of automatic differentiation is to find its derivative.

If we 𝛽 -reduce all the 𝜆 ’s, we end up with a very long function expression just built from the sigmoid function and linear algebra. We can then find a program for calculating its derivative by applying the chain rule. However, automatic differentiation can also be expressed without first 𝛽 -reducing, in a compositional way, by explaining how higher order functions like ( layer ) and ( comp ) propagate derivatives. This paper is a semantic analysis of this compositional approach.
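The combinators above can be sketched in ordinary Python, where `layer` and `comp` are higher order functions; this is an illustrative transcription of the paper's λ-terms, not part of its formal development, and the parameter layout follows the tuple types given above.

```python
import math

def sigmoid(x):
    # The sigmoid ς(x) = 1 / (1 + e^(-x))
    return 1.0 / (1.0 + math.exp(-x))

def neuron(args):
    # neuron : (real^n * (real^n * real)) -> real
    x, (w, b) = args
    return sigmoid(sum(xi * wi for xi, wi in zip(x, w)) + b)

def layer(f):
    # layer : ((t1 * P) -> t2) -> (t1 * P^n) -> t2^n
    # applies f to the shared input x paired with each parameter block p
    return lambda args: tuple(f((args[0], p)) for p in args[1])

def comp(fg):
    # comp : (((t1 * P) -> t2) * ((t2 * Q) -> t3)) -> (t1 * (P * Q)) -> t3
    f, g = fg
    return lambda args: g((f((args[0], args[1][0])), args[1][1]))

# A small instance of (1): comp<layer_m(neuron_k), neuron_m> with k = m = 2
net = comp((layer(neuron), neuron))
params = ((((1.0, -1.0), 0.0), ((0.5, 0.5), 0.1)),  # hidden layer: 2 neurons, each ((w1, w2), b)
          ((1.0, 1.0), -0.5))                       # output neuron: ((w1, w2), b)
y = net(((0.3, 0.7), params))  # a real number in (0, 1)
```

Note how the network is assembled entirely by passing `neuron` to the higher order functions `layer` and `comp`, exactly the style of code reuse the text describes.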

The general idea of denotational semantics is to interpret types as spaces and programs as functions between the spaces. In this paper, we propose to use diffeological spaces and smooth functions [Sou80, IZ13] to this end. These satisfy the following three desiderata:

ℝ is a space, and the smooth functions ℝ → ℝ are exactly the functions that are infinitely differentiable;

The set of smooth functions 𝑋 → 𝑌 between spaces again forms a space, so we can interpret function types.

The disjoint union of a sequence of spaces again forms a space, and this enables us to interpret variant types and inductive types; e.g. lists of reals form the space ⨄_{𝑖=0}^{∞} ℝⁱ.

We emphasise that the most standard formulation of differential geometry, using manifolds, does not support spaces of functions. Diffeological spaces seem to us the simplest notion of space that satisfies these conditions, but there are other candidates [BH11, Sta11]. A diffeological space is in particular a set 𝑋 equipped with a chosen set of curves 𝐶_𝑋 ⊆ 𝑋^ℝ, and a smooth map 𝑓 : 𝑋 → 𝑌 must be such that if 𝛾 ∈ 𝐶_𝑋 then 𝛾;𝑓 ∈ 𝐶_𝑌. This is reminiscent of the method of logical relations.

1.0.2.From smoothness to automatic derivatives at higher types.

Our denotational semantics in diffeological spaces guarantees that all definable functions are smooth. But we need more than just to know that a definable function happens to have a mathematical derivative: we need to be able to find that derivative.

In this paper we focus on forward mode automatic differentiation methods for computing higher derivatives, which are macro translations on syntax (called  𝒟 → in Section 3). We are able to show that they are correct, using our denotational semantics.

Here there is one subtle point that is central to our development. Although differential geometry provides established derivatives for first order functions (such as neuron above), there is no canonical notion of derivative for higher order functions (such as layer and comp) in the theory of diffeological spaces (e.g. [CW14]). We propose a new way to resolve this, by interpreting types as triples (𝑋, 𝑋′, 𝑆) where, intuitively, 𝑋 is a space of inhabitants of the type, 𝑋′ is a space serving as a chosen bundle of tangents (or jets, in the case of higher order derivatives) over 𝑋, and 𝑆 ⊆ 𝑋^ℝ × (𝑋′)^ℝ is a binary relation between curves, informally relating curves in 𝑋 with their tangent (resp. jet) curves in 𝑋′. This new model gives a denotational semantics for higher order automatic differentiation on a language with higher order functions.

In Section 4 we boil this new approach down to a straightforward and elementary logical relations argument for the correctness of higher order automatic differentiation. The approach is explained in detail in Section 6. We explore some subtleties of non-uniqueness of derivatives of higher order functions in Section 7.

1.0.3.Related work and context.

AD has a long history and has many implementations. AD was perhaps first phrased in a functional setting in [PS08], and there are now a number of teams working on AD in the functional setting (e.g. [WWE+19, SFVPJ19, Ell18]), some providing efficient implementations. Although that work does not involve formal semantics, it is inspired by intuitions from differential geometry and category theory.

This paper adds to a very recent body of work on verified automatic differentiation. In the first order setting, there are recent accounts based on denotational semantics in manifolds [FST19, LYRY20] and based on synthetic differential geometry [CGM19], work making a categorical abstraction [CCG+20] and work connecting operational semantics with denotational semantics [AP20, Plo18], as well as work focussing on how to correctly differentiate programs that operate on tensors [BML+20] and programs that make use of quantum computing [ZHCW20]. Recently there has also been significant progress at higher types. The work of Brunel et al. [BMP20] and Mazza and Pagani [MP21] give formal correctness proofs for reverse-mode derivatives on a linear 𝜆 -calculus with a particular operational semantics. The work of Barthe et al. [BCLG20] provides a general discussion of some new syntactic logical relations arguments including one very similar to our syntactic proof of Theorem 3. Sherman et al. [SMC20] discuss a differential programming technique that works at higher types, based on exact real arithmetic and relate it to a computable semantics. We understand that the authors of [CGM19] are working on higher types. Vákár [Vák21, VS21, LNV21] phrase and prove correct a reverse mode AD technique on a higher order language based on a similar gluing technique. Vákár [Vák20] extends a standard 𝜆 -calculus with type recursion, and proves correct a forward-mode AD on such a higher-order language, also using a gluing argument.

The differential 𝜆 -calculus [ER03] is related to AD, and explicit connections are made in [MO20, Man12]. One difference is that the differential 𝜆 -calculus allows the addition of terms at all types, and hence vector space models are suitable to interpret all types. This choice would appear peculiar with the variant and inductive types that we consider here, as the dimension of a disjoint union of spaces is only defined locally.

This paper builds on our previous work [HSV20a, Vák20] in which we gave denotational correctness proofs for forward mode AD algorithms for computing first derivatives. Here, we explain how these techniques extend to methods that calculate higher derivatives.

The Faà di Bruno construction has also been investigated [CS11] in the context of Cartesian differential categories.

The idea of directly calculating higher order derivatives by using automatic differentiation methods that work with Taylor approximations (also known as jets in differential geometry) is well-known [GUW00] and it has recently gained renewed interest [Bet18, BJD19]. So far, such “Taylor-mode AD” methods have only been applied to first order functional languages, however. This paper shows how to extend these higher order AD methods to languages with support for higher order functions and algebraic data types.

The two main methods for implementing AD are operator overloading and, the method used in this paper, source code transformation [VMBBL18]. Taylor-mode AD has been seen to be significantly faster than iterated AD in the context of operator overloading [BJD19] in JAX [FJL18]. There are other notable implementations of forward Taylor-mode [BS96, BS97, Kar01, PS07, WGP16], some of them in a functional language [Kar01, PS07]. Taylor-mode implementations use the rich algebraic structure of derivatives to avoid and share much of the redundant computation that occurs with iterated first order methods. Perhaps the simplest example is the sin function, whose iterated derivatives only involve sin, cos, and negation. Importantly, most AD tools have the right complexity up to a constant factor, but this constant is quite important in practice, and Taylor-mode helps achieve better performance. Another striking result of a version of Taylor-mode was achieved in [LMG18], where a performance gain of up to two orders of magnitude was obtained for computing certain Hessian-vector products using Ricci calculus. In essence, the algorithm used is a mixed-mode that is derived via jets in [Bet18]. This is further improved in [LMG20]. Taylor-mode can also be useful for ODE solvers and hence will be important for neural differential equations [CRBD18].

Finally, we emphasise that we have chosen the neural network (1) as our running example mainly for its simplicity. Indeed one would typically use reverse-mode AD to train neural networks in practice. There are many other examples of AD outside the neural networks literature: AD is useful whenever derivatives need to be calculated on high dimensional spaces. This includes optimization problems more generally, where the derivative is passed to a gradient descent method (e.g. [RM51, KW+52, Qia99, KB14, DHS11, LN89]). Optimization problems involving higher order functions naturally show up in the calculus of variations and its applications in physics, where one typically looks for a function minimizing a certain integral [GSS00]. Other applications of AD are in advanced integration methods, since derivatives play a role in Hamiltonian Monte Carlo [Nea11, HG14] and variational inference [KTR+17]. Second order methods for gradient descent have also been extensively studied. As the basic second order Newton method requires inverting a high dimensional Hessian matrix, several alternatives and approximations have been studied. Some of them still require Taylor-like modes of differentiation and a matrix-vector product where the matrix resembles the Hessian or inverse Hessian [KK04, Mar10, Ama12].

1.0.4.Summary of contributions.

We have provided a semantic analysis of higher order automatic differentiation. Our syntactic starting point is a family of higher order forward-mode AD macros on a typed higher order language, extending their well-known first order equivalents (e.g. [SFVPJ19, WWE+19, HSV20a]). We present these in Section 3 for function types, and in Section 5 we extend them to inductive types and variants. The main contributions of this paper are as follows.

We give a denotational semantics for the language in diffeological spaces, showing that every definable expression is smooth (Section 4).

We show correctness of the higher order AD macros by a logical relations argument (Th. 3).

We give a categorical analysis of this correctness argument with two parts: a universal property satisfied by the macro in terms of syntactic categories, and a new notion of glued space that abstracts the logical relation (Section 6).

We then use this analysis to state and prove a correctness argument at all first order types (Th. 8).

Relation to previous work

This paper extends and develops the paper [HSV20a] presented at the 23rd International Conference on Foundations of Software Science and Computation Structure (FoSSaCS 2020). This version includes numerous elaborations, notably the extension of the definition, semantics and correctness of automatic differentiation methods for computing higher order derivatives (introduced in Sections 2.2–2.4) and a novel discussion about derivatives of higher-order functions (Section 7).

2.Rudiments of differentiation: how to calculate with dual numbers and Taylor approximations

2.1.First order differentiation: the chain rule and dual numbers.

We will now recall the definition of the gradient of a differentiable function, the goal of AD, and what it means for AD to be correct. Recall that the derivative of a function 𝑓 : ℝ → ℝ, if it exists, is a function ∇𝑓 : ℝ → ℝ such that for all 𝑎, ∇𝑓(𝑎) is the gradient of 𝑓 at 𝑎 in the sense that the function 𝑥 ↦ 𝑓(𝑎) + ∇𝑓(𝑎) ⋅ (𝑥 − 𝑎) gives the best linear approximation of 𝑓 at 𝑎. (The gradient ∇𝑓(𝑎) is often written d𝑓(𝑥)/d𝑥 (𝑎).)

The chain rule for differentiation tells us that we can calculate ∇(𝑓;𝑔)(𝑎) = ∇𝑓(𝑎) ⋅ ∇𝑔(𝑓(𝑎)). In that sense, the chain rule tells us how linear approximations to a function transform under post-composition with another function.

To find ∇ 𝑓 in a compositional way, using the chain rule, two generalizations are reasonable:

We need both 𝑓 and ∇ 𝑓 when calculating ∇ ( 𝑓 ; 𝑔 ) of a composition 𝑓 ; 𝑔 , using the chain rule, so we are really interested in the pair ( 𝑓 , ∇ 𝑓 ) : ℝ → ℝ × ℝ ;

In building 𝑓 we will need to consider functions of multiple arguments, such as + : ℝ 2 → ℝ , and these functions should propagate derivatives.

Thus we are more generally interested in transforming a function 𝑔 : ℝⁿ → ℝ into a function ℎ : (ℝ × ℝ)ⁿ → ℝ × ℝ in such a way that for any 𝑓₁, …, 𝑓ₙ : ℝ → ℝ,

(𝑓₁, ∇𝑓₁, …, 𝑓ₙ, ∇𝑓ₙ) ; ℎ = ((𝑓₁, …, 𝑓ₙ) ; 𝑔, ∇((𝑓₁, …, 𝑓ₙ) ; 𝑔)).   (2)

Computing automatically the program representing ℎ , given a program representing 𝑔 , is the goal of automatic differentiation. An intuition for ℎ is often given in terms of dual numbers. The transformed function operates on pairs of numbers, ( 𝑥 , 𝑥 ′ ) , and it is common to think of such a pair as 𝑥 + 𝑥 ′ ⁢ 𝜖 for an ‘infinitesimal’ 𝜖 . But while this is a helpful intuition, the formalization of infinitesimals can be intricate, and the development in this paper is focussed on the elementary formulation in (2).

A function ℎ satisfying (2) encodes all the partial derivatives of 𝑔. For example, if 𝑔 : ℝ² → ℝ, then with 𝑓₁(𝑥) ≝ 𝑥 and 𝑓₂(𝑥) ≝ 𝑥₂ (a constant), by applying (2) at 𝑥₁ we obtain ℎ(𝑥₁, 1, 𝑥₂, 0) = (𝑔(𝑥₁, 𝑥₂), ∂𝑔(𝑥, 𝑥₂)/∂𝑥 (𝑥₁)) and similarly ℎ(𝑥₁, 0, 𝑥₂, 1) = (𝑔(𝑥₁, 𝑥₂), ∂𝑔(𝑥₁, 𝑥)/∂𝑥 (𝑥₂)). And conversely, if 𝑔 is differentiable in each argument, then a unique ℎ satisfying (2) can be found by taking linear combinations of partial derivatives, for example:

ℎ(𝑥₁, 𝑥₁′, 𝑥₂, 𝑥₂′) = (𝑔(𝑥₁, 𝑥₂), 𝑥₁′ ⋅ ∂𝑔(𝑥, 𝑥₂)/∂𝑥 (𝑥₁) + 𝑥₂′ ⋅ ∂𝑔(𝑥₁, 𝑥)/∂𝑥 (𝑥₂)).

(Here, recall that the partial derivative ∂𝑔(𝑥, 𝑥₂)/∂𝑥 (𝑥₁) is a particular notation for the gradient ∇(𝑔(−, 𝑥₂))(𝑥₁), i.e. with 𝑥₂ fixed.)

In summary, the idea of differentiation with dual numbers is to transform a differentiable function 𝑔 : ℝ 𝑛 → ℝ to a function ℎ : ℝ 2 ⁢ 𝑛 → ℝ 2 which captures  𝑔 and all its partial derivatives. We packaged this up in (2) as an invariant which is useful for building derivatives of compound functions ℝ → ℝ in a compositional way. The idea of (first order) forward mode automatic differentiation is to perform this transformation at the source code level.
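The invariant (2) can be demonstrated with a small operator-overloading dual number class; this is a standard illustration (the paper itself performs the transformation at the source code level rather than by overloading), and the `sigmoid` primitive is chosen to match the running example.

```python
import math

class Dual:
    """A pair (x, x') of a value and a derivative, propagated as in (2)."""
    def __init__(self, x, dx=0.0):
        self.x, self.dx = x, dx
    def __add__(self, other):
        # sum rule: (f + g)' = f' + g'
        return Dual(self.x + other.x, self.dx + other.dx)
    def __mul__(self, other):
        # product rule: (f g)' = f' g + f g'
        return Dual(self.x * other.x, self.dx * other.x + self.x * other.dx)

def sigmoid(d):
    s = 1.0 / (1.0 + math.exp(-d.x))
    return Dual(s, d.dx * s * (1.0 - s))  # chain rule, using ς' = ς(1 - ς)

# h for g(x1, x2) = ς(x1 * x2): seeding dx = 1 on x1 yields the partial
# derivative in x1, exactly as in h(x1, 1, x2, 0) above.
r = sigmoid(Dual(2.0, 1.0) * Dual(3.0, 0.0))
# r.x = ς(6) and r.dx = 3 · ς(6) · (1 − ς(6))
```

Each operation consumes pairs and produces a pair, so the composite automatically satisfies (2) by construction.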

We say that a macro for AD is correct if, given a semantic model ⟦−⟧, any program 𝑃 representing 𝑔 = ⟦𝑃⟧ is transformed by the macro to a program 𝑃′ representing ℎ = ⟦𝑃′⟧. This means in particular that 𝑃′ computes correct partial derivatives of (the function represented by) 𝑃.

Smooth functions.

In what follows we will often speak of smooth functions ℝ 𝑘 → ℝ , which are functions that are continuous and differentiable, such that their derivatives are also continuous and differentiable, and so on.

2.2.Higher order differentiation: the Faà di Bruno formula and Taylor approximations.

We now generalize the above in two directions:

We look for the best local approximations to 𝑓 by polynomials of some order 𝑅, generalizing the above use of linear functions (𝑅 = 1).

We work directly with multivariate functions ℝᵏ → ℝ instead of functions of one variable ℝ → ℝ (𝑘 = 1).

To make this precise, we recall that, given a smooth function 𝑓 : ℝᵏ → ℝ and a natural number 𝑅 ≥ 0, the 𝑅-th order Taylor approximation of 𝑓 at 𝑎 ∈ ℝᵏ is the function ℝᵏ → ℝ defined in terms of the partial derivatives of 𝑓:

$$x \mapsto \sum_{\{(\alpha_1,\ldots,\alpha_k)\in\mathbb{N}^k \,\mid\, \alpha_1+\ldots+\alpha_k\le R\}} \frac{1}{\alpha_1!\cdots\alpha_k!}\,\frac{\partial^{\alpha_1+\ldots+\alpha_k} f}{\partial x_1^{\alpha_1}\cdots\partial x_k^{\alpha_k}}(a)\cdot(x_1-a_1)^{\alpha_1}\cdots(x_k-a_k)^{\alpha_k}.$$

This is an 𝑅 -th order polynomial. Similarly to the case of first order derivatives, we can recover the partial derivatives of 𝑓 up to the 𝑅 -th order from its Taylor approximation by evaluating the series at basis vectors. See Section 2.3 below for an example.

Recall that the ordering of partial derivatives does not matter for smooth functions (Schwarz/Clairaut's theorem). So there will be $\binom{R+k-1}{k-1}$ distinct 𝑅-th order partial derivatives, and altogether there are $\binom{R+k}{k}$ summands in the 𝑅-th order Taylor approximation. (This can be seen by a 'stars-and-bars' argument.)

Since there are $\binom{R+k}{k}$ partial derivatives of 𝑓 of order ≤ 𝑅, we can store them in the Euclidean space $\mathbb{R}^{\binom{R+k}{k}}$, which can also be regarded as the space of 𝑘-variate polynomials of degree ≤ 𝑅.

We use coordinates $(y_{\alpha_1\ldots\alpha_k}\in\mathbb{R})$ indexed by the multi-indices $(\alpha_1,\ldots,\alpha_k)\in\mathbb{N}^k$ with $0\le\alpha_1+\ldots+\alpha_k\le R$, where $y_{\alpha_1\ldots\alpha_k}$ is intended to represent a partial derivative $\frac{\partial^{\alpha_1+\ldots+\alpha_k} f}{\partial x_1^{\alpha_1}\cdots\partial x_k^{\alpha_k}}(a)$ for some function $f:\mathbb{R}^k\to\mathbb{R}$. We will choose these coordinates in lexicographic order of the multi-indices $(\alpha_1,\ldots,\alpha_k)$, so that the indices into the Euclidean space $\mathbb{R}^{\binom{R+k}{k}}$ typically range from $(0,\ldots,0)$ to $(R,0,\ldots,0)$.

The (𝑘,𝑅)-Taylor representation of a function 𝑔 : ℝⁿ → ℝ is a function $h : (\mathbb{R}^{\binom{R+k}{k}})^n \to \mathbb{R}^{\binom{R+k}{k}}$ that transforms the partial derivatives of order ≤ 𝑅 of 𝑓 : ℝᵏ → ℝⁿ under postcomposition with 𝑔:

$$\Big(\Big(\frac{\partial^{\alpha_1+\ldots+\alpha_k} f_j}{\partial x_1^{\alpha_1}\cdots\partial x_k^{\alpha_k}}\Big)_{(\alpha_1,\ldots,\alpha_k)=(0,\ldots,0)}^{(R,0,\ldots,0)}\Big)_{j=1}^{n}\, ;\, h \;=\; \Big(\frac{\partial^{\alpha_1+\ldots+\alpha_k}\,((f_1,\ldots,f_n);g)}{\partial x_1^{\alpha_1}\cdots\partial x_k^{\alpha_k}}\Big)_{(\alpha_1,\ldots,\alpha_k)=(0,\ldots,0)}^{(R,0,\ldots,0)}. \qquad (3)$$

Thus the Taylor representation generalizes the dual numbers representation (𝑅 = 𝑘 = 1).

To explicitly calculate the Taylor representation for a smooth function, we recall a generalization of the chain rule to higher derivatives. The chain rule tells us how the coefficients of linear approximations transform under composition of the functions. The Faà di Bruno formula [Sav06, EM03, CS96] tells us how coefficients of Taylor approximations – that is, higher derivatives – transform under composition. We recall the multivariate form from [Sav06, Theorem 2.1]. Given functions $f = (f_1,\ldots,f_l) : \mathbb{R}^k\to\mathbb{R}^l$ and $g:\mathbb{R}^l\to\mathbb{R}$, for $\alpha_1+\ldots+\alpha_k\neq 0$,

$$\frac{\partial^{\alpha_1+\ldots+\alpha_k}(f;g)}{\partial x_1^{\alpha_1}\cdots\partial x_k^{\alpha_k}}(a) \;=\; \alpha_1!\cdots\alpha_k!\cdot\!\!\sum_{\{(\beta_1,\ldots,\beta_l)\in\mathbb{N}^l \,\mid\, 1\le\beta_1+\ldots+\beta_l\le\alpha_1+\ldots+\alpha_k\}}\!\!\frac{\partial^{\beta_1+\ldots+\beta_l} g}{\partial y_1^{\beta_1}\cdots\partial y_l^{\beta_l}}(f(a))\;\cdot$$

$$\sum_{\substack{((e^1_1,\ldots,e^1_l),\ldots,(e^q_1,\ldots,e^q_l))\in(\mathbb{N}^l)^q \\ e^1_j+\ldots+e^q_j=\beta_j,\;\; (e^1_1+\ldots+e^1_l)\cdot\alpha^1_i+\ldots+(e^q_1+\ldots+e^q_l)\cdot\alpha^q_i=\alpha_i}}\;\prod_{r=1}^{q}\prod_{j=1}^{l}\frac{1}{e^r_j!}\Big(\frac{1}{\alpha^r_1!\cdots\alpha^r_k!}\,\frac{\partial^{\alpha^r_1+\cdots+\alpha^r_k} f_j}{\partial x_1^{\alpha^r_1}\cdots\partial x_k^{\alpha^r_k}}(a)\Big)^{e^r_j},$$

where $(\alpha^1_1,\ldots,\alpha^1_k),\ldots,(\alpha^q_1,\ldots,\alpha^q_k)\in\mathbb{N}^k$ are an enumeration of all the vectors $(\alpha^r_1,\ldots,\alpha^r_k)$ of 𝑘 natural numbers such that $\alpha^r_j\le\alpha_j$ and $\alpha^r_1+\ldots+\alpha^r_k\neq 0$, and we write 𝑞 for the number of such vectors. The details of this formula reflect the complicated combinatorics that arise from repeated applications of the chain and product rules for differentiation that one uses to prove it. Conceptually, however, it is rather straightforward: it tells us that the coefficients of the 𝑅-th order Taylor approximation of 𝑓;𝑔 can be expressed exclusively in terms of those of 𝑓 and 𝑔.

Thus the Faà di Bruno formula uniquely determines the Taylor representation $h : (\mathbb{R}^{\binom{R+k}{k}})^n \to \mathbb{R}^{\binom{R+k}{k}}$ in terms of the derivatives of 𝑔 : ℝⁿ → ℝ of order ≤ 𝑅, and we can also recover all such derivatives from ℎ.

2.3.Example: a two-dimensional second order Taylor series

As an example, we can specialize the Faà di Bruno formula above to the second order Taylor series of a function $f:\mathbb{R}^2\to\mathbb{R}^l$ and its behaviour under postcomposition with a smooth function $g:\mathbb{R}^l\to\mathbb{R}$:

$$\frac{\partial^2 (f;g)}{\partial x_i\,\partial x_{i'}}(a) \;=\; \sum_{j=1}^{l}\frac{\partial g}{\partial y_j}(f(a))\,\frac{\partial^2 f_j}{\partial x_i\,\partial x_{i'}}(a) \;+\; \sum_{j,j'=1}^{l}\frac{\partial^2 g}{\partial y_j\,\partial y_{j'}}(f(a))\,\frac{\partial f_{j'}}{\partial x_i}(a)\,\frac{\partial f_j}{\partial x_{i'}}(a),$$

where $i, i' \in \{1, 2\}$ might either coincide or be distinct.

Rather than working with the full (2,2)-Taylor representation of 𝑔, we ignore the non-mixed second order derivatives $y^j_{20} = \frac{\partial^2 f_j}{\partial x_1^2}$ and $y^j_{02} = \frac{\partial^2 f_j}{\partial x_2^2}$ for the moment, and we represent the derivatives of order ≤ 2 of $f_j:\mathbb{R}^2\to\mathbb{R}$ (at some point 𝑎) as the numbers

$$(y^j_{00},\, y^j_{01},\, y^j_{10},\, y^j_{11}) \;=\; \Big(f_j(a),\; \frac{\partial f_j}{\partial x_2}(a),\; \frac{\partial f_j}{\partial x_1}(a),\; \frac{\partial^2 f_j}{\partial x_1\,\partial x_2}(a)\Big) \in \mathbb{R}^4$$

and we can choose a similar representation for the derivatives of (𝑓;𝑔). Then, we observe that the Faà di Bruno formula induces the function $h : (\mathbb{R}^4)^l \to \mathbb{R}^4$:

$$h((y^1_{00},y^1_{01},y^1_{10},y^1_{11}),\ldots,(y^l_{00},y^l_{01},y^l_{10},y^l_{11})) \;=\; \Big(g(y^1_{00},\ldots,y^l_{00}),\;\; \sum_{j=1}^{l}\frac{\partial g}{\partial y_j}(y^1_{00},\ldots,y^l_{00})\cdot y^j_{01},$$
$$\sum_{j=1}^{l}\frac{\partial g}{\partial y_j}(y^1_{00},\ldots,y^l_{00})\cdot y^j_{10},\;\; \sum_{j=1}^{l}\frac{\partial g}{\partial y_j}(y^1_{00},\ldots,y^l_{00})\cdot y^j_{11} \;+\; \sum_{j,j'=1}^{l}\frac{\partial^2 g}{\partial y_j\,\partial y_{j'}}(y^1_{00},\ldots,y^l_{00})\cdot y^j_{10}\cdot y^{j'}_{01}\Big).$$

In particular, we can note that

$$h((y^1_{00},y^1_{01},y^1_{10},0),\ldots,(y^l_{00},y^l_{01},y^l_{10},0)) \;=\; \Big(g(y^1_{00},\ldots,y^l_{00}),\;\; \sum_{j=1}^{l}\frac{\partial g}{\partial y_j}(y^1_{00},\ldots,y^l_{00})\cdot y^j_{01},$$
$$\sum_{j=1}^{l}\frac{\partial g}{\partial y_j}(y^1_{00},\ldots,y^l_{00})\cdot y^j_{10},\;\; \sum_{j,j'=1}^{l}\frac{\partial^2 g}{\partial y_j\,\partial y_{j'}}(y^1_{00},\ldots,y^l_{00})\cdot y^j_{10}\cdot y^{j'}_{01}\Big).$$

We can use this method to calculate any directional first and second order derivative of 𝑔 in one pass. For example, if 𝑙 = 3, so $g:\mathbb{R}^3\to\mathbb{R}$, then the last component of $h((x,x',x'',0),(y,y',y'',0),(z,z',z'',0))$ is the result of taking the first derivative in direction $(x',y',z')$ and the second derivative in direction $(x'',y'',z'')$, and evaluating at $(x,y,z)$.

In the proper Taylor representation we explicitly include the non-mixed second order derivatives as inputs and outputs, leading to a function $h' : (\mathbb{R}^6)^l \to \mathbb{R}^6$. Above we have followed a common trick to avoid some unnecessary storage and computation, since these extra inputs and outputs are not required for computing the second order derivatives of 𝑔. For instance, if 𝑙 = 2 then the last component of $h((x,1,1,0),(y,0,0,0))$ computes $\frac{\partial^2 g}{\partial x^2}(x,y)$.
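For the basic operation of multiplication, the induced ℎ on the quadruple representation above is just the product rule applied componentwise. The following is a sketch under the stated conventions (slots ordered as value, ∂₂, ∂₁, ∂₁∂₂), not code from the paper:

```python
def mul22(a, b):
    """h for multiplication in the trimmed (2,2)-representation.
    a = (f, d2 f, d1 f, d1 d2 f), likewise b, all evaluated at the same point."""
    a00, a01, a10, a11 = a
    b00, b01, b10, b11 = b
    return (a00 * b00,
            a01 * b00 + a00 * b01,                          # d2(f g)
            a10 * b00 + a00 * b10,                          # d1(f g)
            a11 * b00 + a10 * b01 + a01 * b10 + a00 * b11)  # d1 d2 (f g)

# g(x, y) = x * y at (x, y) = (2, 5): seed x with d1 = 1 and y with d2 = 1
x = (2.0, 0.0, 1.0, 0.0)
y = (5.0, 1.0, 0.0, 0.0)
out = mul22(x, y)
# out == (10.0, 2.0, 5.0, 1.0): the value x*y, then dg/dy = x, dg/dx = y, d2 g/dx dy = 1
```

All four partial derivatives of order ≤ 2 (excluding the trimmed non-mixed ones) come out of a single pass, as the text describes.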

2.4.Example: a one-dimensional second order Taylor series

As opposed to (2,2)-AD, (1,2)-AD computes the first and second order derivatives in the same direction. For example, if $g:\mathbb{R}^2\to\mathbb{R}$ is a smooth function, then $h : (\mathbb{R}^3)^2\to\mathbb{R}^3$. An intuition for ℎ can be given in terms of triple numbers. The transformed function operates on triples of numbers $(x, x', x'')$, and it is common to think of such a triple as $x + x'\epsilon + x''\epsilon^2$ for an 'infinitesimal' $\epsilon$ which has the property that $\epsilon^3 = 0$. For instance we have

$$h((x_1,1,0),(x_2,0,0)) = \Big(g(x_1,x_2),\; \frac{\partial g(x,x_2)}{\partial x}(x_1),\; \frac{\partial^2 g(x,x_2)}{\partial x^2}(x_1)\Big)$$

$$h((x_1,0,0),(x_2,1,0)) = \Big(g(x_1,x_2),\; \frac{\partial g(x_1,x)}{\partial x}(x_2),\; \frac{\partial^2 g(x_1,x)}{\partial x^2}(x_2)\Big)$$

$$h((x_1,1,0),(x_2,1,0)) = \Big(g(x_1,x_2),\; \frac{\partial g(x,x_2)}{\partial x}(x_1) + \frac{\partial g(x_1,x)}{\partial x}(x_2),\; \frac{\partial^2 g(x,x_2)}{\partial x^2}(x_1) + \frac{\partial^2 g(x_1,x)}{\partial x^2}(x_2) + 2\,\frac{\partial^2 g(x,y)}{\partial x\,\partial y}(x_1,x_2)\Big)$$

We see that we directly get the non-mixed second-order partial derivatives but not the mixed ones. We can recover $\frac{\partial^2 g}{\partial x\,\partial y}(x_1,x_2)$ as half the difference of the last components: $\frac{1}{2}\big(h((x_1,1,0),(x_2,1,0)) - h((x_1,1,0),(x_2,0,0)) - h((x_1,0,0),(x_2,1,0))\big)$.

More generally, if $g:\mathbb{R}^l\to\mathbb{R}$, then $h:(\mathbb{R}^3)^l\to\mathbb{R}^3$ satisfies:

$$h((x_1,x_1',0),\ldots,(x_l,x_l',0)) \;=\; \Big(g(x_1,\ldots,x_l),\;\; \sum_{i=1}^{l}\frac{\partial g}{\partial x_i}(x_1,\ldots,x_l)\cdot x_i',\;\; \sum_{i,j=1}^{l}\frac{\partial^2 g}{\partial x_i\,\partial x_j}(x_1,\ldots,x_l)\cdot x_i'\cdot x_j'\Big).$$

We can always recover the mixed second order partial derivatives from this, but doing so requires several evaluations of ℎ. In this respect the (1,2) method is less direct than the (2,2) method.
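The (1,2) method and the mixed-derivative recovery above can likewise be sketched with an overloaded triple number class (illustrative code, not from the paper; the third slot carries the full second derivative, matching the formulas above):

```python
class Triple:
    """(x, x', x''): value, first and second derivative along one direction."""
    def __init__(self, x, dx=0.0, ddx=0.0):
        self.x, self.dx, self.ddx = x, dx, ddx
    def __add__(self, other):
        return Triple(self.x + other.x, self.dx + other.dx, self.ddx + other.ddx)
    def __mul__(self, other):
        # (f g)'' = f'' g + 2 f' g' + f g''
        return Triple(self.x * other.x,
                      self.dx * other.x + self.x * other.dx,
                      self.ddx * other.x + 2.0 * self.dx * other.dx + self.x * other.ddx)

def g(x, y):  # g(x, y) = x * y, so the mixed derivative d2 g / dx dy is 1
    return x * y

x1, x2 = 2.0, 5.0
both  = g(Triple(x1, 1.0), Triple(x2, 1.0)).ddx  # second derivative along (1, 1)
only1 = g(Triple(x1, 1.0), Triple(x2)).ddx       # along (1, 0)
only2 = g(Triple(x1), Triple(x2, 1.0)).ddx       # along (0, 1)
mixed = 0.5 * (both - only1 - only2)
# mixed == 1.0, recovering the mixed partial as in the text
```

Three evaluations of ℎ are needed to recover one mixed partial, illustrating why this method is less direct than the (2,2) method.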

2.5.Remark

In the rest of this article, we study forward-mode (𝑘,𝑅)-automatic differentiation for a language with higher-order functions. The reader may like to fix 𝑘 = 𝑅 = 1 for standard automatic differentiation with first-order derivatives, based on dual numbers. This is the approach taken in the conference version of this paper [HSV20b]. But the generalization to higher-order derivatives with arbitrary 𝑘 and 𝑅 flows straightforwardly through the whole narrative.

3.A Higher Order Forward-Mode AD Translation

3.1.A simple language of smooth functions.

We consider a standard higher order typed language with a first order type 𝐫𝐞𝐚𝐥 of real numbers. The types ( 𝜏 , 𝜎 ) and terms ( 𝑡 , 𝑠 ) are as follows.

𝜏, 𝜎, 𝜌 ::= 𝐫𝐞𝐚𝐥 | (𝜏₁ ∗ … ∗ 𝜏ₙ) | 𝜏 → 𝜎
(types: real numbers; finite products; functions)

𝑡, 𝑠, 𝑟 ::= 𝑥 | 𝗈𝗉(𝑡₁, …, 𝑡ₙ) | ⟨𝑡₁, …, 𝑡ₙ⟩ | 𝐜𝐚𝐬𝐞 𝑡 𝐨𝐟 ⟨𝑥₁, …, 𝑥ₙ⟩ → 𝑠 | 𝜆𝑥.𝑡 | 𝑡 𝑠
(terms: variables; operations, including constants; tuples/pattern matching; function abstraction/application)

The typing rules are in Figure 3. We have included some abstract basic 𝑛 -ary operations 𝗈𝗉 ∈ 𝖮𝗉 𝑛 for every 𝑛 ∈ ℕ . These are intended to include the usual (smooth) mathematical operations that are used in programs to which automatic differentiation is applied. For example,

for any real constant 𝑐 ∈ ℝ, we typically include a constant 𝑐̄ ∈ 𝖮𝗉₀; we slightly abuse notation and will simply write 𝑐̄ for 𝑐̄() in our examples;

we include some unary operations such as 𝜍 ∈ 𝖮𝗉₁, which we intend to stand for the usual sigmoid function 𝜍(𝑥) ≝ 1/(1 + 𝑒^(−𝑥));

we include some binary operations such as addition and multiplication (+), (∗) ∈ 𝖮𝗉₂.

We add some simple syntactic sugar: 𝑡 − 𝑢 ≝ 𝑡 + (−1)̄ ∗ 𝑢 and, for some natural number 𝑛, 𝑛 ⋅ 𝑡 ≝ 𝑡 + … + 𝑡 (𝑛 times) and 𝑡ⁿ ≝ 𝑡 ∗ … ∗ 𝑡 (𝑛 times).

Similarly, we will frequently denote repeated sums and products using ∑- and ∏-signs, respectively: for example, we write 𝑡₁ + … + 𝑡ₙ as ∑_{𝑖∈{1,…,𝑛}} 𝑡ᵢ and 𝑡₁ ∗ … ∗ 𝑡ₙ as ∏_{𝑖∈{1,…,𝑛}} 𝑡ᵢ. This is in addition to programming sugar such as 𝐥𝐞𝐭 𝑥 = 𝑡 𝐢𝐧 𝑠 for (𝜆𝑥.𝑠) 𝑡 and 𝜆⟨𝑥₁, …, 𝑥ₙ⟩.𝑡 for 𝜆𝑥.𝐜𝐚𝐬𝐞 𝑥 𝐨𝐟 ⟨𝑥₁, …, 𝑥ₙ⟩ → 𝑡.

Γ ⊢ 𝑡₁ : 𝐫𝐞𝐚𝐥  …  Γ ⊢ 𝑡ₙ : 𝐫𝐞𝐚𝐥
――――――――――――――――――――――  (𝗈𝗉 ∈ 𝖮𝗉ₙ)
Γ ⊢ 𝗈𝗉(𝑡₁, …, 𝑡ₙ) : 𝐫𝐞𝐚𝐥

Γ ⊢ 𝑡₁ : 𝜏₁  …  Γ ⊢ 𝑡ₙ : 𝜏ₙ
――――――――――――――――――――――
Γ ⊢ ⟨𝑡₁, …, 𝑡ₙ⟩ : (𝜏₁ ∗ … ∗ 𝜏ₙ)

Γ ⊢ 𝑡 : (𝜎₁ ∗ … ∗ 𝜎ₙ)   Γ, 𝑥₁ : 𝜎₁, …, 𝑥ₙ : 𝜎ₙ ⊢ 𝑠 : 𝜏
――――――――――――――――――――――――――――――――――
Γ ⊢ 𝐜𝐚𝐬𝐞 𝑡 𝐨𝐟 ⟨𝑥₁, …, 𝑥ₙ⟩ → 𝑠 : 𝜏

――――――――――  ((𝑥 : 𝜏) ∈ Γ)
Γ ⊢ 𝑥 : 𝜏

Γ, 𝑥 : 𝜏 ⊢ 𝑡 : 𝜎
――――――――――――――――
Γ ⊢ 𝜆𝑥:𝜏. 𝑡 : 𝜏 → 𝜎

Γ ⊢ 𝑡 : 𝜎 → 𝜏   Γ ⊢ 𝑠 : 𝜎
――――――――――――――――――
Γ ⊢ 𝑡 𝑠 : 𝜏

Figure 3.Typing rules for the simple language. 3.2.Syntactic automatic differentiation: a functorial macro.

The aim of higher order forward mode AD is to find the ( 𝑘 , 𝑅 ) -Taylor representation of a function by syntactic manipulations, for some choice of ( 𝑘 , 𝑅 ) that we fix. For our simple language, we implement this as the following inductively defined macro 𝒟 → ( 𝑘 , 𝑅 ) on both types and terms (see also [WWE+19, SFVPJ19]). For the sake of legibility, we simply write 𝒟 → ( 𝑘 , 𝑅 ) as 𝒟 → here and leave the dimension 𝑘 and order 𝑅 of the Taylor representation implicit. The following definition is for general 𝑘 and 𝑅 , but we treat specific cases afterwards in Example 3.2.

𝒟→(τ → σ)          := 𝒟→(τ) → 𝒟→(σ)
𝒟→((τ₁ ∗ … ∗ τₙ))  := (𝒟→(τ₁) ∗ … ∗ 𝒟→(τₙ))
𝒟→(𝐫𝐞𝐚𝐥)           := 𝐫𝐞𝐚𝐥^{(R+k choose k)}   (i.e., the type of tuples of reals of length (R+k choose k))

𝒟→(x)              := x
𝒟→(c̄)              := ⟨c̄, 0⟩
𝒟→(λx. t)          := λx. 𝒟→(t)
𝒟→(t s)            := 𝒟→(t) 𝒟→(s)
𝒟→(⟨t₁, …, tₙ⟩)    := ⟨𝒟→(t₁), …, 𝒟→(tₙ)⟩
𝒟→(𝐜𝐚𝐬𝐞 t 𝐨𝐟 ⟨x₁, …, xₙ⟩ → s) := 𝐜𝐚𝐬𝐞 𝒟→(t) 𝐨𝐟 ⟨x₁, …, xₙ⟩ → 𝒟→(s)

𝒟→(𝗈𝗉(t₁, …, tₙ)) :=
  𝐜𝐚𝐬𝐞 𝒟→(t₁) 𝐨𝐟 ⟨x^1_{0…0}, …, x^1_{R,0…0}⟩ →
  ⋯
  𝐜𝐚𝐬𝐞 𝒟→(tₙ) 𝐨𝐟 ⟨x^n_{0…0}, …, x^n_{R,0…0}⟩ →
  ⟨ D^{0…0}𝗈𝗉(x^1_{0…0}, …, x^1_{R,0…0}, …, x^n_{0…0}, …, x^n_{R,0…0}),
    ⋯,
    D^{R,0…0}𝗈𝗉(x^1_{0…0}, …, x^1_{R,0…0}, …, x^n_{0…0}, …, x^n_{R,0…0}) ⟩

where

D^{0…0}𝗈𝗉(x^1_{0…0}, …, x^n_{R,0…0}) := 𝗈𝗉(x^1_{0…0}, …, x^n_{0…0})

and, for (α₁, …, αₖ) ≠ (0, …, 0),

D^{α₁…αₖ}𝗈𝗉(x^1_{0…0}, …, x^n_{R,0…0}) :=
  α₁!·…·αₖ! · ∑_{(β₁,…,βₙ) ∈ ℕˡ, 1 ≤ β₁+…+βₙ ≤ α₁+…+αₖ} ∂^{β₁⋯βₙ}𝗈𝗉(x^1_{0…0}, …, x^n_{0…0}) ∗
  ∑_{((e^1_1,…,e^1_l),…,(e^q_1,…,e^q_l)) ∈ (ℕˡ)^q : e^1_j+…+e^q_j = βⱼ, (e^1_1+…+e^1_l)·α_{i₁}+…+(e^q_1+…+e^q_l)·α_{i_q} = αᵢ}
    ∏_{r=1}^{q} ∏_{j=1}^{l} (1/e^r_j!) · ((1/(α^r_1!·…·α^r_k!)) · x^j_{α^r_1⋯α^r_k})^{e^r_j}.

Here, (∂^{β₁⋯βₙ}𝗈𝗉)(x₁, …, xₙ) are some chosen terms of type 𝐫𝐞𝐚𝐥 in the language with free variables from x₁, …, xₙ. We think of these terms as implementing the partial derivative ∂^{β₁+…+βₙ}⟦𝗈𝗉⟧(x₁, …, xₙ) / (∂x₁^{β₁} ⋯ ∂xₙ^{βₙ}) of the smooth function ⟦𝗈𝗉⟧ : ℝⁿ → ℝ that 𝗈𝗉 implements. For example, we could choose the following representations of derivatives of order ≤ 2 of our example operations:

∂^{01}(+)(x₁, x₂) := 1̄     ∂^{10}(+)(x₁, x₂) := 1̄
∂^{02}(+)(x₁, x₂) := 0̄     ∂^{11}(+)(x₁, x₂) := 0̄     ∂^{20}(+)(x₁, x₂) := 0̄

∂^{01}(∗)(x₁, x₂) := x₁    ∂^{10}(∗)(x₁, x₂) := x₂
∂^{02}(∗)(x₁, x₂) := 0̄     ∂^{11}(∗)(x₁, x₂) := 1̄     ∂^{20}(∗)(x₁, x₂) := 0̄

∂^{1}(ς)(x) := 𝐥𝐞𝐭 y = ς(x) 𝐢𝐧 y ∗ (1̄ − y)
∂^{2}(ς)(x) := 𝐥𝐞𝐭 y = ς(x) 𝐢𝐧 𝐥𝐞𝐭 z = y ∗ (1̄ − y) 𝐢𝐧 z ∗ (1̄ − 2̄ ∗ y)

Note that our rules, in particular, imply that 𝒟→(c̄) = ⟨c̄, 0̄, …, 0̄⟩.

{exa}[(1,1)- and (2,2)-AD] Our choices of partial derivatives of the example operations are sufficient to implement (k,R)-Taylor forward AD with R ≤ 2. To be explicit, the distinctive formulas for the (1,1)- and (2,2)-AD methods (specializing our abstract definition of 𝒟→^{(k,R)} above) are

𝒟→^{(1,1)}(𝐫𝐞𝐚𝐥) = (𝐫𝐞𝐚𝐥 ∗ 𝐫𝐞𝐚𝐥)

𝒟→^{(1,1)}(𝗈𝗉(t₁, …, tₙ)) =
  𝐜𝐚𝐬𝐞 𝒟→^{(1,1)}(t₁) 𝐨𝐟 ⟨x^1_0, x^1_1⟩ → … → 𝐜𝐚𝐬𝐞 𝒟→^{(1,1)}(tₙ) 𝐨𝐟 ⟨x^n_0, x^n_1⟩ →
  ⟨𝗈𝗉(x^1_0, …, x^n_0), ∑_{i=1}^{n} x^i_1 ∗ ∂_{î}𝗈𝗉(x^1_0, …, x^n_0)⟩

𝒟→^{(2,2)}(𝐫𝐞𝐚𝐥) = 𝐫𝐞𝐚𝐥⁶

𝒟→^{(2,2)}(𝗈𝗉(t₁, …, tₙ)) =
  𝐜𝐚𝐬𝐞 𝒟→^{(2,2)}(t₁) 𝐨𝐟 ⟨x^1_{00}, x^1_{01}, x^1_{02}, x^1_{10}, x^1_{11}, x^1_{20}⟩ →
  ⋯
  𝐜𝐚𝐬𝐞 𝒟→^{(2,2)}(tₙ) 𝐨𝐟 ⟨x^n_{00}, x^n_{01}, x^n_{02}, x^n_{10}, x^n_{11}, x^n_{20}⟩ →
  ⟨ 𝗈𝗉(x^1_{00}, …, x^n_{00}),
    ∑_{i=1}^{n} x^i_{01} ∗ ∂_{î}𝗈𝗉(x^1_{00}, …, x^n_{00}),
    ∑_{i=1}^{n} x^i_{02} ∗ ∂_{î}𝗈𝗉(x^1_{00}, …, x^n_{00}) + ∑_{i,j=1}^{n} x^i_{01} ∗ x^j_{01} ∗ ∂_{î,ĵ}𝗈𝗉(x^1_{00}, …, x^n_{00}),
    ∑_{i=1}^{n} x^i_{10} ∗ ∂_{î}𝗈𝗉(x^1_{00}, …, x^n_{00}),
    ∑_{i=1}^{n} x^i_{11} ∗ ∂_{î}𝗈𝗉(x^1_{00}, …, x^n_{00}) + ∑_{i,j=1}^{n} x^i_{10} ∗ x^j_{01} ∗ ∂_{î,ĵ}𝗈𝗉(x^1_{00}, …, x^n_{00}),
    ∑_{i=1}^{n} x^i_{20} ∗ ∂_{î}𝗈𝗉(x^1_{00}, …, x^n_{00}) + ∑_{i,j=1}^{n} x^i_{10} ∗ x^j_{10} ∗ ∂_{î,ĵ}𝗈𝗉(x^1_{00}, …, x^n_{00}) ⟩

where we informally write î for the one-hot encoding of i (the sequence of length n consisting exclusively of zeros except in position i, where it has a 1) and î,ĵ for the two-hot encoding of i and j (the sequence of length n consisting exclusively of zeros except in positions i and j, where it has a 1 if i ≠ j and a 2 if i = j).
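For intuition, the (1,1) case is ordinary dual-number forward AD. The following Python sketch is our own illustration (function names are ours, not the paper's): it interprets 𝒟→^{(1,1)} for the example operations, representing a value of type 𝒟→^{(1,1)}(𝐫𝐞𝐚𝐥) = (𝐫𝐞𝐚𝐥 ∗ 𝐫𝐞𝐚𝐥) as a (primal, tangent) pair.

```python
import math

# (1,1)-AD: a value of D(real) = real * real is a pair (primal, tangent).

def d_const(c):
    # D(c) = <c, 0>
    return (c, 0.0)

def d_add(a, b):
    # d01(+) = d10(+) = 1
    return (a[0] + b[0], a[1] + b[1])

def d_mul(a, b):
    # d01(*)(x1,x2) = x1, d10(*)(x1,x2) = x2
    return (a[0] * b[0], a[1] * b[0] + a[0] * b[1])

def d_sigmoid(a):
    # d1(sigma)(x) = let y = sigma(x) in y * (1 - y)
    y = 1.0 / (1.0 + math.exp(-a[0]))
    return (y, a[1] * y * (1.0 - y))

def deriv(f, x):
    # differentiate a lifted f : real -> real at x by seeding the tangent with 1
    return f((x, 1.0))[1]

# example term: t = sigma(x * x), so dt/dx = 2x * sigma'(x^2)
f = lambda d: d_sigmoid(d_mul(d, d))
```

The seeding in `deriv` is exactly the instance of the correctness theorem discussed later: pairing the input with tangent 1 makes the second component of the output the derivative.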

As noted in Section 2, it is often unnecessary to include all components of the (2,2)-algorithm, for example when computing a second order directional derivative. In that case, we may define a restricted (2,2)-AD algorithm that drops the non-mixed second order derivatives from the definitions above and defines 𝒟→′^{(2,2)}(𝐫𝐞𝐚𝐥) = 𝐫𝐞𝐚𝐥⁴ and

𝒟→′^{(2,2)}(𝗈𝗉(t₁, …, tₙ)) =
  𝐜𝐚𝐬𝐞 𝒟→′^{(2,2)}(t₁) 𝐨𝐟 ⟨x^1_{00}, x^1_{01}, x^1_{10}, x^1_{11}⟩ →
  ⋯
  𝐜𝐚𝐬𝐞 𝒟→′^{(2,2)}(tₙ) 𝐨𝐟 ⟨x^n_{00}, x^n_{01}, x^n_{10}, x^n_{11}⟩ →
  ⟨ 𝗈𝗉(x^1_{00}, …, x^n_{00}),
    ∑_{i=1}^{n} x^i_{01} ∗ ∂_{î}𝗈𝗉(x^1_{00}, …, x^n_{00}),
    ∑_{i=1}^{n} x^i_{10} ∗ ∂_{î}𝗈𝗉(x^1_{00}, …, x^n_{00}),
    ∑_{i=1}^{n} x^i_{11} ∗ ∂_{î}𝗈𝗉(x^1_{00}, …, x^n_{00}) + ∑_{i,i′=1}^{n} x^i_{10} ∗ x^{i′}_{01} ∗ ∂_{î,î′}𝗈𝗉(x^1_{00}, …, x^n_{00}) ⟩.
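The restricted algorithm is easy to implement with "quadruple numbers". The sketch below is our own Python illustration (not code from the paper) for the multiplication operation, using ∂_{î,î′}(∗) = 1̄ for i ≠ i′ and 0̄ otherwise.

```python
# (2,2)'-AD: a value is a quadruple (v, d01, d10, d11) mirroring
# D'(real) = real^4: the primal, two first-order directional derivatives,
# and the mixed second-order derivative.

def d_mul(a, b):
    av, a01, a10, a11 = a
    bv, b01, b10, b11 = b
    return (av * bv,
            a01 * bv + b01 * av,                          # first direction
            a10 * bv + b10 * av,                          # second direction
            a11 * bv + b11 * av + a10 * b01 + b10 * a01)  # mixed second order

def second_deriv(f, x):
    # Seed both directions with tangent 1 and the mixed component with 0;
    # the mixed component of the result is then d^2 f / dx^2.
    return f((x, 1.0, 1.0, 0.0))[3]

cube = lambda d: d_mul(d_mul(d, d), d)   # t = x * x * x
```

For the cube, the mixed component recovers (x³)″ = 6x, e.g. 12 at x = 2.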

We extend 𝒟→ to contexts: 𝒟→({x₁ : τ₁, …, xₙ : τₙ}) := {x₁ : 𝒟→(τ₁), …, xₙ : 𝒟→(τₙ)}. This turns 𝒟→ into a well-typed, functorial macro in the following sense.

Lemma 1 (Functorial macro).

If Γ ⊢ t : τ then 𝒟→(Γ) ⊢ 𝒟→(t) : 𝒟→(τ). If Γ, x : σ ⊢ t : τ and Γ ⊢ s : σ then 𝒟→(Γ) ⊢ 𝒟→(t[s/x]) = 𝒟→(t)[𝒟→(s)/x].

Proof 3.1.

By induction on the structure of typing derivations.

{exa}[Inner products] Let us write τⁿ for the n-fold product (τ ∗ … ∗ τ). Then, given Γ ⊢ t, s : 𝐫𝐞𝐚𝐥ⁿ, we can define their inner product

Γ ⊢ t ·ₙ s :=
  𝐜𝐚𝐬𝐞 t 𝐨𝐟 ⟨z₁, …, zₙ⟩ →
  𝐜𝐚𝐬𝐞 s 𝐨𝐟 ⟨y₁, …, yₙ⟩ → z₁ ∗ y₁ + ⋯ + zₙ ∗ yₙ : 𝐫𝐞𝐚𝐥

To illustrate the calculation of 𝒟 → ( 1 , 1 ) , let us expand (and 𝛽 -reduce) 𝒟 → ( 1 , 1 ) ⁢ ( 𝑡 ⋅ 2 𝑠 ) :

𝐜𝐚𝐬𝐞 𝒟→^{(1,1)}(t) 𝐨𝐟 ⟨z₁, z₂⟩ → 𝐜𝐚𝐬𝐞 𝒟→^{(1,1)}(s) 𝐨𝐟 ⟨y₁, y₂⟩ →
𝐜𝐚𝐬𝐞 z₁ 𝐨𝐟 ⟨z_{1,1}, z_{1,2}⟩ → 𝐜𝐚𝐬𝐞 y₁ 𝐨𝐟 ⟨y_{1,1}, y_{1,2}⟩ →
𝐜𝐚𝐬𝐞 z₂ 𝐨𝐟 ⟨z_{2,1}, z_{2,2}⟩ → 𝐜𝐚𝐬𝐞 y₂ 𝐨𝐟 ⟨y_{2,1}, y_{2,2}⟩ →
⟨ z_{1,1} ∗ y_{1,1} + z_{2,1} ∗ y_{2,1},
  z_{1,1} ∗ y_{1,2} + z_{1,2} ∗ y_{1,1} + z_{2,1} ∗ y_{2,2} + z_{2,2} ∗ y_{2,1} ⟩

Let us also expand the calculation of 𝒟 → ( 2 , 2 ) ′ ⁢ ( 𝑡 ⋅ 2 𝑠 ) :

𝐜𝐚𝐬𝐞 𝒟→′^{(2,2)}(t) 𝐨𝐟 ⟨z₁, z₂⟩ → 𝐜𝐚𝐬𝐞 𝒟→′^{(2,2)}(s) 𝐨𝐟 ⟨y₁, y₂⟩ →
𝐜𝐚𝐬𝐞 z₁ 𝐨𝐟 ⟨z₁, z′_{1,1}, z′_{1,2}, z″₁⟩ → 𝐜𝐚𝐬𝐞 y₁ 𝐨𝐟 ⟨y₁, y′_{1,1}, y′_{1,2}, y″₁⟩ →
𝐜𝐚𝐬𝐞 z₂ 𝐨𝐟 ⟨z₂, z′_{2,1}, z′_{2,2}, z″₂⟩ → 𝐜𝐚𝐬𝐞 y₂ 𝐨𝐟 ⟨y₂, y′_{2,1}, y′_{2,2}, y″₂⟩ →
⟨ z₁ ∗ y₁ + z₂ ∗ y₂,
  z₁ ∗ y′_{1,1} + z′_{1,1} ∗ y₁ + z₂ ∗ y′_{2,1} + z′_{2,1} ∗ y₂,
  z₁ ∗ y′_{1,2} + z′_{1,2} ∗ y₁ + z₂ ∗ y′_{2,2} + z′_{2,2} ∗ y₂,
  z″₁ ∗ y₁ + z″₂ ∗ y₂ + y″₁ ∗ z₁ + y″₂ ∗ z₂ +
  z′_{1,1} ∗ y′_{1,2} + z′_{1,2} ∗ y′_{1,1} + z′_{2,1} ∗ y′_{2,2} + z′_{2,2} ∗ y′_{2,1} ⟩
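The derivative component of the first expansion is the product rule applied to each summand. A quick numeric sanity check (ours; the two curves are arbitrary illustrative choices):

```python
# Check the (1,1) expansion of t .2 s on dual numbers (value, derivative):
# the derivative component is z11*y12 + z12*y11 + z21*y22 + z22*y21.

def dot2_dual(z1, z2, y1, y2):
    val = z1[0] * y1[0] + z2[0] * y2[0]
    der = z1[0] * y1[1] + z1[1] * y1[0] + z2[0] * y2[1] + z2[1] * y2[0]
    return (val, der)

# curves z(x) = (x, x^2) and y(x) = (3x, 1) at x = 2:
# z . y = 3x^2 + x^2 = 4x^2, so the value is 16 and (z . y)' = 8x = 16
x = 2.0
z1, z2 = (x, 1.0), (x * x, 2 * x)
y1, y2 = (3 * x, 3.0), (1.0, 0.0)
```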

{exa}

[Neural networks] In our introduction, we provided a program (1) in our language to build a neural network out of expressions neuron , layer , comp ; this program makes use of the inner product of Ex. 3.1. We can similarly calculate the derivatives of deep neural nets by mechanically applying the macro  𝒟 → .

4.Semantics of differentiation

Consider for a moment the first order fragment of the language in Section 3, with only one type, 𝐫𝐞𝐚𝐥 , and no 𝜆 ’s or pairs. This has a simple semantics in the category of cartesian spaces and smooth maps. Indeed, a term 𝑥 1 ⁢ … ⁢ 𝑥 𝑛 : 𝐫𝐞𝐚𝐥 ⊢ 𝑡 : 𝐫𝐞𝐚𝐥 has a natural reading as a function ⟦ 𝑡 ⟧ : ℝ 𝑛 → ℝ by interpreting our operation symbols by the well-known operations on ℝ 𝑛 → ℝ with the corresponding name. In fact, the functions that are definable in this first order fragment are smooth. Let us write 𝐂𝐚𝐫𝐭𝐒𝐩 for this category of cartesian spaces ( ℝ 𝑛 for some 𝑛 ) and smooth functions.

The category 𝐂𝐚𝐫𝐭𝐒𝐩 has cartesian products, and so we can also interpret product types, tupling and pattern matching, giving us a useful syntax for constructing functions into and out of products of ℝ . For example, the interpretation of ( neuron 𝑛 ) in (1) becomes

ℝ 𝑛 × ℝ 𝑛 × ℝ → ⟦ ⋅ 𝑛 ⟧ × id ℝ ℝ × ℝ → ⟦ + ⟧ ℝ → ⟦ 𝜍 ⟧ ℝ

where ⟦ ⋅ 𝑛 ⟧ , ⟦ + ⟧ and ⟦ 𝜍 ⟧ are the usual inner product, addition and the sigmoid function on ℝ , respectively.

Inside this category, we can straightforwardly study the first order language without 𝜆 ’s, and automatic differentiation. In fact, we can prove the following by plain induction on the syntax: The interpretation of the (syntactic) forward AD 𝒟 → ⁢ ( 𝑡 ) of a first order term 𝑡 equals the usual (semantic) derivative of the interpretation of 𝑡 as a smooth function.

However, as is well-known, the category 𝐂𝐚𝐫𝐭𝐒𝐩 does not support function spaces. To see this, notice that we have polynomial terms

x₁, …, x_d : 𝐫𝐞𝐚𝐥 ⊢ λy. ∑_{n=1}^{d} xₙ ∗ yⁿ : 𝐫𝐞𝐚𝐥 → 𝐫𝐞𝐚𝐥

for each d, and so if we could interpret (𝐫𝐞𝐚𝐥 → 𝐫𝐞𝐚𝐥) as a Euclidean space ℝᵖ then, by interpreting these polynomial expressions, we would be able to find continuous injections ℝᵈ → ℝᵖ for every d, which is topologically impossible for any p, for example as a consequence of the Borsuk-Ulam theorem (see Appx. A).

This lack of function spaces means that we cannot interpret the functions ( layer ) and ( comp ) from (1) in 𝐂𝐚𝐫𝐭𝐒𝐩 , as they are higher order functions, even though they are very useful and innocent building blocks for differential programming! Clearly, we could define neural nets such as (1) directly as smooth functions without any higher order subcomponents, though that would quickly become cumbersome for deep networks. A problematic consequence of the lack of a semantics for higher order differential programs is that we have no obvious way of establishing compositional semantic correctness of 𝒟 → for the given implementation of (1).

We now show that every definable function is smooth, and then in Section 4.2 we show that the 𝒟 → macro witnesses its derivatives.

4.1.Smoothness at higher types and diffeologies

The aim of this section is to introduce diffeological spaces as a semantic model for the simple language in Section 3. By way of motivation, we begin with a standard set theoretic semantics, where types are interpreted as follows:

⌊⌊𝐫𝐞𝐚𝐥⌉⌉ := ℝ
⌊⌊(τ₁ ∗ … ∗ τₙ)⌉⌉ := ∏_{i=1}^{n} ⌊⌊τᵢ⌉⌉
⌊⌊τ → σ⌉⌉ := (⌊⌊τ⌉⌉ → ⌊⌊σ⌉⌉)

and a term x₁ : τ₁, …, xₙ : τₙ ⊢ t : σ is interpreted as a function ∏_{i=1}^{n} ⌊⌊τᵢ⌉⌉ → ⌊⌊σ⌉⌉, mapping a valuation of the context to a result.

We can show that the interpretation of a term 𝑥 1 : 𝐫𝐞𝐚𝐥 , … , 𝑥 𝑛 : 𝐫𝐞𝐚𝐥 ⊢ 𝑡 : 𝐫𝐞𝐚𝐥 is always a smooth function ℝ 𝑛 → ℝ , even if it has higher order subterms. We begin with a fairly standard logical relations proof of this, and then move from this to the semantic model of diffeological spaces.

Proposition 2.

If 𝑥 1 : 𝐫𝐞𝐚𝐥 , … , 𝑥 𝑛 : 𝐫𝐞𝐚𝐥 ⊢ 𝑡 : 𝐫𝐞𝐚𝐥 then the function ⌊ ⌊ 𝑡 ⌉ ⌉ : ℝ 𝑛 → ℝ is smooth.

Proof 4.1.

For each type τ define a set Q_τ ⊆ [ℝᵏ → ⌊⌊τ⌉⌉] by induction on the structure of types:

Q_𝐫𝐞𝐚𝐥 := {f : ℝᵏ → ℝ | f is smooth}
Q_{(τ₁ ∗ … ∗ τₙ)} := {f : ℝᵏ → ∏_{i=1}^{n} ⌊⌊τᵢ⌉⌉ | ∀i. λ(r⃗). fᵢ(r⃗) ∈ Q_{τᵢ}}
Q_{τ → σ} := {f : ℝᵏ → (⌊⌊τ⌉⌉ → ⌊⌊σ⌉⌉) | ∀g ∈ Q_τ. λ(r⃗). f(r⃗)(g(r⃗)) ∈ Q_σ}

Now we show the fundamental lemma: if x₁ : τ₁, …, xₙ : τₙ ⊢ u : σ and g₁ ∈ Q_{τ₁}, …, gₙ ∈ Q_{τₙ}, then ((g₁, …, gₙ); ⌊⌊u⌉⌉) ∈ Q_σ. This is shown by induction on the structure of typing derivations. The only interesting step here is that the basic operations (+, ∗, ς, etc.) are smooth. We deduce the statement of the theorem by putting u = t, k = n, and letting gᵢ : ℝⁿ → ℝ be the projections.

At higher types, the logical relations 𝑄 show that we can only define functions that send smooth functions to smooth functions, meaning that we can never use them to build first order functions that are not smooth. For example, ( comp ) in (1) has this property.

This logical relations proof suggests to build a semantic model by interpreting types as sets with structure: for each type we have a set X together with a set Q_X^{ℝᵏ} ⊆ [ℝᵏ → X] of plots.

{defi} A diffeological space (X, 𝒫_X) consists of a set X together with, for each n and each open subset U of ℝⁿ, a set 𝒫_X^U ⊆ [U → X] of functions, called plots, such that

- all constant functions are plots;
- if f : V → U is a smooth function and p ∈ 𝒫_X^U, then f; p ∈ 𝒫_X^V;
- if (pᵢ ∈ 𝒫_X^{Uᵢ})_{i∈I} is a compatible family of plots (x ∈ Uᵢ ∩ Uⱼ ⇒ pᵢ(x) = pⱼ(x)) and (Uᵢ)_{i∈I} covers U, then the gluing p : U → X : x ∈ Uᵢ ↦ pᵢ(x) is a plot.

We call a function 𝑓 : 𝑋 → 𝑌 between diffeological spaces smooth if, for all plots 𝑝 ∈ 𝒫 𝑋 𝑈 , we have that 𝑝 ; 𝑓 ∈ 𝒫 𝑌 𝑈 . We write 𝐃𝐢𝐟𝐟 ⁢ ( 𝑋 , 𝑌 ) for the set of smooth maps from 𝑋 to 𝑌 . Smooth functions compose, and so we have a category 𝐃𝐢𝐟𝐟 of diffeological spaces and smooth functions.

A diffeological space is thus a set equipped with structure. Many constructions of sets carry over straightforwardly to diffeological spaces.

{exa}

[Cartesian diffeologies] Each open subset 𝑈 of ℝ 𝑛 can be given the structure of a diffeological space by taking all the smooth functions 𝑉 → 𝑈 as 𝒫 𝑈 𝑉 . Smooth functions from 𝑉 → 𝑈 in the traditional sense coincide with smooth functions in the sense of diffeological spaces [IZ13]. Thus diffeological spaces have a profound relationship with ordinary calculus.

In categorical terms, this gives a full embedding of 𝐂𝐚𝐫𝐭𝐒𝐩 in 𝐃𝐢𝐟𝐟 . {exa}[Product diffeologies] Given a family ( 𝑋 𝑖 ) 𝑖 ∈ 𝐼 of diffeological spaces, we can equip the product ∏ 𝑖 ∈ 𝐼 𝑋 𝑖 of sets with the product diffeology in which 𝑈 -plots are precisely the functions of the form ( 𝑝 𝑖 ) 𝑖 ∈ 𝐼 for 𝑝 𝑖 ∈ 𝒫 𝑋 𝑖 𝑈 .

This gives us the categorical product in 𝐃𝐢𝐟𝐟 . {exa}[Functional diffeology] We can equip the set 𝐃𝐢𝐟𝐟 ⁢ ( 𝑋 , 𝑌 ) of smooth functions between diffeological spaces with the functional diffeology in which 𝑈 -plots consist of functions 𝑓 : 𝑈 → 𝐃𝐢𝐟𝐟 ⁢ ( 𝑋 , 𝑌 ) such that ( 𝑢 , 𝑥 ) ↦ 𝑓 ⁢ ( 𝑢 ) ⁢ ( 𝑥 ) is an element of 𝐃𝐢𝐟𝐟 ⁢ ( 𝑈 × 𝑋 , 𝑌 ) .

This specifies the categorical function object in 𝐃𝐢𝐟𝐟 .

We can now give a denotational semantics to our language from Section 3 in the category of diffeological spaces. We interpret each type τ as a set ⟦τ⟧ equipped with the relevant diffeology, by induction on the structure of types:

⟦𝐫𝐞𝐚𝐥⟧ := ℝ, with the standard diffeology
⟦(τ₁ ∗ … ∗ τₙ)⟧ := ∏_{i=1}^{n} ⟦τᵢ⟧, with the product diffeology
⟦τ → σ⟧ := 𝐃𝐢𝐟𝐟(⟦τ⟧, ⟦σ⟧), with the functional diffeology

A context Γ = (x₁ : τ₁, …, xₙ : τₙ) is interpreted as a diffeological space ⟦Γ⟧ := ∏_{i=1}^{n} ⟦τᵢ⟧. Now well typed terms Γ ⊢ t : τ are interpreted as smooth functions ⟦t⟧ : ⟦Γ⟧ → ⟦τ⟧, giving a meaning for t for every valuation of the context. This is routinely defined by induction on the structure of typing derivations once we choose a smooth function ⟦𝗈𝗉⟧ : ℝⁿ → ℝ to interpret each n-ary operation 𝗈𝗉 ∈ 𝖮𝗉ₙ. For example, constants c̄ : 𝐫𝐞𝐚𝐥 are interpreted as constant functions; and the first order operations (+, ∗, ς) are interpreted by composing with the corresponding functions, which are smooth: e.g., ⟦ς(t)⟧(ρ) := ς(⟦t⟧(ρ)), where ρ ∈ ⟦Γ⟧. Variables are interpreted as ⟦xᵢ⟧(ρ) := ρᵢ. The remaining constructs are interpreted as follows, and it is straightforward to show that smoothness is preserved.

⟦⟨t₁, …, tₙ⟩⟧(ρ) := (⟦t₁⟧(ρ), …, ⟦tₙ⟧(ρ))
⟦λx : τ. t⟧(ρ)(a) := ⟦t⟧(ρ, a)   (a ∈ ⟦τ⟧)
⟦𝐜𝐚𝐬𝐞 t 𝐨𝐟 ⟨…⟩ → s⟧(ρ) := ⟦s⟧(ρ, ⟦t⟧(ρ))
⟦t s⟧(ρ) := ⟦t⟧(ρ)(⟦s⟧(ρ))

The logical relations proof of Proposition 2 is reminiscent of diffeological spaces. We now briefly remark on the suitability of the axioms of diffeological spaces (Def 4.1) for a semantic model of smooth programs. The first axiom says that we only consider reflexive logical relations. From the perspective of the interpretation, it recognizes in particular that the semantics of an expression of type ( 𝐫𝐞𝐚𝐥 → 𝐫𝐞𝐚𝐥 ) → 𝐫𝐞𝐚𝐥 is defined by its value on smooth functions rather than arbitrary arguments. That is to say, the set-theoretic semantics at the beginning of this section, ⌊ ⌊ ( 𝐫𝐞𝐚𝐥 → 𝐫𝐞𝐚𝐥 ) → 𝐫𝐞𝐚𝐥 ⌉ ⌉ , is different to the diffeological semantics, ⟦ ( 𝐫𝐞𝐚𝐥 → 𝐫𝐞𝐚𝐥 ) → 𝐫𝐞𝐚𝐥 ⟧ . The second axiom for diffeological spaces ensures that the smooth maps in 𝐃𝐢𝐟𝐟 ⁢ ( 𝑈 , 𝑋 ) are exactly the plots in 𝒫 𝑋 𝑈 . The third axiom ensures that categories of manifolds fully embed into 𝐃𝐢𝐟𝐟 ; it will not play a visible role in this paper — in fact, [BCLG20] prove similar results for a simple language like ours by using plain logical relations (over 𝐒𝐞𝐭 ) and without demanding the diffeology axioms. However, we expect the third axiom to be crucial for programming with other smooth structures or partiality.

4.2.Correctness of AD

We have shown that a term 𝑥 1 : 𝐫𝐞𝐚𝐥 , … , 𝑥 𝑛 : 𝐫𝐞𝐚𝐥 ⊢ 𝑡 : 𝐫𝐞𝐚𝐥 is interpreted as a smooth function ⟦ 𝑡 ⟧ : ℝ 𝑛 → ℝ , even if 𝑡 involves higher order functions (like (1)). Moreover, the macro differentiation 𝒟 → ( 𝑘 , 𝑅 ) ⁢ ( 𝑡 ) is a function ⟦ 𝒟 → ( 𝑘 , 𝑅 ) ( 𝑡 ) ⟧ : ( ℝ ( 𝑅 + 𝑘 𝑘 ) ) 𝑛 → ℝ ( 𝑅 + 𝑘 𝑘 ) (Proposition 1). This enables us to state a limited version of our main correctness theorem:

Theorem 3 (Semantic correctness of 𝒟 → (limited)).

For any term x₁ : 𝐫𝐞𝐚𝐥, …, xₙ : 𝐫𝐞𝐚𝐥 ⊢ t : 𝐫𝐞𝐚𝐥, the function ⟦𝒟→^{(k,R)}(t)⟧ is the (k,R)-Taylor representation (3) of ⟦t⟧. In detail: for any smooth functions f₁, …, fₙ : ℝᵏ → ℝ,

( (∂^{α₁+…+αₖ} fⱼ(x) / ∂x₁^{α₁}⋯∂xₖ^{αₖ})_{(α₁,…,αₖ)=(0,…,0)}^{(R,0,…,0)} )_{j=1}^{n} ; ⟦𝒟→^{(k,R)}(t)⟧
  = (∂^{α₁+…+αₖ} ((f₁, …, fₙ); ⟦t⟧)(x) / ∂x₁^{α₁}⋯∂xₖ^{αₖ})_{(α₁,…,αₖ)=(0,…,0)}^{(R,0,…,0)}.

For instance, if n = 2, then ⟦𝒟→^{(1,1)}(t)⟧((x₁, 1), (x₂, 0)) = (⟦t⟧(x₁, x₂), ∂⟦t⟧(x, x₂)/∂x (x₁)).
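Concretely, this n = 2 instance can be replayed with a small dual-number interpreter (our sketch; the term t below is an arbitrary example, not one from the paper): seeding x₁ with tangent 1 and x₂ with tangent 0 returns the partial derivative with respect to x₁.

```python
# Interpret D(1,1)(t) for the example term t = x1 * x2 + x1 * x1 on dual numbers.
def mul(a, b): return (a[0] * b[0], a[1] * b[0] + a[0] * b[1])
def add(a, b): return (a[0] + b[0], a[1] + b[1])

def d_t(x1, x2):
    # x1, x2 are dual numbers (value, tangent)
    return add(mul(x1, x2), mul(x1, x1))

# [[D(1,1)(t)]]((x1,1),(x2,0)) = ([[t]](x1,x2), d[[t]]/dx1)
x1, x2 = 3.0, 5.0
primal, partial_x1 = d_t((x1, 1.0), (x2, 0.0))
```

Here ⟦t⟧(3, 5) = 24 and ∂⟦t⟧/∂x₁ = x₂ + 2x₁ = 11; swapping the seeds (0 and 1) instead yields ∂⟦t⟧/∂x₂ = x₁ = 3.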

Proof 4.2.

We prove this by logical relations. A categorical version of this proof is in Section 6.2.

For each type τ, we define a binary relation S_τ between (open) k-dimensional plots in ⟦τ⟧ and (open) k-dimensional plots in ⟦𝒟→^{(k,R)}(τ)⟧, i.e. S_τ ⊆ 𝒫_{⟦τ⟧}^{ℝᵏ} × 𝒫_{⟦𝒟→^{(k,R)}(τ)⟧}^{ℝᵏ}, by induction on τ:

S_𝐫𝐞𝐚𝐥 := { (f, (∂^{α₁+…+αₖ} f(x) / ∂x₁^{α₁}⋯∂xₖ^{αₖ})_{(α₁,…,αₖ)=(0,…,0)}^{(R,0,…,0)}) | f : ℝᵏ → ℝ smooth }
S_{(τ₁ ∗ … ∗ τₙ)} := { ((f₁, …, fₙ), (g₁, …, gₙ)) | (f₁, g₁) ∈ S_{τ₁}, …, (fₙ, gₙ) ∈ S_{τₙ} }
S_{τ → σ} := { (f₁, f₂) | ∀(g₁, g₂) ∈ S_τ. (x ↦ f₁(x)(g₁(x)), x ↦ f₂(x)(g₂(x))) ∈ S_σ }

Then, we establish the following 'fundamental lemma':

If x₁ : τ₁, …, xₙ : τₙ ⊢ t : σ and, for all 1 ≤ i ≤ n, fᵢ : ℝᵏ → ⟦τᵢ⟧ and gᵢ : ℝᵏ → ⟦𝒟→^{(k,R)}(τᵢ)⟧ are such that (fᵢ, gᵢ) is in S_{τᵢ}, then we have that ((f₁, …, fₙ); ⟦t⟧, (g₁, …, gₙ); ⟦𝒟→^{(k,R)}(t)⟧) is in S_σ.

This is proved routinely by induction on the typing derivation of t. The case for 𝗈𝗉(t₁, …, tₙ) relies on the precise definition of 𝒟→^{(k,R)}(𝗈𝗉(t₁, …, tₙ)).

We conclude the theorem from the fundamental lemma by considering the case where τᵢ = σ = 𝐫𝐞𝐚𝐥, m = n, and sᵢ = yᵢ.

5.Extending the language: variant and inductive types

In this section, we show that the definition of forward AD and the semantics generalize if we extend the language of Section 3 with variants and inductive types. As an example of inductive types, we consider lists. This specific choice is only for expository purposes, and the whole development works at the level of generality of arbitrary algebraic data types generated as initial algebras of (polynomial) type constructors formed by finite products and variants. These types are easily interpreted in the category of diffeological spaces in much the same way. The categorically minded reader may regard this as a consequence of 𝐃𝐢𝐟𝐟 being a concrete Grothendieck quasitopos, e.g. [BH11], which is hence complete and cocomplete.

5.1.Language.

We additionally consider the following types and terms:

τ, σ, ρ ::= …                                         types
          | {ℓ₁ τ₁ | … | ℓₙ τₙ}                       variant
          | 𝐥𝐢𝐬𝐭(τ)                                   list

t, s, r ::= …                                         terms
          | τ.ℓ t                                     variant constructor
          | [] | t :: s                               empty list and cons
          | 𝐜𝐚𝐬𝐞 t 𝐨𝐟 {ℓ₁ x₁ → s₁ | ⋯ | ℓₙ xₙ → sₙ}   pattern matching: variants
          | 𝐟𝐨𝐥𝐝 (x₁, x₂). t 𝐨𝐯𝐞𝐫 s 𝐟𝐫𝐨𝐦 r           list fold

We extend the type system according to the rules of Fig. 4.

Γ ⊢ t : τᵢ
───────────────  ((ℓᵢ τᵢ) ∈ τ)
Γ ⊢ τ.ℓᵢ t : τ

────────────────
Γ ⊢ [] : 𝐥𝐢𝐬𝐭(τ)

Γ ⊢ t : τ   Γ ⊢ s : 𝐥𝐢𝐬𝐭(τ)
───────────────────────────
Γ ⊢ t :: s : 𝐥𝐢𝐬𝐭(τ)

Γ ⊢ t : {ℓ₁ τ₁ | … | ℓₙ τₙ}   for each 1 ≤ i ≤ n:  Γ, xᵢ : τᵢ ⊢ sᵢ : τ
──────────────────────────────────────────────────────────────────────
Γ ⊢ 𝐜𝐚𝐬𝐞 t 𝐨𝐟 {ℓ₁ x₁ → s₁ | ⋯ | ℓₙ xₙ → sₙ} : τ

Γ ⊢ s : 𝐥𝐢𝐬𝐭(τ)   Γ ⊢ r : σ   Γ, x₁ : τ, x₂ : σ ⊢ t : σ
───────────────────────────────────────────────────────
Γ ⊢ 𝐟𝐨𝐥𝐝 (x₁, x₂). t 𝐨𝐯𝐞𝐫 s 𝐟𝐫𝐨𝐦 r : σ

Figure 4.Additional typing rules for the extended language.

We can then extend 𝒟→^{(k,R)} (again, writing it as 𝒟→, for legibility) to our new types and terms by

𝒟→({ℓ₁ τ₁ | … | ℓₙ τₙ}) := {ℓ₁ 𝒟→(τ₁) | … | ℓₙ 𝒟→(τₙ)}
𝒟→(𝐥𝐢𝐬𝐭(τ)) := 𝐥𝐢𝐬𝐭(𝒟→(τ))

𝒟→(τ.ℓ t) := 𝒟→(τ).ℓ 𝒟→(t)
𝒟→([]) := []
𝒟→(t :: s) := 𝒟→(t) :: 𝒟→(s)
𝒟→(𝐜𝐚𝐬𝐞 t 𝐨𝐟 {ℓ₁ x₁ → s₁ | ⋯ | ℓₙ xₙ → sₙ}) := 𝐜𝐚𝐬𝐞 𝒟→(t) 𝐨𝐟 {ℓ₁ x₁ → 𝒟→(s₁) | ⋯ | ℓₙ xₙ → 𝒟→(sₙ)}
𝒟→(𝐟𝐨𝐥𝐝 (x₁, x₂). t 𝐨𝐯𝐞𝐫 s 𝐟𝐫𝐨𝐦 r) := 𝐟𝐨𝐥𝐝 (x₁, x₂). 𝒟→(t) 𝐨𝐯𝐞𝐫 𝒟→(s) 𝐟𝐫𝐨𝐦 𝒟→(r)

To demonstrate the practical use of expressive type systems for differential programming, we consider the following two examples.

{exa}[Lists of inputs for neural nets] Usually, we run a neural network on a large data set, the size of which might be determined at runtime. To evaluate a neural network on multiple inputs, in practice, one often sums the outcomes. This can be coded in our extended language as follows. Suppose that we have a network f : (𝐫𝐞𝐚𝐥ⁿ ∗ P) → 𝐫𝐞𝐚𝐥 that operates on single input vectors. We can construct one that operates on lists of inputs as follows:

g := λ⟨l, w⟩. 𝐟𝐨𝐥𝐝 (x₁, x₂). f⟨x₁, w⟩ + x₂ 𝐨𝐯𝐞𝐫 l 𝐟𝐫𝐨𝐦 0̄ : (𝐥𝐢𝐬𝐭(𝐫𝐞𝐚𝐥ⁿ) ∗ P) → 𝐫𝐞𝐚𝐥
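The fold construction can be sketched in Python as follows (our illustration; `net` is a stand-in for the network f, and `functools.reduce` plays the role of the fold):

```python
from functools import reduce

# stand-in for a network f : (input, params) -> real
def net(x, w):
    return sum(xi * wi for xi, wi in zip(x, w))

# g <l, w> = fold (x1, x2). f<x1, w> + x2 over l from 0
# (the paper's fold is a right fold; for this commutative sum the order
#  of accumulation is immaterial, so a left fold gives the same result)
def g(l, w):
    return reduce(lambda acc, x: net(x, w) + acc, l, 0.0)

inputs = [[1.0, 2.0], [3.0, 4.0]]
params = [10.0, 1.0]
```

Since 𝒟→ sends fold to fold, differentiating g amounts to running the same fold over a list of dual-number inputs.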

{exa}

[Missing data] In practically every application of statistics and machine learning, we face the problem of missing data: for some observations, only partial information is available.

In an expressive typed programming language like the one we consider, we can model missing data conveniently using the data type 𝐦𝐚𝐲𝐛𝐞(τ) := {𝖭𝗈𝗍𝗁𝗂𝗇𝗀 () | 𝖩𝗎𝗌𝗍 τ}. In the context of a neural network, one might use it as follows. First, define some helper functions

fromMaybe := λx. λm. 𝐜𝐚𝐬𝐞 m 𝐨𝐟 {𝖭𝗈𝗍𝗁𝗂𝗇𝗀 _ → x | 𝖩𝗎𝗌𝗍 x′ → x′}

fromMaybeₙ := λ⟨x₁, …, xₙ⟩. λ⟨m₁, …, mₙ⟩. ⟨fromMaybe x₁ m₁, …, fromMaybe xₙ mₙ⟩
    : 𝐫𝐞𝐚𝐥ⁿ → (𝐦𝐚𝐲𝐛𝐞(𝐫𝐞𝐚𝐥))ⁿ → 𝐫𝐞𝐚𝐥ⁿ

map := λf. λl. 𝐟𝐨𝐥𝐝 (x₁, x₂). f x₁ :: x₂ 𝐨𝐯𝐞𝐫 l 𝐟𝐫𝐨𝐦 [] : (τ → σ) → 𝐥𝐢𝐬𝐭(τ) → 𝐥𝐢𝐬𝐭(σ)

Given a neural network f : (𝐥𝐢𝐬𝐭(𝐫𝐞𝐚𝐥ᵏ) ∗ P) → 𝐫𝐞𝐚𝐥, we can build a new one that operates on a data set for which some covariates (features) are missing, by passing in default values to replace the missing covariates:

λ⟨l, ⟨m, w⟩⟩. f⟨map (fromMaybeₖ m) l, w⟩ : (𝐥𝐢𝐬𝐭((𝐦𝐚𝐲𝐛𝐞(𝐫𝐞𝐚𝐥))ᵏ) ∗ (𝐫𝐞𝐚𝐥ᵏ ∗ P)) → 𝐫𝐞𝐚𝐥

Then, given a data set 𝑙 with missing covariates, we can perform automatic differentiation on this network to optimize, simultaneously, the ordinary network parameters  𝑤 and the default values for missing covariates 𝑚 .
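The fromMaybe construction can be mocked up in Python (our sketch, independent of the paper's language), modelling 𝐦𝐚𝐲𝐛𝐞(𝐫𝐞𝐚𝐥) as an optional float, with None for 𝖭𝗈𝗍𝗁𝗂𝗇𝗀:

```python
# fromMaybe x m = case m of { Nothing _ -> x | Just x' -> x' }
def from_maybe(default, m):
    return default if m is None else m

# fromMaybe_n: fill each missing covariate with its (trainable) default
def from_maybe_n(defaults, ms):
    return [from_maybe(d, m) for d, m in zip(defaults, ms)]

# a data set with missing covariates, and default values for them
data = [[1.0, None], [None, 4.0]]
defaults = [0.5, 0.25]
filled = [from_maybe_n(defaults, row) for row in data]
```

In the optimization described above, `defaults` would be part of the parameter vector being differentiated, alongside the network weights.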

5.2.Semantics.

In Section 4 we gave a denotational semantics for the simple language in diffeological spaces. This extends to the language in this section, as follows. As before, each type τ is interpreted as a diffeological space, which is a set equipped with a family of plots:

- A variant type {ℓ₁ τ₁ | … | ℓₙ τₙ} is interpreted as the disjoint union of the semantic spaces, ⟦{ℓ₁ τ₁ | … | ℓₙ τₙ}⟧ := ⨄_{i=1}^{n} ⟦τᵢ⟧, with U-plots

  𝒫^U_{⟦{ℓ₁ τ₁ | … | ℓₙ τₙ}⟧} := { [Uⱼ →^{fⱼ} ⟦τⱼ⟧ ↪ ⨄_{i=1}^{n} ⟦τᵢ⟧]_{j=1}^{n} | U = ⨄_{j=1}^{n} Uⱼ, fⱼ ∈ 𝒫^{Uⱼ}_{⟦τⱼ⟧} }.

- A list type 𝐥𝐢𝐬𝐭(τ) is interpreted as the union of the sets of length-i tuples for all natural numbers i, ⟦𝐥𝐢𝐬𝐭(τ)⟧ := ⨄_{i=0}^{∞} ⟦τ⟧ⁱ, with U-plots

  𝒫^U_{⟦𝐥𝐢𝐬𝐭(τ)⟧} := { [Uⱼ →^{fⱼ} ⟦τ⟧ʲ ↪ ⨄_{i=0}^{∞} ⟦τ⟧ⁱ]_{j=0}^{∞} | U = ⨄_{j=0}^{∞} Uⱼ, fⱼ ∈ 𝒫^{Uⱼ}_{⟦τ⟧ʲ} }.

The constructors and destructors for variants and lists are interpreted as in the usual set theoretic semantics.

It is routine to show inductively that these interpretations are smooth. Thus every term Γ ⊢ 𝑡 : 𝜏 in the extended language is interpreted as a smooth function ⟦ 𝑡 ⟧ : ⟦ Γ ⟧ → ⟦ 𝜏 ⟧ between diffeological spaces. List objects as initial algebras are computed as usual in a cocomplete category (e.g. [JR11]). More generally, the interpretation for algebraic data types follows exactly the usual categorical semantics of variant types and inductive types (e.g. [Pit95]).

6.Categorical analysis of (higher order) forward AD and its correctness

This section has three parts. First, we give a categorical account of the functoriality of AD (Ex. 6.1). Then we introduce our gluing construction, and relate it to the correctness of AD (dgm. (4)). Finally, we state and prove a correctness theorem for all first order types by considering a category of manifolds (Th. 8).

6.1.Syntactic categories.

The key contribution of this subsection is that the AD macro translation (Section 3.2) has a canonical status as a unique functor between categories with structure. To this end, we build a syntactic category 𝐒𝐲𝐧 from our language, which has the property of a free category with certain structure. This means that for any category  𝒞 with this structure, there is a unique structure-preserving functor 𝐒𝐲𝐧 → 𝒞 , which is an interpretation of our language in that category. Generally speaking, this is the categorical view of denotational semantics (e.g. [Pit95]). But in this particular setting, the category 𝐒𝐲𝐧 itself admits alternative forms of this structure, given by the dual numbers interpretation, the triple numbers interpretation etc. of Section 2. This gives canonical functors 𝐒𝐲𝐧 → 𝐒𝐲𝐧 translating the language into itself, which are the AD macro translations (Section 3.2). A key point is that 𝐒𝐲𝐧 is almost entirely determined by universal properties (for example, cartesian closure for the function space); the only freedom is in the choice of interpretation of

(1) the real numbers 𝐫𝐞𝐚𝐥, which can be taken as the plain type 𝐫𝐞𝐚𝐥, or as the dual numbers interpretation 𝐫𝐞𝐚𝐥 ∗ 𝐫𝐞𝐚𝐥, etc.;

(2) the primitive operations 𝗈𝗉, which can be taken as the operation 𝗈𝗉 itself, or as the derivative of the operation, etc.

𝐜𝐚𝐬𝐞 ⟨t₁, …, tₙ⟩ 𝐨𝐟 ⟨x₁, …, xₙ⟩ → s  =  s[t₁/x₁, …, tₙ/xₙ]
s[t/y]  =_{x₁,…,xₙ}  𝐜𝐚𝐬𝐞 t 𝐨𝐟 ⟨x₁, …, xₙ⟩ → s[⟨x₁, …, xₙ⟩/y]
𝐜𝐚𝐬𝐞 ℓᵢ t 𝐨𝐟 {ℓ₁ x₁ → s₁ | ⋯ | ℓₙ xₙ → sₙ}  =  sᵢ[t/xᵢ]
s[t/y]  =_{x₁,…,xₙ}  𝐜𝐚𝐬𝐞 t 𝐨𝐟 {ℓ₁ x₁ → s[ℓ₁ x₁/y] | ⋯ | ℓₙ xₙ → s[ℓₙ xₙ/y]}
𝐟𝐨𝐥𝐝 (x₁, x₂). t 𝐨𝐯𝐞𝐫 [] 𝐟𝐫𝐨𝐦 r  =  r
𝐟𝐨𝐥𝐝 (x₁, x₂). t 𝐨𝐯𝐞𝐫 s₁ :: s₂ 𝐟𝐫𝐨𝐦 r  =  t[s₁/x₁, 𝐟𝐨𝐥𝐝 (x₁, x₂). t 𝐨𝐯𝐞𝐫 s₂ 𝐟𝐫𝐨𝐦 r / x₂]
u = s[[]/y], r[s/x₂] = s[x₁ :: y/y]  ⇒  s[t/y]  =_{x₁,x₂}  𝐟𝐨𝐥𝐝 (x₁, x₂). r 𝐨𝐯𝐞𝐫 t 𝐟𝐫𝐨𝐦 u
(λx. t) s  =  t[s/x]
t  =_{x}  λx. t x

We write =_{x₁,…,xₙ} to indicate that the variables x₁, …, xₙ are not free in the left hand side.

Figure 5.Standard βη-laws (e.g. [Pit95]) for products, functions, variants and lists.

In more detail, our language induces a syntactic category as follows. {defi} Let 𝐒𝐲𝐧 be the category whose objects are types, and where a morphism τ → σ is a term in context x : τ ⊢ t : σ modulo the βη-laws (Fig. 5). Composition is by substitution. For simplicity, we do not impose identities involving the primitive operations, such as the arithmetic identity x + y = y + x, in 𝐒𝐲𝐧. As is standard, this category has the following universal property.

Lemma 4 (e.g. [Pit95]).

For every bicartesian closed category 𝒞 with list objects, and every choice of an object 𝐹 ⁢ ( 𝐫𝐞𝐚𝐥 ) ∈ 𝒞 and morphisms 𝐹 ⁢ ( 𝗈𝗉 ) ∈ 𝒞 ⁢ ( 𝐹 ⁢ ( 𝐫𝐞𝐚𝐥 ) 𝑛 , 𝐹 ⁢ ( 𝐫𝐞𝐚𝐥 ) ) for all 𝗈𝗉 ∈ 𝖮𝗉 𝑛 and 𝑛 ∈ ℕ , in 𝒞 , there is a unique functor 𝐹 : 𝐒𝐲𝐧 → 𝒞 respecting the interpretation and preserving the bicartesian closed structure as well as list objects.

Proof 6.1 (Proof notes).

The functor F : 𝐒𝐲𝐧 → 𝒞 is a canonical denotational semantics for the language, interpreting types as objects of 𝒞 and terms as morphisms. For instance, F(τ → σ) := (Fτ → Fσ), the function space in the category 𝒞, and F(t s) is the composite (Ft, Fs); eval.

When 𝒞 = 𝐃𝐢𝐟𝐟, the denotational semantics of the language in diffeological spaces (Sections 4, 5.2) can be understood as the unique structure preserving functor ⟦−⟧ : 𝐒𝐲𝐧 → 𝐃𝐢𝐟𝐟 satisfying ⟦𝐫𝐞𝐚𝐥⟧ = ℝ, ⟦ς⟧ = ς, and so on.

{exa}[Canonical definition of forward AD] The forward AD macro 𝒟→^{(k,R)} (Sections 3, 5.1) arises as a canonical bicartesian closed functor on 𝐒𝐲𝐧 that preserves list objects. Consider the unique bicartesian closed functor F : 𝐒𝐲𝐧 → 𝐒𝐲𝐧 that preserves list objects such that F(𝐫𝐞𝐚𝐥) = 𝐫𝐞𝐚𝐥^{(R+k choose k)} and

F(𝗈𝗉) = z : (F(𝐫𝐞𝐚𝐥) ∗ … ∗ F(𝐫𝐞𝐚𝐥)) ⊢ 𝐜𝐚𝐬𝐞 z 𝐨𝐟 ⟨x₁, …, xₙ⟩ → 𝒟→^{(k,R)}(𝗈𝗉(x₁, …, xₙ)) : F(𝐫𝐞𝐚𝐥).

Then for any type τ, F(τ) = 𝒟→^{(k,R)}(τ), and for any term x : τ ⊢ t : σ, F(t) = 𝒟→^{(k,R)}(t) as morphisms F(τ) → F(σ) in the syntactic category.

This observation is a categorical counterpart to Lemma 1.

6.2.Categorical gluing and logical relations.

Gluing is a method for building new categorical models which has been used for many purposes, including logical relations and realizability [MS92]. Our logical relations argument in the proof of Theorem 3 can be understood in this setting. (In fact we originally found the proof of Theorem 3 in this way.) In this subsection, for the categorically minded, we explain this, and in doing so we quickly recover a correctness result for the more general language in Section 5 and for arbitrary first order types.

The general, established idea of categorical logical relations starts from the observation that logical relations are defined by induction on the structure of types. Types have universal properties in a categorical semantics (e.g. cartesian closure for the function space), and so we can organize the logical relations argument by defining some category 𝒞 of relations and observing that it has the requisite categorical structure. The interpretation of types as relations can then be understood as coming from a unique structure preserving map 𝐒𝐲𝐧 → 𝒞. In this paper, our logical relations are not quite as simple as a binary relation on sets; rather they are relations between plots. Nonetheless, this still forms a category with the appropriate structure, which follows because it can still be regarded as arising from a gluing construction, as we now explain.

We define a category 𝐆𝐥 𝑘 whose objects are triples ( 𝑋 , 𝑋 ′ , 𝑆 ) where 𝑋 and  𝑋 ′ are diffeological spaces and 𝑆 ⊆ 𝒫 𝑋 ℝ 𝑘 × 𝒫 𝑋 ′ ℝ 𝑘 is a relation between their 𝑘 -dimensional plots. A morphism ( 𝑋 , 𝑋 ′ , 𝑆 ) → ( 𝑌 , 𝑌 ′ , 𝑇 ) is a pair of smooth functions 𝑓 : 𝑋 → 𝑌 , 𝑓 ′ : 𝑋 ′ → 𝑌 ′ , such that if ( 𝑔 , 𝑔 ′ ) ∈ 𝑆 then ( 𝑔 ; 𝑓 , 𝑔 ′ ; 𝑓 ′ ) ∈ 𝑇 . The idea is that this is a semantic domain where we can simultaneously interpret the language and its automatic derivatives.

Proposition 5.

The category 𝐆𝐥 𝑘 is bicartesian closed, has list objects, and the projection functor proj : 𝐆𝐥 𝑘 → 𝐃𝐢𝐟𝐟 × 𝐃𝐢𝐟𝐟 preserves this structure.

Proof 6.2 (Proof notes).

The category 𝐆𝐥_𝑘 is a full subcategory of the comma category id_𝐒𝐞𝐭 ↓ 𝐃𝐢𝐟𝐟(ℝ^𝑘, −) × 𝐃𝐢𝐟𝐟(ℝ^𝑘, −). The result thus follows by the general theory of categorical gluing (e.g. [JLS07, Lemma 15]).

We give a semantics ⦇−⦈ = (⦇−⦈₀, ⦇−⦈₁, 𝑆₋) for the language in 𝐆𝐥_𝑘, interpreting types 𝜏 as objects (⦇𝜏⦈₀, ⦇𝜏⦈₁, 𝑆_𝜏), and terms as morphisms. We let ⦇𝐫𝐞𝐚𝐥⦈₀ =def ℝ and ⦇𝐫𝐞𝐚𝐥⦈₁ =def ℝ^(𝑅+𝑘 choose 𝑘), with the relation

𝑆_𝐫𝐞𝐚𝐥 =def { (𝑓, (∂^(𝛼₁+⋯+𝛼_𝑘)𝑓(𝑥) / ∂𝑥₁^𝛼₁ ⋯ ∂𝑥_𝑘^𝛼_𝑘)_((𝛼₁,…,𝛼_𝑘) = (0,…,0),…,(𝑅,0,…,0))) | 𝑓 : ℝ^𝑘 → ℝ smooth }.
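As a concrete sketch of this relation (our own code, not the paper's), for 𝑘 = 2 and 𝑅 = 2 the second component pairs a smooth 𝑓 : ℝ² → ℝ with the 6-tuple of its partial derivatives, indexed by the multi-indices (𝛼₁, 𝛼₂) with 𝛼₁ + 𝛼₂ ≤ 2 in lexicographic order:

```python
# Hedged sketch: enumerate the multi-indices indexing a (k, R)-Taylor
# representation, and assemble the tuple for a hand-differentiated example.
from itertools import product

def multi_indices(k, R):
    """All (a1, ..., ak) with a1 + ... + ak <= R, in lexicographic order."""
    return [a for a in product(range(R + 1), repeat=k) if sum(a) <= R]

def taylor_rep(partials):
    """Tuple (d^{a1+a2} f / dx1^a1 dx2^a2) over multi_indices(2, 2)."""
    return tuple(partials[a] for a in multi_indices(2, 2))

# Example: f(x1, x2) = x1^2 * x2 at the point (1, 2); partials computed by hand.
x1, x2 = 1.0, 2.0
partials = {
    (0, 0): x1**2 * x2, (0, 1): x1**2, (0, 2): 0.0,
    (1, 0): 2*x1*x2,    (1, 1): 2*x1,  (2, 0): 2*x2,
}
rep = taylor_rep(partials)
```

The count of multi-indices is the binomial coefficient (𝑅+𝑘 choose 𝑘), matching the dimension ℝ^(𝑅+𝑘 choose 𝑘) above.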

We interpret the operations 𝗈𝗉 according to ⟦𝗈𝗉⟧ in ⦇−⦈₀, but according to the (𝑘,𝑅)-Taylor representation of ⟦𝗈𝗉⟧ in ⦇−⦈₁. For instance, when 𝑘 = 2 and 𝑅 = 2, ⦇∗⦈₁ : ℝ⁶ × ℝ⁶ → ℝ⁶ (as (2+2 choose 2) = 6) is

⦇∗⦈₁((𝑥₀₀, 𝑥₀₁, 𝑥₀₂, 𝑥₁₀, 𝑥₁₁, 𝑥₂₀), (𝑦₀₀, 𝑦₀₁, 𝑦₀₂, 𝑦₁₀, 𝑦₁₁, 𝑦₂₀)) =def
( 𝑥₀₀𝑦₀₀,
𝑥₀₀𝑦₀₁ + 𝑥₀₁𝑦₀₀,
𝑥₀₂𝑦₀₀ + 2𝑥₀₁𝑦₀₁ + 𝑥₀₀𝑦₀₂,
𝑥₀₀𝑦₁₀ + 𝑥₁₀𝑦₀₀,
𝑥₁₁𝑦₀₀ + 𝑥₀₁𝑦₁₀ + 𝑥₁₀𝑦₀₁ + 𝑥₀₀𝑦₁₁,
𝑥₂₀𝑦₀₀ + 2𝑥₁₀𝑦₁₀ + 𝑥₀₀𝑦₂₀ ).

At this point one checks that these interpretations are indeed morphisms in 𝐆𝐥 𝑘 . This is equivalent to the statement that ⦇ 𝗈𝗉 ⦈ 1 is the ( 𝑘 , 𝑅 ) -Taylor representation of ⟦ 𝗈𝗉 ⟧  (3). The remaining constructions of the language are interpreted using the categorical structure of 𝐆𝐥 𝑘 , following Lemma  4.
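This check can be carried out numerically. The sketch below (our own code) implements the six Leibniz-rule components of ⦇∗⦈₁ and verifies them on a pair of hand-differentiated polynomials:

```python
# Hedged sketch of the (2,2)-Taylor representation of multiplication: each
# argument is the 6-tuple (f, f_y, f_yy, f_x, f_xy, f_xx) of partials of a
# function of (x, y); the output must be the same tuple for the product.
def taylor_mul(x, y):
    x00, x01, x02, x10, x11, x20 = x
    y00, y01, y02, y10, y11, y20 = y
    return (x00*y00,
            x00*y01 + x01*y00,
            x02*y00 + 2*x01*y01 + x00*y02,
            x00*y10 + x10*y00,
            x11*y00 + x01*y10 + x10*y01 + x00*y11,
            x20*y00 + 2*x10*y10 + x00*y20)

# f(x, y) = x^2 y and g(x, y) = x + 3y at the point (1, 2), partials by hand:
F = (2.0, 1.0, 0.0, 4.0, 2.0, 4.0)   # value, d/dy, d2/dy2, d/dx, d2/dxdy, d2/dx2
G = (7.0, 3.0, 0.0, 1.0, 0.0, 0.0)
H = taylor_mul(F, G)
# For h = fg = x^3 y + 3 x^2 y^2 at (1, 2), the same tuple of partials,
# again computed by hand, is (14, 13, 6, 30, 27, 36).
```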

Notice that the diagram below commutes. One can check this by hand or note that it follows from the initiality of 𝐒𝐲𝐧 (Lemma  4): all the functors preserve all the structure.

(id, 𝒟→(𝑘,𝑅)(−)) ; (⟦−⟧ × ⟦−⟧) = ⦇−⦈ ; proj : 𝐒𝐲𝐧 → 𝐃𝐢𝐟𝐟 × 𝐃𝐢𝐟𝐟.   (4)

We thus arrive at a restatement of the correctness theorem (Th. 3), which holds even for the extended language with variants and lists, because for any 𝑥 1 : 𝐫𝐞𝐚𝐥 , … , 𝑥 𝑛 : 𝐫𝐞𝐚𝐥 ⊢ 𝑡 : 𝐫𝐞𝐚𝐥 , the interpretations ( ⟦ 𝑡 ⟧ , ⟦ 𝒟 → ( 𝑘 , 𝑅 ) ( 𝑡 ) ⟧ ) are in the image of the projection 𝐆𝐥 𝑘 → 𝐃𝐢𝐟𝐟 × 𝐃𝐢𝐟𝐟 , and hence ⟦ 𝒟 → ( 𝑘 , 𝑅 ) ⁢ ( 𝑡 ) ⟧ is a ( 𝑘 , 𝑅 ) -Taylor representation of  ⟦ 𝑡 ⟧ .
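Concretely, what the correctness statement buys for (𝑘,𝑅) = (1,1) is that even when a program routes through higher order functions, the dual-number translation of the whole program still computes the true derivative of the induced first order function. A small numeric sketch (the `twice` combinator is our own illustrative example, not from the paper):

```python
# Hedged sketch: a higher order helper does not break dual-number correctness.
import math

def twice(f):            # higher order helper: apply f twice
    return lambda v: f(f(v))

def d_sin(v):            # dual-number version of sin
    x, dx = v
    return (math.sin(x), math.cos(x) * dx)

program = twice(d_sin)   # translation of the first order map x |-> sin(sin(x))

a = 0.3
primal, tangent = program((a, 1.0))
true_deriv = math.cos(math.sin(a)) * math.cos(a)   # chain rule, by hand
```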

6.3. Correctness at all first order types, via manifolds.

We now generalize Theorem 3 to hold at all first order types, not just the reals.

So far, we have shown that our macro translation (Section 3.2) gives correct derivatives to functions of the real numbers, even if other types are involved in the definitions of the functions (Theorem 3 and Section 6.2). We can state this formally because functions of the real numbers have well understood derivatives (Section 2). There are no established mathematical notions of derivatives at higher types, and so we cannot even begin to argue that our syntactic derivatives of functions ( 𝐫𝐞𝐚𝐥 → 𝐫𝐞𝐚𝐥 ) → ( 𝐫𝐞𝐚𝐥 → 𝐫𝐞𝐚𝐥 ) match with some existing mathematical notion (see also Section 7).

However, for functions of first order type, like 𝐥𝐢𝐬𝐭 ⁢ ( 𝐫𝐞𝐚𝐥 ) → 𝐥𝐢𝐬𝐭 ⁢ ( 𝐫𝐞𝐚𝐥 ) , there are established mathematical notions of derivative, because we can understand 𝐥𝐢𝐬𝐭 ⁢ ( 𝐫𝐞𝐚𝐥 ) as the manifold of all tuples of reals, and then appeal to the well-known theory of manifolds and jet bundles. We do this now, to achieve a correctness theorem for all first order types (Theorem 8). The key high level points are that

manifolds support a notion of differentiation, and an interpretation of all first order types, but not an interpretation of higher types;

diffeological spaces support all types, including higher types, but not an established notion of differentiation in general;

manifolds and smooth maps embed fully and faithfully in diffeological spaces, preserving the interpretation of first order types, so we can use the two notions together.

We now explain this development in more detail.

For our purposes, a smooth manifold 𝑀 is a second-countable Hausdorff topological space together with a smooth atlas. In more detail, a topological space 𝑋 is second-countable when there exists a collection 𝑈 := {𝑈_𝑖}_(𝑖∈ℕ) of open subsets of 𝑋 such that any open subset of 𝑋 can be written as a union of elements of 𝑈. A topological space 𝑋 is Hausdorff if for every pair of distinct points 𝑥 and 𝑦 there exist disjoint open subsets 𝑈, 𝑉 of 𝑋 such that 𝑥 ∈ 𝑈 and 𝑦 ∈ 𝑉. A smooth atlas of a topological space 𝑋 is an open cover 𝒰 together with homeomorphisms (𝜙_𝑈 : 𝑈 → ℝ^(𝑛(𝑈)))_(𝑈∈𝒰) (called charts, or local coordinates) such that 𝜙_𝑈⁻¹ ; 𝜙_𝑉 is smooth on its domain of definition for all 𝑈, 𝑉 ∈ 𝒰. A function 𝑓 : 𝑀 → 𝑁 between manifolds is smooth if 𝜙_𝑈⁻¹ ; 𝑓 ; 𝜓_𝑉 is smooth for all charts 𝜙_𝑈 and 𝜓_𝑉 of 𝑀 and 𝑁, respectively. Let us write 𝐌𝐚𝐧 for this category. This definition of manifolds slightly generalises the usual one from differential geometry in that different charts in an atlas may have different finite dimensions 𝑛(𝑈). Thus we consider manifolds whose dimension is potentially unbounded, albeit locally finite.

Each open subset of ℝ 𝑛 can be regarded as a manifold. This lets us regard the category of manifolds 𝐌𝐚𝐧 as a full subcategory of the category of diffeological spaces. We consider a manifold ( 𝑋 , { 𝜙 𝑈 } 𝑈 ) as a diffeological space with the same carrier set 𝑋 and where the plots 𝒫 𝑋 𝑈 , called the manifold diffeology, are the smooth functions in 𝐌𝐚𝐧 ⁢ ( 𝑈 , 𝑋 ) . A function 𝑋 → 𝑌 is smooth in the sense of manifolds if and only if it is smooth in the sense of diffeological spaces [IZ13]. For the categorically minded reader, this means that we have a full embedding of 𝐌𝐚𝐧 into 𝐃𝐢𝐟𝐟 . Moreover, the natural interpretation of the first order fragment of our language in 𝐌𝐚𝐧 coincides with that in 𝐃𝐢𝐟𝐟 . That is, the embedding of 𝐌𝐚𝐧 into 𝐃𝐢𝐟𝐟 preserves finite products and countable coproducts (hence initial algebras of polynomial endofunctors).

Proposition 6.

Suppose that a type 𝜏 is first order, i.e. it is just built from reals, products, variants, and lists (or, again, arbitrary inductive types), and not function types. Then the diffeological space ⟦ 𝜏 ⟧ is a manifold.

Proof 6.3 (Proof notes).

This is proved by induction on the structure of types. In fact, one may show that every such ⟦𝜏⟧ is isomorphic to a manifold of the form ⨄_(𝑖=1)^𝑛 ℝ^(𝑑_𝑖), where the bound 𝑛 is either finite or ∞, but this isomorphism is typically not an identity function.
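This shape ⨄_𝑖 ℝ^(𝑑_𝑖) can be computed by a simple recursion over type syntax. The sketch below uses our own encoding of types (binary products and sums only; lists are omitted, since 𝐥𝐢𝐬𝐭(𝐫𝐞𝐚𝐥) gives the countable disjoint union ℝ⁰ + ℝ¹ + ℝ² + ⋯):

```python
# Illustrative sketch (our own encoding, not the paper's): the manifold shape
# of a first order type, as the list of dimensions d_i of its pieces R^{d_i}.
def shape(t):
    if t == 'real':
        return [1]
    tag, t1, t2 = t
    if tag == 'sum':      # coproduct: disjoint union of the pieces
        return shape(t1) + shape(t2)
    if tag == 'prod':     # product: dimensions add, pairwise over pieces
        return [d1 + d2 for d1 in shape(t1) for d2 in shape(t2)]

# (real * real) + real  ~  R^2 + R^1
ty = ('sum', ('prod', 'real', 'real'), 'real')
```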

We recall how the Taylor representation of any morphism 𝑓 : 𝑀 → 𝑁 of manifolds is given by its action on jets [KSM99, Chapter IV]. For each point 𝑥 in a manifold 𝑀, define the (𝑘,𝑅)-jet space 𝒥_𝑥^(𝑘,𝑅)𝑀 to be the set {𝛾 ∈ 𝐌𝐚𝐧(ℝ^𝑘, 𝑀) ∣ 𝛾(0) = 𝑥}/∼ of equivalence classes [𝛾] of 𝑘-dimensional plots 𝛾 in 𝑀 based at 𝑥, where we identify 𝛾₁ ∼ 𝛾₂ iff all partial derivatives of order ≤ 𝑅 coincide, in the sense that

∂^(𝛼₁+⋯+𝛼_𝑘)(𝛾₁ ; 𝑓) / ∂𝑥₁^𝛼₁ ⋯ ∂𝑥_𝑘^𝛼_𝑘 (0) = ∂^(𝛼₁+⋯+𝛼_𝑘)(𝛾₂ ; 𝑓) / ∂𝑥₁^𝛼₁ ⋯ ∂𝑥_𝑘^𝛼_𝑘 (0)

for all smooth 𝑓 : 𝑀 → ℝ and all multi-indices (𝛼₁, …, 𝛼_𝑘) = (0,…,0), …, (𝑅,0,…,0). In the case (𝑘,𝑅) = (1,1), a (𝑘,𝑅)-jet space is better known as a tangent space. The (𝑘,𝑅)-jet bundle (a.k.a. tangent bundle, in case (𝑘,𝑅) = (1,1)) of 𝑀 is the set 𝒥^(𝑘,𝑅)(𝑀) =def ⨄_(𝑥∈𝑀) 𝒥_𝑥^(𝑘,𝑅)(𝑀). The charts of 𝑀 equip 𝒥^(𝑘,𝑅)(𝑀) with a canonical manifold structure. The (manifold) diffeology of these jet bundles can be concisely summarized by the plots

𝒫^𝑈_(𝒥^(𝑘,𝑅)(𝑀)) = { 𝑓 : 𝑈 → |𝒥^(𝑘,𝑅)(𝑀)| ∣ ∃𝑔 ∈ 𝒫^(𝑈×ℝ^𝑘)_𝑀. ∀𝑢 ∈ 𝑈. (𝑔(𝑢, 0), [𝑣 ↦ 𝑔(𝑢, 𝑣)]) = 𝑓(𝑢) }.

Then 𝒥^(𝑘,𝑅) acts on smooth maps 𝑓 : 𝑀 → 𝑁 to give 𝒥^(𝑘,𝑅)(𝑓) : 𝒥^(𝑘,𝑅)(𝑀) → 𝒥^(𝑘,𝑅)(𝑁), defined as 𝒥^(𝑘,𝑅)(𝑓)(𝑥, [𝛾]) =def (𝑓(𝑥), [𝛾 ; 𝑓]). In local coordinates, this action 𝒥^(𝑘,𝑅)(𝑓) is seen to coincide precisely with the (𝑘,𝑅)-Taylor representation of 𝑓 given by the Faà di Bruno formula [Mer04]. All told, the (𝑘,𝑅)-jet bundle is a functor 𝒥^(𝑘,𝑅) : 𝐌𝐚𝐧 → 𝐌𝐚𝐧 [KSM99].
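In the (1,1) case, where 𝒥^(1,1) is the tangent bundle functor on 𝑀 = 𝑁 = ℝ, the action and its functoriality (the chain rule) can be checked numerically. A hedged sketch, with all names our own:

```python
# Sketch of the (1,1)-jet (tangent bundle) action on maps R -> R:
# J(f)(x, v) = (f(x), f'(x) * v); the chain rule makes J functorial,
# i.e. J(f ; g) = J(f) ; J(g).
import math

def J(f, df):
    """Tangent-bundle action of f : R -> R with (hand-supplied) derivative df."""
    return lambda x, v: (f(x), df(x) * v)

J_sin = J(math.sin, math.cos)
J_exp = J(math.exp, math.exp)

# J(sin ; exp) computed directly from the composite and its chain-rule
# derivative ...
J_comp = J(lambda x: math.exp(math.sin(x)),
           lambda x: math.exp(math.sin(x)) * math.cos(x))

# ... agrees with applying J_sin first and then J_exp:
x, v = 0.5, 2.0
lhs = J_comp(x, v)
rhs = J_exp(*J_sin(x, v))
```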

We can understand the jet bundle of a composite space in terms of that of its parts.

Lemma 7.

There are canonical isomorphisms 𝒥^(𝑘,𝑅)(⨄_(𝑖=1)^∞ 𝑀_𝑖) ≅ ⨄_(𝑖=1)^∞ 𝒥^(𝑘,𝑅)(𝑀_𝑖) and 𝒥^(𝑘,𝑅)(𝑀₁ × ⋯ × 𝑀ₙ) ≅ 𝒥^(𝑘,𝑅)(𝑀₁) × ⋯ × 𝒥^(𝑘,𝑅)(𝑀ₙ).

Proof 6.4 (Proof notes).

For disjoint unions, notice that smooth morphisms from ℝ^𝑘 into a disjoint union of manifolds always factor through a single inclusion, because ℝ^𝑘 is connected. For products, it is well-known that partial derivatives of a morphism (𝑓₁, …, 𝑓ₙ) are calculated component-wise [Lee13, ex. 3-2].

We define a canonical isomorphism 𝜙_𝜏^(𝒟→𝒥) : ⟦𝒟→(𝑘,𝑅)(𝜏)⟧ → 𝒥^(𝑘,𝑅)(⟦𝜏⟧) for every type 𝜏, by induction on the structure of types. We let 𝜙_𝐫𝐞𝐚𝐥^(𝒟→𝒥) : ⟦𝒟→(𝑘,𝑅)(𝐫𝐞𝐚𝐥)⟧ → 𝒥^(𝑘,𝑅)(⟦𝐫𝐞𝐚𝐥⟧) be given by 𝜙_𝐫𝐞𝐚𝐥^(𝒟→𝒥)(𝑥, 𝑥′) =def (𝑥, [𝑡 ↦ 𝑥 + 𝑥′𝑡]). For the other types, we use Lemma 7. We can now phrase correctness at all first order types.

Theorem 8 (Semantic correctness of 𝒟 → ( 𝑘 , 𝑅 ) (full)).

For any first order type 𝜏, any first order context Γ, and any term Γ ⊢ 𝑡 : 𝜏, the syntactic translation 𝒟→(𝑘,𝑅) coincides with the (𝑘,𝑅)-jet bundle functor, modulo the canonical isomorphisms:

⟦𝒟→(𝑘,𝑅)(𝑡)⟧ ; 𝜙_𝜏^(𝒟→𝒥) = 𝜙_Γ^(𝒟→𝒥) ; 𝒥^(𝑘,𝑅)(⟦𝑡⟧) : ⟦𝒟→(𝑘,𝑅)(Γ)⟧ → 𝒥^(𝑘,𝑅)(⟦𝜏⟧).

Proof 6.5 (Proof notes).

For any 𝑘-dimensional plot 𝛾 ∈ 𝐌𝐚𝐧(ℝ^𝑘, 𝑀), let 𝛾̄ ∈ 𝐌𝐚𝐧(ℝ^𝑘, 𝒥^(𝑘,𝑅)(𝑀)) be the (𝑘,𝑅)-jet curve, given by 𝛾̄(𝑥) = (𝛾(𝑥), [𝑡 ↦ 𝛾(𝑥 + 𝑡)]). First, we note that a smooth map ℎ : 𝒥^(𝑘,𝑅)(𝑀) → 𝒥^(𝑘,𝑅)(𝑁) is of the form 𝒥^(𝑘,𝑅)(𝑔) for some 𝑔 : 𝑀 → 𝑁 if for all smooth 𝛾 : ℝ^𝑘 → 𝑀 we have 𝛾̄ ; ℎ = (𝛾 ; 𝑔)̄ : ℝ^𝑘 → 𝒥^(𝑘,𝑅)(𝑁). This generalizes (3). Second, for any first order type 𝜏, 𝑆_⟦𝜏⟧ = { (𝑓, 𝑓̃) | 𝑓̃ ; 𝜙_𝜏^(𝒟→𝒥) = 𝑓̄ }. This is shown by induction on the structure of types. We conclude the theorem from diagram (4), by putting these two observations together.

7. Discussion: What are derivatives of higher order functions?

In our gluing categories 𝐆𝐥 𝑘 of Section 6.2, we have avoided the question of what semantic derivatives should be associated with higher order functions. Our syntactic macro 𝒟 → provides a specific derivative for every definable function, but in the model 𝐆𝐥 𝑘 there is only a relation between plots and their corresponding Taylor representations, and this relation is not necessarily single-valued. Our approach has been rather indifferent about what “the” correct derivative of a higher order function should be. Instead, all we have cared about is that we are using “a” derivative that is correct in the sense that it can never be used to produce incorrect derivatives for first order functions, where we do have an unambiguous notion of correct derivative.

7.1. Automatic derivatives of higher order functions may not be unique!

For a concrete example showing that derivatives of higher order functions might not be unique in our framework, let us consider the case (𝑘,𝑅) = (1,1) and focus on first derivatives of the evaluation function

ev : ℝ → ⟦(𝐫𝐞𝐚𝐥 → 𝐫𝐞𝐚𝐥) → 𝐫𝐞𝐚𝐥⟧ = ℝ → (ℝ ⇒ ℝ) ⇒ ℝ; 𝑟 ↦ (𝑓 ↦ 𝑓(𝑟)).

Our macro 𝒟→ will return 𝜆𝑎 : ℝ. 𝜆𝑓 : ℝ × ℝ ⇒ ℝ × ℝ. 𝑓(𝑎, 1). In this section we show that the lambda term 𝜆𝑎 : ℝ. 𝜆𝑓 : ℝ × ℝ ⇒ ℝ × ℝ. sort 𝑓 (𝑎, 1) is also a valid derivative of the evaluation map, where sort : (ℝ × ℝ ⇒ ℝ × ℝ) ⇒ (ℝ × ℝ ⇒ ℝ × ℝ) is defined by

sort := 𝜆𝑓. 𝜆(𝑟, _). (𝜋₁(𝑓(𝑟, 0)), 𝜋₂ 𝒟→((−, 0) ; 𝑓 ; 𝜋₁)(𝑟)).

This map is idempotent and it converts any map ℝ × ℝ → ℝ × ℝ into the dual-numbers representation of its first component. For example, sort(swap) is the constantly (0, 0) function, where we write

swap : ℝ × ℝ → ℝ × ℝ; (𝑟, 𝑟′) ↦ (𝑟′, 𝑟).
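This behaviour of sort can be simulated numerically. In the sketch below (our own code), the derivative operator in sort's definition is replaced by a finite-difference approximation, so this is only an approximation of the semantic definition:

```python
# Hedged sketch: sort rebuilds a map R x R -> R x R from the dual-numbers
# representation of its first component, with the derivative approximated
# by central finite differences.
def grad(g, r, h=1e-6):
    """Central finite difference standing in for the exact derivative."""
    return (g(r + h) - g(r - h)) / (2 * h)

def sort(f):
    return lambda r, _: (f(r, 0)[0], grad(lambda s: f(s, 0)[0], r))

def swap(r, r2):
    return (r2, r)

# The first component (-, 0) ; swap ; pi_1 of swap is constantly 0, so
# sort(swap) is (approximately) the constantly (0, 0) function:
out = sort(swap)(1.7, 42.0)
```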

According to our gluing semantics, a function 𝑔 : ⦇𝜏⦈₁ → ⦇𝜎⦈₁ defines a correct (𝑘,𝑅)-Taylor representation of a function 𝑓 : ⦇𝜏⦈₀ → ⦇𝜎⦈₀ iff (𝑓, 𝑔) defines a morphism ⦇𝜏⦈ → ⦇𝜎⦈ in 𝐆𝐥_𝑘. In particular, there is no guarantee that every 𝑓 has a unique correct (𝑘,𝑅)-Taylor representation 𝑔. (Although such Taylor representations are, in fact, unique when 𝜏, 𝜎 are first order types.) The gluing relation ⦇(𝐫𝐞𝐚𝐥 → 𝐫𝐞𝐚𝐥) → 𝐫𝐞𝐚𝐥⦈ in 𝐆𝐥₁ relates curves 𝛾 : ℝ → (ℝ ⇒ ℝ) ⇒ ℝ to “tangent curves” 𝛾′ : ℝ → (ℝ × ℝ ⇒ ℝ × ℝ) ⇒ ℝ × ℝ. In this relation, the function ev is related to at least two different tangent curves.

Lemma 9.

We have a smooth map

sort : (ℝ × ℝ ⇒ ℝ × ℝ) → (ℝ × ℝ ⇒ ℝ × ℝ); 𝑓 ↦ ((𝑟, _) ↦ (𝜋₁(𝑓(𝑟, 0)), ∇((−, 0) ; 𝑓 ; 𝜋₁)(𝑟))).

Proof 7.1.

Let 𝑓 ∈ 𝒫^𝑈_(ℝ×ℝ⇒ℝ×ℝ) and let 𝛾₁, 𝛾₂ ∈ 𝒫^𝑈_ℝ. Then also 𝑢 ↦ (𝛾₁(𝑢), 0) ; 𝑓(𝑢) ; 𝜋₁ ∈ 𝒫^𝑈_ℝ = 𝐌𝐚𝐧(𝑈, ℝ), by definition of the exponential in 𝐃𝐢𝐟𝐟. Therefore, we also have that 𝑢 ↦ ∇(𝑢′ ↦ (𝛾₁(𝑢′), 0) ; 𝑓(𝑢′) ; 𝜋₁)(𝑢) ∈ 𝐌𝐚𝐧(𝑈, ℝ) = 𝒫^𝑈_ℝ, as we are working with infinitely differentiable smooth maps. Consequently,

𝑢 ↦ (𝑓 ; sort)(𝑢)(𝛾₁(𝑢), 𝛾₂(𝑢)) = (𝜋₁(𝑓(𝑢)(𝛾₁(𝑢), 0)), ∇(𝑢′ ↦ (𝛾₁(𝑢′), 0) ; 𝑓(𝑢′) ; 𝜋₁)(𝑢)) ∈ 𝒫^𝑈_(ℝ×ℝ),

by definition of the product in 𝐃𝐢𝐟𝐟. It follows that (𝑓 ; sort) ∈ 𝒫^𝑈_(ℝ×ℝ⇒ℝ×ℝ).

Proposition 10.

We have that both (ev, ev₁′) ∈ ⦇(𝐫𝐞𝐚𝐥 → 𝐫𝐞𝐚𝐥) → 𝐫𝐞𝐚𝐥⦈ and (ev, ev₂′) ∈ ⦇(𝐫𝐞𝐚𝐥 → 𝐫𝐞𝐚𝐥) → 𝐫𝐞𝐚𝐥⦈ for

ev₁′ : ℝ → (ℝ × ℝ ⇒ ℝ × ℝ) ⇒ ℝ × ℝ; 𝑎 ↦ (𝑓 ↦ 𝑓(𝑎, 1)),
ev₂′ : ℝ → (ℝ × ℝ ⇒ ℝ × ℝ) ⇒ ℝ × ℝ; 𝑎 ↦ (𝑓 ↦ (sort 𝑓)(𝑎, 1)).

Proof 7.2.

By definition of ⦇−⦈, we need to show that for any (𝛾, 𝛾′) ∈ ⦇𝐫𝐞𝐚𝐥 → 𝐫𝐞𝐚𝐥⦈, we have that (𝑥 ↦ ev(𝑥)(𝛾(𝑥)), 𝑥 ↦ ev_𝑖′(𝑥)(𝛾′(𝑥))) ∈ ⦇𝐫𝐞𝐚𝐥⦈. This means that we need to show that, for 𝑖 = 1, 2,

𝑥 ↦ ev_𝑖′(𝑥)(𝛾′(𝑥)) = (𝑥 ↦ ev(𝑥)(𝛾(𝑥)), ∇(𝑥 ↦ ev(𝑥)(𝛾(𝑥)))).

Unrolling further, this means we need to show that for any 𝛾 : ℝ → ℝ ⇒ ℝ and 𝛾′ : ℝ → ℝ × ℝ ⇒ ℝ × ℝ such that for any (𝛿, 𝛿′) ∈ ⦇𝐫𝐞𝐚𝐥⦈ (which means that 𝛿 : ℝ → ℝ and 𝛿′ = (𝛿, ∇𝛿)), we have that

(𝑟 ↦ 𝛾(𝑟)(𝛿(𝑟)), 𝑟 ↦ 𝛾′(𝑟)(𝛿′(𝑟))) ∈ ⦇𝐫𝐞𝐚𝐥⦈.

The latter finally means that we need to show that

𝑟 ↦ 𝛾′(𝑟)(𝛿(𝑟), ∇𝛿(𝑟)) = (𝑟 ↦ 𝛾(𝑟)(𝛿(𝑟)), ∇(𝑟 ↦ 𝛾(𝑟)(𝛿(𝑟)))).

Now, focussing on ev₁′: we need to show that

𝑥 ↦ ev₁′(𝑥)(𝛾′(𝑥)) = (𝑥 ↦ ev(𝑥)(𝛾(𝑥)), ∇(𝑥 ↦ ev(𝑥)(𝛾(𝑥)))).

Inlining the definition of ev₁′: we need to show that

𝑥 ↦ 𝛾′(𝑥)(𝑥, 1) = (𝑥 ↦ 𝛾(𝑥)(𝑥), ∇(𝑥 ↦ 𝛾(𝑥)(𝑥))).

This follows by assumption by choosing 𝛿(𝑟) = 𝑟, and hence 𝛿′(𝑟) = (𝑟, 1).

Focussing on ev₂′: we need to show that

𝑥 ↦ ev₂′(𝑥)(𝛾′(𝑥)) = (𝑥 ↦ ev(𝑥)(𝛾(𝑥)), ∇(𝑥 ↦ ev(𝑥)(𝛾(𝑥)))).

Inlining ev₂′’s definition: we need to show that

𝑥 ↦ (sort 𝛾′(𝑥))(𝑥, 1) = 𝑥 ↦ ((𝑟, _) ↦ (𝜋₁(𝛾′(𝑥)(𝑟, 0)), ∇((−, 0) ; 𝛾′(𝑥) ; 𝜋₁)(𝑟)))(𝑥, 1) = 𝑥 ↦ (𝜋₁(𝛾′(𝑥)(𝑥, 0)), ∇((−, 0) ; 𝛾′(𝑥) ; 𝜋₁)(𝑥))

is equal to

(𝑥 ↦ 𝛾(𝑥)(𝑥), ∇(𝑥 ↦ 𝛾(𝑥)(𝑥))).

That is, we need to show that 𝜋₁(𝛾′(𝑥)(𝑥, 0)) = 𝛾(𝑥)(𝑥) for all 𝑥 ∈ ℝ, which holds by the assumption that (𝛾, 𝛾′) ∈ ⦇𝐫𝐞𝐚𝐥 → 𝐫𝐞𝐚𝐥⦈, by choosing 𝛿(𝑥′) = 𝑥 (and hence 𝛿′(𝑥′) = (𝑥, 0)) and then specializing to 𝑥′ = 𝑥.

Yet, ev₁′ ≠ ev₂′, as ev₁′(𝑎)(swap) = (1, 𝑎) while ev₂′(𝑎)(swap) = (0, 0). This shows that ev₁′ and ev₂′ are two distinct “valid” semantic derivatives of the evaluation function ev in our framework. In particular, it shows that semantic derivatives of higher order functions might not be unique. Our macro 𝒟→ will return ev₁′, but everything would still work just as well if it instead returned ev₂′.
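This disagreement, and the agreement of the two candidates on genuine dual-numbers representations, can be observed numerically. In the sketch below (our own code), the derivative operator inside sort is approximated by finite differences:

```python
# Hedged sketch contrasting the two candidate derivatives of ev on swap.
def grad(g, r, h=1e-6):
    return (g(r + h) - g(r - h)) / (2 * h)

def sort(f):
    return lambda r, _: (f(r, 0)[0], grad(lambda s: f(s, 0)[0], r))

ev1 = lambda a: lambda f: f(a, 1.0)
ev2 = lambda a: lambda f: sort(f)(a, 1.0)

swap = lambda r, r2: (r2, r)
a = 3.0
v1 = ev1(a)(swap)   # (1, a): swap is not a dual-numbers representation
v2 = ev2(a)(swap)   # (0, 0): sort first normalizes swap

# On a genuine dual-numbers representation, e.g. of x |-> x^2, both agree:
sq = lambda r, dr: (r * r, 2 * r * dr)
w1 = ev1(a)(sq)
w2 = ev2(a)(sq)
```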

7.2. Canonical derivatives of higher order functions?

Differential geometers and analysts have long pursued notions of a canonical derivative of various higher order functions arising, for example, in the calculus of variations and in the study of infinite dimensional Lie groups [KM97]. Such an uncontroversial notion of derivative exists on various (infinite dimensional) spaces of functions that form suitable (so-called convenient) vector spaces, or, manifolds locally modelled on such vector spaces. At the level of generality of diffeological spaces, however, various natural notions of derivative that coincide in convenient vector spaces start to diverge and it is no longer clear what the best definition of a derivative is [CW14]. Another, fundamentally different setting that defines canonical derivatives of many higher order functions is given by synthetic differential geometry [Koc06].

While derivatives of higher order functions are of deep interest and have rightly been studied in their own right in differential geometry, we believe the situation is subtly different in computer science:

(1)

In programming applications, we use higher order programs only to construct the first order functions that we ultimately end up running and calculating derivatives of. Automatic differentiation methods can exploit this freedom: derivatives of higher order functions only matter in so far as they can be used to construct the correct derivatives of first order functions, so we can choose a simple and cheap notion of derivative among the valid options. As such, the fact that our semantics does not commit to a single notion of derivative of higher order functions can be seen as a feature rather than a bug that models the pragmatics of programming practice.

(2)

While function spaces in differential geometry are typically infinite dimensional objects that are unsuitable for representation in the finite memory of a computer, higher order functions as used in programming are much more restricted: all they can do is call a function on finitely many arguments and analyse the function outputs. As such, function types in programming can be thought of as (locally) finite dimensional. In case a canonical notion of automatic derivative of higher order function is really desired, it may be worth pursuing a more intentional notion of semantics such as one based on game semantics. Such intentional techniques could capture the computational notion of higher order function better than our current (and other) extensional semantics using existing techniques from differential geometry. We hope that an exploration of such techniques might lead to an appropriate notion of computable derivative, even for higher order functions.

8. Discussion and future work

8.1. Summary

We have shown that diffeological spaces provide a denotational semantics for a higher order language with variants and inductive types (Sections 4 and 5). We have used this to show correctness of simple forward-mode AD translations for calculating higher derivatives (Theorems 3 and 8).

The structure of our elementary correctness argument for Theorem 3 is a typical logical relations proof over a denotational semantics. As explained in Section 6, this can equivalently be understood as a denotational semantics in a new kind of space obtained by categorical gluing.

Overall, then, there are two logical relations at play. One is in diffeological spaces, which ensures that all definable functions are smooth. The other is in the correctness proof (equivalently in the categorical gluing), which explicitly tracks the derivative of each function, and tracks the syntactic AD even at higher types.

8.2. Connection to the state of the art in AD implementation

As is common in denotational semantics research, we have here focused on an idealized language and simple translations to illustrate the main aspects of the method. There are a number of points where our approach is simplistic compared to the advanced current practice, as we now explain.

8.2.1. Representation of vectors

In our examples we have treated 𝑛 -vectors as tuples of length  𝑛 . This style of programming does not scale to large  𝑛 . A better solution would be to use array types, following [SFVPJ19]. As demonstrated by [CJS20], our categorical semantics and correctness proofs straightforwardly extend to cover them, in a similar way to our treatment of lists. In fact, [CJS20] formalizes our correctness arguments in Coq and extends them to apply to the system of [SFVPJ19].

8.2.2. Efficient forward-mode AD

For AD to be useful, it must be fast. The ( 1 , 1 ) -AD macro 𝒟 → ( 1 , 1 ) that we use is the basis of an efficient AD library [SFVPJ19]. Numerous optimizations are needed, ranging from algebraic manipulations, to partial evaluations, to the use of an optimizing C compiler, but the resulting implementation is performant in experiments [SFVPJ19]. The Coq formalization [CJS20] validates some of these manipulations using a similar semantics to ours. We believe the implementation in [SFVPJ19] can be extended to apply to the more general ( 𝑘 , 𝑅 ) -AD methods we described in this paper through minor changes.

8.2.3. Reverse-mode and mixed-mode AD

While forward-mode AD methods are useful, many applications require reverse-mode AD, or even mixed-mode AD for efficiency. In [HSV20a], we described how our correctness proof applies to a continuation-based AD technique that closely resembles reverse-mode AD, but only has the correct complexity under a non-standard operational semantics [BMP20] (in particular, the linear factoring rule is crucial). It remains to be seen whether this technique and its correctness proof can be adapted to yield genuine reverse AD under a standard operational semantics.

Alternatively, by relying on a variation of our techniques, [Vák21] gives a correctness proof of a rather different (1,1)-reverse AD algorithm that stores the (primal, adjoint)-vector pair as a struct-of-arrays rather than as an array-of-structs. Future work could explore extending its analysis to (𝑘,𝑅)-reverse AD and mixed-mode AD for efficiently computing higher order derivatives.

8.2.4. Other language features

The idealized languages that we considered so far do not touch on several useful language constructs. For example: the use of functions that are partial (such as division) or partly-smooth (such as ReLU); phenomena such as iteration, recursion; and probabilities. Recent work by MV [Vák20] shows how our analysis of ( 1 , 1 ) -AD extends to apply to partiality, iteration, and recursion. This development is orthogonal to the one in this paper: its methods combine directly with those in the present paper to analyze ( 𝑘 , 𝑅 ) -forward mode AD of recursive programs. We leave the analysis of AD of probabilistic programs for future work.

Acknowledgment

We have benefited from discussing this work with many people, including M. Betancourt, B. Carpenter, O. Kammar, C. Mak, L. Ong, B. Pearlmutter, G. Plotkin, A. Shaikhha, J. Sigal, and others. In the course of this work, MV has also been employed at Oxford (EPSRC Project EP/M023974/1) and at Columbia in the Stan development team. This project has also received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 895827; NWO Veni grant number VI.Veni.202.124; a Royal Society University Research Fellowship; the ERC BLAST grant; the Air Force Office of Scientific Research under award number FA9550-21-1-0038; and a Facebook Research Award.

References [AAB+16] ↑ Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng.Tensorflow: A system for large-scale machine learning.In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pages 265–283, 2016. [Ama12] ↑ Shun-ichi Amari.Differential-geometrical methods in statistics, volume 28.Springer Science & Business Media, 2012. [AP20] ↑ Martín Abadi and Gordon D Plotkin.A simple differentiable programming language.In Proc. POPL 2020. ACM, 2020. [BCLG20] ↑ Gilles Barthe, Raphaëlle Crubillé, Ugo Dal Lago, and Francesco Gavazzo.On the versatility of open logical relations: Continuity, automatic differentiation, and a containment theorem.In Proc. ESOP 2020. Springer, 2020.To appear. [Bet18] ↑ Michael Betancourt.A geometric theory of higher-order automatic differentiation.arXiv preprint arXiv:1812.11592, 2018. [BH11] ↑ John Baez and Alexander Hoffnung.Convenient categories of smooth spaces.Transactions of the American Mathematical Society, 363(11):5789–5825, 2011. [BJD19] ↑ Jesse Bettencourt, Matthew J Johnson, and David Duvenaud.Taylor-mode automatic differentiation for higher-order derivatives in JAX.2019. [BML+20] ↑ Gilbert Bernstein, Michael Mara, Tzu-Mao Li, Dougal Maclaurin, and Jonathan Ragan-Kelley.Differentiating a tensor language.arXiv preprint arXiv:2008.11256, 2020. 
[BMP20] ↑ Alois Brunel, Damiano Mazza, and Michele Pagani.Backpropagation in the simply typed lambda-calculus with linear negation.In Proc. POPL 2020, 2020. [BS96] ↑ Claus Bendtsen and Ole Stauning.Fadbad, a flexible C++ package for automatic differentiation.Technical report, Technical Report IMM–REP–1996–17, Department of Mathematical Modelling, Technical University of Denmark, Lyngby, 1996. [BS97] ↑ Claus Bendtsen and Ole Stauning.Tadiff, a flexible c++ package for automatic differentiation.TU of Denmark, Department of Mathematical Modelling, Lungby. Technical report IMM-REP-1997-07, 1997. [CCG+20] ↑ J. Robin B. Cockett, Geoff S. H. Cruttwell, Jonathan Gallagher, Jean-Simon Pacaud Lemay, Benjamin MacAdam, Gordon D. Plotkin, and Dorette Pronk.Reverse derivative categories.In Proc. CSL 2020, 2020. [CGM19] ↑ Geoff Cruttwell, Jonathan Gallagher, and Ben MacAdam.Towards formalizing and extending differential programming using tangent categories.In Proc. ACT 2019, 2019. [CHB+15] ↑ Bob Carpenter, Matthew D Hoffman, Marcus Brubaker, Daniel Lee, Peter Li, and Michael Betancourt.The Stan math library: Reverse-mode automatic differentiation in C++.arXiv preprint arXiv:1509.07164, 2015. [CJS20] ↑ Curtis Chin Jen Sem.Formalized correctness proofs of automatic differentiation in Coq.Master’s Thesis, Utrecht University, 2020.Thesis: https://dspace.library.uu.nl/handle/1874/400790. Coq code: https://github.com/crtschin/thesis. [CRBD18] ↑ Ricky TQ Chen, Yulia Rubanova, Jesse Bettencourt, and David K Duvenaud.Neural ordinary differential equations.In Advances in Neural Information Processing Systems, pages 6571–6583, 2018. [CS96] ↑ G Constantine and T Savits.A multivariate Faa di Bruno formula with applications.Transactions of the American Mathematical Society, 348(2):503–520, 1996. [CS11] ↑ J Robin B Cockett and Robert AG Seely.The Faa di Bruno construction.Theory and Applications of Categories, 25(15):394–425, 2011. 
[CW14] ↑ J Daniel Christensen and Enxin Wu.Tangent spaces and tangent bundles for diffeological spaces.arXiv preprint arXiv:1411.5425, 2014. [DHS11] ↑ John Duchi, Elad Hazan, and Yoram Singer.Adaptive subgradient methods for online learning and stochastic optimization.Journal of Machine Learning Research, 12(Jul):2121–2159, 2011. [Ell18] ↑ Conal Elliott.The simple essence of automatic differentiation.Proceedings of the ACM on Programming Languages, 2(ICFP):70, 2018. [EM03] ↑ L Hernández Encinas and J Munoz Masque.A short proof of the generalized Faà di Bruno’s formula.Applied Mathematics Letters, 16(6):975–979, 2003. [ER03] ↑ Thomas Ehrhard and Laurent Regnier.The differential lambda-calculus.Theoretical Computer Science, 309(1-3):1–41, 2003. [FJL18] ↑ Roy Frostig, Matthew James Johnson, and Chris Leary.Compiling machine learning programs via high-level tracing.Systems for Machine Learning, 2018. [FST19] ↑ Brendan Fong, David Spivak, and Rémy Tuyéras.Backprop as functor: A compositional perspective on supervised learning.In 2019 34th Annual ACM/IEEE Symposium on Logic in Computer Science (LICS), pages 1–13. IEEE, 2019. [GSS00] ↑ Izrail Moiseevitch Gelfand, Richard A Silverman, and Richard A Silverman.Calculus of variations.Courier Corporation, 2000. [GUW00] ↑ Andreas Griewank, Jean Utke, and Andrea Walther.Evaluating higher derivative tensors by forward propagation of univariate taylor series.Mathematics of Computation, 69(231):1117–1130, 2000. [HG14] ↑ Matthew D Hoffman and Andrew Gelman.The No-U-Turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo.Journal of Machine Learning Research, 15(1):1593–1623, 2014. [HSV20a] ↑ Mathieu Huot, Sam Staton, and Matthijs Vákár.Correctness of automatic differentiation via diffeologies and categorical gluing.In FoSSaCS, pages 319–338, 2020. 
[HSV20b] ↑ Mathieu Huot, Sam Staton, and Matthijs Vákár.Correctness of automatic differentiation via diffeologies and categorical gluing.Full version, 2020.arxiv:2001.02209. [IZ13] ↑ Patrick Iglesias-Zemmour.Diffeology.American Mathematical Soc., 2013. [JLS07] ↑ Peter T Johnstone, Stephen Lack, and P Sobocinski.Quasitoposes, quasiadhesive categories and Artin glueing.In Proc. CALCO 2007, 2007. [JR11] ↑ Bart Jacobs and JMMM Rutten.An introduction to (co)algebras and (co)induction.In Advanced Topics in Bisimulation and Coinduction, pages 38–99. CUP, 2011. [Kar01] ↑ Jerzy Karczmarczuk.Functional differentiation of computer programs.Higher-Order and Symbolic Computation, 14(1):35–57, 2001. [KB14] ↑ Diederik P Kingma and Jimmy Ba.Adam: A method for stochastic optimization.arXiv preprint arXiv:1412.6980, 2014. [KK04] ↑ Dana A Knoll and David E Keyes.Jacobian-free Newton–Krylov methods: a survey of approaches and applications.Journal of Computational Physics, 193(2):357–397, 2004. [KM97] ↑ Andreas Kriegl and Peter W Michor.The convenient setting of global analysis, volume 53.American Mathematical Soc., 1997. [Koc06] ↑ Anders Kock.Synthetic differential geometry, volume 333.Cambridge University Press, 2006. [KSM99] ↑ Ivan Kolár, Jan Slovák, and Peter W Michor.Natural operations in differential geometry.1999. [KTR+17] ↑ Alp Kucukelbir, Dustin Tran, Rajesh Ranganath, Andrew Gelman, and David M Blei.Automatic differentiation variational inference.The Journal of Machine Learning Research, 18(1):430–474, 2017. [KW+52] ↑ Jack Kiefer, Jacob Wolfowitz, et al.Stochastic estimation of the maximum of a regression function.The Annals of Mathematical Statistics, 23(3):462–466, 1952. [Lee13] ↑ John M Lee.Smooth manifolds.In Introduction to Smooth Manifolds, pages 1–31. Springer, 2013. [LMG18] ↑ Sören Laue, Matthias Mitterreiter, and Joachim Giesen.Computing higher order derivatives of matrix and tensor expressions.Advances in Neural Information Processing Systems, 31:2750–2759, 2018. 
[LMG20] Sören Laue, Matthias Mitterreiter, and Joachim Giesen. A simple and efficient tensor calculus. In AAAI, pages 4527–4534, 2020.
[LN89] Dong C Liu and Jorge Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(1-3):503–528, 1989.
[LNV21] Fernando Lucatelli Nunes and Matthijs Vákár. CHAD for expressive total languages. arXiv e-prints, 2021.
[LYRY20] Wonyeol Lee, Hangyeol Yu, Xavier Rival, and Hongseok Yang. On correctness of automatic differentiation for non-differentiable functions. In Advances in Neural Information Processing Systems, 2020.
[Man12] Oleksandr Manzyuk. A simply typed λ-calculus of forward automatic differentiation. In Proc. MFPS 2012, 2012.
[Mar10] James Martens. Deep learning via Hessian-free optimization. In ICML, volume 27, pages 735–742, 2010.
[Mer04] Joel Merker. Four explicit formulas for the prolongations of an infinitesimal Lie symmetry and multivariate Faà di Bruno formulas. arXiv preprint math/0411650, 2004.
[MO20] Carol Mak and Luke Ong. A differential-form pullback programming language for higher-order reverse-mode automatic differentiation. arXiv:2002.08241, 2020.
[MP21] Damiano Mazza and Michele Pagani. Automatic differentiation in PCF. Proc. ACM Program. Lang., 5(POPL):1–27, 2021. doi:10.1145/3434309.
[MS92] John C Mitchell and Andre Scedrov. Notes on sconing and relators. In International Workshop on Computer Science Logic, pages 352–378. Springer, 1992.
[Nea11] Radford M Neal. MCMC using Hamiltonian dynamics. In Handbook of Markov Chain Monte Carlo, chapter 5. Chapman & Hall / CRC Press, 2011.
[PGC+17] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in PyTorch. 2017.
[Pit95] Andrew M Pitts. Categorical logic. Technical report, University of Cambridge, Computer Laboratory, 1995.
[Plo18] Gordon D Plotkin. Some principles of differential programming languages. Invited talk, POPL 2018, 2018.
[PS07] Barak A Pearlmutter and Jeffrey Mark Siskind. Lazy multivariate higher-order forward-mode AD. ACM SIGPLAN Notices, 42(1):155–160, 2007.
[PS08] Barak A Pearlmutter and Jeffrey Mark Siskind. Reverse-mode AD in a functional framework: Lambda the ultimate backpropagator. ACM Transactions on Programming Languages and Systems (TOPLAS), 30(2):7, 2008.
[Qia99] Ning Qian. On the momentum term in gradient descent learning algorithms. Neural Networks, 12(1):145–151, 1999.
[RM51] Herbert Robbins and Sutton Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pages 400–407, 1951.
[Sav06] Thomas H Savits. Some statistical applications of Faà di Bruno. Journal of Multivariate Analysis, 97(10):2131–2140, 2006.
[SFVPJ19] Amir Shaikhha, Andrew Fitzgibbon, Dimitrios Vytiniotis, and Simon Peyton Jones. Efficient differentiable programming in a functional array-processing language. Proceedings of the ACM on Programming Languages, 3(ICFP):97, 2019.
[SMC20] Benjamin Sherman, Jesse Michel, and Michael Carbin. λS: Computable semantics for differentiable programming with higher-order functions and datatypes. arXiv preprint arXiv:2007.08017, 2020.
[Sou80] Jean-Marie Souriau. Groupes différentiels. In Differential Geometrical Methods in Mathematical Physics, pages 91–128. Springer, 1980.
[Sta11] Andrew Stacey. Comparative smootheology. Theory Appl. Categ., 25(4):64–117, 2011.
[Vák20] Matthijs Vákár. Denotational correctness of forward-mode automatic differentiation for iteration and recursion. arXiv preprint arXiv:2007.05282, 2020.
[Vák21] Matthijs Vákár. Reverse AD at higher types: Pure, principled and denotationally correct. In ESOP, pages 607–634, 2021.
[VMBBL18] Bart Van Merriënboer, Olivier Breuleux, Arnaud Bergeron, and Pascal Lamblin. Automatic differentiation in ML: Where we are and where we should be going. In Advances in Neural Information Processing Systems, pages 8757–8767, 2018.
[VS21] Matthijs Vákár and Tom Smeding. CHAD: Combinatory homomorphic automatic differentiation. arXiv preprint arXiv:2103.15776, 2021.
[WGP16] Mu Wang, Assefaw Gebremedhin, and Alex Pothen. Capitalizing on live variables: new algorithms for efficient Hessian computation via automatic differentiation. Mathematical Programming Computation, 8(4):393–433, 2016.
[WWE+19] Fei Wang, Xilun Wu, Gregory Essertel, James Decker, and Tiark Rompf. Demystifying differentiable programming: Shift/reset the penultimate backpropagator. Proceedings of the ACM on Programming Languages, 3(ICFP), 2019.
[ZHCW20] Shaopeng Zhu, Shih-Han Hung, Shouvanik Chakrabarti, and Xiaodi Wu. On the principles of differentiable quantum programming languages. In Proceedings of the 41st ACM SIGPLAN International Conference on Programming Language Design and Implementation, PLDI 2020, London, UK, June 15-20, 2020, pages 272–285. ACM, 2020. doi:10.1145/3385412.3386011.

Appendix A: 𝐂𝐚𝐫𝐭𝐒𝐩 and 𝐌𝐚𝐧 are not cartesian closed categories

Lemma 11.

There is no continuous injection ℝ^{d+1} → ℝ^d.

Proof A.1.

If there were, it would restrict to a continuous injection S^d → ℝ^d. The Borsuk–Ulam theorem, however, tells us that every continuous f : S^d → ℝ^d has some x ∈ S^d such that f(x) = f(−x), which is a contradiction.
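For intuition, the lowest-dimensional case d = 1 needs nothing beyond the intermediate value theorem (this worked example is an addition for the reader, not part of the proof above):

```latex
% For continuous f : S^1 \to \mathbb{R}, consider the odd map
\[
  g \colon S^1 \to \mathbb{R}, \qquad g(x) = f(x) - f(-x),
  \qquad g(-x) = -g(x).
\]
% Following any path on the circle from x to -x, g goes from g(x) to -g(x),
% so by the intermediate value theorem g vanishes somewhere, i.e.
% f(x) = f(-x) for some x, and f is not injective. Hence there is no
% continuous injection S^1 \to \mathbb{R}, and a fortiori none
% \mathbb{R}^2 \to \mathbb{R}.
```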

Let us define the terms:

x_0 : 𝐫𝐞𝐚𝐥, …, x_n : 𝐫𝐞𝐚𝐥 ⊢ t_n = λy. x_0 + x_1 ∗ y + ⋯ + x_n ∗ y^n : 𝐫𝐞𝐚𝐥 → 𝐫𝐞𝐚𝐥

Assuming that 𝐂𝐚𝐫𝐭𝐒𝐩 (respectively 𝐌𝐚𝐧) is cartesian closed, observe that these get interpreted as injective continuous (because smooth) functions ℝ^{n+1} → ⟦𝐫𝐞𝐚𝐥 → 𝐫𝐞𝐚𝐥⟧ in 𝐂𝐚𝐫𝐭𝐒𝐩 and 𝐌𝐚𝐧: the term t_n has the n+1 free variables x_0, …, x_n, and distinct coefficient tuples define distinct polynomial functions.
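The injectivity of ⟦t_n⟧ is the crux of the argument: a degree-n polynomial is determined by its n+1 coefficients, which can be recovered from the function alone by interpolation at n+1 distinct points. The following sketch illustrates this (our illustration only; `make_tn` and `recover_coeffs` are hypothetical names, not from the paper):

```python
def make_tn(coeffs):
    """Interpret a coefficient tuple (x_0, ..., x_n) as the closure
    t_n = lambda y. x_0 + x_1*y + ... + x_n*y^n."""
    return lambda y: sum(c * y**i for i, c in enumerate(coeffs))

def recover_coeffs(f, n):
    """Recover (x_0, ..., x_n) from the function alone by evaluating it
    at the n+1 points 0, 1, ..., n and solving the resulting Vandermonde
    system by Gaussian elimination (no external libraries)."""
    pts = range(n + 1)
    # Augmented matrix rows [y^0, ..., y^n | f(y)].
    m = [[float(y**j) for j in range(n + 1)] + [float(f(y))] for y in pts]
    # Forward elimination with partial pivoting.
    for col in range(n + 1):
        piv = max(range(col, n + 1), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n + 1):
            factor = m[r][col] / m[col][col]
            for c in range(col, n + 2):
                m[r][c] -= factor * m[col][c]
    # Back substitution.
    xs = [0.0] * (n + 1)
    for r in range(n, -1, -1):
        rest = sum(m[r][c] * xs[c] for c in range(r + 1, n + 1))
        xs[r] = (m[r][n + 1] - rest) / m[r][r]
    return xs

coeffs = [2.0, -1.0, 0.5, 3.0]           # x_0, x_1, x_2, x_3
f = make_tn(coeffs)
print([round(c, 6) for c in recover_coeffs(f, 3)])  # the original coefficients
```

Recovering the coefficients exhibits a left inverse of the coefficients-to-function map on its image, which is exactly why the interpretation of t_n is injective.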

Theorem 12.

𝐂𝐚𝐫𝐭𝐒𝐩 is not cartesian closed.

Proof A.2.

In case 𝐂𝐚𝐫𝐭𝐒𝐩 were cartesian closed, we would have ⟦𝐫𝐞𝐚𝐥 → 𝐫𝐞𝐚𝐥⟧ ≅ 𝐫𝐞𝐚𝐥^n for some n. Then we would get, in particular, a continuous injection ⟦t_n⟧ : ℝ^{n+1} → ℝ^n, which contradicts Lemma 11.

Theorem 13.

𝐌𝐚𝐧 is not cartesian closed.

Proof A.3.

Observe that we have ι_n : ℝ^{n+1} → ℝ^{n+2}; ⟨a_0, …, a_n⟩ ↦ ⟨a_0, …, a_n, 0⟩ and that ι_n ; ⟦t_{n+1}⟧ = ⟦t_n⟧. Let us write A_n for the image of ⟦t_n⟧ and A = ∪_{n∈ℕ} A_n. Then A_n is connected because it is the continuous image of a connected set. Similarly, A is connected because it is a non-disjoint union of connected sets: by the displayed equation, A_0 ⊆ A_n for all n. This means that A lies in a single connected component of ⟦𝐫𝐞𝐚𝐥 → 𝐫𝐞𝐚𝐥⟧, which is a manifold of some finite dimension, say d.

Take some x ∈ ℝ^{d+1} (say, 0), take some open d-ball U around ⟦t_d⟧(x) in this component, and take some open (d+1)-ball V around x in ⟦t_d⟧^{−1}(U). Then ⟦t_d⟧ restricts to a continuous injection from V to U, or equivalently, from ℝ^{d+1} to ℝ^d, which contradicts Lemma 11.
