MATHEMATICAL PRELIMINARIES: TENSOR REPRESENTATION OF FUNCTIONS

Before diving into the computational implementation, it's helpful to introduce tensor notation as a powerful tool for representing multivariate functions and their derivatives. This notation provides a concise and elegant way to handle the mathematical objects in our framework, particularly when working with higher-dimensional state spaces.

Tensor Notation for Value Functions

In our framework, value functions are defined over potentially multi-dimensional state spaces. For instance, an arrival value function 𝒜(x) might depend on multiple state variables:

𝒜(x) = 𝒜(x₁, x₂, ..., xₙ)

While we could represent this function as a scalar-valued function of multiple arguments, tensor notation offers advantages when working with derivatives and when implementing the functions computationally.

Advantages of Tensor Notation

  1. Named Variables: References variables directly by name rather than position
  2. Derivative Clarity: Provides clear representation of partial derivatives
  3. Implementation Flexibility: Works with any computational representation (grid, neural, etc.)

Derivative Notation Examples

Consider an arrival value function 𝒜(k, γ) that depends on two state variables: capital k and a production efficiency parameter γ. We can represent this function's derivatives using a superscript notation that explicitly references the variables:

  • The first derivative with respect to k can be denoted as ∂𝒜/∂k or in our notation as 𝒜^(k)
  • Similarly, ∂𝒜/∂γ can be denoted as 𝒜^(γ)
  • The cross-derivative ∂²𝒜/∂k∂γ can be denoted as 𝒜^(k,γ)

This notation makes no assumptions about the order of arguments and references variables directly by name rather than position. It represents derivatives at any arbitrary point in the state space, not just at specific grid points.

Example: Cross-Derivative of Arrival Value Function

Let's consider a specific example where we have an arrival value function 𝒜(k, γ) with two state variables. Suppose we want to compute the cross-derivative ∂²𝒜/∂k∂γ at a specific point (k₀, γ₀).

Using our notation, we would represent this as:

𝒜^(k,γ)(k₀, γ₀) = ∂²𝒜(k₀, γ₀)/∂k∂γ

Computational Implementation Approaches

Derivatives can be computed using any of the following approaches:

  1. Finite Differences:

    𝒜^(k,γ)(k₀, γ₀) ≈ [𝒜(k₀+h_k, γ₀+h_γ) - 𝒜(k₀+h_k, γ₀) - 𝒜(k₀, γ₀+h_γ) + 𝒜(k₀, γ₀)]/(h_k·h_γ)

    where h_k and h_γ are small step sizes for each variable (this is the forward-difference approximation; central differences give higher-order accuracy)
    

  2. Automatic Differentiation:

    𝒜^(k,γ) = gradient(gradient(𝒜, 'k'), 'γ')  // Using variable names directly
    

  3. Symbolic Differentiation: If 𝒜 has a symbolic representation, e.g., 𝒜(k, γ) = k^α * γ^β, then:

    𝒜^(k,γ)(k, γ) = ∂²/∂k∂γ(k^α * γ^β) = α*β*k^(α-1)*γ^(β-1)
    
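The three approaches above can be compared directly. The sketch below, in Python, implements the forward-difference formula from approach 1 for an illustrative closed-form arrival value function 𝒜(k, γ) = k^α · γ^β (the example used in approach 3), and checks it against the exact symbolic cross-derivative α·β·k^(α−1)·γ^(β−1). The function and parameter values are hypothetical stand-ins, not part of the framework itself.

```python
# Illustrative closed-form arrival value function A(k, gamma) = k**alpha * gamma**beta.
alpha, beta = 0.3, 0.7

def A(k, gamma):
    return k**alpha * gamma**beta

def cross_derivative_fd(f, k0, g0, h=1e-4):
    """Forward-difference approximation of d^2 f / (dk dgamma) at (k0, g0),
    i.e. the object written A^(k,gamma)(k0, g0) in the text."""
    return (f(k0 + h, g0 + h) - f(k0 + h, g0)
            - f(k0, g0 + h) + f(k0, g0)) / h**2

def cross_derivative_exact(k, gamma):
    """Exact result from the symbolic rule: alpha*beta*k**(alpha-1)*gamma**(beta-1)."""
    return alpha * beta * k**(alpha - 1) * gamma**(beta - 1)

fd = cross_derivative_fd(A, 2.0, 1.5)
exact = cross_derivative_exact(2.0, 1.5)
print(fd, exact)  # the two values agree to several decimal places
```

The finite-difference error shrinks linearly in h for this forward-difference stencil, which is why automatic or symbolic differentiation is preferable when the representation of 𝒜 supports it.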

Special Functions

In addition to standard functions, our framework supports various special functions that arise in economic and mathematical analysis:

  1. Weierstrass 𝔭-function:
      • The Weierstrass elliptic function 𝔭(z; ω₁, ω₂) is a doubly periodic complex function
      • It satisfies the differential equation: [𝔭'(z)]² = 4[𝔭(z)]³ - g₂𝔭(z) - g₃
      • Here g₂ and g₃ are constants determined by the periods ω₁ and ω₂
      • Used in analyzing periodic phenomena and in certain classes of differential equations
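The differential equation above can be checked numerically. The sketch below builds 𝔭, 𝔭', g₂, and g₃ from truncated lattice sums (𝔭(z) = 1/z² + Σ'[1/(z−ω)² − 1/ω²], g₂ = 60 Σ' ω⁻⁴, g₃ = 140 Σ' ω⁻⁶, where Σ' runs over nonzero lattice points ω = m·ω₁ + n·ω₂) and verifies that [𝔭'(z)]² ≈ 4[𝔭(z)]³ − g₂𝔭(z) − g₃. The periods ω₁ = 1, ω₂ = i and the truncation radius are arbitrary choices for the example; this is an illustrative check, not a production implementation.

```python
# Verify [p'(z)]^2 = 4 p(z)^3 - g2 p(z) - g3 via truncated lattice sums.
N = 100  # truncation: lattice indices m, n range over [-N, N]
omega1, omega2 = 1.0, 1.0j  # square lattice, chosen only for this example

# All nonzero lattice points m*omega1 + n*omega2 inside the truncation box.
lattice = [m * omega1 + n * omega2
           for m in range(-N, N + 1) for n in range(-N, N + 1)
           if (m, n) != (0, 0)]

def weierstrass_p(z):
    """Truncated lattice-sum approximation of p(z)."""
    return 1 / z**2 + sum(1 / (z - w)**2 - 1 / w**2 for w in lattice)

def weierstrass_p_prime(z):
    """Truncated lattice-sum approximation of p'(z)."""
    return -2 / z**3 - 2 * sum(1 / (z - w)**3 for w in lattice)

g2 = 60 * sum(w**-4 for w in lattice)
g3 = 140 * sum(w**-6 for w in lattice)  # vanishes for the square lattice

z = 0.3 + 0.2j  # arbitrary test point away from the lattice
p, pp = weierstrass_p(z), weierstrass_p_prime(z)
residual = pp**2 - (4 * p**3 - g2 * p - g3)
print(abs(residual) / abs(pp**2))  # small relative residual
```

The truncation error in the slowly converging g₂ sum dominates the residual, which is why the check is relative rather than exact.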

In our computational framework, we implement a unified interface that allows us to compute these derivatives with respect to named variables regardless of how the function is represented internally (grid, polynomial, spline, or neural network).
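One way such a named-variable interface might look is sketched below. The class name, method names, and the finite-difference backend are all hypothetical illustrations (the framework's actual API may differ, and a grid, spline, or neural-network representation would substitute its own differentiation rule behind the same interface).

```python
from typing import Callable

class ValueFunction:
    """Hypothetical wrapper exposing name-based derivatives of a value function."""

    def __init__(self, func: Callable[..., float], variables: list):
        self.func = func            # underlying representation (closed form here;
        self.variables = variables  # could equally be a grid, spline, or network)

    def __call__(self, **state: float) -> float:
        return self.func(**state)

    def derivative(self, *wrt: str, h: float = 1e-5) -> Callable[..., float]:
        """Return A^(x1,...,xn) as a callable, differentiating with respect to
        the named variables in `wrt` via nested central differences."""
        def d(**state: float) -> float:
            if not wrt:
                return self.func(**state)
            first, rest = wrt[0], wrt[1:]
            up = dict(state); up[first] += h
            down = dict(state); down[first] -= h
            inner = self.derivative(*rest, h=h)
            return (inner(**up) - inner(**down)) / (2 * h)
        return d

# Usage with the running example A(k, gamma) = k**0.3 * gamma**0.7:
A = ValueFunction(lambda k, gamma: k**0.3 * gamma**0.7, ["k", "gamma"])
A_k_gamma = A.derivative("k", "gamma")  # the object written A^(k,gamma) in the text
print(A_k_gamma(k=2.0, gamma=1.5))
```

Because derivatives are requested by variable name rather than argument position, caller code is unaffected if the ordering of state variables changes.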