
Core DDSL Concepts

This document defines a Functional Programming (FP) system following Backus (1978).

1. Objects (O)

The set of objects O is the smallest set containing:

  1. ⊥ (bottom) — the undefined object
  2. Atoms — a set A including T (true), F (false), ϕ (empty sequence), numbers, symbols
  3. Sequences — if x₁, ..., xₙ are objects, then ⟨x₁, ..., xₙ⟩ is an object

Key constraint: The sequence constructor is ⊥-preserving:

If x is a sequence with ⊥ as an element, then x = ⊥.

Thus no proper sequence contains ⊥.
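The ⊥-preserving constructor can be sketched in a few lines of Python; the names `BOTTOM` and `seq` are ours, not part of the source.

```python
# Hypothetical sketch: objects as Python values, with a sentinel for ⊥.
BOTTOM = object()  # stands for the undefined object ⊥

def seq(*xs):
    """Sequence constructor; ⊥-preserving: any ⊥ element collapses the whole sequence to ⊥."""
    return BOTTOM if any(x is BOTTOM for x in xs) else tuple(xs)

assert seq(1, 2, 3) == (1, 2, 3)
assert seq(1, BOTTOM, 3) is BOTTOM  # no proper sequence contains ⊥
```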

2. Application

An FP system has a single operation: application.

If f is a function and x is an object, then f:x is an application denoting the result of applying f to x.

+:⟨1,2⟩ = 3       tl:⟨A,B,C⟩ = ⟨B,C⟩
1:⟨A,B,C⟩ = A     2:⟨A,B,C⟩ = B
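These primitives can be sketched in Python, assuming sequences are tuples and ⊥ is a sentinel (all names here are ours):

```python
BOTTOM = object()  # sentinel standing for ⊥

def plus(x):
    # + : ⟨a, b⟩ → a + b on a pair of numbers; undefined otherwise
    if isinstance(x, tuple) and len(x) == 2 and all(isinstance(v, (int, float)) for v in x):
        return x[0] + x[1]
    return BOTTOM

def tl(x):
    # tl : ⟨x1, ..., xn⟩ → ⟨x2, ..., xn⟩
    return x[1:] if isinstance(x, tuple) and len(x) >= 1 else BOTTOM

def selector(i):
    # the selector function i : ⟨x1, ..., xn⟩ → xi (1-indexed)
    def sel(x):
        return x[i - 1] if isinstance(x, tuple) and len(x) >= i else BOTTOM
    return sel

assert plus((1, 2)) == 3
assert tl(('A', 'B', 'C')) == ('B', 'C')
assert selector(2)(('A', 'B', 'C')) == 'B'
```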

3. Functions (F)

All functions f ∈ F map objects to objects and are bottom-preserving: f:⊥ = ⊥ for all f.

Functions come in three kinds:

| Kind | Description |
| --- | --- |
| Primitive | Built-in (e.g., +, tl, selectors 1, 2, ...) |
| Functional form | Combines functions/objects to produce new functions |
| Defined | Named via definitions (Def f ≡ r) |

4. Functional Forms

Functional forms are expressions that denote functions, parameterized by other functions or objects.

| Form | Definition | Description |
| --- | --- | --- |
| Composition | (f∘g):x ≡ f:(g:x) | Apply g, then f |
| Construction | [f₁,...,fₙ]:x ≡ ⟨f₁:x,...,fₙ:x⟩ | Parallel application |
| Condition | (p→f;g):x ≡ p:x=T → f:x; g:x | Conditional |
| Constant | x̄:y ≡ y=⊥ → ⊥; x | Always returns x |
| Apply-to-all | αf:⟨x₁,...,xₙ⟩ ≡ ⟨f:x₁,...,f:xₙ⟩ | Map |
| Insert | /f:⟨x₁,...,xₙ⟩ ≡ f:⟨x₁, /f:⟨x₂,...,xₙ⟩⟩, with /f:⟨x₁⟩ ≡ x₁ | Fold/reduce |
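The forms in the table can be sketched as Python higher-order functions; each returned function maps objects to objects, and the sentinel `BOTTOM` (our name) stands for ⊥:

```python
BOTTOM = object()  # sentinel standing for ⊥ (all names in this sketch are ours)

def compose(f, g):
    # (f∘g):x ≡ f:(g:x)
    def h(x):
        y = g(x)
        return BOTTOM if y is BOTTOM else f(y)
    return h

def construction(*fs):
    # [f1,...,fn]:x ≡ ⟨f1:x,...,fn:x⟩; ⊥-preserving like the sequence constructor
    def h(x):
        ys = tuple(f(x) for f in fs)
        return BOTTOM if any(y is BOTTOM for y in ys) else ys
    return h

def condition(p, f, g):
    # (p→f;g):x ≡ p:x = T → f:x; p:x = F → g:x; otherwise ⊥
    def h(x):
        t = p(x)
        if t is True:
            return f(x)
        if t is False:
            return g(x)
        return BOTTOM
    return h

def constant(c):
    # c̄:y ≡ y = ⊥ → ⊥; c
    return lambda y: BOTTOM if y is BOTTOM else c

def apply_to_all(f):
    # αf:⟨x1,...,xn⟩ ≡ ⟨f:x1,...,f:xn⟩
    def h(x):
        if not isinstance(x, tuple):
            return BOTTOM
        ys = tuple(f(e) for e in x)
        return BOTTOM if any(y is BOTTOM for y in ys) else ys
    return h

def insert(f):
    # /f:⟨x1,...,xn⟩ ≡ f:⟨x1, /f:⟨x2,...,xn⟩⟩, with /f:⟨x1⟩ ≡ x1
    def h(x):
        if not isinstance(x, tuple) or not x:
            return BOTTOM
        if len(x) == 1:
            return x[0]
        rest = h(x[1:])
        return BOTTOM if rest is BOTTOM else f((x[0], rest))
    return h

plus = lambda pair: pair[0] + pair[1]
total = insert(plus)
double_all = apply_to_all(lambda n: 2 * n)
assert total((1, 2, 3, 4)) == 10
assert compose(total, double_all)((1, 2, 3)) == 12
```

Note that `construction` and `apply_to_all` re-apply the ⊥-preserving rule from Section 1, so a single undefined component collapses the whole result.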

5. System Definition

An FP system is fully determined by four sets:

| Component | Symbol | Role |
| --- | --- | --- |
| Atoms | A | Determines the set of objects |
| Primitive functions | P | Built-in computations |
| Functional forms | F | Combining operations |
| Definitions | D | Named function bindings (well-formed: no duplicate left-hand sides) |

6. Semantics

To compute f:x:

  1. If f is primitive → system knows how to compute it
  2. If f is a functional form → apply the form's definition using its parameters
  3. If f is defined (Def f ≡ r) → compute r:x
  4. Otherwise → f:x = ⊥

If computation does not terminate, the result is ⊥.
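The four rules above can be sketched as a small evaluator; the dictionaries and the `second` example are hypothetical, not from the source:

```python
BOTTOM = object()  # sentinel standing for ⊥

primitives = {                  # rule 1: built-in functions
    '+':  lambda x: x[0] + x[1],
    'tl': lambda x: x[1:],
}
definitions = {}                # rule 3: Def f ≡ r, stored as name → callable for r

def apply_fn(f, x):
    """Compute f:x. Functional forms (rule 2) are ordinary callables here."""
    if x is BOTTOM:
        return BOTTOM                       # every function is ⊥-preserving
    if callable(f):
        return f(x)                         # rule 2: a functional form denotes a function
    if f in primitives:
        return primitives[f](x)             # rule 1
    if f in definitions:
        return apply_fn(definitions[f], x)  # rule 3: Def f ≡ r → compute r:x
    return BOTTOM                           # rule 4: f is unknown

# Def second ≡ 1∘tl (hypothetical definition)
definitions['second'] = lambda x: apply_fn('tl', x)[0]

assert apply_fn('+', (1, 2)) == 3
assert apply_fn('second', ('A', 'B', 'C')) == 'B'
assert apply_fn('mystery', (1, 2)) is BOTTOM
```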

7. Domain-Compatibility Implies MDP Structure

Theorem (informal): Let \(\mathcal{S} = \{S_1, \ldots, S_K\}\) be a set of stage operators. If:

  1. each \(S_i : \mathcal{V}_i \to \mathcal{V}_{i+1}\) is a functional on value functions, and
  2. \(\text{cod}(S_i) \subseteq \text{dom}(S_{i+1})\) for all adjacent pairs,

Then the composition \(\mathbb{T} = S_K \circ \cdots \circ S_1\) is a well-formed Bellman operator.

Equivalently: a set of domain-compatible stages cannot fail to define an MDP. The compatibility conditions ARE the MDP structure.
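Under the stated conditions, the adjacency check reduces to a set-inclusion test over named state spaces. A minimal sketch (the `Stage` type and its field names are ours):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Stage:
    name: str
    dom: frozenset  # state-variable names the stage consumes
    cod: frozenset  # state-variable names it produces

def composable(stages):
    """True iff cod(S_i) ⊆ dom(S_{i+1}) for every adjacent pair."""
    return all(a.cod <= b.dom for a, b in zip(stages, stages[1:]))

s1 = Stage('S1', frozenset({'x'}), frozenset({'x', 'a'}))
s2 = Stage('S2', frozenset({'x', 'a'}), frozenset({'x'}))
assert composable([s1, s2])  # cod(S1) matches dom(S2): composition is well-formed
```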


8. Why Variable Names Become Essential

This makes naming a structural feature, not just documentation:

| Role | Why It Matters |
| --- | --- |
| State variable names | Determine domain identity: \(X_{\prec}(S_2)\) must literally match \(X_{\succ}(S_1)\) |
| Domain declarations | The type V : State → ℝ isn't annotation; it's the interface contract |
| Perch labels | Name the information sets; composition requires label agreement |

Example: If stage \(S_1\) outputs to space \((x, a)\) and stage \(S_2\) expects input from space \((x, w)\), they cannot compose—even if dimensionally compatible. The names enforce economic meaning.
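The mismatch in this example reduces to a comparison of name sets (the variable names come from the example; the check itself is our sketch):

```python
s1_cod = {'x', 'a'}  # S1 outputs to space (x, a)
s2_dom = {'x', 'w'}  # S2 expects input from space (x, w)

assert len(s1_cod) == len(s2_dom)  # dimensionally compatible (both 2-dimensional)...
assert not s1_cod <= s2_dom        # ...but the names disagree, so composition is rejected
```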


9. The DDSL Guarantee

This gives you a strong invariant:

If it type-checks, it's an MDP.

Specifically:

  - Well-formed DDSL spec → domain-compatible stage sequence → valid Bellman operator
  - No "accidentally composable" stages that don't make economic sense
  - The naming discipline prevents dimensional coincidences from being mistaken for structural compatibility

This is why @in constraints and explicit state-space declarations aren't bureaucracy—they're the mechanism that makes composition meaningful.


Reference: Backus, J. (1978). "Can Programming Be Liberated from the von Neumann Style?" Communications of the ACM, 21(8), 613-641.