4. Stages, Methodization, and Calibration¶
This section defines how DDSL models are organized into stages, how symbolic operators become numerical algorithms through methodization, and how abstract symbols become concrete values through calibration.
4.1 Syntactic Stages¶
A DDSL model is organized into coherent units called stages. At the syntactic level, a stage is a collection of related function objects together with a closed declaration environment of symbols.
Core (WIP but guiding) principle: a stage is a declaration unit: it introduces a complete set of named objects (spaces, variables, parameters/settings, operators/movers, etc.) with no free names. This is what makes later methodization + calibration deterministic enough to yield an unambiguous host-language representation.
We say that a stage declares a symbol when it introduces a name together with a kind and typing constraints (e.g., x @in X is a declaration/typing judgment; it is not a runtime assignment).
Scoping rule (WIP but important):
- Names declared in symbols: are stage-global and immutable (single assignment at stage scope).
- Names introduced inside an @via block are local temporaries (lexically scoped to that @via body) and are not exported across equations/movers.
Definition of Syntactic Stage¶
Definition (Syntactic Stage): A syntactic stage is a finite set $$ \mathbb{S} = { f_1,\dots,f_N } \subset \mathbb{F} $$ where each \(f_i\) is a function object (a finite sequence of symbols from \(\mathbf{SYM}\) that satisfies the DDSL grammar).
Stage Structure¶
A syntactic stage typically contains:
- Symbol declarations (`symbols:`) — all names used within the stage (typing environment)
- Operator declarations (`operators:`) — operator templates/classes in scope for methodization (no methods, no numbers)
- Function definitions (`equations:`) — primitive functions, kernels, auxiliary definitions
- Perches/movers (`perches:`) — ADC structure with forward/backward mover bodies (purely symbolic)
Example structure (buffer-stock model):
stage:
name: consind
model_type: adc_stage
inputs: [Ve, D_arvl]
outputs: [Va, astar, D_dcsn, D_cntn]
operators:
- name: E_{}
class: expectation
- name: argmax_{}
class: optimization
- name: APPROX
class: approximation
- name: dcsn_to_arvl
class: mover
kind: backward
- name: cntn_to_dcsn
class: mover
kind: backward
symbols:
spaces:
"arrival state space": Xa @def R+
"decision state space": Xv @def R+
variables:
states:
"arrival wealth": xa @in Xa
"decision wealth": xv @in Xv
controls:
"consumption": a @in A
shocks:
"income shock": w @in W, w ~ Normal(μ_w, σ_w)
parameters:
"discount factor": β @in (0,1)
"risk aversion": γ @in R+
settings:
"arrival grid points": n_xa @in Z+
"quadrature nodes": n_nodes @in Z+
movers:
"dcsn_to_arvl": B_arvl: [Xv -> R] -> [Xa -> R]
"cntn_to_dcsn": B_dcsn: [Xe -> R] -> [Xv -> R, Xv -> A]
Stage Composition¶
Stages can be composed to build complex models:
- Sequential composition: Stages executed in order
- Parallel composition: Independent stages executed concurrently
- Hierarchical composition: Stages containing sub-stages
4.2 The Methodization Functor¶
Before calibration, the methodization functor transforms symbolic operators into methodized operators by attaching numerical schemes. Critically, methodization is configured via an external config file — methods are NOT specified inside movers.
Definition¶
Definition (Methodization Functor): The methodization functor \(\mathcal{M}\) is a map: $$ \mathcal{M}: (\mathbb{S}, \text{Registry}, \text{MethodConfig}) \to \mathbf{OPS}_{\text{method}} $$ where:
- \(\mathbb{S}\) is the syntactic stage (with `operators:` declarations)
- Registry is the global operation registry (available schemes/methods/options; e.g. `operation-registry.yml`)
- MethodConfig is the model-specific methodization config
- \(\mathbf{OPS}_{\text{method}}\) is the space of methodized operators
Operator identity implication (from the methodization notes):
- Methods attach at the lowest-level object: operator instances (e.g., E_w, argmax_a, draw operators) are methodized independently of the movers in which they appear.
- If the same operator instance appears in multiple movers, it must receive the same method. If you need different methods for “the same-looking operator”, introduce distinct operator instances (e.g., E_w_1, E_w_2) rather than attaching methods to AST occurrence addresses.
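The instance-level binding rule can be sketched as a simple check. The following is a minimal Python illustration (not part of the spec) that enforces one method per operator instance; instance and scheme names follow the examples in this section:

```python
# Sketch: methodization bindings are keyed by operator *instance*, so the
# same instance receives exactly one (scheme, method) everywhere it appears.

def bind_methods(instances_used, method_config):
    """Map each operator instance to exactly one (scheme, method) binding."""
    bindings = {}
    for entry in method_config:
        inst = entry["instance"]
        choice = (entry["scheme"], entry["method"])
        if inst in bindings and bindings[inst] != choice:
            # One instance, one method: conflicting choices require distinct
            # instances (e.g. E_w_1, E_w_2), not per-occurrence methods.
            raise ValueError(f"conflicting methods for operator instance {inst}")
        bindings[inst] = choice
    missing = set(instances_used) - set(bindings)
    if missing:
        raise ValueError(f"no method bound for: {sorted(missing)}")
    return bindings

config = [
    {"instance": "E_w", "scheme": "shock_expectation", "method": "gauss-hermite"},
    {"instance": "argmax_a", "scheme": "statewise_argmax", "method": "golden-section"},
]
bindings = bind_methods(["E_w", "argmax_a"], config)
```

A second entry for `E_w` with a different method would raise, which is the behavioral content of the rule above.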
Three-File Architecture¶
The methodization functor reads three inputs:
| File | Purpose | Contains |
|---|---|---|
| stage.yml | Operator declarations | What operators are used (no methods) |
| operation-registry.yml | Global schemes | What schemes/methods exist |
| methodization.yml | Model choices | Which scheme/method for each operator |
Note (WIP): the DDSL also has an operator registry (as part of the language spec; see §3.5) that defines operator templates, signatures, and how instances like `E_w` are formed and recognized. The `operation-registry.yml` is different: it enumerates available numerical schemes/methods for those operator classes.
This separation ensures:
- Stages stay purely symbolic — no numerical choices embedded
- Methods are explicit — clear record of algorithm choices
- Reusability — same stage can have different methodizations
What Methodization Does¶
The methodization functor:
- Reads operator declarations from the `stage.yml` `operators:` block
- Identifies operator instances used by the stage (e.g., `E_w`, `argmax_a`, `w ~ D_w`) and validates them against the operator registry
- Dispatches by class (expectation, optimization, mover, approximation)
- Looks up scheme/method from methodization config (validated against the operation registry)
- Attaches APPROX to mover outputs (wraps in approximation operators)
- Records settings profiles (symbolic labels for calibration)
Kernels vs. methods (important):
- A stage may contain many kernels / primitive function definitions (transition kernels, utility, constraints, auxiliary definitions). These are pure function objects and do not require method attachments merely by existing.
- Methods attach to operator instances that appear when you execute movers under ρ (expectations, argmax/rootfinding, draws/push-forwards, approximation/interpolation wrappers, etc.).
- In particular, forward dynamics are naturally expressed as push-forward operators induced by (kernel + shock distribution). The push-forward needs a sampling/quadrature method; the kernel itself typically does not.
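The kernel/push-forward distinction can be made concrete with a small sketch. This is an illustrative Python stand-in, not the spec's implementation: `g_av` plays the role of a transition kernel (a pure function, no method), while the push-forward operator it induces carries a Monte Carlo sampling method.

```python
# Sketch: a forward push-forward induced by (kernel + shock distribution).
# The kernel itself needs no method; the push-forward operator does (here,
# Monte Carlo sampling). The exact form of g_av is hypothetical.
import random

def g_av(xa, w):
    """Transition kernel: pure function object, no method attachment."""
    return xa + w

def pushforward_mc(draw_xa, draw_w, kernel, n, seed=0):
    """Monte Carlo push-forward: sample (xa, w), push through the kernel."""
    rng = random.Random(seed)
    return [kernel(draw_xa(rng), draw_w(rng)) for _ in range(n)]

samples = pushforward_mc(
    draw_xa=lambda r: r.uniform(0.1, 10.0),   # stand-in for D_arvl
    draw_w=lambda r: r.gauss(0.0, 0.1),       # w ~ Normal(0, 0.1)
    kernel=g_av,
    n=1000,
)
```

Swapping the sampling method (e.g. quadrature instead of Monte Carlo) changes only the push-forward, never `g_av`.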
Example: Methodization Config¶
The methodization config binds operators to schemes/methods:
# consind-methodization.yml
stage: consind
methodization:
logical_operators:
- symbol: E_{}
instance: E_w
scheme: shock_expectation
method: !gauss-hermite
settings_profile: expect_w_settings
attaches_to:
mover: dcsn_to_arvl
- symbol: argmax_{}
instance: argmax_a
scheme: statewise_argmax
method: !golden-section
settings_profile: argmax_a_settings
attaches_to:
mover: cntn_to_dcsn
movers:
- mover: dcsn_to_arvl
scheme: bellman_backward
method: !generic
approx:
- approx_symbol: APPROX
target: Va
scheme: interpolation
method: !linear
settings_profile: Va_interp_settings
**Authoring note (optional): flat bindings form.** For simpler authoring, this same config can be expressed as a flat list of bindings (grouping is presentation-only). Expectation operators are naturally keyed by the shock RV: `E_{w}(...)` corresponds to operator instance `E_w` (similarly `E_{y}` → `E_y`).
Key point: you can either:
- use a settings_profile label (a named bundle resolved later), or
- more simply, attach a settings: {option_key: settings_symbol_name, ...} mapping directly.
In both cases, concrete numbers live outside the stage file (in settings.yml / calibration inputs).
Example: Methodizing the Arrival Mover¶
Stage syntax (purely symbolic):
B_arvl: Vv -> Va @via |
  xv = g_av(xa, w)
  Va(xa) = E_w[Vv(xv)]
Methodization config entry:
- symbol: E_{}
instance: E_w
scheme: shock_expectation
method: !gauss-hermite
settings_profile: expect_w_settings
After methodization:
M(B_arvl) = (B_arvl, {
E_w: (shock_expectation, !gauss-hermite, expect_w_settings),
Va: APPROX(interpolation, !linear, Va_interp_settings)
})
The methodized operator has:
- A scheme/method for the expectation operator
- An APPROX wrapper for the output function
- But no concrete numbers yet — settings are still symbolic labels
The APPROX Insertion¶
Each mover's output functions are wrapped in APPROX operators:
movers:
- mover: cntn_to_dcsn
approx:
- target: Vv
scheme: interpolation
method: !linear
settings_profile: Vv_interp_settings
- target: astar
scheme: interpolation
method: !linear
settings_profile: astar_interp_settings
This ensures that each mover produces discretized functional objects (interpolants over grids).
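The effect of the APPROX insertion can be sketched directly: a mover produces values on a grid, and the APPROX wrapper turns them into a callable interpolant. Below is a minimal linear interpolator with `clip` extrapolation, matching the settings used above (an illustration, not the spec's implementation):

```python
# Sketch of APPROX(interpolation, !linear, {extrapolation: clip}): wrap
# gridded values as a piecewise-linear function of the state.
from bisect import bisect_right

def approx_linear(grid, values, extrapolation="clip"):
    """Return a callable interpolant over (grid, values)."""
    def interp(x):
        if extrapolation == "clip":
            x = min(max(x, grid[0]), grid[-1])   # clamp to grid bounds
        i = min(max(bisect_right(grid, x) - 1, 0), len(grid) - 2)
        t = (x - grid[i]) / (grid[i + 1] - grid[i])
        return (1 - t) * values[i] + t * values[i + 1]
    return interp

Va = approx_linear([0.0, 1.0, 2.0], [0.0, 1.0, 4.0])
Va(0.5)   # 0.5 (linear between grid points)
Va(5.0)   # 4.0 (clipped to the right endpoint)
```

The discretized functional object the mover outputs is exactly such a callable wrapper over arrays.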
Conceptual Pipeline¶
Υ (math meaning)
↑
syntactic stage ─────┴─────────────────────────────
│
│ (operators: declarations)
│
├─────────────> operation-registry.yml
│ (what schemes exist)
│
└─────────────> methodization.yml
(which choices for this model)
│
↓ M (methodization functor)
(mover, scheme, method, settings_profile)
│
↓ C (calibration)
(mover, scheme, method, concrete_values)
│
↓ ρ (representation)
AST / numerical
│
↓
approximate math operator
↑
(equals Υ only in limit)
4.3 Two-Part Calibration Functor¶
After methodization, the calibration functor attaches concrete numerical values. The calibration config file resolves the settings_profile labels from methodization.
Economic Parameterization¶
The parameterization functor handles economic parameters:
Definition (Parameterization Functor): $$ \mathbb{C}_{\text{param}} : \mathbf{PARAMS} \to \mathbb{O}_P $$ assigns a primitive numerical object to each parameter symbol.
In the calibration file:
parameters:
β: 0.96 # discount factor
γ: 2.0 # risk aversion
r: 1.04 # interest rate
μ_w: 0.0 # shock mean
σ_w: 0.1 # shock std dev
Important: Parameters can be attached already at the Υ level — they belong to the mathematical problem itself.
Method Settings¶
The settings functor resolves the settings_profile labels from methodization:
Definition (Settings Functor): $$ \mathbb{C}_{\text{settings}} : \text{SettingsProfile} \to \mathbb{O}_P $$ assigns concrete values to each settings profile.
In the calibration file:
method_settings:
# Expectation settings (resolves expect_w_settings)
expect_w_settings:
nodes: 9
multidim: tensor
# Optimization settings (resolves argmax_a_settings)
argmax_a_settings:
tol: 1.0e-8
max_iter: 100
# Interpolation settings (resolves Va_interp_settings, etc.)
Va_interp_settings:
extrapolation: clip
# Grid settings
Xa_grid_settings:
n_points: 50
bounds: [0.1, 10.0]
Important: Settings only make sense after methodization — they parameterize the specific schemes chosen.
Combined Calibration¶
The complete calibration functor is the pair: $$ \mathbb{C} = (\mathbb{C}_{\text{param}}, \mathbb{C}_{\text{settings}}) $$
This separation ensures:
1. Clean distinction between economic model (parameters) and numerical implementation (settings)
2. Flexibility to change numerical settings without altering the economic model
3. Correct ordering: Parameters at math level, settings at numerical level
4. Profile resolution: Settings profiles from methodization are resolved to concrete values
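The two parts can be sketched as plain data plus a resolution step. The dictionaries below mirror the calibration file examples in this section; the `calibrate` helper is an illustrative stand-in for the settings functor:

```python
# Sketch: combined calibration as (C_param, C_settings). Parameter values
# substitute into the stage; settings values resolve the symbolic
# settings_profile labels left by methodization.
C_param = {"β": 0.96, "γ": 2.0, "r": 1.04, "μ_w": 0.0, "σ_w": 0.1}
C_settings = {
    "expect_w_settings": {"nodes": 9, "multidim": "tensor"},
    "argmax_a_settings": {"tol": 1.0e-8, "max_iter": 100},
    "Va_interp_settings": {"extrapolation": "clip"},
}

def calibrate(methodized):
    """Replace each settings_profile label with its concrete record."""
    return {
        inst: (scheme, method, C_settings[profile])
        for inst, (scheme, method, profile) in methodized.items()
    }

M = {"E_w": ("shock_expectation", "gauss-hermite", "expect_w_settings")}
calibrated = calibrate(M)
# calibrated["E_w"] now carries {"nodes": 9, "multidim": "tensor"}
```

Changing `C_settings` alone re-tunes the numerics without touching `C_param` or the stage.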
4.4 From Methodized Operator to AST¶
The complete transformation from syntactic operator to an executable representation follows this pipeline:
Step 1: Start with Syntactic Operator¶
B_dcsn: Ve -> [Vv, astar] @via |
astar(xv) = argmax_{a in Gamma(xv)} Q_kernel(xv, a)
Vv(xv) = Q_kernel(xv, astar(xv))
At this stage, \(\Upsilon(\texttt{B\_dcsn})\) gives the mathematical Bellman operator acting on function spaces.
Step 2: Apply Methodization¶
M(B_dcsn) = (B_dcsn, {
argmax_a: (statewise_argmax, !golden-section, argmax_a_settings),
Vv: APPROX(interpolation, !linear, Vv_interp_settings),
astar: APPROX(interpolation, !linear, astar_interp_settings)
})
Now we have an approximate operator family — the algorithm is chosen but settings are symbolic.
Step 3: Apply Calibration¶
C ∘ M(B_dcsn) = (B_dcsn, {
argmax_a: (statewise_argmax, !golden-section, {tol: 1.0e-8, max_iter: 100}),
Vv: APPROX(interpolation, !linear, {extrapolation: clip}),
astar: APPROX(interpolation, !linear, {extrapolation: clip})
})
All required settings profiles are replaced with concrete (primitive) numerical values.
Step 4: Apply Representation Map¶
ρ(C ∘ M(B_dcsn)) = AST {
for each xv in grid_Xv:
a_star[xv] = golden_section_search(
func=lambda a: Q_kernel(xv, a),
bounds=Gamma(xv),
tol=1e-6
)
Vv[xv] = Q_kernel(xv, a_star[xv])
return LinearInterpolator(grid_Xv, Vv), LinearInterpolator(grid_Xv, a_star)
}
Now we have an operator \(T_{\text{num}}\) that maps primitive numerical objects (arrays of approximate value function values) to primitive numerical objects.
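The AST in Step 4 can be instantiated as a runnable sketch. The following Python mirrors its structure (statewise golden-section argmax over a grid); `Q_kernel` and `Gamma` here are toy stand-ins, not the model's actual kernel:

```python
# Sketch of T_num for the decision mover: for each grid point, golden-section
# argmax of Q_kernel, collecting value and policy arrays.
import math

def golden_section_max(f, lo, hi, tol=1e-6):
    """Maximize a unimodal f on [lo, hi] by golden-section search."""
    invphi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) >= f(d):       # maximum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                  # maximum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

def B_dcsn_num(Q_kernel, Gamma, grid_Xv, tol=1e-6):
    a_star, Vv = [], []
    for xv in grid_Xv:
        lo, hi = Gamma(xv)
        a = golden_section_max(lambda a: Q_kernel(xv, a), lo, hi, tol)
        a_star.append(a)
        Vv.append(Q_kernel(xv, a))
    return Vv, a_star

# Toy Q with analytic maximizer a* = xv / 2 on the feasible set [0, xv]:
Vv, a_star = B_dcsn_num(
    Q_kernel=lambda xv, a: -(a - xv / 2) ** 2,
    Gamma=lambda xv: (0.0, xv),
    grid_Xv=[1.0, 2.0, 4.0],
)
```

Wrapping `Vv` and `a_star` in interpolants (the APPROX step) completes the mapping from arrays back to callable functional objects.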
4.5 Calibrated Stages¶
The central semantic unit of a DDSL model is the calibrated stage, which combines syntactic structure, methodization, and concrete values.
Definition of Calibrated Stage¶
Definition (Calibrated Stage): A calibrated stage is an immutable triple $$ (\mathbb{S}, \mathcal{M}, \mathbb{C}) $$ where:
- \(\mathbb{S}\) is a syntactic stage
- \(\mathcal{M}\) is a methodization (scheme assignments for all operators)
- \(\mathbb{C}\) is a calibration (values for all parameters and method settings)
Calibration Application¶
Formally, calibration has two parts:
- Parameter calibration substitutes parameter symbols inside the stage’s function objects.
- Settings calibration resolves the settings_profile labels produced by methodization into concrete records of primitive numerical values.
To express the first part, for each function object \(f \in \mathbb{S}\) represented as a finite sequence \(f = (s_1,\dots,s_k)\), define:
$$
\mathbb{C}_{\text{param}}(f) = (c(s_1),\dots,c(s_k)), \qquad
c(s_i) = \begin{cases} \mathbb{C}_{\text{param}}(s_i) & s_i \in \mathbf{PARAMS} \\ s_i & \text{otherwise} \end{cases}
$$
For the second part, if methodization introduces a settings profile label p, calibration supplies a record of primitive numerical values:
$$
\mathbb{C}_{\text{settings}}(p) \in \mathbb{O}_P
$$
where we are using closure of \(\mathbb{O}_P\) under finite records/tuples.
WIP: in implementation, the calibrated, methodized stage is obtained by applying parameter substitution to the stage syntax and resolving each `settings_profile` label in the methodization artifacts before applying ρ.
Properties of Calibrated Stages¶
- Immutability: Once created, a calibrated stage cannot be modified
- Completeness: All parameters, settings, and methods must be specified
- Type safety: Calibration values must match declared types
- Ordering: Methodization precedes calibration — methods must be chosen before settings make sense
4.6 Mathematical Meaning vs. Numerical Meaning¶
The Mathematical Meaning Map (Υ)¶
Definition (Mathematical Meaning Map): $$ \Upsilon : \mathbb{S} \to \mathbb{DM} $$ assigns to a syntactic stage (with parameterization) a mathematical Bellman operator in the space \(\mathbb{DM}\) of dynamic-model operators.
This map:
- Acts on the syntactic layer
- Can incorporate parameters (they define the mathematical problem)
- Does not need methods or settings
- Produces the "true" mathematical model
The Computational Meaning Map (ρ)¶
Definition (Computational Representation Map): $$ \rho : (\mathbb{S}, \mathcal{M}, \mathbb{C}) \to \mathbb{F}^{\text{COMP}} $$ assigns to a calibrated, methodized stage a canonical representation in the host-language AST space.
This map:
- Requires methodization (to know which algorithms to use)
- Requires calibration (to have concrete numbers)
- Produces executable numerical code
- The resulting operator acts on discrete primitive objects
Equality Only in the Limit¶
Critical point: The approximate operator under ρ is not equal to the mathematical operator under Υ: $$ \rho(\mathcal{M}(\mathbb{S}), \mathbb{C}) \neq \Upsilon(\mathbb{S}) $$
Equality holds only in the limit as discretization parameters go to their ideal values:
- Grid points \(n \to \infty\)
- Quadrature nodes \(n_{\text{nodes}} \to \infty\)
- Tolerance \(\text{tol} \to 0\)
Under appropriate assumptions: $$ \lim_{\substack{n \to \infty \\ \text{tol} \to 0}} \rho(\mathcal{M}(\mathbb{S}), \mathbb{C}_n) = \Upsilon(\mathbb{S}) $$
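The limit statement can be illustrated for a single operator: a Gauss-Hermite expectation `E_w[f(w)]` with `w ~ Normal(μ_w, σ_w)` approaches the exact value as the `nodes` setting grows. This sketch (assuming NumPy is available; values match the calibration example) compares two node counts:

```python
# Sketch: quadrature error shrinks as the nodes setting grows — the sense in
# which ρ equals Υ only in the limit.
import math
import numpy as np

def E_w(f, mu, sigma, nodes):
    """Gauss-Hermite quadrature for E[f(w)], w ~ Normal(mu, sigma)."""
    x, w = np.polynomial.hermite.hermgauss(nodes)
    return sum(w_i * f(mu + math.sqrt(2) * sigma * x_i)
               for x_i, w_i in zip(x, w)) / math.sqrt(math.pi)

mu_w, sigma_w = 0.0, 0.1
exact = math.exp(mu_w + sigma_w**2 / 2)     # E[e^w] has a closed form
errs = [abs(E_w(math.exp, mu_w, sigma_w, n) - exact) for n in (3, 9)]
# errs[1] (9 nodes, the calibrated setting) is smaller than errs[0] (3 nodes)
```

The same pattern holds for grid refinement and tolerance tightening in the other operators.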
4.7 Three-File Architecture¶
The DDSL uses a three-file architecture separating syntax, methodization, and calibration:
File 1: Stage (Pure Syntax)¶
# consind-stage.yml
stage:
name: consind
inputs: [Ve, D_arvl]
outputs: [Va, astar, D_dcsn, D_cntn]
operators:
- name: E_{}
class: expectation
- name: argmax_{}
class: optimization
- name: dcsn_to_arvl
class: mover
kind: backward
symbols:
parameters:
"discount factor": β @in (0,1)
settings:
"arrival grid points": n_xa @in Z+
perches:
arrival:
backward:
B_arvl: Vv -> Va @via |
xv = g_av(xa, w)
Va(xa) = E_w[Vv(xv)]
# NO @methods here - purely symbolic
File 2: Methodization Config¶
# consind-methodization.yml
stage: consind
methodization:
logical_operators:
- symbol: E_{}
instance: E_w
scheme: shock_expectation
method: !gauss-hermite
settings_profile: expect_w_settings
movers:
- mover: dcsn_to_arvl
scheme: bellman_backward
method: !generic
approx:
- target: Va
scheme: interpolation
method: !linear
settings_profile: Va_interp_settings
File 3: Calibration¶
# consind-calibration.yml
stage: consind
parameters:
β: 0.96
γ: 2.0
method_settings:
expect_w_settings:
nodes: 9
Va_interp_settings:
extrapolation: clip
Xa_grid_settings:
n_points: 50
bounds: [0.1, 10.0]
Multiple Configurations¶
Different methodizations and calibrations can be combined:
# Development: coarse, fast
consind-methodization-dev.yml # Monte Carlo expectation
consind-calibration-dev.yml # nodes: 100, grid: 20 points
# Production: fine, accurate
consind-methodization-prod.yml # Gauss-Hermite expectation
consind-calibration-prod.yml # nodes: 9, grid: 200 points
4.8 Perches and ADC Structure¶
Perch Organization (Purely Symbolic)¶
In the ADC framework, perches define movers without method annotations:
Interpretation: perches are best understood as information sets (a filtration) within a stage. The arrival/decision/continuation labels encode what is observable/conditionable at each point, and mover bodies must respect this (RHS terms must be measurable with respect to the target perch’s information).
perches:
sequence:
forward: [arrival, decision]
backward: [decision, arrival]
arrival:
backward:
B_arvl: Vv -> Va @via |
xv = g_av(xa, w)
Va(xa) = E_w[Vv(xv)]
forward:
F_arvl: D_arvl -> D_dcsn @via |
xa_tilde ~ D_arvl
w ~ D_w
xv_tilde = g_av(xa_tilde, w)
decision:
backward:
B_dcsn: Ve -> [Vv, astar] @via |
astar(xv) = argmax_{a in Gamma(xv)} Q_kernel(xv, a)
Vv(xv) = Q_kernel(xv, astar(xv))
forward:
F_dcsn: D_dcsn -> D_cntn @via |
xv_tilde ~ D_dcsn
a_tilde = astar(xv_tilde)
xe_tilde = g_ve(xv_tilde, a_tilde)
Key point: Movers describe what mathematical operations occur. The how (numerical methods) comes from the methodization config.
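Executing the forward movers above under ρ amounts to simulating the arrival distribution through the decision perch. A toy Python sketch (hypothetical `g_av`, `g_ve`, and policy; not the spec's representation) of `F_arvl` followed by `F_dcsn`:

```python
# Sketch: forward movers as sampling pipelines over the perch sequence.
import random

rng = random.Random(0)

def F_arvl(draw_arvl, n):
    """xa_tilde ~ D_arvl; w ~ D_w; xv_tilde = g_av(xa_tilde, w)."""
    g_av = lambda xa, w: xa + w                  # hypothetical transition
    return [g_av(draw_arvl(), rng.gauss(0.0, 0.1)) for _ in range(n)]

def F_dcsn(xv_samples, astar):
    """a_tilde = astar(xv_tilde); xe_tilde = g_ve(xv_tilde, a_tilde)."""
    g_ve = lambda xv, a: xv - a                  # hypothetical transition
    return [g_ve(xv, astar(xv)) for xv in xv_samples]

D_dcsn = F_arvl(lambda: rng.uniform(0.1, 10.0), n=500)
D_cntn = F_dcsn(D_dcsn, astar=lambda xv: 0.5 * xv)   # toy policy stand-in
```

Note the measurability discipline from the information-set reading: `F_dcsn` may use `astar` and `xv_tilde` but never the shock `w` directly, which belongs to the arrival perch.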
4.9 Stage Validation¶
Pre-Methodization Validation¶
Before methodization, stages are validated for:
- Symbol completeness: All referenced symbols are declared
- Type consistency: Function compositions are well-typed
- Perch connectivity: Movers connect properly between perches
- Operator registry: All used operators are in the registry
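The symbol-completeness check is the "no free names" principle from §4.1. A deliberately naive token-level sketch (a real checker would work on the AST) of how a mover body is scanned against the declaration environment, with `@via`-local temporaries accumulating as they are bound:

```python
# Sketch: find names referenced in a @via body that are neither declared in
# symbols: nor bound earlier inside the same @via block.
import re

def free_names(via_body, declared):
    bound = set(declared)
    free = set()
    for line in via_body.strip().splitlines():
        lhs, _, rhs = line.partition("=")
        for name in re.findall(r"[A-Za-z_]\w*", rhs):
            if name not in bound:
                free.add(name)
        bound |= set(re.findall(r"[A-Za-z_]\w*", lhs))  # @via-local temporaries
    return free

body = """
xv = g_av(xa, w)
Va(xa) = E_w[Vv(xv)]
"""
ok = free_names(body, {"g_av", "xa", "w", "E_w", "Vv", "Va"})      # set()
bad = free_names(body, {"g_av", "xa", "w", "E_w", "Va"})           # {"Vv"}
```

A non-empty result means the stage is not a closed declaration unit and methodization would be ambiguous.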
Post-Methodization Validation¶
After methodization, we validate:
- Method coverage: All operators have schemes assigned
- Scheme compatibility: Schemes are appropriate for operator types
- Setting references: All scheme settings reference declared settings
Post-Calibration Validation¶
After calibration, we validate:
- Coverage: All parameters and settings have values
- Type matching: Values match declared types (e.g., \(\beta \in (0,1)\))
- Range compliance: Values satisfy constraints
- Consistency: Related parameters are compatible
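Coverage and range checks reduce to evaluating each declared constraint against the supplied values. A minimal sketch (the constraint encoding here is illustrative, not the spec's validation schema):

```python
# Sketch: post-calibration validation — coverage plus type/range compliance
# against the declared typing constraints (β @in (0,1), γ @in R+, n_points @in Z+).
def validate_calibration(values, constraints):
    errors = []
    for name, check in constraints.items():
        if name not in values:
            errors.append(f"missing value: {name}")              # coverage
        elif not check(values[name]):
            errors.append(f"out of range: {name}={values[name]}")  # compliance
    return errors

constraints = {
    "β": lambda v: 0.0 < v < 1.0,                        # @in (0,1)
    "γ": lambda v: v > 0,                                # @in R+
    "n_points": lambda v: isinstance(v, int) and v > 0,  # @in Z+
}
good = validate_calibration({"β": 0.96, "γ": 2.0, "n_points": 50}, constraints)
bad = validate_calibration({"β": 1.2, "γ": 2.0}, constraints)
```

An empty error list is the precondition for constructing the immutable calibrated stage.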
4.10 Summary¶
The stages, methodization, and calibration system provides:
- Modular organization: Stages group related computations around perches
- Clear separation:
- Syntax (structure)
- Methodization (algorithms)
- Calibration (values)
- Correct ordering: Methodization before calibration
- Two-part calibration: Parameters (math level) vs. settings (numerical level)
- Mathematical grounding: Connection to Bellman operators via Υ
- Computational realization: Executable code via ρ (requires methodization + calibration)
- Limit agreement: Approximate operators converge to true operators
The transformation pipeline is: $$ \mathbb{S} \xrightarrow{\ \mathcal{M}\ } \mathcal{M}(\mathbb{S}) \xrightarrow{\ \mathbb{C}\ } (\mathbb{S}, \mathcal{M}, \mathbb{C}) \xrightarrow{\ \rho\ } \mathbb{F}^{\text{COMP}} $$
This architecture ensures that:
- Mathematicians can reason about \(\Upsilon(\mathbb{S})\) as Bellman operators
- Programmers can implement \(\rho(\mathcal{M}(\mathbb{S}), \mathbb{C})\) as an executable representation
- Economists can adjust \(\mathbb{C}_{\text{param}}\) without touching numerics
- Computational scientists can tune \(\mathbb{C}_{\text{settings}}\) for performance
- Algorithm developers can modify \(\mathcal{M}\) to try different schemes