Modular portfolio choice and consumption¶
1 Overview¶
This notebook provides a self-contained walkthrough of Bellman calculus and the dolo-plus declarative syntax, focusing on how to use dolo-plus to declare a model, rather than the functional-programming machinery. We solve a finite-horizon portfolio–consumption problem by composing modular stages into four within-period configurations. Each period has two decisions—portfolio allocation ($\varsigma \in [0,1]$) and consumption–savings ($c>0$)—declared in separate stages and composed via connectors.
We explore two ideas: modularity and semi-conjugacy.
Modularity Each stage implements an operator $\mathbb{S}$ that takes a continuation value function as input and returns an arrival value function—i.e., a well-defined Bellman operator. From the stage's perspective, the rest of the model is irrelevant. Stages are composed—rather than merged—within a period using intra-period connectors, and periods are linked via inter-period connectors. This lets us build and modify models one stage at a time: swap components, reorder decisions, and reuse a small set of template stage recipes.
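As a toy illustration of a stage as an operator (a hypothetical sketch in plain Python, not the dolo-plus API), a stage can be written as a higher-order function that maps a continuation value function to an arrival value function:

```python
import numpy as np

def cons_stage_operator(beta, rho, c_grid):
    """Toy stage operator S: continuation value -> arrival value.

    The stage is self-contained: it knows its own utility and
    feasibility rule, and nothing about the rest of the model.
    """
    def S(v_cntn):
        def v_arvl(m):
            c = c_grid[c_grid < m]                 # feasible consumption
            u = c ** (1 - rho) / (1 - rho)         # CRRA utility
            return float(np.max(u + beta * v_cntn(m - c)))
        return v_arvl
    return S

S = cons_stage_operator(beta=0.96, rho=2.0,
                        c_grid=np.linspace(0.01, 10.0, 500))
v_T = lambda a: 0.0 * a          # terminal continuation value: zero
v_prev = S(v_T)                  # arrival value one step earlier
```

Any continuation value function can be plugged in, which is what makes stages swappable and reorderable.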
Semi-conjugacy Because the four configurations share the same stage components, their iterated value functions are related by order semi-conjugacy (Sargent and Stachurski, 2026). Rearranging the order of the same operators produces period operators that are equivalent up to a transformation. The upshot: we can write the problem in the ordering that best matches economic intuition, but solve a different configuration for numerical reasons (or because a solver is available for that configuration) and then recover the solution to the original problem.
Declarative syntax¶
Bellman calculus is a system of rules that tells us how to map declarative syntax into the formal definition of Bellman operators. dolo-plus (a fork of dolo) is a declarative syntax for modular stages, together with a representation of that syntax in Python (the dolo-plus .mod file).
The goal of declarative syntax is to write what the computational model is—states, controls, constraints, expectations, methods—rather than how to solve it. Following Backus (1978), the same syntax serves two roles:
- it defines the mathematical model (as in a paper), and
- it defines (but does not implement) a computational object (what a solver executes).
As we will show, almost all of the syntax declarations for a stochastic recursive program boil down to stage syntax. dolo-plus is simply a collection of declarations—symbols, equation blocks, space constructors—written in dolo-plus syntax and serialized in YAML.
Stage as meaning and representation¶
So, dolo-plus syntax represents a stage -- a Bellman operator. Bellman calculus gives us rules to implement two maps:
- $\Upsilon$ (meaning): interprets stage syntax as the Bellman operator of a well-defined stochastic program (spaces, transitions, and the stage Bellman operator, including any implied kernels/policies).
$$ \text{Stage syntax} \overset{\Upsilon}{\longrightarrow} \text{Math model} $$
- $\mathrm{P}$ (representation): produces an executable representation (e.g., grids, interpolants, and callables) that can be applied as numerical approximations of the Bellman operator.
$$ \text{Stage syntax} \overset{\mathrm{P},\;\Upsilon}{\longrightarrow} \text{Callables} \overset{\text{solve}}{\longrightarrow} \text{Value fns \& policies} $$
Bellman calculus (and, in principle, dolo-plus) is language-agnostic. Everything beyond the syntax—solvers, storage, numerical plumbing—is handled by user code, the whisperer, which reads syntax through $\Upsilon$ and applies $\mathrm{P}$ to obtain executable objects.
Note that the map $\Upsilon$ should be "isomorphic" in some sense: if we translate stage syntax via $\Upsilon$ into a stochastic program (i.e., map the syntax to math), the stochastic program should map back onto the same syntax.\footnote{Strictly, $\Upsilon$ maps to equivalence classes of stochastic programs -- sets of English-language model descriptions that represent the same problem.}
[!note] AI-assisted parsing and verification
Usually, $\Upsilon$ is instantiated via documentation or is "in our heads". As part of the Bellman calculus project, we also use an explicit implementation of $\Upsilon$ called Matsya, an LLM-powered operator that parses dolo-plus into English-language problem definitions using retrieval over Bellman calculus semantic rules and a training dataset, with iterative refinement. We can think of Matsya as implementing an approximation of $\Upsilon$ in both directions:
- Forward ($\hat{\Upsilon}$): stage syntax $\to$ a formal stochastic program (spaces, transitions, Bellman equation).
- Inverse ($\hat{\Phi}$): a stochastic program $\to$ the corresponding stage syntax.
A well-defined stage should survive the round trip $\hat{\Phi}\circ \hat{\Upsilon}$. That is, a well-defined stage is a fixed point: $\hat{\Phi}\circ \hat{\Upsilon}(\mathsf{Stg})=\mathsf{Stg}$.
Matsya's --refine mode automates the process of finding such a fixed point by initializing $\hat{\Phi}$ at high temperature and iterating at lower and lower temperatures. If refinement does not converge, the remaining differences typically point to genuine ambiguities that require judgement—mirroring CEGAR (counterexample-guided abstraction refinement), where each diff tightens the specification. See:
docs/development/research-notes/round-trip-refinement.md
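Schematically, the round-trip criterion is an ordinary fixed-point iteration. The sketch below is a toy stand-in, not the Matsya implementation: a trivial string canonicalization plays the role of $\hat{\Phi}\circ\hat{\Upsilon}$.

```python
def fixed_point(f, x0, max_iter=20):
    """Iterate x -> f(x) until f(x) == x, mirroring the round-trip
    criterion: a well-defined stage is a fixed point of Phi∘Upsilon."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == x:
            return x
        x = fx
    raise RuntimeError("no fixed point: spec is likely ambiguous")

# Toy stand-in for the composed map: canonicalize case and spacing
# of a "stage description"; canonical text survives the round trip.
round_trip = lambda s: " ".join(s.lower().split())

canonical = fixed_point(round_trip, "V  =  MAX_{c}( u + B*V[>] )")
```

When the iteration fails to settle, the surviving diffs are exactly the ambiguous parts of the specification, which matches the CEGAR analogy above.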
Stage pipeline¶
A single syntactic stage can be represented in several increasingly committed forms. We start with a purely symbolic and typed declaration of structure, then progressively bind to it (i) numerical parameter values, (ii) approximation methods for functions, and (iii) numerical approximation settings.
Only after those syntax-layer commitments do we compile executable callables for the stage, and finally apply it to produce solved objects and simulations.
It helps to read the pipeline as "one new commitment per step":
- Symbolic: what the stage says (symbols for variables, parameters, and settings; equations; spaces), with no numbers and no solver choices.
- Parameterized: the math is pinned down by binding calibrated parameter values (changing these changes the model).
- Methodized: how we intend to solve it by attaching methods to functions (interpolation grids, EGM, VFI, quadrature, etc.).
- Configured: how accurately we approximate it by binding numerical settings (grid sizes, tolerances, bounds).
- Callable: an executable operator built from grids/interpolants/quadrature nodes—ready to be applied.
- Solved: result objects (value + policy) produced by iterating the operator backwards.
- Simulated: forward use of the policy to generate agent paths or distributions.
The key distinction is that steps 1–4 live in the syntax layer (they specify meaning + intended numerical treatment), while steps 5–7 live in computational representation and application (they build and use concrete executable objects). The Bellman calculus system specifies the structure of the syntactic layer of a stage, but it does not specify the structure of objects in the computational layer.[^1]
[^1]: Bellman calculus does specify thin syntax about how stages are composed into periods, periods into sequences, etc., but it does not specify the details of how grids are built, how interpolants are computed, etc.
Steps 1–4: syntax layer | Step 5: callable representation | Steps 6–7: application.
Source files for building a stage are organised in a library directory containing stage syntax templates, method YAMLs, a shared calibration.yaml, and a shared settings.yaml. However, the source files themselves should not be thought of as a stage, but rather as a collection of files that can be used to build a stage (we return to the reason behind this distinction below).
The cons, port and disc stages¶
Let's build the three stages by writing out the formal mathematical description, converting it to syntax, and building the methodized, configured, and calibrated stages as dolo-plus model objects.
# ═══════════════════════════════════════════════════════════════
# Setup: library path, calibration (Υ), settings (Ρ), methods (Ρ)
# ═══════════════════════════════════════════════════════════════
#
# I/O boundary: all disk reads happen here. Everything
# downstream is pure transforms on in-memory objects.
# (Ousterhout: "new layer, new abstraction" — this layer
# turns files into dicts; later layers turn dicts into
# operators.)
from __future__ import annotations
import sys
import warnings
from pathlib import Path
import numpy as np
import yaml
warnings.filterwarnings('ignore', category=UserWarning)
# Put code/ on the import path so `from ffp_utils import ...` works
sys.path.insert(0, 'code')
# Library path — local syntax library (self-contained copy)
_here = Path.cwd()
_repo = next(p for p in [_here] + list(_here.parents)
if (p / 'packages' / 'dolo').exists())
LIB = Path('syntax')
STAGES = LIB / 'stages'
# Calibration (Υ-level): mathematical parameters
with open(LIB / 'calibration.yaml') as _f:
_calib_raw = yaml.safe_load(_f)
base_params = _calib_raw.get('calibration', {}).get('parameters', _calib_raw)
# Settings (Ρ-level): numerical configuration
with open(LIB / 'settings.yaml') as _f:
_settings_raw = yaml.safe_load(_f)
settings = _settings_raw.get('settings', _settings_raw)
# Methods (Ρ-level): per-stage numerical method configurations
# Uses dolo's loader to handle custom YAML tags (!egm, !vfi, etc.)
from dolo.compiler.methodization import load_methodization
cons_methods = load_methodization(str(STAGES / 'cons_stage_methods.yml'))
port_methods = load_methodization(str(STAGES / 'port_stage_methods.yml'))
noport_methods = load_methodization(str(STAGES / 'noport_stage_methods.yml'))
disc_methods = load_methodization(str(STAGES / 'disc_stage_methods.yml'))
print(f'Library: {LIB}\n')
print('Calibration (Υ):')
for k, v in base_params.items():
print(f' {k:>4s} = {v}')
print(f'\nSettings (Ρ):')
for k, v in settings.items():
print(f' {k:>16s} = {v}')
print(f'\nMethods (Ρ):')
for name, m in [('cons', cons_methods), ('port', port_methods),
('noport', noport_methods), ('disc', disc_methods)]:
targets = [e['on'] for e in m.get('methods', [])]
print(f' {name:>8s}: {len(targets)} target(s)')
Library: syntax
Calibration (Υ):
β = 0.96
ρ = 2.0
R = 1.02
μ_Ψ = 0.04
σ_Ψ = 0.15
μ_θ = 0.0
σ_θ = 0.1
ρ_ζ = 0.0
Settings (Ρ):
n_m = 100
m_min = 0.01
m_max = 10.0
n_a = 100
a_min = 0.0
a_max = 10.0
n_k = 100
k_min = 0.01
k_max = 10.0
n_θ_nodes = 7
n_Ψ_nodes = 7
n_joint_nodes = 49
n_sim = 10000
sim_seed = 42
tol_vfi = 1e-08
max_iter = 1000
Methods (Ρ):
cons: 11 target(s)
port: 9 target(s)
noport: 9 target(s)
disc: 8 target(s)
Consumption stage¶
This stage is a deterministic consumption–savings decision. The agent enters with cash-on-hand $m \in \mathbb{R}_{++}$ at the arrival perch, and cash-on-hand is carried to the decision perch by the transition function $m_d = m$. At the decision perch, the agent chooses consumption $c \in (0, m_d)$ and exits with post-consumption assets $a = m_d - c$, where $a$ is the continuation state.
In general, if the state transition is an identity, we can use the same variable name for the source and target perch and do not explicitly need to write out the transition function. However, we are going to do so here to demonstrate the information structure of the stage in a simple example.
The continuation value function is given by $\mathrm{v}_{\succ}$, and the transformation (mover) from the continuation perch to the decision perch is given by the maximization problem:
$$ \mathrm{v}_{\sim}(m_d) = \max_{c \in (0,\, m_d)} \left\{ \frac{c^{1-\rho}}{1-\rho} + \beta\, \mathrm{v}_\succ(m_d - c) \right\} $$
Since the arrival-to-decision transition is identity ($m_d = m$), the arrival value is $\mathrm{v}_\prec(m) = \mathrm{v}_{\sim}(m)$; this gives the Bellman operator of the stage. The maximization defines the backward decision mover $\mathbb{G}$; since the arrival transition is identity, the arrival mover $\mathbb{I}$ is trivial, and the stage operator is $\mathbb{S} = \mathbb{I} \circ \mathbb{G}$.
EGM representation (inverse Euler).
$$ \mathrm{c}_{\succ}(a) = \bigl(\beta \, \partial\mathrm{v}_\succ(a)\bigr)^{-1/\rho}, \qquad \mathrm{m}_{\succ}(a) = a + \mathrm{c}_{\succ}(a) $$
Importantly, consumption here is a function of the continuation state $a$, so the equation above defines a function $\mathrm{c}_{\succ}$ to represent it (the consumed function). One way to view EGM is that it inverts an interpolated approximation of $\mathrm{m}_{\succ}(a)$ to recover the decision policy $\mathrm{c}(m)$.
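A minimal numerical sketch of one EGM step under these assumptions (CRRA utility with the calibration above, and a terminal continuation value chosen for illustration; the variable names are ours, not the dolo-plus solver's):

```python
import numpy as np

beta, rho = 0.96, 2.0
a_grid = np.linspace(0.0, 10.0, 100)        # exogenous post-decision grid

# Continuation marginal value dv_succ(a). For illustration take the
# terminal case v_succ = u, so dv_succ(a) = a^(-rho) (clipped at 0).
dv_succ = np.maximum(a_grid, 1e-10) ** (-rho)

# Inverse Euler: the consumed function on the a-grid...
c_endog = (beta * dv_succ) ** (-1.0 / rho)  # c_succ(a)
m_endog = a_grid + c_endog                  # ...and the endogenous m-grid

# Invert m_succ(a) by interpolation to recover the decision policy c(m)
m_grid = np.linspace(0.01, 10.0, 100)
c_policy = np.interp(m_grid, m_endog, c_endog)
```

No root-finding is needed: the maximization is replaced by the closed-form inverse Euler step plus one interpolation, which is the point of EGM.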
Let's use Matsya to convert this math description into a dolo-plus stage YAML. First we write out the description as text.
# Editable math description — edit this string then run the Matsya cell below.
# (The rendered version is the markdown cell above.)
_cons_math = r"""
A deterministic consumption-savings decision. The agent enters
with cash-on-hand m in R++, chooses consumption c in (0, m),
and exits with post-consumption assets a = m - c.
Bellman equation (decision perch):
v_fn(m_d) = max_{c in (0, m_d)} { c^{1-rho}/(1-rho) + beta * v_fn_succ(m_d - c) }
The arrival-to-decision transition is identity (m_d = m).
EGM representation (inverse Euler):
c_fn_succ(a) = (beta * dv_fn_succ(a))^{-1/rho}
m_fn_succ(a) = a + c_fn_succ(a)
"""
Next, we feed the math description to Matsya to produce a dolo-plus stage YAML.
Matsya is a RAG-backed LLM service: it retrieves relevant snippets from the BE-DDSL documentation,
then asks a language model to translate the natural-language/LaTeX specification
into strict dolo-plus syntax. The helper matsya_translate is a thin CLI wrapper
— it takes a math description string, sends it to the Matsya endpoint, and returns
the generated YAML (or None if the service is unavailable).
# Matsya calls temporarily skipped — load from canonical YAML on disk
_cons_yaml = None
print("Matsya skipped — will load cons_stage from disk.")
Matsya skipped — will load cons_stage from disk.
We also have a pre-loaded version of the YAML in the next cell, which we load and display via a simple wrapper (display_stage_yaml).
# Consumption stage: canonical YAML (source of truth)
from ffp_utils import display_stage_yaml
display_stage_yaml(STAGES / 'cons_stage.yaml', repo_root=_repo)
Canonical source: syntax/stages/cons_stage.yaml
name: cons_stage
symbols:
spaces:
Xm: "@def R++"
Xa: "@def R+"
prestate:
m: "@in Xm"
states:
m_d: "@in Xm"
poststates:
a: "@in Xa"
controls:
c: "@in R+"
values:
V[<]: "@in R"
V: "@in R"
V[>]: "@in R"
values_marginal:
dV[<]: "@in R+"
dV: "@in R+"
dV[>]: "@in R+"
parameters:
β: "@in (0,1)"
ρ: "@in R+"
settings:
n_m: "@in Z+"
m_min: "@in R+"
m_max: "@in R+"
n_sim: "@in Z+"
sim_seed: "@in Z+"
equations:
arvl_to_dcsn_transition: |
m_d = m
dcsn_to_cntn_transition: |
a = m_d - c
cntn_to_dcsn_mover:
Bellman: |
u = (c^(1-ρ))/(1-ρ)
V = max_{c}(u + β*V[>])
c = argmax_{c}((c^(1-ρ))/(1-ρ) + β*V[>])
InvEuler: |
c[>] = (β*dV[>])^(-1/ρ)
MarginalBellman: |
dV = (c)^(-ρ)
cntn_to_dcsn_transition: |
m_d[>] = a + c[>]
dcsn_to_arvl_mover:
Bellman: |
V[<] = V
ShadowBellman: |
dV[<] = dV
Portfolio stage¶
A stochastic portfolio allocation decision. The agent enters with investable wealth $k \in \mathbb{R}_{++}$, chooses risky share $\varsigma \in [0,1]$, then realizes a post-decision shock $\zeta = (\Psi, \theta)$ drawn from a bivariate log-normal.
Transition (decision to continuation).
$$ m = k_d\bigl(\varsigma \Psi + (1-\varsigma)R\bigr) + \theta $$
Bellman equation (decision perch).
$$ \mathrm{v}(k_d) = \max_{\varsigma \in [0,1]} \;\mathbb{E}_{\Psi,\theta} \!\left[ \mathrm{v}_\succ\!\bigl( k_d(\varsigma \Psi + (1-\varsigma)R) + \theta \bigr) \right] $$
The arrival-to-decision transition is identity ($k_d = k$). Shock timing is post-decision.
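A brute-force sketch of this decision-perch maximization (illustrative only: Gauss–Hermite quadrature for the two independent lognormal shocks, a coarse grid over $\varsigma$, and a stand-in continuation value; none of this is the dolo-plus solver):

```python
import numpy as np

R, mu_psi, s_psi = 1.02, 0.04, 0.15        # risky return parameters
mu_th, s_th = 0.0, 0.1                     # income shock parameters

# 7-point Gauss-Hermite (probabilists') nodes for a standard normal
nodes, weights = np.polynomial.hermite_e.hermegauss(7)
w = weights / weights.sum()                # normalized probability weights

# Independent lognormal shocks (rho_zeta = 0 in the calibration)
Psi = np.exp(mu_psi + s_psi * nodes)
Th = np.exp(mu_th + s_th * nodes)

def port_value(k_d, v_succ, n_sigma=51):
    """Decision-perch value at k_d: maximize the quadrature
    expectation of v_succ over a grid of risky shares."""
    best_v, best_sig = -np.inf, 0.0
    for sig in np.linspace(0.0, 1.0, n_sigma):
        m = k_d * (sig * Psi[:, None] + (1 - sig) * R) + Th[None, :]
        Ev = w @ v_succ(m) @ w             # E_{Psi,theta}[v_succ(m)]
        if Ev > best_v:
            best_v, best_sig = Ev, sig
    return best_v, best_sig

# Stand-in continuation value: log (any callable on m would do)
v, sig = port_value(2.0, np.log)
```

The production stage replaces the grid search and the hand-rolled quadrature with the methods attached during methodization, but the operator has the same shape: expectation inside, maximization outside.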
# Editable math description — edit this string then run the Matsya cell below.
_port_math = r"""
A stochastic portfolio allocation decision. The agent enters
with investable wealth k in R++, chooses risky share varsigma
in [0,1], then realizes a joint shock zeta = (Psi, theta)
drawn from a bivariate log-normal.
Transition (decision to continuation):
m = k_d * (varsigma * Psi + (1 - varsigma) * R) + theta
Bellman equation (decision perch):
v(k_d) = max_{varsigma in [0,1]} E_{Psi,theta}[
v_succ( k_d (varsigma Psi + (1 - varsigma) R) + theta ) ]
The arrival-to-decision transition is identity (k_d = k).
Shock timing is post-decision.
"""
# Matsya calls temporarily skipped — load from canonical YAML on disk
_port_yaml = None
print("Matsya skipped — will load port_stage from disk.")
Matsya skipped — will load port_stage from disk.
# Portfolio stage: canonical YAML (source of truth)
display_stage_yaml(STAGES / 'port_stage.yaml', repo_root=_repo)
Canonical source: syntax/stages/port_stage.yaml
name: port_stage
symbols:
spaces:
Xk: "@def R++"
Xm: "@def R++"
Π: "@def [0,1]"
Rp: "@def R++"
Θ: "@def R++"
prestate:
k: "@in Xk"
states:
k_d: "@in Xk"
controls:
ς: "@in Π"
exogenous:
Ψ: "@in Rp"
θ: "@in Θ"
ζ:
"@dist": Normal
Μ: [μ_Ψ, μ_θ]
Σ: [[σ_Ψ^2, ρ_ζ*σ_Ψ*σ_θ], [ρ_ζ*σ_Ψ*σ_θ, σ_θ^2]]
poststates:
m: "@in Xm"
values:
V[<]: "@in R"
V: "@in R"
V[>]: "@in R"
parameters:
R: "@in R++"
μ_Ψ: "@in R"
σ_Ψ: "@in R+"
μ_θ: "@in R"
σ_θ: "@in R+"
ρ_ζ: "@in [-1,1]"
equations:
arvl_to_dcsn_transition: |
k_d = k
dcsn_to_cntn_transition: |
m = k_d*(ς*Ψ + (1-ς)*R) + θ
cntn_to_dcsn_mover:
Bellman: |
V = max_{ς}(E_{Ψ,θ}(V[>]))
ς = argmax_{ς}(E_{Ψ,θ}(V[>]))
dcsn_to_arvl_mover:
Bellman: |
V[<] = V
Disc stage¶
The disc stage extracts the discount factor $\beta$ into a standalone operator:
$$ \mathbb{S}_{\text{disc}}(\mathrm{v}_\succ)(x) = \beta \cdot \mathrm{v}_\succ(x) $$
Applied first in the backward direction, it scales both $\mathrm{v}_\succ$ and $\partial\mathrm{v}_\succ$ by $\beta$, so economic stages remain $\beta$-free and reusable across configurations.
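In callable form the operator is essentially one line (a sketch, not the compiled stage):

```python
beta = 0.96

def disc_operator(v_succ):
    """S_disc: scale a continuation value function by beta.
    By linearity, the derivative of the returned function is
    beta * dv_succ, so marginal values are scaled for free."""
    return lambda x: beta * v_succ(x)

v = disc_operator(lambda x: x ** 2)   # e.g. v_succ(x) = x^2
```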
Building stages¶
To take stock, we now have the three stage YAMLs (cons, port, disc) in the library. In principle, we can represent this syntax in any way we want in memory, and then layer on a faithful computational representation on top of it. dolo-plus has a native representation in the form of a SymbolicModel (.mod) object: a symbolic stage containing symbol declarations (with perch annotations), equation blocks (transitions, movers), and space definitions—but no numerical methods, no parameter values, and no solver settings. It is the syntactic object to which the meaning map ($\Upsilon$) is applied.
The key point is that each .mod object—not the YAML template in the library—is a syntactic stage instance in a particular form. Even if we “repeat” the same stage template across time, we construct a new (identical) stage instance each time, rather than reusing a single shared object. Why? Because we define a stage as a composition of morphisms between perches (nodes containing the state variable, value function, information set, etc.). The morphism
$$
V_{t} \xrightarrow{\mathbb{S}_{t}} V_{t-1}
$$
is a distinct object, even if we specialize $\mathbb{S}_{t}$ to be an instance of the same operator at every $t$. Practically, this means we can inspect a stage object $V_{t} \xrightarrow{\mathbb{S}_{t}} V_{t-1}$ (and ultimately its forward graph) and have everything needed to describe that occurrence: its own symbol table, its own equation ASTs, and—once attached—its own methods, settings, and calibration, without having to trace back to a shared template.
We could reuse one shared stage template across time and let each period “point” to it. But then we need extra bookkeeping rules—for period $t$, which calibration, methods, and settings apply, and where do we attach them without mutating the shared template? Treating each stage occurrence as its own (possibly identical) object avoids this: each one is self-contained and easy to inspect.
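A minimal sketch of this design choice, with plain dicts standing in for stage objects (the key names here are hypothetical, not the dolo-plus data model):

```python
import copy

# Plain-dict stand-in for a shared stage template
template = {"name": "cons_stage", "calibration": None}

# One fresh, self-contained instance per period occurrence, each
# carrying its own attachments without mutating the shared template:
periods = []
for beta_t in [0.96, 0.95, 0.94]:          # e.g. time-varying patience
    stage_t = copy.deepcopy(template)
    stage_t["calibration"] = {"β": beta_t}
    periods.append(stage_t)
```

Each occurrence can now be inspected on its own; no bookkeeping rule is needed to decide which calibration applies at which $t$.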
Methods, settings, and calibration are attached via functorial rebindings: each functor takes a stage and returns a new stage with the relevant attachment populated. A convenient way to present the pipeline is:
$$ \text{YAML} \overset{\text{parse}}{\longrightarrow} \mathbb{S}^{\text{SYM}} \overset{\text{methodize}}{\longrightarrow} \;\cdot\; \overset{\text{configure}}{\longrightarrow} \;\cdot\; \overset{\text{calibrate}}{\longrightarrow} \mathbb{S}^{\text{CFG}} $$
Let’s walk through this pipeline explicitly. Importantly, we are not yet building the stage objects we will solve below—we are simply demonstrating the pipeline and inspecting the intermediate forms.
# ---------------------------------------------------------------
# Parse: YAML → symbolic stage (no methods/params/settings yet)
# ---------------------------------------------------------------
#
# If Matsya produced YAML earlier, use it; otherwise load from disk.
# Either way, the result is a SymbolicModel in memory.
from ffp_utils import load_stage, load_stage_from_string
cons_mod = (load_stage_from_string(_cons_yaml, name='cons_stage')
if _cons_yaml else load_stage('cons_stage', STAGES))
port_mod = (load_stage_from_string(_port_yaml, name='port_stage')
if _port_yaml else load_stage('port_stage', STAGES))
disc_mod = load_stage('disc_stage', STAGES)
noport_mod = load_stage('noport_stage', STAGES)
for s in [cons_mod, port_mod, disc_mod, noport_mod]:
groups = {k: len(v) for k, v in s.symbols.items() if v}
print(f'{s.name:16s} {groups}')
cons_stage {'spaces': 2, 'prestate': 1, 'states': 1, 'poststates': 1, 'controls': 1, 'values': 3, 'values_marginal': 3, 'parameters': 2, 'settings': 5}
port_stage {'spaces': 5, 'prestate': 1, 'states': 1, 'controls': 1, 'exogenous': 3, 'poststates': 1, 'values': 3, 'parameters': 6}
disc_stage {'spaces': 1, 'prestate': 1, 'states': 1, 'poststates': 1, 'values': 3, 'parameters': 1}
noport_stage {'spaces': 3, 'prestate': 1, 'exogenous': 1, 'states': 1, 'poststates': 1, 'values': 3, 'parameters': 3, 'settings': 6}
# ---------------------------------------------------------------
# Inspect cons_stage: declared symbols and equation blocks
# ---------------------------------------------------------------
from dolo.compiler.doloplus_translator.core import extract_raw_equations
print(f'Stage: {cons_mod.name}')
print(f'Parameters: {list(cons_mod.symbols.get("parameters", []))}')
print(f'States: {list(cons_mod.symbols.get("states", []))}')
print(f'Controls: {list(cons_mod.symbols.get("controls", []))}')
print(f'Poststates: {list(cons_mod.symbols.get("poststates", []))}')
eqs = extract_raw_equations(cons_mod.data)
print(f'\nEquation blocks: {list(eqs.keys())}')
# Preview the Bellman equation
bellman = eqs.get('cntn_to_dcsn_mover', {}).get('Bellman', [])
if bellman:
print('\ncntn_to_dcsn_mover.Bellman:')
for line in (bellman if isinstance(bellman, list) else bellman.split('\n')):
line = line.strip()
if line:
print(f' {line}')
Stage: cons_stage
Parameters: ['β', 'ρ']
States: ['m_d']
Controls: ['c']
Poststates: ['a']
Equation blocks: ['arvl_to_dcsn_transition', 'dcsn_to_cntn_transition', 'cntn_to_dcsn_mover', 'dcsn_to_arvl_mover']
cntn_to_dcsn_mover.Bellman:
u = (c^(1-ρ))/(1-ρ)
V = max_{c}(u + β*V[>])
c = argmax_{c}((c^(1-ρ))/(1-ρ) + β*V[>])
# ---------------------------------------------------------------
# Pipeline: methodize → configure → calibrate (cons_stage)
# ---------------------------------------------------------------
#
# Each step returns a NEW stage object with the attachment
# populated; the original remains unchanged.
from dolo.compiler.methodization import methodize as methodize_stage
from dolo.compiler.calibration import (
calibrate as calibrate_stage, configure as configure_stage)
# 1. Methodize: attach solution schemes
cons_M = methodize_stage(cons_mod, cons_methods)
print(f'After methodize: .methods has {len(cons_M.methods)} target(s)')
print(f' targets: {list(cons_M.methods.keys())[:5]}')
# 2. Configure: attach numerical settings (Ρ-level)
cons_MC = configure_stage(cons_M, settings)
print(f'\nAfter configure: .settings has {len(cons_MC.settings)} key(s)')
print(f' e.g. n_m={cons_MC.settings.get("n_m")}, '
f'a_max={cons_MC.settings.get("a_max")}')
# 3. Calibrate: attach parameter values (Υ-level)
cons_MCC = calibrate_stage(cons_MC, base_params)
print(f'\nAfter calibrate: .calibration has {len(cons_MCC.calibration)} key(s)')
sample = {k: v for k, v in list(cons_MCC.calibration.items())[:4]}
print(f' e.g. {sample}')
# The original symbolic stage is untouched:
print(f'\nOriginal cons_mod.methods = {cons_mod.methods}')
print(f'Original cons_mod.settings = {cons_mod.settings}')
After methodize: .methods has 13 target(s)
targets: ['arvl_to_dcsn_transition', 'dcsn_to_cntn_transition', 'cntn_to_dcsn_mover', 'cntn_to_dcsn_mover.Bellman', 'cntn_to_dcsn_mover.InvEuler']
After configure: .settings has 16 key(s)
e.g. n_m=100, a_max=10.0
After calibrate: .calibration has 8 key(s)
e.g. {'β': 0.96, 'ρ': 2.0, 'R': 1.02, 'μ_Ψ': 0.04}
Original cons_mod.methods = None
Original cons_mod.settings = None
Each step in the pipeline is a functor that attaches metadata to the symbolic stage without mutating it:
- Parse — YAML → $\mathbb{S}^{\text{SYM}}$ (symbolic stage). Defines perches, symbol groups, equation blocks. This is what $\Upsilon$ reads.
- Methodize — attaches numerical schemes for each mover ($\mathbb{G}$: !egm, !vfi; $\mathbb{I}$: !gauss-hermite, etc.). Still symbolic with respect to parameter and setting values.
- Configure — binds numerical settings ($\mathrm{P}$-level): grid sizes, bounds, quadrature nodes, tolerances. Changing a setting changes the approximation, not the problem.
- Calibrate — binds parameter values ($\Upsilon$-level): $\beta$, $\rho$, shock moments. Changing a parameter changes the mathematical model.
The output of this pipeline is $\mathbb{S}^{\text{CFG}}$ — a configured stage with all syntax, methods, settings, and parameters bound but not yet executable. None of this is "solving." The whisperer later builds $\mathbb{S}^{\text{CFG}} \to \mathbb{S}^{\text{CMP}}$ (callable operators). The helper bind_stage in ffp_utils collapses the methodize–configure–calibrate steps into a single call — used in the next section when we build full period maps.
# ---------------------------------------------------------------
# Bind port and disc stages (same pipeline, compact form)
# ---------------------------------------------------------------
#
# The explicit walkthrough above showed each functor step.
# Here we use bind_stage, which collapses the same pipeline.
from ffp_utils import bind_stage
port_bound = bind_stage(port_mod, port_methods, settings, base_params)
disc_bound = bind_stage(disc_mod, disc_methods, settings, base_params)
for s in [cons_MCC, port_bound, disc_bound]:
n_m = len(s.methods) if s.methods else 0
n_s = len(s.settings) if s.settings else 0
n_c = len(s.calibration) if s.calibration else 0
print(f'{s.name:16s} methods={n_m} settings={n_s} calibration={n_c}')
cons_stage methods=13 settings=16 calibration=8
port_stage methods=9 settings=16 calibration=8
disc_stage methods=8 settings=16 calibration=8
Build periods¶
A period in dolo-plus is two things: a shared namespace of random variables (the shocks that drive uncertainty within the period) and an ordered list of stage occurrences. The ordering matters: when we solve backward we recurse from the last stage to the first, passing continuation-value functions from one stage to the one before it; when we simulate forward we walk the list in the opposite direction. (We discuss branching stages in a separate notebook.)
The wiring rule between stages is simple: stages compose when the continuation variable of one stage is the arrival variable of the next. Think of a directed graph in which each stage maps an arrival node to a continuation node — the arrival perch is the in-plug, the continuation perch is the out-plug. If the consumption stage produces a continuation variable $a$ (post-decision savings) and the next stage expects an arrival variable also called $a$, the two stages snap together with no extra machinery. This identity wiring is the default and can be left implicit — in graph terms the two edges share a common interior node labelled $a$, and composition is just path concatenation.

Names don't always line up, however. The consumption stage may call its continuation variable $a$ while the portfolio stage expects an arrival variable $k$. A connector bridges the gap: a plain dictionary {"a": "k"} that renames the out-plug of one stage to match the in-plug of the next. In dolo-plus syntax a connector is simply {a: k} — a YAML dict of source-to-target variable names. Stages never need to know what comes before or after them.
The same mechanism works across periods, but with one important difference in namespace scope. Within a single period every variable name refers to exactly one quantity, so a connector {"a": "k"} is unambiguous. Between periods the namespace is not shared — both $\mathcal{P}_t$ and $\mathcal{P}_{t-1}$ may each contain a variable called $a$. An inter-period connector {"a": "m"} sitting between $\mathcal{P}_t$ and $\mathcal{P}_{t-1}$ therefore adopts a directional convention aligned with the backward sweep: keys name variables in the backward-preceding period (the later period $\mathcal{P}_t$, whose continuation value we already have) and values name variables in the backward-successive period ($\mathcal{P}_{t-1}$, whose arrival perch we are about to enter). In the common case where the last stage of period $t$ produces continuation savings $a$ and the first stage of the same period structure at $t-1$ expects arrival cash-on-hand $m$, the inter-period connector is simply {"a": "m"}.
The period operator $\mathbb{T}$ is the composition of its stage operators $\mathbb{S}$ in backward order. For beginning-returns, $\mathbb{T}_{\text{beg}} = \mathbb{S}_{\text{port}} \circ \mathbb{S}_{\text{cons}} \circ \mathbb{S}_{\text{disc}}$.
Finally, in the model we are studying, a discount stage $\mathbb{S}_{\text{disc}}$ sits at the end of each period. It is a thin standalone operator $\mathbb{S}_{\text{disc}}(v) = \beta\, v$ that scales the continuation value (and its derivative) by the discount factor. Because it is the last stage in forward time, it is applied first in the backward sweep, ensuring that all upstream economic stages — consumption, portfolio, income — remain $\beta$-free and therefore reusable across configurations with different discount factors.
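The backward composition can be sketched with toy callables (hypothetical stand-ins, not configured dolo-plus stages; since connectors like {"a": "k"} are pure renamings, composing stages in callable form is just composing functions):

```python
beta = 0.96

# Toy backward stage operators: each maps a continuation value
# function to an arrival value function.
S_disc = lambda v: lambda x: beta * v(x)           # discount stage
S_cons = lambda v: lambda m: max(                  # crude grid search over c
    c ** 0.5 + v(m - c) for c in [i * m / 50 for i in range(1, 50)])
S_port = lambda v: lambda k: v(k)                  # placeholder (no shock here)

def compose_backward(stages_fwd):
    """Period operator T from stages listed in FORWARD time order:
    the last stage's operator is applied first in the backward sweep."""
    def T(v_cntn):
        v = v_cntn
        for S in reversed(stages_fwd):
            v = S(v)
        return v
    return T

# Forward order port -> cons -> disc gives T_beg = S_port ∘ S_cons ∘ S_disc
T_beg = compose_backward([S_port, S_cons, S_disc])
v0 = T_beg(lambda a: 0.0)                          # one backward period step
```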
Data structure¶
Since connectors, periods, and nests are straightforward declarative syntax — just names, orderings, and renaming dicts — we represent them directly as plain Python dicts without introducing any special objects. A period is a collection of stages and connectors:
period = {
"stages": {"cons_stage": ..., "port_stage": ...}, # configured stage objects
"connectors": [{"a": "k"}], # intra-period connectors
}
The "stages" dict holds the configured stage objects (output of bind_stage). The "connectors" list holds connector dicts interleaved between consecutive stages — an empty list means all stage boundaries are identity wiring (matching variable names).
A nest is a collection of periods and inter-period connectors:
nest = {
"periods": [period_H, period_H1, ..., period_0], # one per horizon step
"inter_connectors": [{"a": "k"}, {"a": "k"}, ...], # one between each pair
}
At each backward step we build a fresh period (with fresh stage instances), append it to the nest, and record the inter-period connector that wires it to the previous period. The solver walks this list, applying each period's operators in sequence.
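The backward walk over such a nest can be sketched as follows (placeholder period operators stored under a hypothetical "T" key; real periods carry configured stage objects instead):

```python
beta = 0.96

def make_period(t):
    """Fresh period dict per horizon step; "T" holds a placeholder
    period operator (a hypothetical key for this sketch)."""
    return {"stages": {}, "connectors": [],
            "T": lambda v, t=t: lambda x: t + beta * v(x)}

H = 3
nest = {"periods": [make_period(t) for t in range(H, -1, -1)],
        "inter_connectors": [{"a": "m"}] * H}

v = lambda x: 0.0                       # terminal continuation value
for period in nest["periods"]:          # periods stored last-to-first
    v = period["T"](v)                  # apply each period operator backward
```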
# ═══════════════════════════════════════════════════════════════
# Stage map builders and connectors
# ═══════════════════════════════════════════════════════════════
from ffp_utils import bind_stage
# ---------------------------------------------------------------------------
# Stage map builders (one per period configuration)
#
# Each builder takes four explicit inputs — the same four that
# bind_stage needs — and returns a dict of configured stage objects.
#
# All inputs are in-memory objects loaded at the I/O boundary (cell 3).
# No hidden file reads happen here.
# ---------------------------------------------------------------------------
def build_stage_map_beginning(stages, methods, settings, params):
"""Beginning-returns: port → cons (backward: cons → port)."""
return {
"cons_stage": bind_stage(stages['cons_stage'], methods['cons_stage'], settings, params),
"port_stage": bind_stage(stages['port_stage'], methods['port_stage'], settings, params),
}
def build_stage_map_beginning_vfi(stages, methods, settings, params):
"""Beginning-returns with VFI cons (for comparison)."""
return {
"cons_stage": bind_stage(stages['cons_stage'], methods['cons_stage_vfi'], settings, params),
"port_stage": bind_stage(stages['port_stage'], methods['port_stage'], settings, params),
}
def build_stage_map_end(stages, methods, settings, params):
"""End-returns: cons → port (backward: port → cons)."""
return {
"cons_stage": bind_stage(stages['cons_stage'], methods['cons_stage'], settings, params),
"port_stage": bind_stage(stages['port_stage'], methods['port_stage'], settings, params),
}
def build_stage_map_cons_only(stages, methods, settings, params):
"""Cons-only: noport → cons (backward: cons → noport)."""
return {
"noport_stage": bind_stage(stages['noport_stage'], methods['noport_stage'], settings, params),
"cons_stage": bind_stage(stages['cons_stage'], methods['cons_stage'], settings, params),
}
def build_stage_map_port_only(stages, methods, settings, params):
"""Port-only: port (backward: port)."""
return {
"port_stage": bind_stage(stages['port_stage'], methods['port_stage'], settings, params),
}
# ---------------------------------------------------------------------------
# Lookup dicts: collected from the I/O boundary cells above
# ---------------------------------------------------------------------------
# Stage lookup: stage name → parsed symbolic stage (from cell 16)
stage_models = {
'cons_stage': cons_mod,
'port_stage': port_mod,
'noport_stage': noport_mod,
'disc_stage': disc_mod,
}
# Methods lookup: stage name → method config dict (from cell 3)
stage_methods = {
'cons_stage': cons_methods,
'cons_stage_vfi': port_methods, # VFI comparison: swap EGM for VFI
'port_stage': port_methods,
'noport_stage': noport_methods,
'disc_stage': disc_methods,
}
# ---------------------------------------------------------------------------
# Connectors: plain dicts mapping source → target variable names.
#
# Intra-period: unique namespace, so {"a": "k"} is unambiguous.
# Inter-period: keys from backward-preceding period (P_t),
# values into backward-successive period (P_{t-1}).
# ---------------------------------------------------------------------------
inter_connector_a_to_k = {"a": "k"}
inter_connector_m_to_m = {"m": "m"}
inter_connector_m_to_k = {"m": "k"}
connector_cons_to_port = {"a": "k"}
print(f"Intra-period connector: cons→port {connector_cons_to_port}")
print(f"Inter-period connectors: a→k, m→m, m→k")
Intra-period connector: cons→port {'a': 'k'}
Inter-period connectors: a→k, m→m, m→k
De-sugaring: from stage YAML to solver expressions¶
We have bound every stage through the syntactic pipeline (parse → methodize → configure → calibrate). The next step is to cross from the syntax layer into the computational layer — building callable operators that a solver can apply.
Bellman calculus and dolo-plus define the syntax layer: perch-tagged equations, transition blocks, Bellman/Euler conditions. But the syntax says nothing about how to build a solver. That is the job of the whisperer — solver-specific code that reads stage syntax and constructs executable operators. In this notebook, our whisperer is ffp_utils.py.
To build its operators, the whisperer needs to translate the stage YAML's perch-tagged equations into plain Python expressions it can eval at solve time. Three things happen in this translation:
1. **Perch tags → Python names.** The YAML uses perch-tagged value functions — `V[>]` (continuation value) and `dV[>]` (continuation marginal value) — which are abstract functions on the continuation space. The whisperer maps these to callable Python objects with its own naming convention: `V_cont(·)` and `dV_cont(·)`. These names are a convention of this whisperer, not a dolo-plus requirement.
2. **Bare symbols → evaluated expressions.** The YAML writes `dV[>]` as a bare symbol (a function); the whisperer evaluates it at a concrete grid point, e.g. `dV_cont(a)` where `a` is the poststate grid array.
3. **$\beta$ is stripped.** The YAML declares $c[\succ] = (\beta \cdot \mathrm{dV}[\succ])^{-1/\rho}$ with $\beta$ present. The whisperer deliberately drops $\beta$ because the disc stage handles discounting separately — keeping the consumption operator $\beta$-free and reusable across configurations. Similarly, `u` in the Bellman equation $V = \max_c\{u + \beta \, V[\succ]\}$ is expanded into its explicit form `(c**(1-rho))/(1-rho)`.
For example:
| Stage YAML (syntax layer) | Whisperer expression (computational layer) | Key |
|---|---|---|
| $c[\succ] = (\beta \, \mathrm{dV}[\succ])^{-1/\rho}$ | `(dV_cont(a))**(-1/rho)` | `inv_euler` |
| $V = \max_c\{u + \beta \, V[\succ]\}$ | `(c**(1-rho))/(1-rho) + V_cont(a)` | `vfi_maximand` |
| $a = m_d - c$ | `m_d - c` | `dcsn_to_cntn_transition` |
The result is a plain dict mapping equation roles to expression strings. The operator factories in ffp_utils (make_egm_cons_operator, make_vfi_cons_operator) eval these expressions against concrete arrays (grid values, policy arrays) at each backward step.
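A minimal sketch of that eval-at-solve-time pattern, using the `inv_euler` string with a toy `dV_cont` (the real factories build `dV_cont` from the interpolated continuation marginal value):

```python
import numpy as np

# One entry of the de-sugared recipe, in string form, as Matsya or the
# fallback would supply it
recipe = {"inv_euler": "(dV_cont(a))**(-1/rho)"}

a = np.linspace(0.5, 4.0, 4)       # poststate grid
dV_cont = lambda x: x ** (-2.0)    # toy continuation marginal value
rho = 2.0

# The operator factory evals the string against concrete objects:
c = eval(recipe["inv_euler"], {"dV_cont": dV_cont, "a": a, "rho": rho})
# With dV_cont(a) = a^(-rho), the inverse Euler returns c = a exactly.
```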
This translation can be done by hand (the FALLBACK_RECIPE hardcoded in ffp_utils — a hand-verified set of expressions), or by Matsya — which reads the stage YAML, applies $\Upsilon$, and returns the expressions as JSON. Either way, the output is the same dict of strings; the only difference is whether a human or an LLM did the translation. This entire de-sugaring step is a whisperer concern — it is not part of Bellman calculus or dolo-plus.
# ── Whisperer de-sugaring: stage YAML → eval-able expressions ──
#
# This is a whisperer concern, not a Bellman calculus / dolo-plus concept.
# The whisperer (ffp_utils) needs plain Python expressions to build
# its operator closures (make_egm_cons_operator, make_vfi_cons_operator).
#
# Matsya can produce these expressions from stage YAML via Υ.
# If Matsya is unavailable, a hand-verified FALLBACK_RECIPE is used.
from ffp_utils import matsya_desugar_recipe, set_recipe, FALLBACK_RECIPE
# Step 1: Translate perch-tagged equations → Python expression strings
recipe, from_matsya = matsya_desugar_recipe(STAGES / 'cons_stage.yaml')
# Step 2: Store for operator factories to look up at solve time
set_recipe(recipe)
# ── Display ──
src = 'Matsya (live)' if from_matsya else 'FALLBACK_RECIPE (hand-verified)'
print(f"Expression source: {src}\n")
print(f"Whisperer expressions ({len(recipe)} keys):")
print(f"{'─' * 60}")
for k, v in recipe.items():
print(f" {k:30s} → {v}")
print(f"{'─' * 60}")
print(f"\nThe operator factories in ffp_utils eval these expressions")
print(f"against concrete arrays (grid, policy) at each backward step.")
Expression source: FALLBACK_RECIPE (hand-verified)

Whisperer expressions (6 keys):
────────────────────────────────────────────────────────────
  vfi_maximand                   → (c**(1-rho))/(1-rho) + V_cont(m_d - c)
  inv_euler                      → (dV_cont(a))**(-1/rho)
  cntn_to_dcsn_transition        → a + c
  marginal_bellman               → c**(-rho)
  dcsn_to_cntn_transition        → m_d - c
  utility                        → (c**(1-rho))/(1-rho)
────────────────────────────────────────────────────────────

The operator factories in ffp_utils eval these expressions
against concrete arrays (grid, policy) at each backward step.
Four period configurations¶
The same stage building blocks compose into four period configurations, each a valid economic model:
| Config | Stages (forward) | Period operator $\mathbb{T}$ | Intra-period connector | Inter-period connector |
|---|---|---|---|---|
| Cons-only | noport $\to$ cons $\to$ disc | $\mathbb{S}_{\text{noport}} \circ \mathbb{S}_{\text{cons}} \circ \mathbb{S}_{\text{disc}}$ | — | $\{a \mapsto k\}$ |
| Beg-returns | port $\to$ cons $\to$ disc | $\mathbb{S}_{\text{port}} \circ \mathbb{S}_{\text{cons}} \circ \mathbb{S}_{\text{disc}}$ | — | $\{a \mapsto k\}$ |
| End-returns | cons $\to$ port $\to$ disc | $\mathbb{S}_{\text{cons}} \circ \mathbb{S}_{\text{port}} \circ \mathbb{S}_{\text{disc}}$ | $\{a \mapsto k\}$ | $\{m \mapsto m\}$ |
| Port-only | port $\to$ disc (Merton) | $\mathbb{S}_{\text{port}} \circ \mathbb{S}_{\text{disc}}$ | — | $\{m \mapsto k\}$ |
All four end with a disc stage that scales the continuation by $\beta$. The economic stages (cons, port, noport) are $\beta$-free — they see a continuation that has already been discounted.
The two multi-stage configurations illustrate the connector mechanism. In beginning-returns, portfolio returns are realized before consumption: port outputs cash-on-hand $m$, and cons expects arrival $m$ — names match, so no intra-period connector is needed. But between periods, cons outputs savings $a$ while port expects wealth $k$, so the inter-period connector $\{a \mapsto k\}$ bridges the gap. In end-returns the stages swap order and so does the wiring: cons outputs $a$ but port expects $k$ within the same period, requiring an intra-period connector $\{a \mapsto k\}$; between periods, port outputs $m$ and cons expects $m$, so the inter-period connector is identity.
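A connector is nothing more than a renaming dict applied at a stage boundary. A sketch of the relabeling (hypothetical helper; the actual `apply_inter_connector` in `ffp_utils` operates on solution dicts with its own conventions):

```python
def relabel(outputs, connector):
    """Apply a {source: target} connector to a dict of stage outputs.
    An empty connector is identity wiring (names already match)."""
    return {connector.get(name, name): obj for name, obj in outputs.items()}

# cons hands its poststate a to port, which expects arrival k:
print(relabel({"a": "V_on_a_grid"}, {"a": "k"}))  # → {'k': 'V_on_a_grid'}
# identity wiring, e.g. port.m → cons.m:
print(relabel({"m": "V_on_m_grid"}, {}))          # → {'m': 'V_on_m_grid'}
```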
The diagrams read left-to-right in forward time. Nodes are random variables; edges are stages; dashed lines are connectors. The same three stages appear in both configurations — only the wiring changes.
From syntax to callable: the whisperer builds $\mathbb{T}^{\text{CMP}}$¶
We now have everything needed to solve: configured stages $\mathbb{S}^{\text{CFG}}$ (from the binding pipeline), de-sugared expressions (from the recipe step), and the period structures defined above. The remaining step is the whisperer's build: it reads a period dict and constructs a callable period operator $\mathbb{T}^{\text{CMP}}$.
The function make_period_operator in ffp_utils does this. Given a period dict {"stages": ..., "connectors": ...} and an inter-period connector, it:
- derives the backward solve order from the connector graph (following poststate → prestate links),
- dispatches each stage to the appropriate operator factory (`make_egm_cons_operator`, `make_vfi_port_operator`, etc.) based on the method tag attached during methodization, and
- chains the resulting callables into a single function $\mathbb{T}^{\text{CMP}}(V_{\text{cont}}) \to V_{\text{arvl}}$.
If a disc operator is provided, it is applied first in the backward direction — wrapping the continuation before any economic stage sees it.
The operator factories for the consumption stage eval the de-sugared expressions stored earlier (the recipe from the previous section) against concrete grid arrays at each backward step. This is the only place where the translation from stage YAML to executable code actually executes — everything before this was syntax manipulation. The recipe is stored as a module-level dict in ffp_utils; a future refactor could attach it directly to each stage object, but for now the global is sufficient.
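The chaining step can be sketched on its own, with scalar maps standing in for stage operators (function names hypothetical; the real `make_period_operator` also handles connectors and solve-order derivation):

```python
from functools import reduce

def chain_backward(stage_ops, disc_op=None):
    """Compose stage operators (listed in backward order) into a single
    period operator T(V_cont); the disc operator, if any, is applied first."""
    ops = ([disc_op] if disc_op is not None else []) + list(stage_ops)
    def T(V_cont):
        return reduce(lambda v, op: op(v), ops, V_cont)
    return T

# Toy scalar maps standing in for the stage operators
S_disc = lambda v: 0.96 * v   # applied first in the backward sweep
S_cons = lambda v: v - 1.0
S_port = lambda v: 2.0 * v
T = chain_backward([S_cons, S_port], disc_op=S_disc)
print(T(10.0))  # ≈ 17.2, i.e. 2 * (0.96 * 10 - 1)
```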
# ═══════════════════════════════════════════════════════════════
# Nest configuration (builder-level: horizon, terminal, disc)
# ═══════════════════════════════════════════════════════════════
from ffp_utils import ValueFn, build_grids, make_disc_operator, make_period_operator, apply_inter_connector
# Horizon — nest-level parameter
H = 5
# beta and rho from calibration (used by disc and terminal)
beta = float(base_params['β'])
rho = float(base_params['ρ'])
# Grids for terminal conditions (from settings.yaml)
# Stage operators build their own grids internally via _extract_stage_config.
# These are only needed to seed the terminal value function.
grids = build_grids(settings)
a_grid = grids['a']
k_grid = grids['k']
m_grid = grids['m']
# Terminal condition: V(x) = u(x) = x^(1-ρ)/(1-ρ)
def make_terminal(grid):
return ValueFn(
grid,
np.where(grid > 0, grid**(1-rho) / (1-rho), -1e10),
np.where(grid > 0, grid**(-rho), 1e10),
)
# Disc operator: S_disc(V) = beta * V (applied first in backward direction)
S_disc = make_disc_operator(beta)
print(f"Nest: H={H}, beta={beta}, rho={rho}")
print(f"Terminal grids: a[{len(a_grid)}], k[{len(k_grid)}], m[{len(m_grid)}]")
print(f"Disc operator: S_disc scales continuation by beta={beta}")
Nest: H=5, beta=0.96, rho=2.0
Terminal grids: a[100], k[100], m[100]
Disc operator: S_disc scales continuation by beta=0.96
1. Cons-only (noport → cons → disc)¶
A simple consumption-savings model with no portfolio choice. The noport stage realizes income shocks: $m = kR + \theta$. The cons stage solves: $\max_c u(c) + \mathrm{v}_\succ(m - c)$.
- Inter-period connector: $\{a \to k\}$ — cons output $a$ becomes noport input $k$
- Backward: disc → cons → noport
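Stripped of the stage machinery, one backward EGM update (Carroll, 2006) for this cons stage is two lines. A sketch assuming a callable continuation marginal value `dV_cont`, already discounted by the disc stage:

```python
import numpy as np

def egm_step(a_grid, dV_cont, rho):
    """One beta-free EGM update for the cons stage: invert the Euler
    equation on the poststate grid, then recover the endogenous
    cash-on-hand grid (discounting lives in the separate disc stage)."""
    c = dV_cont(a_grid) ** (-1.0 / rho)   # inv_euler
    m = a_grid + c                        # cntn_to_dcsn_transition: m = a + c
    return m, c                           # consumption policy c(m)

a_grid = np.linspace(0.1, 5.0, 50)
m, c = egm_step(a_grid, lambda a: a ** (-2.0), rho=2.0)
# Toy continuation dV_cont(a) = a^(-rho) gives c = a, hence m = 2a.
```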
# ═══════════════════════════════════════════════════════════════
# Cons-only: noport → cons → disc
# ═══════════════════════════════════════════════════════════════
V_co = make_terminal(a_grid)
nest_co = {"periods": [], "inter_connectors": [], "solutions": []}
print(f"Cons-only (H={H}):\n")
for h in range(H + 1):
stage_map = build_stage_map_cons_only(stage_models, stage_methods, settings, dict(base_params))
period = {
"stages": dict(stage_map),
"connectors": [], # noport.m → cons.m is identity
}
nest_co["periods"].append(period)
nest_co["inter_connectors"].append(inter_connector_a_to_k)
if h > 0:
V_co = apply_inter_connector(nest_co["solutions"][-1], nest_co["inter_connectors"][-2])
T_h = make_period_operator(period, inter_connector_a_to_k, disc_operator=S_disc)
V_co = T_h(V_co)
nest_co["solutions"].append({
"k": V_co,
"m": V_co.policy.get('_cons_stage'),
})
print(f" h={h}: V ∈ [{V_co.values.min():.4f}, {V_co.values.max():.4f}]")
print(f"\nDone: {len(nest_co['solutions'])} periods solved (cons-only).")
Cons-only (H=5):
backward order: cons_stage → noport_stage (entry: 'a', disc)
cons_stage (egm): EGM via recipe
noport_stage (expectation): 1 transition(s), 7 quadrature points
h=0: V ∈ [-3.9138, -0.3920]
backward order: cons_stage → noport_stage (entry: 'a', disc)
cons_stage (egm): EGM via recipe
noport_stage (expectation): 1 transition(s), 7 quadrature points
h=1: V ∈ [-4.3010, -0.7767]
backward order: cons_stage → noport_stage (entry: 'a', disc)
cons_stage (egm): EGM via recipe
noport_stage (expectation): 1 transition(s), 7 quadrature points
h=2: V ∈ [-4.9868, -1.2300]
backward order: cons_stage → noport_stage (entry: 'a', disc)
cons_stage (egm): EGM via recipe
noport_stage (expectation): 1 transition(s), 7 quadrature points
h=3: V ∈ [-5.7213, -1.7273]
backward order: cons_stage → noport_stage (entry: 'a', disc)
cons_stage (egm): EGM via recipe
noport_stage (expectation): 1 transition(s), 7 quadrature points
h=4: V ∈ [-6.4560, -2.2517]
backward order: cons_stage → noport_stage (entry: 'a', disc)
cons_stage (egm): EGM via recipe
noport_stage (expectation): 1 transition(s), 7 quadrature points
h=5: V ∈ [-7.1759, -2.7912]
Done: 6 periods solved (cons-only).
2. Beginning-returns (port → cons → disc, EGM)¶
The classic portfolio-consumption model with beginning-of-period returns. Portfolio stage solves: $\mathrm{v}(k) = \max_\varsigma \mathbb{E}_{\Psi,\theta}[\mathrm{v}_\succ(k(\varsigma\Psi + (1-\varsigma)R) + \theta)]$. Cons stage solves: $\max_c u(c) + \mathrm{v}_\succ(m - c)$.
- No intra-period connector (port output $m$ = cons input $m$)
- Inter-period connector: $\{a \to k\}$
- Backward: disc → cons → port
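The portfolio maximand can be sketched as a grid search over the risky share, with the expectation taken over quadrature nodes and weights. This toy uses a coarse $\varsigma$ grid and a two-point shock purely to illustrate the structure, not the 49-point product rule the actual operator uses:

```python
import numpy as np

def port_step(k_grid, v_cont, R, psi_nodes, psi_wts, theta_nodes, theta_wts):
    """v(k) = max over sigma of E[v_cont(k*(sigma*Psi + (1-sigma)*R) + theta)],
    by grid search over the risky share sigma in [0, 1]."""
    sigmas = np.linspace(0.0, 1.0, 21)
    v = np.full_like(k_grid, -np.inf)
    pol = np.zeros_like(k_grid)
    for s in sigmas:
        ev = np.zeros_like(k_grid)
        for psi, wp in zip(psi_nodes, psi_wts):
            for th, wt in zip(theta_nodes, theta_wts):
                ev += wp * wt * v_cont(k_grid * (s * psi + (1 - s) * R) + th)
        improved = ev > v
        v[improved], pol[improved] = ev[improved], s
    return v, pol

k = np.linspace(0.5, 4.0, 8)
# Toy two-point risky shock with mean return 1.0 < R = 1.02: the risky
# asset is dominated, so the optimal share is sigma = 0 at every k.
v, pol = port_step(k, np.log, R=1.02,
                   psi_nodes=[0.9, 1.1], psi_wts=[0.5, 0.5],
                   theta_nodes=[1.0], theta_wts=[1.0])
```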
# ═══════════════════════════════════════════════════════════════
# Beginning-returns: port → cons → disc (EGM)
# ═══════════════════════════════════════════════════════════════
V_br = make_terminal(a_grid)
nest_br = {"periods": [], "inter_connectors": [], "solutions": []}
print(f"Beginning-returns EGM (H={H}):\n")
for h in range(H + 1):
stage_map = build_stage_map_beginning(stage_models, stage_methods, settings, dict(base_params))
period = {
"stages": dict(stage_map),
"connectors": [],
}
nest_br["periods"].append(period)
nest_br["inter_connectors"].append(inter_connector_a_to_k)
if h > 0:
V_br = apply_inter_connector(nest_br["solutions"][-1], nest_br["inter_connectors"][-2])
T_h = make_period_operator(period, inter_connector_a_to_k, disc_operator=S_disc)
V_br = T_h(V_br)
nest_br["solutions"].append({
"k": V_br,
"m": V_br.policy.get('_cons_stage'),
})
print(f" h={h}: V_port ∈ [{V_br.values.min():.4f}, {V_br.values.max():.4f}] "
f"(cons: !{T_h._cons_method}, port: !{T_h._port_method})")
print(f"\nDone: {len(nest_br['solutions'])} periods solved (beginning-returns, EGM).")
Beginning-returns EGM (H=5):
backward order: cons_stage → port_stage (entry: 'a', disc)
cons_stage (egm): EGM via recipe
port_stage (vfi): 1 transition(s), 49 quadrature points
h=0: V_port ∈ [-3.9106, -0.3920] (cons: !egm, port: !vfi)
backward order: cons_stage → port_stage (entry: 'a', disc)
cons_stage (egm): EGM via recipe
port_stage (vfi): 1 transition(s), 49 quadrature points
h=1: V_port ∈ [-4.2531, -0.7548] (cons: !egm, port: !vfi)
backward order: cons_stage → port_stage (entry: 'a', disc)
cons_stage (egm): EGM via recipe
port_stage (vfi): 1 transition(s), 49 quadrature points
h=2: V_port ∈ [-4.9057, -1.1704] (cons: !egm, port: !vfi)
backward order: cons_stage → port_stage (entry: 'a', disc)
cons_stage (egm): EGM via recipe
port_stage (vfi): 1 transition(s), 49 quadrature points
h=3: V_port ∈ [-5.6104, -1.6176] (cons: !egm, port: !vfi)
backward order: cons_stage → port_stage (entry: 'a', disc)
cons_stage (egm): EGM via recipe
port_stage (vfi): 1 transition(s), 49 quadrature points
h=4: V_port ∈ [-6.3178, -2.0838] (cons: !egm, port: !vfi)
backward order: cons_stage → port_stage (entry: 'a', disc)
cons_stage (egm): EGM via recipe
port_stage (vfi): 1 transition(s), 49 quadrature points
h=5: V_port ∈ [-7.0110, -2.5569] (cons: !egm, port: !vfi)
Done: 6 periods solved (beginning-returns, EGM).
3. Beginning-returns (VFI comparison)¶
Re-solve with the cons stage using `!vfi` instead of `!egm`. Both methods converge to the same value functions (up to grid-search discretization error).
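What `!vfi` does instead of the Euler inversion, as a sketch: brute-force grid search over `c` of the `vfi_maximand` at each cash-on-hand point (toy continuation here; the real operator uses the configured grids and tolerances):

```python
import numpy as np

def vfi_cons_step(m_grid, V_cont, rho, n_c=200):
    """Grid-search the vfi_maximand u(c) + V_cont(m - c) over feasible c
    at each cash-on-hand point (beta-free, as with the EGM operator)."""
    V = np.empty_like(m_grid)
    c_pol = np.empty_like(m_grid)
    for i, m in enumerate(m_grid):
        c = np.linspace(1e-6, m - 1e-6, n_c)
        obj = c ** (1 - rho) / (1 - rho) + V_cont(m - c)
        j = np.argmax(obj)
        V[i], c_pol[i] = obj[j], c[j]
    return V, c_pol

m = np.linspace(0.5, 4.0, 8)
# Toy continuation V_cont(a) = -1/a; with rho = 2 the true optimum is
# c = m/2, so the grid search should land within one grid step of that.
V, c_pol = vfi_cons_step(m, lambda a: -1.0 / a, rho=2.0)
```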
# ═══════════════════════════════════════════════════════════════
# Beginning-returns: VFI variant (for comparison)
# ═══════════════════════════════════════════════════════════════
V_bv = make_terminal(a_grid)
nest_bv = {"periods": [], "inter_connectors": [], "solutions": []}
print(f"Beginning-returns VFI (H={H}):\n")
for h in range(H + 1):
stage_map = build_stage_map_beginning_vfi(stage_models, stage_methods, settings, dict(base_params))
period = {
"stages": dict(stage_map),
"connectors": [],
}
nest_bv["periods"].append(period)
nest_bv["inter_connectors"].append(inter_connector_a_to_k)
if h > 0:
V_bv = apply_inter_connector(nest_bv["solutions"][-1], nest_bv["inter_connectors"][-2])
T_h = make_period_operator(period, inter_connector_a_to_k, disc_operator=S_disc)
V_bv = T_h(V_bv)
nest_bv["solutions"].append({
"k": V_bv,
"m": V_bv.policy.get('_cons_stage'),
})
print(f" h={h}: V_port ∈ [{V_bv.values.min():.4f}, {V_bv.values.max():.4f}] "
f"(cons: !{T_h._cons_method}, port: !{T_h._port_method})")
print(f"\nDone: {len(nest_bv['solutions'])} periods solved (beginning-returns, VFI).")
Beginning-returns VFI (H=5):
backward order: cons_stage → port_stage (entry: 'a', disc)
cons_stage (vfi): 1 transition(s), 1 Bellman sub-eqs
port_stage (vfi): 1 transition(s), 49 quadrature points
h=0: V_port ∈ [-3.9110, -0.3920] (cons: !vfi, port: !vfi)
backward order: cons_stage → port_stage (entry: 'a', disc)
cons_stage (vfi): 1 transition(s), 1 Bellman sub-eqs
port_stage (vfi): 1 transition(s), 49 quadrature points
h=1: V_port ∈ [-4.2525, -0.7548] (cons: !vfi, port: !vfi)
backward order: cons_stage → port_stage (entry: 'a', disc)
cons_stage (vfi): 1 transition(s), 1 Bellman sub-eqs
port_stage (vfi): 1 transition(s), 49 quadrature points
h=2: V_port ∈ [-4.9049, -1.1704] (cons: !vfi, port: !vfi)
backward order: cons_stage → port_stage (entry: 'a', disc)
cons_stage (vfi): 1 transition(s), 1 Bellman sub-eqs
port_stage (vfi): 1 transition(s), 49 quadrature points
h=3: V_port ∈ [-5.6092, -1.6175] (cons: !vfi, port: !vfi)
backward order: cons_stage → port_stage (entry: 'a', disc)
cons_stage (vfi): 1 transition(s), 1 Bellman sub-eqs
port_stage (vfi): 1 transition(s), 49 quadrature points
h=4: V_port ∈ [-6.3161, -2.0839] (cons: !vfi, port: !vfi)
backward order: cons_stage → port_stage (entry: 'a', disc)
cons_stage (vfi): 1 transition(s), 1 Bellman sub-eqs
port_stage (vfi): 1 transition(s), 49 quadrature points
h=5: V_port ∈ [-7.0097, -2.5569] (cons: !vfi, port: !vfi)
Done: 6 periods solved (beginning-returns, VFI).
4. End-returns (cons → port → disc)¶
Consumption comes first; portfolio returns are realized on savings at the end of the period. The cons stage solves $\max_c u(c) + \mathrm{v}_\succ(m - c)$; the port stage allocates end-of-period savings $k$ between risky and safe assets.
- Intra-period connector: $\{a \to k\}$ — cons output $a$ becomes port input $k$
- Inter-period connector: $\{m \to m\}$ (identity)
- Backward: disc → port → cons
# ═══════════════════════════════════════════════════════════════
# End-returns: cons → port → disc
# ═══════════════════════════════════════════════════════════════
V_er = make_terminal(m_grid)
nest_er = {"periods": [], "inter_connectors": [], "solutions": []}
print(f"End-returns (H={H}):\n")
for h in range(H + 1):
stage_map = build_stage_map_end(stage_models, stage_methods, settings, dict(base_params))
period = {
"stages": dict(stage_map),
"connectors": [connector_cons_to_port],
}
nest_er["periods"].append(period)
nest_er["inter_connectors"].append(inter_connector_m_to_m)
if h > 0:
V_er = apply_inter_connector(nest_er["solutions"][-1], nest_er["inter_connectors"][-2])
T_h = make_period_operator(period, inter_connector_m_to_m, disc_operator=S_disc)
V_er = T_h(V_er)
nest_er["solutions"].append({
"m": V_er,
"k": V_er.policy.get('_port_stage'),
})
print(f" h={h}: V ∈ [{V_er.values.min():.4f}, {V_er.values.max():.4f}] "
f"(backward: {' → '.join(T_h._backward_order)})")
print(f"\nDone: {len(nest_er['solutions'])} periods solved (end-returns).")
End-returns (H=5):
backward order: port_stage → cons_stage (entry: 'm', 1 intra-connector(s), disc)
port_stage (vfi): 1 transition(s), 49 quadrature points
cons_stage (egm): EGM via recipe
h=0: V ∈ [-100.9553, -0.3460] (backward: port_stage → cons_stage)
backward order: port_stage → cons_stage (entry: 'm', 1 intra-connector(s), disc)
port_stage (vfi): 1 transition(s), 49 quadrature points
cons_stage (egm): EGM via recipe
h=1: V ∈ [-101.8720, -0.6818] (backward: port_stage → cons_stage)
backward order: port_stage → cons_stage (entry: 'm', 1 intra-connector(s), disc)
port_stage (vfi): 1 transition(s), 49 quadrature points
cons_stage (egm): EGM via recipe
h=2: V ∈ [-102.7539, -1.0740] (backward: port_stage → cons_stage)
backward order: port_stage → cons_stage (entry: 'm', 1 intra-connector(s), disc)
port_stage (vfi): 1 transition(s), 49 quadrature points
cons_stage (egm): EGM via recipe
h=3: V ∈ [-103.5995, -1.5000] (backward: port_stage → cons_stage)
backward order: port_stage → cons_stage (entry: 'm', 1 intra-connector(s), disc)
port_stage (vfi): 1 transition(s), 49 quadrature points
cons_stage (egm): EGM via recipe
h=4: V ∈ [-104.4096, -1.9484] (backward: port_stage → cons_stage)
backward order: port_stage → cons_stage (entry: 'm', 1 intra-connector(s), disc)
port_stage (vfi): 1 transition(s), 49 quadrature points
cons_stage (egm): EGM via recipe
h=5: V ∈ [-105.1856, -2.4089] (backward: port_stage → cons_stage)
Done: 6 periods solved (end-returns).
5. Port-only (port → disc, pure Merton)¶
Pure portfolio allocation — no consumption decision. The agent chooses risky share $\varsigma$ each period.
- Inter-period connector: $\{m \to k\}$ (port output $m$ becomes port input $k$)
- Backward: disc → port
# ═══════════════════════════════════════════════════════════════
# Port-only: port → disc (pure Merton)
# ═══════════════════════════════════════════════════════════════
V_po = make_terminal(k_grid)
nest_po = {"periods": [], "inter_connectors": [], "solutions": []}
print(f"Port-only (H={H}):\n")
for h in range(H + 1):
stage_map = build_stage_map_port_only(stage_models, stage_methods, settings, dict(base_params))
period = {
"stages": dict(stage_map),
"connectors": [],
}
nest_po["periods"].append(period)
nest_po["inter_connectors"].append(inter_connector_m_to_k)
if h > 0:
V_po = apply_inter_connector(nest_po["solutions"][-1], nest_po["inter_connectors"][-2])
T_h = make_period_operator(period, inter_connector_m_to_k, disc_operator=S_disc)
V_po = T_h(V_po)
nest_po["solutions"].append({
"k": V_po,
"m": V_po, # port output is m
})
print(f" h={h}: V ∈ [{V_po.values.min():.4f}, {V_po.values.max():.4f}] "
f"(port: !{T_h._port_method})")
print(f"\nDone: {len(nest_po['solutions'])} periods solved (port-only).")
Port-only (H=5):
backward order: port_stage (entry: 'm', disc)
port_stage (vfi): 1 transition(s), 49 quadrature points
h=0: V ∈ [-0.9553, -0.0960] (port: !vfi)
backward order: port_stage (entry: 'm', disc)
port_stage (vfi): 1 transition(s), 49 quadrature points
h=1: V ∈ [-0.4403, -0.0922] (port: !vfi)
backward order: port_stage (entry: 'm', disc)
port_stage (vfi): 1 transition(s), 49 quadrature points
h=2: V ∈ [-0.2708, -0.0885] (port: !vfi)
backward order: port_stage (entry: 'm', disc)
port_stage (vfi): 1 transition(s), 49 quadrature points
h=3: V ∈ [-0.1874, -0.0849] (port: !vfi)
backward order: port_stage (entry: 'm', disc)
port_stage (vfi): 1 transition(s), 49 quadrature points
h=4: V ∈ [-0.1383, -0.0815] (port: !vfi)
backward order: port_stage (entry: 'm', disc)
port_stage (vfi): 1 transition(s), 49 quadrature points
h=5: V ∈ [-0.1067, -0.0783] (port: !vfi)
Done: 6 periods solved (port-only).
Order semiconjugacy between period configurations¶
Beginning-returns and end-returns use the same two economic stages (cons, port) in opposite composition order. The two undiscounted period operators are:
$$ \mathbb{S}_{\text{port}} \circ \mathbb{S}_{\text{cons}} \qquad\text{(beginning-returns)} \qquad \mathbb{S}_{\text{cons}} \circ \mathbb{S}_{\text{port}} \qquad\text{(end-returns)} $$
These are order semiconjugate (Sargent and Stachurski, 2026). The iterates lemma gives, for $n$ backward steps:
$$ (\mathbb{S}_{\text{port}} \circ \mathbb{S}_{\text{cons}})^n = \mathbb{S}_{\text{port}} \circ (\mathbb{S}_{\text{cons}} \circ \mathbb{S}_{\text{port}})^{n-1} \circ \mathbb{S}_{\text{cons}} $$
With discounting, the disc operator $\mathbb{S}_{\text{disc}}(v) = \beta\,v$ commutes with $\mathbb{S}_{\text{port}}$ (which is positively homogeneous: scaling the continuation by $\beta > 0$ scales the maximized expectation by $\beta$) but not with the nonlinear $\mathbb{S}_{\text{cons}}$, so $\mathbb{S}_{\text{disc}}$ must be absorbed into the $\mathbb{S}_{\text{cons}}$ bookend. Writing $\mathbb{T}_{\text{beg}}$ and $\mathbb{T}_{\text{end}}$ for the discounted period operators:
$$ \boxed{ \mathbb{T}_{\text{beg}}^n = \mathbb{S}_{\text{port}} \circ \mathbb{T}_{\text{end}}^{\,n-1} \circ (\mathbb{S}_{\text{cons}} \circ \mathbb{S}_{\text{disc}}) } \qquad \boxed{ \mathbb{T}_{\text{end}}^n = (\mathbb{S}_{\text{cons}} \circ \mathbb{S}_{\text{disc}}) \circ \mathbb{T}_{\text{beg}}^{\,n-1} \circ \mathbb{S}_{\text{port}} } $$
The upshot: solve end-returns for $n{-}1$ steps, then wrap with one discounted cons step at the start and one port step at the end to recover beginning-returns — or vice versa.
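The algebra is easy to sanity-check with toy scalar maps standing in for the stages. Here `C` is deliberately nonlinear, while `D` and the positively homogeneous `P` commute, exactly the structure the boxed identities require:

```python
import numpy as np

P = lambda v: 2.0 * v                   # "port": positively homogeneous
C = lambda v: np.minimum(v, 1.0) - 0.5  # "cons": nonlinear
D = lambda v: 0.96 * v                  # "disc": scaling by beta

T_beg = lambda v: P(C(D(v)))            # S_port ∘ S_cons ∘ S_disc
T_end = lambda v: C(P(D(v)))            # S_cons ∘ S_port ∘ S_disc

def iterate(T, v, n):
    for _ in range(n):
        v = T(v)
    return v

v0 = np.linspace(-2.0, 2.0, 9)
n = 6

# T_beg^n = S_port ∘ T_end^(n-1) ∘ (S_cons ∘ S_disc)
beg_direct = iterate(T_beg, v0, n)
beg_via_end = P(iterate(T_end, C(D(v0)), n - 1))

# T_end^n = (S_cons ∘ S_disc) ∘ T_beg^(n-1) ∘ S_port
end_direct = iterate(T_end, v0, n)
end_via_beg = C(D(iterate(T_beg, P(v0), n - 1)))
```

Both recovered iterates agree with the direct ones, mirroring the interpolation-tolerance check the notebook runs with the real operators below.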
# ═══════════════════════════════════════════════════════════════
# Semiconjugacy demo: recover each configuration from the other
# ═══════════════════════════════════════════════════════════════
#
# T_beg^n = E_port ∘ T_end^{n-1} ∘ (M_cons ∘ D)
# T_end^n = (M_cons ∘ D) ∘ T_beg^{n-1} ∘ E_port
from ffp_utils import make_stage_operator
# ── Standalone stage operators (same calibration as period loops) ──
stages_sc = build_stage_map_beginning(stage_models, stage_methods, settings, dict(base_params))
S_cons = make_stage_operator(stages_sc['cons_stage']) # M_cons
S_port = make_stage_operator(stages_sc['port_stage']) # E_port
# ── Period operators for inner iterations (each includes disc) ──
period_end_sc = {
"stages": dict(build_stage_map_end(stage_models, stage_methods, settings, dict(base_params))),
"connectors": [connector_cons_to_port],
}
T_end_sc = make_period_operator(period_end_sc, inter_connector_m_to_m,
disc_operator=S_disc)
period_beg_sc = {
"stages": dict(build_stage_map_beginning(stage_models, stage_methods, settings, dict(base_params))),
"connectors": [],
}
T_beg_sc = make_period_operator(period_beg_sc, inter_connector_a_to_k,
disc_operator=S_disc)
n = H + 1 # total backward steps (matches direct solve loops)
# ── Part 1: Recover beg-returns from end-returns ──
# Right bookend: D then M_cons
V = S_disc(make_terminal(a_grid))
V = S_cons(V)
# Inner: n-1 iterations of T_end
print(f"\nPart 1 — recovering beg-returns ({n-1} inner T_end iterations):")
for i in range(n - 1):
V = T_end_sc(V)
V_m_beg = V # on m-space, save for consumption comparison
# Left bookend: E_port (no disc)
V_beg_recovered = S_port(V)
diff_beg = np.max(np.abs(V_beg_recovered(V_br.grid) - V_br.values))
print(f" max|V_recovered(k) - V_direct(k)| = {diff_beg:.2e}")
# ── Part 2: Recover end-returns from beg-returns ──
# Right bookend: E_port (no disc)
V = S_port(make_terminal(m_grid))
# Inner: n-1 iterations of T_beg
print(f"\nPart 2 — recovering end-returns ({n-1} inner T_beg iterations):")
for i in range(n - 1):
V = T_beg_sc(V)
# Left bookend: D then M_cons
V = S_disc(V)
V_end_recovered = S_cons(V)
diff_end = np.max(np.abs(V_end_recovered(V_er.grid) - V_er.values))
print(f" max|V_recovered(m) - V_direct(m)| = {diff_end:.2e}")
print(f"\nSemiconjugacy verified: both recoveries match to interpolation tolerance.")
cons_stage (egm): EGM via recipe
port_stage (vfi): 1 transition(s), 49 quadrature points
backward order: port_stage → cons_stage (entry: 'm', 1 intra-connector(s), disc)
port_stage (vfi): 1 transition(s), 49 quadrature points
cons_stage (egm): EGM via recipe
backward order: cons_stage → port_stage (entry: 'a', disc)
cons_stage (egm): EGM via recipe
port_stage (vfi): 1 transition(s), 49 quadrature points
Part 1 — recovering beg-returns (5 inner T_end iterations):
max|V_recovered(k) - V_direct(k)| = 2.66e-15
Part 2 — recovering end-returns (5 inner T_beg iterations):
max|V_recovered(m) - V_direct(m)| = 4.44e-15
Semiconjugacy verified: both recoveries match to interpolation tolerance.
# ── Semiconjugacy comparison plots (dual-theme) ──
from ffp_plots import display_semiconj_plots
display_semiconj_plots(
V_br, V_beg_recovered, V_m_beg,
V_er, V_end_recovered,
n, beta,
)
# ═══════════════════════════════════════════════════════════════
# Plots: all four configurations — dual-theme (light/dark)
# ═══════════════════════════════════════════════════════════════
from ffp_plots import display_dual_plots
display_dual_plots(nest_co, nest_br, nest_bv, nest_er, nest_po, H)
Plots saved (4 PNGs) and displayed with light/dark theme switching.
Summary¶
| Config | Stages | Key feature |
|---|---|---|
| Cons-only | noport, cons | Income shocks only |
| Beg-returns | port, cons | Returns then consumption |
| End-returns | cons, port | Consumption then investment |
| Port-only | port | Pure Merton portfolio |
The design principles — modular operators, connector-driven solve order, equations from the stage syntax — are exercised throughout.
References¶
- Backus, J. (1978). Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs. Communications of the ACM, 21(8), 613–641.
- Carroll, C. D. (2006). The Method of Endogenous Gridpoints for Solving Dynamic Stochastic Optimization Problems. Economics Letters, 91(3).
- Sargent, T. J. & Stachurski, J. (2022). Economic and Financial Decisions under Uncertainty. Princeton University Press.
- Sargent, T. J. & Stachurski, J. (2026). Dynamic Programming: General States. Cambridge University Press.
- Bellman calculus foundations: `docs/theory/ddsl-foundations/` and `docs/theory/core-ddsl-FFP-concepts.md`
Appendix: seven lives detail¶
| # | Level | What happens | Object | Source |
|---|---|---|---|---|
| 1 | Symbolic | YAML parsed into symbols, equations, spaces | dict (raw parse tree) | `cons_stage.yaml` |
| 2 | Parameterized | Parameter values bound ($\beta$, $\rho$, $R$, ...) | `stage.param` | `calibration.yaml` |
| 3 | Methodized | Solution method attached (`!egm`, `!vfi`, ...) | `stage.methods` | `cons_stage_methods.yml` |
| 4 | Configured | Numerical settings bound (grid sizes, tolerances) | `stage.settings` | `settings.yaml` |
| 5 | Callable | Grids, interpolants, quadrature nodes built | `np.ndarray`, `interp1d` | builder code |
| 6 | Solved | Bellman operator iterated to convergence | `ValueFn` (policy + value) | builder code |
| 7 | Simulated | Policy applied to generate paths / distributions | `np.ndarray` (paths) | builder code |
Steps 1–4 produce a fully specified stage object (syntax layer). Step 5 builds callable computational objects (representation). Steps 6–7 are application (Backus): applying the callables to produce solutions and simulations.