Function Compilation Process

Overview

Dolo compiles each equation block into a fast, vectorized NumPy function. This happens through a pipeline built from recipes.yaml, function factories, and code generation.

The Compilation Pipeline

Equation Block (YAML)
        ↓
get_factory()  [uses recipes.yaml]
        ↓
FlatFunctionFactory
        ↓
make_method_from_factory()
        ↓
NumPy UFunc (vectorized)
        ↓
standard_function wrapper
        ↓
model.functions['equation_name']

Step-by-Step Process

1. Trigger: Accessing model.functions

# In model.py
@property
def functions(self):
    if self.__functions__ is None:
        self.__compile_functions__()  # Lazy compilation
    return self.__functions__

2. Main Compilation Loop

def __compile_functions__(self):
    # Get all equation names from the model
    funnames = [*self.equations.keys()]
    # Example: ['transition', 'arbitrage', 'expectation', ...]

    functions = LoosyDict()

    for funname in funnames:
        # Create factory for this equation
        fff = get_factory(self, funname)

        # Compile to vectorized function
        fun, gufun = make_method_from_factory(fff, vectorize=True)

        # Wrap in standard_function
        n_output = len(fff.content)
        functions[funname] = standard_function(gufun, n_output)

    self.__functions__ = functions

3. The Factory Creation (get_factory)

def get_factory(model, eq_type: str):
    # Look up equation type in recipes
    specs = recipes["dtcc"]["specs"][eq_type]

    # Get the equations
    eqs = model.equations[eq_type]

    # Build argument list from specs
    args = []
    for sg in specs["eqs"]:
        if sg[0] == "parameters":
            args.append(model.symbols["parameters"])
        else:
            # sg = ['states', 0, 's'] means states at time 0
            args.append([(s, sg[1]) for s in model.symbols[sg[0]]])

    # Create factory
    return FlatFunctionFactory(
        preamble=definitions,
        equations=eqs,
        arguments=args,
        eq_type=eq_type
    )

The recipes.yaml File

Purpose

recipes.yaml defines the signature for each equation type: which inputs it takes, and in what order.

Structure

dtcc:  # Model type
    specs:
        transition:
            target: ['states', 0, 'S']  # Output
            eqs:  # Inputs (in order)
                - ['exogenous', -1, 'm']    # y[t-1]
                - ['states', -1, 's']       # w[t-1]
                - ['controls', -1, 'x']     # c[t-1]
                - ['exogenous', 0, 'M']     # y[t]
                - ['parameters', 0, 'p']    # params

        expectation:
            target: ['expectations', 0, 'z']
            eqs:
                - ['exogenous', 1, 'M']     # y[t+1]
                - ['states', 1, 'S']        # w[t+1]
                - ['controls', 1, 'X']      # c[t+1]
                - ['parameters', 0, 'p']    # params

Format Explanation

['symbol_type', time_index, internal_name]
  • symbol_type: Category from model.symbols
  • time_index: -1 (past), 0 (current), 1 (future)
  • internal_name: Variable name in generated code
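
The expansion of these spec tuples into argument lists can be sketched with a small self-contained example mirroring the loop in get_factory above (the symbol names here are made up for illustration, not taken from a real model):

```python
# Illustrative sketch: expanding recipe specs into timed argument lists.
symbols = {
    "exogenous": ["y"],
    "states": ["w"],
    "controls": ["c"],
    "parameters": ["beta", "gamma", "r"],
}
specs_eqs = [
    ["exogenous", -1, "m"],   # y[t-1]
    ["states", -1, "s"],      # w[t-1]
    ["parameters", 0, "p"],   # parameters carry no time index
]

args = []
for sg in specs_eqs:
    if sg[0] == "parameters":
        args.append(symbols["parameters"])
    else:
        # attach the spec's time index to every symbol in the group
        args.append([(name, sg[1]) for name in symbols[sg[0]]])

# args == [[('y', -1)], [('w', -1)], ['beta', 'gamma', 'r']]
```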

The Letter Convention

Letter | Meaning            | Usage
------ | ------------------ | ----------------
m/M    | Exogenous (Markov) | Current / future
s/S    | States             | Current / future
x/X    | Controls           | Current / future
p      | Parameters         | Always current
a      | Auxiliary/Assets   | Post-states
z      | Expectations       | Expected values
v/V    | Values             | Value function

Capital letters indicate future values (t+1).

Compilation Example

Input: YAML Equation

expectation: |
    mr[t] = β*(c[t+1])^(-γ)*r

Step 1: Recipe Lookup

expectation:
    target: ['expectations', 0, 'z']
    eqs:
        - ['exogenous', 1, 'M']
        - ['states', 1, 'S']
        - ['controls', 1, 'X']
        - ['parameters', 0, 'p']

Step 2: Factory Creation

factory = FlatFunctionFactory(
    equations={'mr_0': 'β*(c_1)^(-γ)*r'},
    arguments={
        'M': ['y_1'],      # Future exogenous
        'S': ['w_1'],      # Future state
        'X': ['c_1'],      # Future control
        'p': ['β', 'γ', 'σ', 'ρ', 'r']  # Parameters
    },
    targets=['mr_0']
)

Step 3: Code Generation

The factory generates Python code:

def expectation(y_1, w_1, c_1, params):
    # Extract parameters
    β, γ, σ, ρ, r = params[0], params[1], params[2], params[3], params[4]

    # Compute equation
    mr_0 = β * (c_1 ** (-γ)) * r

    return mr_0
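
Conceptually, turning that generated source into a callable is an exec over a string. A minimal sketch of the idea (greatly simplified relative to Dolo's actual implementation, with ASCII parameter names for portability):

```python
# Sketch: compile a generated source string into a Python callable,
# as make_method_from_factory does internally (much simplified).
src = """
def expectation(y_1, w_1, c_1, params):
    beta, gamma, r = params[0], params[1], params[4]
    mr_0 = beta * (c_1 ** (-gamma)) * r
    return mr_0
"""
namespace = {}
exec(src, namespace)
expectation = namespace["expectation"]

# expectation(0.0, 1.0, 2.0, [0.96, 2.0, 0.1, 0.9, 1.04]) -> 0.2496
```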

Step 4: Vectorization

The code is compiled with Numba into a generalized ufunc (the gufun returned by make_method_from_factory). Because params is an array argument, this requires guvectorize rather than vectorize:

@guvectorize(['void(float64, float64, float64, float64[:], float64[:])'],
             '(),(),(),(n)->()')
def expectation_gufunc(y_1, w_1, c_1, params, out):
    # ... equation computation ...
    out[0] = mr_0
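
The observable effect is the same as NumPy broadcasting over the inputs. A pure-NumPy illustration of what the vectorized function buys (no Numba required; function and variable names are for this example only):

```python
import numpy as np

# Pure-NumPy illustration: the scalar equation evaluated over
# an array of points in one call, as the compiled gufunc does.
def expectation_vec(c_1, beta, gamma, r):
    return beta * c_1 ** (-gamma) * r

c_1 = np.full(1000, 2.0)              # 1000 evaluation points
out = expectation_vec(c_1, 0.96, 2.0, 1.04)
# out has shape (1000,); every entry equals 0.96 * 2**-2 * 1.04
```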

Step 5: Wrapping

Finally wrapped in standard_function:

model.functions['expectation'] = standard_function(
    expectation_ufunc,
    n_output=1  # Number of output values
)

Key Compilation Features

1. Automatic Vectorization

All functions are compiled to work on arrays:

# Single point
y = np.array([0.0])
result = func(y, w, c, mr, params)  # Shape: (1,)

# Multiple points
y = np.zeros(1000)
result = func(y, w, c, mr, params)  # Shape: (1000,)

2. Consistent Signatures

All functions of the same type have the same signature, even if they don't use all variables:

# auxiliary_direct_egm: a[t] = w[t] - c[t]
# Still needs all inputs defined in recipes.yaml:
func(y, w, c, mr, params)  # Even though y and mr aren't used
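
A minimal illustration of the convention (the function and variable names are hypothetical):

```python
# A function that keeps the full shared signature even though it only
# uses two of its inputs; y, mr, and params are accepted and ignored.
def auxiliary(y, w, c, mr, params):
    return w - c   # a[t] = w[t] - c[t]

# auxiliary(0.0, 5.0, 2.0, 0.0, None) -> 3.0
```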

3. Parameter Broadcasting

Parameters stay constant across vectorized evaluations:

# params is shape (5,) but broadcasts to all N evaluations
y = np.zeros(1000)      # 1000 points
params = np.array([...])  # 5 parameters
result = func(y, w, c, mr, params)  # Works!

4. Automatic Differentiation

Functions support computing derivatives:

result, derivatives = func(y, w, c, mr, params, diff=True)
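
As a sanity check on what such derivatives mean for the example equation, the analytic derivative of mr with respect to c can be compared against a central finite difference (a standalone sketch, not Dolo code):

```python
# Standalone check: derivative of mr = beta * c**(-gamma) * r w.r.t. c.
beta, gamma, r = 0.96, 2.0, 1.04

def mr(c):
    return beta * c ** (-gamma) * r

c = 2.0
analytic = -gamma * beta * c ** (-gamma - 1.0) * r
numeric = (mr(c + 1e-6) - mr(c - 1e-6)) / 2e-6
# analytic and numeric agree to roughly 1e-9
```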

Adding New Equation Types

To add a new equation type:

  1. Add to YAML model:

    equations:
        my_new_equation: |
            z[t] = custom_expression
    

  2. Add to recipes.yaml:

    my_new_equation:
        optional: True
        target: ['my_output', 0, 'z']
        eqs:
            - ['states', 0, 's']
            - ['controls', 0, 'x']
            - ['parameters', 0, 'p']
    

  3. Use in model:

    result = model.functions['my_new_equation'](s, x, params)
    

Performance Considerations

Why Compiled Functions?

  1. Speed: 100-1000x faster than interpreted Python
  2. Vectorization: Evaluate thousands of points in parallel
  3. Memory efficiency: No Python overhead
  4. GPU potential: NumPy ufuncs can be GPU-accelerated

Compilation Cost

  • Happens once on first access (lazy)
  • Cached for entire model lifetime
  • Takes ~100ms per equation
  • Worth it for any iterative algorithm

Next: Function Signatures Guide →