Overriding YAML Configs

If you have a library of existing YAML files that you want to keep using, tyro can be used to override values in them from the command line: when a dictionary is passed as the default, flag names and types are inferred from its keys and values.

import yaml

import tyro

# YAML configuration. Note that this could also be loaded from a file! Environment
# variables are an easy way to select between different YAML files.
default_yaml = r"""
exp_name: test
optimizer:
    learning_rate: 0.0001
    type: adam
training:
    batch_size: 32
    num_steps: 10000
    checkpoint_steps:
    - 500
    - 1000
    - 1500
""".strip()

if __name__ == "__main__":
    # Convert our YAML config into a nested dictionary.
    default_config = yaml.safe_load(default_yaml)

    # Override fields in the dictionary.
    overridden_config = tyro.cli(dict, default=default_config)

    # Print the overridden config.
    overridden_yaml = yaml.safe_dump(overridden_config)
    print(overridden_yaml)

python 03_config_systems/02_overriding_yaml.py --help
usage: 02_overriding_yaml.py [-h] [OPTIONS]

╭─ options ───────────────────────────────────────────────╮
│ -h, --help              show this help message and exit │
│ --exp-name STR          (default: test)                 │
╰─────────────────────────────────────────────────────────╯
╭─ optimizer options ─────────────────────────────────────╮
│ --optimizer.learning-rate FLOAT                         │
│                         (default: 0.0001)               │
│ --optimizer.type STR    (default: adam)                 │
╰─────────────────────────────────────────────────────────╯
╭─ training options ──────────────────────────────────────╮
│ --training.batch-size INT                               │
│                         (default: 32)                   │
│ --training.num-steps INT                                │
│                         (default: 10000)                │
│ --training.checkpoint-steps [INT [INT ...]]             │
│                         (default: 500 1000 1500)        │
╰─────────────────────────────────────────────────────────╯
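
Any of the flags above can be used to override the corresponding YAML value. For instance, scalar fields can be overridden directly (an illustrative invocation; output omitted):

python 03_config_systems/02_overriding_yaml.py --optimizer.learning-rate 0.001 --training.batch-size 64

List-valued fields take space-separated values, as in the run below.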

python 03_config_systems/02_overriding_yaml.py --training.checkpoint-steps 300 1000 9000
exp_name: test
optimizer:
  learning_rate: 0.0001
  type: adam
training:
  batch_size: 32
  checkpoint_steps:
  - 300
  - 1000
  - 9000
  num_steps: 10000
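
As the comment in the example notes, the YAML string could also be read from a file, with an environment variable selecting between different YAML files. A minimal sketch of that pattern, assuming a CONFIG_PATH variable and a configs/ directory (both names are illustrative and not part of the example above):

import os
import pathlib

import yaml

import tyro

if __name__ == "__main__":
    # Select a YAML file via an environment variable, with a fallback default.
    # CONFIG_PATH and configs/base.yaml are illustrative names only.
    config_path = pathlib.Path(os.environ.get("CONFIG_PATH", "configs/base.yaml"))

    # Load it into a nested dictionary, exactly as with the inline string above.
    default_config = yaml.safe_load(config_path.read_text())

    # Expose every field as a command-line flag and apply any overrides.
    overridden_config = tyro.cli(dict, default=default_config)
    print(yaml.safe_dump(overridden_config))

Running something like CONFIG_PATH=configs/other.yaml python train.py --optimizer.learning-rate 0.001 would then load a different base config before applying the command-line overrides (again, the file and script names here are placeholders).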