OpenWQ

Supporting Scripts
Python tools for model setup, reading HDF5 outputs,
creating time series plots, and generating spatial maps

Contents

• Scripts Organization

• Python Environment Setup

Section 01: Model Configuration

• Template Workflow Pattern

• Model Configuration

• Configuration Sections

• Source/Sink Methods

• Model Execution (Docker/Apptainer)

• HTML Report Generation

Section 02: Reading Results

• Reading HDF5 Output

Section 03: Visualization

• Time Series Plots

• Spatial Maps

• Complete Workflow

• Tips & Best Practices

Scripts Organization

📁 supporting_scripts/
├── setup_venv.sh # Environment setup
├── requirements.txt # Dependencies
├── 📁 1_Model_Config/ # Configuration
│ ├── model_config_template.py
│ └── 📁 config_support_lib/
├── 📁 2_Read_Outputs/ # Post-processing
│ ├── reading_plotting_results_template.py
│ └── 📁 hdf5_support_lib/
│ ├── Read_h5_driver.py
│ ├── Plot_h5_driver.py
│ └── Map_h5_driver.py
└── 📁 3_Calibration/ # Optimization
├── calibration_config_template.py
└── 📁 calibration_lib/
Quick Start: ./setup_venv.sh

🔧 1_Model_Config

Generate all JSON configuration files from a single Python template

📊 2_Read_Outputs

Read HDF5 results, create plots, and generate spatial maps/GIFs

🎯 3_Calibration

Sensitivity analysis and parameter optimization (covered in separate presentation)

Python Environment Setup

Automated Setup (Recommended)

# Navigate to scripts folder
cd supporting_scripts

# Run setup script
./setup_venv.sh

# Activate environment
source .venv/bin/activate
                            

Alternative: Conda

conda create -n openwq python=3.10
conda activate openwq
pip install -r requirements.txt
                            

Included Dependencies

  • numpy, pandas, scipy (core)
  • h5py, netCDF4 (file I/O)
  • geopandas, shapely (geospatial)
  • matplotlib, tqdm (plotting)
  • SALib (sensitivity analysis)
All dependencies in:
requirements.txt
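Before running any template, it can help to confirm the environment actually resolves these packages. A minimal check, a sketch using only the standard library (the package names are taken from the dependency list above; note that import names can differ from pip names):

```python
import importlib.util

# Import names for the packages listed above (assumed to match requirements.txt)
REQUIRED = ["numpy", "pandas", "scipy", "h5py", "netCDF4",
            "geopandas", "shapely", "matplotlib", "tqdm", "SALib"]

def missing_packages(packages):
    """Return the subset of packages not importable in the active environment."""
    return [p for p in packages if importlib.util.find_spec(p) is None]

if __name__ == "__main__":
    missing = missing_packages(REQUIRED)
    if missing:
        print("Missing packages:", ", ".join(missing))
    else:
        print("All dependencies available.")
```

Run it with the virtual environment activated; an empty "missing" list means `setup_venv.sh` or the conda install completed correctly.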
Section 01: Model Configuration

Template Workflow Pattern

All supporting scripts follow the same Copy → Edit → Run workflow:

1 Model Config

model_config_template.py

# Copy and edit
python model_config_template.py
# → All JSON files generated
                        

2 Read Outputs

reading_plotting_results_template.py

# Copy and edit paths
python reading_plotting_results_template.py
# → Plots and maps
                        

3 Calibration

calibration_config_template.py

python calibration_config_template.py [flags]
# --sensitivity-only
# --prepare-obs-only
# --dry-run | --resume
                        
Key Principle: Never edit library files (*_lib/ folders) — copy templates to your working directory and customize them. This keeps your configurations separate from the library code.

Model Configuration (model_config_template.py)

Edit one file → run it → done.

All JSON configuration files, model execution, and HTML report are handled automatically.

How to Use

1. Copy the template
   Keep it in the same directory (it needs config_support_lib/)

2. Update paths (Section 1)
   Set executable_path and file_manager_path

3. Choose modules (Section 4)
   BGC, transport, sorption, sediment

4. Configure loads (Section 5)
   CSV, Copernicus LULC, climate-adjusted, or ML

5. Set output species (Section 7)
   Use "all" to auto-detect from the BGC file

6. Generate report
   Model configuration summary, run instructions, and how to examine results

Auto-Generated Outputs

  • openWQ_master.json
  • openWQ_config.json
  • openWQ_MODULE_*.json
  • openWQ_SS_*.json
  • openWQ_EWF_*.json

9 Configuration Sections

1) General Info · 2) Computational · 3) Initial Conditions · 4) Modules (BGC, TD, LE, TS, SI) · 5) Source/Sink · 6) External Fluxes · 7) Output · 8) Model Execution · 9) Report

That's it!
The script generates all JSON configs, optionally runs the model (Docker/Apptainer), and creates an interactive HTML report.

Configuration Sections

Section · Key Variables · Description

1. General Info · project_name, hostmodel, dir2save · Project metadata and output directory
2. Computational · solver, use_num_threads · Solver type and parallelization
3. Initial Conditions · ic_all_value, ic_all_units · Starting concentrations
4a. BGC Module · bgc_module_name, path2framework · NATIVE_BGC_FLEX or PHREEQC
4b. Transport Module · td_module_name, dispersion_xyz · Advection/dispersion settings
4c. Lateral Exchange · le_module_name, le_module_config · Inter-compartment exchange
4d. Sediment Transport · ts_module_name, ts_parameters · HYPE_MMF or HYPE_HBVSED
4e. Sorption · si_module_name, si_species_params · Freundlich or Langmuir isotherms
5. Source/Sink · ss_method, ss_method_csv_config · CSV, LULC, or ML-based loads
6. External Fluxes · ewf_method, ewf_method_* · Concentrations in external inputs
7. Output · output_format, chemical_species · What to save and where
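In a copied template these sections become plain Python assignments. A hypothetical excerpt covering the first three sections (the values are placeholders; the variable names follow the table above):

```python
# Hypothetical excerpt of a copied model_config_template.py;
# values are placeholders, variable names follow the table above.

# Section 1: General Info
project_name = "my_basin_nitrogen"
hostmodel = "mizuroute"
dir2save = "/path/to/openwq_out"

# Section 2: Computational
solver = "FWD_EULER"
use_num_threads = 4

# Section 3: Initial Conditions
ic_all_value = 0.0      # starting concentration everywhere
ic_all_units = "MG/L"
```

Because the template is ordinary Python, you can compute values (e.g., derive paths from a single base directory) before the script generates the JSON files.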

Source/Sink Configuration Methods

CSV load_from_csv

Load time series from CSV files

ss_method_csv_config = [
  {
    "Chemical_name": "NO3-N",
    "Compartment": "REACHES",
    "Type": "source",
    "Units": "kg",
    "Filepath": "loads.csv"
  }
]
                        

LULC Copernicus

Land use + export coefficients

ss_method = ("using_copernicus_"
             "lulc_with_static_coeff")

# Maps LULC classes to loads
# e.g., cropland → 20 kg/ha/yr
                        

ML ml_model

Train ML from monitoring data

ss_method = "ml_model"
ss_ml_model_type = "xgboost"
ss_ml_training_data_csv = \
    "monitoring_data.csv"
                        
Cell ID Mapping: Set ss_use_cellid_mapping = True to use host model IDs (reachID for mizuroute, hruId_z{layer} for SUMMA) instead of internal (ix, iy, iz) indices.
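The difference between the two addressing schemes can be sketched as follows. The flattening convention and all IDs below are illustrative assumptions for exposition, not OpenWQ's actual internal layout:

```python
# Illustrative only: this linearization convention is an assumption,
# not OpenWQ's real internal indexing.

def internal_index(ix, iy, iz, nx, ny):
    """Flatten (ix, iy, iz) grid indices into one linear index."""
    return ix + iy * nx + iz * nx * ny

# With ss_use_cellid_mapping = True, locations are host-model IDs:
reach_ids = [1200014181, 200014181]   # mizuroute reachID values

# With mapping disabled, locations are internal grid indices:
cell = internal_index(ix=3, iy=0, iz=0, nx=10, ny=1)  # → 3
```

Host-model IDs stay valid if the grid layout changes, which is why the mapping is recommended for portable configurations.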

HTML Report Generation (Section 9)

Set generate_report = True to auto-generate a self-contained interactive HTML report. Open it in any browser — no server needed.

Report Preview (dark theme)

The report opens with tabs for Summary, Basin Map, GRQA Data, Configuration, Sources/Sinks, Run Metadata, and Next Steps. The Summary tab shows KPI cards (e.g., 15 species, 1 compartment, FWD_EULER solver, 15 output species, 1 hr timestep) and a module-selection table:

  • Biogeochemistry: NATIVE_BGC_FLEX
  • Transport: NATIVE_TD_ADVDISP
  • Sediment: HYPE_HBVSED
  • Sorption: NONE
  • Source/Sink: copernicus_lulc

Report Sections

  • KPI summary cards (species, solver, timestep)
  • Module configuration table with badges
  • Interactive basin map (Leaflet.js)
  • GRQA observation stations matched
  • Source/sink load summary
  • Run metadata (exit code, timing)
  • Ready-to-run code snippets (next slide)
# ======= 9. REPORT SETTINGS =======
generate_report = True

# River network shapefile (for map)
shapefile_path = "/path/to/river.shp"
shapefile_reach_id_col = "seg_id"
                        

Features

  • Self-contained single HTML file
  • Dark/light theme toggle
  • Copy buttons on all code blocks
  • All paths pre-filled from your config
Output: openwq_config_report.html — open in any browser

Report: Ready-to-Run Code Snippets

The report's "Next Steps" section provides copy-paste code for every step — from starting Docker to plotting results. All paths are pre-filled from your configuration.

Report "Next Steps" Preview

The preview shows the same tab bar (Summary, Basin Map, Configuration, Sources/Sinks, Run Metadata, Next Steps) with the Next Steps tab active:

1. Start the Docker container

cd /path/to/containers
docker compose up -d

2. Run the model

docker exec docker_openwq /bin/bash -c \
  "mpirun -np 2 /code/.../mizuroute_Release \
  /code/.../mizuroute.control"

4. Read & visualize results

import Read_h5_driver as h5_rlib
results = h5_rlib.Read_h5_driver(
    chemSpec=["NO3-N", "NH4-N", ...], ...)

What the Report Provides

  • Docker startup command — cd + docker compose up
  • Full MPI execution command — with all container paths resolved
  • Output directory path — where HDF5 files are saved
  • Python code to read HDF5 — using Read_h5_driver
  • WebGL 3D visualization — interactive browser-based map
  • Time series plots — one block per species
All paths pre-filled — the report reads your configuration and fills in every path (executable, control file, output directory, shapefile) so you can copy and run immediately.

Copy Buttons

Every code block has a Copy button. Click it, paste into terminal or Jupyter — done.

Tip: Set run_model = True and the template will generate configs, run the model, and produce the report — all in one command.
Section 02: Reading Model Results

Reading HDF5 Output (Read_h5_driver)

import sys
sys.path.insert(0, 'hdf5_support_lib')
import Read_h5_driver as h5_rlib

# Define paths
openwq_info = {
    "path_to_results": "/path/to/openwq_out",
    "mapping_key": "reachID"
}

# Read results
results = h5_rlib.Read_h5_driver(
    openwq_info=openwq_info,
    output_format='HDF5',
    cmp=['RIVER_NETWORK_REACHES'],
    space_elem='all',
    chemSpec=["NO3-N", "NH4-N"],
    chemUnits="MG/L",
    noDataFlag=-9999,
    sediment_as_well=True,   # also read sediment
    debugmode=False
)
                        

Parameters

  • cmp: compartment(s) to read
  • space_elem: 'all' or a list of cell IDs
  • chemSpec: species to extract
  • chemUnits: units in the output files
  • sediment_as_well: also read the sediment HDF5
  • debugmode: read debug outputs
Returns: Dictionary with datetime index, concentrations per species/cell, and optional sediment data
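Once loaded, the per-cell series are convenient to handle as a pandas DataFrame. A minimal sketch, assuming you have pulled a list of timestamps and a dict of concentration series out of the returned dictionary (the exact key layout varies, so inspect results.keys() for your version first):

```python
import pandas as pd

def to_dataframe(times, conc_by_cell):
    """Arrange per-cell series: columns = cell IDs, index = timestamps."""
    return pd.DataFrame(conc_by_cell, index=pd.to_datetime(times))

# Hypothetical values standing in for data extracted from `results`
times = ["2026-01-01 00:00", "2026-01-01 01:00", "2026-01-01 02:00"]
conc = {1200014181: [0.82, 0.91, 1.05],
        200014181:  [0.40, 0.44, 0.47]}

df = to_dataframe(times, conc)
print(df.mean())   # mean concentration per reach
```

With a datetime index in place, resampling (`df.resample("D").mean()`) and direct plotting (`df.plot()`) work out of the box.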
Section 03: Plotting Time Series

Time Series Plots (Plot_h5_driver)

import Plot_h5_driver as h5_plib

h5_plib.Plot_h5_driver(
    # What to plot?
    what2map='openwq',        # or 'hostmodel'
    hostmodel='mizuroute',

    # Which locations?
    mapping_key_values=[1200014181, 200014181],

    # OpenWQ data (pre-loaded)
    openwq_results=results,
    chemSpec=["NO3-N", "N_ORG_active"],
    sediment_as_well=True,

    # Host model comparison (optional)
    hydromodel_info=hydromodel_info,
    hydromodel_var2print='DWroutedRunoff',

    # Output
    output_path='/path/to/plotSeries.png'
)
                        

Plot Features

  • Multi-panel plots per species
  • Multiple locations overlay
  • Sediment concentration subplot
  • Host model variable comparison
  • Auto-scaled axes
┌───────────────────────────────────┐
│  NO3-N Concentration              │
│  ────────────────────             │
│     ╱╲    ╱╲                      │
│ ───╱──╲──╱──╲───  location 1      │
│   ╱    ╲╱    ╲    location 2      │
├───────────────────────────────────┤
│  N_ORG_active                     │
│  ─────────────                    │
│      ╱╲                           │
│  ───╱  ╲───────                   │
└───────────────────────────────────┘
                            
Section 04: Spatial Mapping & GIFs

Spatial Maps (Map_h5_driver)

import Map_h5_driver as h5_mplib

# 1) Shapefile for visualization
shpfile_info = {
    'path_to_shp': '/path/to/segments.shp',
    'mapping_key': 'SegId'
}

# 2) Generate maps
h5_mplib.Map_h5_driver(
    what2map='openwq',
    hostmodel='mizuroute',

    # Shapefile
    shpfile_info=shpfile_info,

    # Results data
    openwq_results=results,
    chemSpec=["NO3-N", "N_ORG_fresh"],
    sediment_as_well=True,

    # Output settings
    output_html_path="/path/to/maps/",
    create_gif=True,
    timeframes=30,
    gif_duration=30
)
                        

Output Options

  • output_html_path: directory for output files
  • create_gif: generate an animated GIF
  • timeframes: number of frames to include
  • gif_duration: GIF duration in seconds
Requirements:
  • Shapefile with matching IDs
  • geopandas, matplotlib
  • imageio (for GIF creation)

Host Model Mapping

Set what2map='hostmodel' to map discharge, runoff, or other hydro variables

Complete Post-Processing Workflow

1. Set Up Paths
   Define openwq_info, shpfile_info, and hydromodel_info dictionaries

2. Read HDF5 Results
   Use Read_h5_driver to load concentration data for selected species and compartments

3. Create Time Series Plots
   Use Plot_h5_driver to generate temporal evolution graphs at specific locations

4. Generate Spatial Maps
   Use Map_h5_driver to create static maps or animated GIFs of spatial distribution
Template available: supporting_scripts/2_Read_Outputs/reading_plotting_results_template.py

Debug Mode Outputs

When run_mode_debug = True in configuration, additional diagnostic files are generated:

Debug HDF5 Files

  • d_output_dt_chemistry: chemical reaction rates
  • d_output_dt_transport: advection/dispersion fluxes
  • d_output_ss: source/sink contributions
  • d_output_ewf: external water flux inputs
  • d_output_ic: initial condition values

Reading Debug Data

results = h5_rlib.Read_h5_driver(
    ...
    debugmode=True  # Enable debug reading
)

# Access debug data
chemistry_rates = results['d_chemistry']
transport_flux = results['d_transport']
                            
Note: Debug mode significantly increases output file size and simulation time

Tips & Best Practices

✅ Do

  • Use template files as starting point
  • Enable cell_id mapping for portability
  • Start with debugmode=False
  • Test with small time windows first
  • Verify shapefile ID matches output
  • Use relative paths in Docker mode

❌ Don't

  • Edit generated JSON files directly
  • Mix host model IDs with (ix,iy,iz)
  • Run debug mode for production
  • Load all cells when you only need a subset
  • Create GIFs with 1000+ frames
Quick Start: Copy reading_plotting_results_template.py, update paths, and run — all plots and maps generated automatically!
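The GIF advice above can be turned into a quick pre-flight check before calling Map_h5_driver. A sketch: the 300-frame threshold is an arbitrary illustration, not an OpenWQ limit:

```python
def check_gif_settings(timeframes, gif_duration, max_frames=300):
    """Return (ok, message) for requested GIF settings.

    max_frames is an illustrative cutoff, not an OpenWQ constraint.
    """
    if timeframes > max_frames:
        return False, f"{timeframes} frames is too many; subsample time steps first"
    fps = timeframes / gif_duration
    return True, f"{timeframes} frames over {gif_duration}s = {fps:.1f} fps"

print(check_gif_settings(30, 30))     # matches the example settings above
print(check_gif_settings(1200, 60))   # rejected: too many frames
```

A quick frames-per-second estimate like this also helps pick a gif_duration that keeps the animation readable.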

AI-Powered Assistance

Get help with OpenWQ scripts and configuration using AI assistants.

🖥️ Claude Code CLI (Users & Developers)

# macOS/Linux: Install + add to PATH
curl -fsSL https://claude.ai/install.sh | sh
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.zshrc && source ~/.zshrc

# Windows: Install using winget, then restart terminal
winget install Anthropic.ClaudeCode

# Navigate to OpenWQ & start Claude
cd /path/to/openwq/ && claude

# CLAUDE.md auto-loaded - ask anything:
> How do I set up nitrification?

Can read, write & edit files directly

🌐 Web/API Users (Everyone)

Copy prompt from docs/OPENWQ_ASSISTANT_PROMPT.md

Paste into claude.ai or any AI assistant

What AI Can Help With

  • BGC reaction configuration
  • Calibration framework setup
  • Troubleshooting errors
  • Understanding source code
  • Writing JSON configurations
Pro Tip: Create a Claude Project with uploaded docs for persistent context across conversations

Model Execution (Section 8)

The configuration template can now run the model directly after generating config files — no separate step needed.

# ======= 8. MODEL EXECUTION (optional) =======
run_model = True

# Container runtime
container_runtime = "docker"   # or "apptainer"

# Docker settings
docker_container_name = "docker_openwq"

# Apptainer settings (for HPC)
apptainer_sif_path = "/path/to/openwq.sif"
apptainer_bind_path = "/scratch/user:/code"

# Executable inside container
executable_path = "/code/.../mizuroute_openwq_Release"

# mizuRoute control file inside container
file_manager_path = "/code/.../mizuroute.control"

# MPI processes (min 2 for mizuRoute)
mpi_np = 2
                        

How It Works

  • Generates all config files (Sections 1-7)
  • Maps host paths to container paths via docker-compose.yml
  • Runs mpirun -np 2 inside Docker or Apptainer
  • Streams stdout/stderr in real-time
  • Saves full log to model_output.log

Supported Runtimes

  • Docker — local development
  • Apptainer — HPC clusters (SLURM/PBS)
Workflow: python model_config_template.py
generates configs + runs the model in one command!
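The assembly of the container command from the Section 8 variables can be sketched as follows. The function name and exact flags are assumptions about how such a launcher might work, not the template's actual internals:

```python
# Sketch only: build_run_command is a hypothetical helper, not the
# template's real implementation.

def build_run_command(runtime, executable_path, file_manager_path,
                      mpi_np=2, container="docker_openwq",
                      sif_path=None, bind_path=None):
    """Assemble a docker/apptainer command that wraps the MPI launch."""
    mpi = f"mpirun -np {mpi_np} {executable_path} {file_manager_path}"
    if runtime == "docker":
        return f'docker exec {container} /bin/bash -c "{mpi}"'
    if runtime == "apptainer":
        return f'apptainer exec --bind {bind_path} {sif_path} /bin/bash -c "{mpi}"'
    raise ValueError(f"unknown runtime: {runtime}")

print(build_run_command("docker",
                        "/code/bin/mizuroute_openwq_Release",
                        "/code/settings/mizuroute.control"))
```

The same structure covers both supported runtimes: only the outer wrapper changes, while the inner mpirun invocation stays identical.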

Summary

🔧 Configure

model_config_template.py
→ All JSON files auto-generated

🚀 Run

run_model = True
→ Execute in Docker/Apptainer

📊 Report

generate_report = True
→ Interactive HTML report

One command does it all: python model_config_template.py
generates configs → runs the model → produces an interactive report!

Thank You

Questions & Resources

GitHub: github.com/ue-hydro/openwq

Documentation: openwq.readthedocs.io

Scripts: supporting_scripts/