• Scripts Organization
• Python Environment Setup
Section 01: Model Configuration
• Template Workflow Pattern
• Model Configuration
• Configuration Sections
• Source/Sink Methods
• Model Execution (Docker/Apptainer)
• HTML Report Generation
Section 02: Reading Results
• Reading HDF5 Output
Section 03: Visualization
• Time Series Plots
• Spatial Maps
• Complete Workflow
• Tips & Best Practices
• Environment setup: ./setup_venv.sh
• Model configuration: generate all JSON configuration files from a single Python template
• Reading results & visualization: read HDF5 results, create plots, and generate spatial maps/GIFs
• Calibration: sensitivity analysis and parameter optimization (covered in a separate presentation)
# Navigate to scripts folder
cd supporting_scripts

# Run setup script
./setup_venv.sh

# Activate environment
source .venv/bin/activate
conda create -n openwq python=3.10
conda activate openwq
pip install -r requirements.txt
| Packages | Purpose |
|---|---|
| numpy, pandas, scipy | Core |
| h5py, netCDF4 | File I/O |
| geopandas, shapely | Geospatial |
| matplotlib, tqdm | Plotting |
| SALib | Sensitivity |
Full list in requirements.txt
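A quick way to confirm the environment is usable is to import the key packages from the table above. This is just a sanity check (not part of the setup script); package names follow the table.

# Sanity check: core dependencies import cleanly in the active environment
import numpy, pandas, scipy          # core numerics
import h5py, netCDF4                 # file I/O
import geopandas, shapely            # geospatial
import matplotlib, tqdm              # plotting / progress bars
import SALib                         # sensitivity analysis
print("OpenWQ scripting environment OK")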
All supporting scripts follow the same Copy → Edit → Run workflow:
model_config_template.py
# Copy and edit
python model_config_template.py
# → All JSON files generated
reading_plotting_results_template.py
# Copy and edit paths
python reading_plotting_results_template.py
# → Plots and maps
calibration_config_template.py
python calibration_config_template.py [flags]
#   --sensitivity-only
#   --prepare-obs-only
#   --dry-run | --resume
Do not edit the library code (the *_lib/ folders) — copy templates to your working directory and customize them. This keeps your configurations separate from the library code.
Edit one file → run it → done.
All JSON configuration files, model execution, and HTML report are handled automatically.
• Copy the template and keep it in the same directory (it needs config_support_lib/)
• Set executable_path and file_manager_path
• Choose modules: BGC, transport, sorption, sediment
• Choose a source/sink method: CSV, Copernicus LULC, climate-adjusted, or ML
• Use "all" to auto-detect species from the BGC file
• The HTML report covers the model configuration summary, run instructions, and how to examine results
Generated files: openWQ_master.json, openWQ_config.json, openWQ_MODULE_*.json, openWQ_SS_*.json, openWQ_EWF_*.json
Template sections: 1) General Info · 2) Computational · 3) Initial Conditions · 4) Modules (BGC, TD, LE, TS, SI) · 5) Source/Sink · 6) External Fluxes · 7) Output · 8) Model Execution · 9) Report
| Section | Key Variables | Description |
|---|---|---|
| 1. General Info | project_name, hostmodel, dir2save | Project metadata and output directory |
| 2. Computational | solver, use_num_threads | Solver type and parallelization |
| 3. Initial Conditions | ic_all_value, ic_all_units | Starting concentrations |
| 4a. BGC Module | bgc_module_name, path2framework | NATIVE_BGC_FLEX or PHREEQC |
| 4b. Transport Module | td_module_name, dispersion_xyz | Advection/dispersion settings |
| 4c. Lateral Exchange | le_module_name, le_module_config | Inter-compartment exchange |
| 4d. Sediment Transport | ts_module_name, ts_parameters | HYPE_MMF or HYPE_HBVSED |
| 4e. Sorption | si_module_name, si_species_params | Freundlich or Langmuir isotherms |
| 5. Source/Sink | ss_method, ss_method_csv_config | CSV, LULC, or ML-based loads |
| 6. External Fluxes | ewf_method, ewf_method_* | Concentrations in external inputs |
| 7. Output | output_format, chemical_species | What to save and where |
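Editing the template amounts to plain Python assignments. Below is a minimal sketch of Sections 1 to 3 using the variable names from the table; all values are illustrative and should be replaced with your own.

# ======= 1. GENERAL INFO =======
project_name = "my_openwq_project"     # project metadata
hostmodel    = "mizuroute"             # host hydrological model
dir2save     = "/path/to/openwq_out"   # output directory

# ======= 2. COMPUTATIONAL =======
solver          = "<solver_name>"      # pick one of the solvers offered by the template
use_num_threads = 4                    # parallelization

# ======= 3. INITIAL CONDITIONS =======
ic_all_value = 0.0                     # starting concentration everywhere
ic_all_units = "MG/L"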
Load time series from CSV files
ss_method_csv_config = [
{
"Chemical_name": "NO3-N",
"Compartment": "REACHES",
"Type": "source",
"Units": "kg",
"Filepath": "loads.csv"
}
]
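The file named in Filepath holds the load time series. As a purely hypothetical illustration (the column names below are invented; the exact CSV layout OpenWQ expects is defined in the documentation), such a file could be produced with pandas:

import pandas as pd

# Hypothetical monthly NO3-N loads for one reach (illustrative columns only)
loads = pd.DataFrame({
    "datetime": pd.date_range("2020-01-01", periods=3, freq="MS"),
    "reach_id": [1200014181] * 3,
    "NO3_N_load_kg": [12.5, 10.1, 15.3],
})
loads.to_csv("loads.csv", index=False)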
Land use + export coefficients
ss_method = "using_copernicus_ lulc_with_static_coeff" # Maps LULC classes to loads # e.g., cropland → 20 kg/ha/yr
Train ML from monitoring data
ss_method = "ml_model" ss_ml_model_type = "xgboost" ss_ml_training_data_csv = \ "monitoring_data.csv"
Set ss_use_cellid_mapping = True to use host model IDs (reachID for mizuroute, hruId_z{layer} for SUMMA) instead of internal (ix, iy, iz) indices.
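In the template this is a single flag; a minimal sketch with mizuroute in mind:

# Address source/sink cells by host-model ID (reachID for mizuroute)
# instead of internal (ix, iy, iz) indices
ss_use_cellid_mapping = True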
Set generate_report = True to auto-generate a self-contained interactive HTML report.
Open it in any browser — no server needed.
| Module | Selection |
|---|---|
| Biogeochemistry | NATIVE_BGC_FLEX |
| Transport | NATIVE_TD_ADVDISP |
| Sediment | HYPE_HBVSED |
| Sorption | NONE |
| Source/Sink | copernicus_lulc |
# ======= 9. REPORT SETTINGS =======
generate_report = True

# River network shapefile (for map)
shapefile_path = "/path/to/river.shp"
shapefile_reach_id_col = "seg_id"
openwq_config_report.html — open in any browser
The report's "Next Steps" section provides copy-paste code for every step — from starting Docker to plotting results. All paths are pre-filled from your configuration.
From cd + docker compose up through Read_h5_driver, every code block has a Copy button. Click it, paste into terminal or Jupyter — done.
Set run_model = True and the template will generate configs, run the model, and produce the report — all in one command.
import sys
sys.path.insert(0, 'hdf5_support_lib')
import Read_h5_driver as h5_rlib

# Define paths
openwq_info = {
    "path_to_results": "/path/to/openwq_out",
    "mapping_key": "reachID"
}

# Read results
results = h5_rlib.Read_h5_driver(
    openwq_info=openwq_info,
    output_format='HDF5',
    cmp=['RIVER_NETWORK_REACHES'],
    space_elem='all',
    chemSpec=["NO3-N", "NH4-N"],
    chemUnits="MG/L",
    noDataFlag=-9999,
    sediment_as_well=True,  # also read sediment
    debugmode=False
)
| Parameter | Description |
|---|---|
| cmp | Compartment(s) to read |
| space_elem | 'all' or list of cell IDs |
| chemSpec | Species to extract |
| chemUnits | Units in output files |
| sediment_as_well | Also read sediment HDF5 |
| debugmode | Read debug outputs |
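Before plotting, it can help to confirm what was actually read. Assuming the returned results object is dict-like (consistent with the results['d_chemistry'] access shown in the debug section below), a quick inspection looks like this:

# Inspect what the reader returned (assumes a dict-like results object)
for key in results:
    print(key)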
import Plot_h5_driver as h5_plib

h5_plib.Plot_h5_driver(
    # What to plot?
    what2map='openwq',  # or 'hostmodel'
    hostmodel='mizuroute',

    # Which locations?
    mapping_key_values=[1200014181, 200014181],

    # OpenWQ data (pre-loaded)
    openwq_results=results,
    chemSpec=["NO3-N", "N_ORG_active"],
    sediment_as_well=True,

    # Host model comparison (optional)
    hydromodel_info=hydromodel_info,
    hydromodel_var2print='DWroutedRunoff',

    # Output
    output_path='/path/to/plotSeries.png'
)
┌───────────────────────────────────┐
│ NO3-N Concentration │
│ ──────────────────── │
│ ╱╲ ╱╲ │
│ ───╱──╲──╱──╲─── location 1 │
│ ╱ ╲╱ ╲ location 2 │
├───────────────────────────────────┤
│ N_ORG_active │
│ ───────────── │
│ ╱╲ │
│ ───╱ ╲─────── │
└───────────────────────────────────┘
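If no host-model comparison is needed, the hydromodel arguments (labelled optional above) can presumably be left out. A minimal sketch, assuming they are indeed optional in your driver version:

# Minimal time-series plot without the host-model comparison
h5_plib.Plot_h5_driver(
    what2map='openwq',
    hostmodel='mizuroute',
    mapping_key_values=[1200014181],
    openwq_results=results,
    chemSpec=["NO3-N"],
    sediment_as_well=False,
    output_path='/path/to/plotSeries_no3.png'
)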
import Map_h5_driver as h5_mplib

# 1) Shapefile for visualization
shpfile_info = {
    'path_to_shp': '/path/to/segments.shp',
    'mapping_key': 'SegId'
}

# 2) Generate maps
h5_mplib.Map_h5_driver(
    what2map='openwq',
    hostmodel='mizuroute',

    # Shapefile
    shpfile_info=shpfile_info,

    # Results data
    openwq_results=results,
    chemSpec=["NO3-N", "N_ORG_fresh"],
    sediment_as_well=True,

    # Output settings
    output_html_path="/path/to/maps/",
    create_gif=True,
    timeframes=30,
    gif_duration=30
)
| Parameter | Description |
|---|---|
| output_html_path | Directory for output files |
| create_gif | Generate animated GIF |
| timeframes | Number of frames to include |
| gif_duration | GIF duration in seconds |
Requires geopandas, matplotlib, and imageio (for GIF creation)
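Before generating maps it is worth confirming that the shapefile actually contains the mapping-key column. A small geopandas check (the column name 'SegId' is taken from the example above):

import geopandas as gpd

gdf = gpd.read_file('/path/to/segments.shp')
print(gdf.columns.tolist())   # confirm the mapping key (e.g. 'SegId') is present
print(gdf['SegId'].head())    # inspect the reach IDs used for mapping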
Set what2map='hostmodel' to map discharge, runoff, or other hydro variables
1. Define openwq_info, shpfile_info, and hydromodel_info dictionaries
2. Use Read_h5_driver to load concentration data for selected species and compartments
3. Use Plot_h5_driver to generate temporal evolution graphs at specific locations
4. Use Map_h5_driver to create static maps or animated GIFs of spatial distribution
supporting_scripts/2_Read_Outputs/reading_plotting_results_template.py
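Condensed, the four steps above amount to roughly the following skeleton. Paths, IDs, and species are placeholders, and only a subset of the arguments shown earlier is repeated here; some omitted arguments may be required depending on the driver version.

import sys
sys.path.insert(0, 'hdf5_support_lib')
import Read_h5_driver as h5_rlib
import Plot_h5_driver as h5_plib
import Map_h5_driver as h5_mplib

# 1) Dictionaries (hydromodel_info is only needed for the host-model comparison)
openwq_info  = {"path_to_results": "/path/to/openwq_out", "mapping_key": "reachID"}
shpfile_info = {"path_to_shp": "/path/to/segments.shp", "mapping_key": "SegId"}

# 2) Read concentrations
results = h5_rlib.Read_h5_driver(openwq_info=openwq_info, output_format='HDF5',
                                 cmp=['RIVER_NETWORK_REACHES'], space_elem='all',
                                 chemSpec=["NO3-N"], chemUnits="MG/L",
                                 noDataFlag=-9999, sediment_as_well=False, debugmode=False)

# 3) Time series at selected reaches
h5_plib.Plot_h5_driver(what2map='openwq', hostmodel='mizuroute',
                       mapping_key_values=[1200014181], openwq_results=results,
                       chemSpec=["NO3-N"], sediment_as_well=False,
                       output_path='/path/to/plotSeries.png')

# 4) Spatial maps / animated GIF
h5_mplib.Map_h5_driver(what2map='openwq', hostmodel='mizuroute',
                       shpfile_info=shpfile_info, openwq_results=results,
                       chemSpec=["NO3-N"], sediment_as_well=False,
                       output_html_path="/path/to/maps/", create_gif=True,
                       timeframes=30, gif_duration=30)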
When run_mode_debug = True in configuration, additional diagnostic files are generated:
| Debug output | Contents |
|---|---|
| d_output_dt_chemistry | Chemical reaction rates |
| d_output_dt_transport | Advection/dispersion fluxes |
| d_output_ss | Source/sink contributions |
| d_output_ewf | External water flux inputs |
| d_output_ic | Initial condition values |
results = h5_rlib.Read_h5_driver(
...
debugmode=True # Enable debug reading
)
# Access debug data
chemistry_rates = results['d_chemistry']
transport_flux = results['d_transport']
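To check which debug datasets were actually loaded, assuming the d_* key convention shown above:

# List the debug datasets present in the results object
debug_keys = [k for k in results if str(k).startswith('d_')]
print(debug_keys)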
Debug reading is off by default (debugmode=False). To get started, copy reading_plotting_results_template.py, update the paths, and run it — all plots and maps are generated automatically!
Get help with OpenWQ scripts and configuration using AI assistants.
# macOS/Linux: Install + add to PATH
curl -fsSL https://claude.ai/install.sh | sh
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.zshrc && source ~/.zshrc

# Windows: Install using winget, then restart terminal
winget install Anthropic.ClaudeCode

# Navigate to OpenWQ & start Claude
cd /path/to/openwq/ && claude

# CLAUDE.md auto-loaded - ask anything:
> How do I set up nitrification?
Can read, write & edit files directly
Copy prompt from docs/OPENWQ_ASSISTANT_PROMPT.md
Paste into claude.ai or any AI assistant
The configuration template can now run the model directly after generating config files — no separate step needed.
# ======= 8. MODEL EXECUTION (optional) =======
run_model = True

# Container runtime
container_runtime = "docker"  # or "apptainer"

# Docker settings
docker_container_name = "docker_openwq"

# Apptainer settings (for HPC)
apptainer_sif_path = "/path/to/openwq.sif"
apptainer_bind_path = "/scratch/user:/code"

# Executable inside container
executable_path = "/code/.../mizuroute_openwq_Release"

# mizuRoute control file inside container
file_manager_path = "/code/.../mizuroute.control"

# MPI processes (min 2 for mizuRoute)
mpi_np = 2
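Before setting run_model = True, it can be worth checking that the chosen container runtime is actually installed. A small generic Python check (not part of the template):

import shutil

container_runtime = "docker"  # or "apptainer"
if shutil.which(container_runtime) is None:
    raise RuntimeError(container_runtime + " not found on PATH; install it or switch container_runtime")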
The execution step uses docker-compose.yml for the container, runs mpirun -np 2 inside Docker or Apptainer, and writes the run log to model_output.log, all triggered by python model_config_template.py.

model_config_template.py → All JSON files auto-generated
run_model = True → Execute in Docker/Apptainer
generate_report = True → Interactive HTML report
GitHub: github.com/ue-hydro/openwq
Documentation: openwq.readthedocs.io
Scripts: supporting_scripts/