• What is FLUXOS?
• Core Capabilities
• Coupled Host Models
Section 01: Mesh & Numerical Methods
• Regular & Triangular Meshes
• SWE Conservative Form
• Finite Volume & Roe Solver
• Wetting/Drying & MUSCL
• ADE & Soil Infiltration
Section 02: Parallelization
• OpenMP + MPI + CUDA
• GPU Acceleration
Section 03: Applications & Tools
• Application Areas
• Technical Stack
• WebGL 3D Viewer
• Summary
FLUXOS is an open-source, physics-based 2D hydrodynamic model for event-based flood simulation, coupled with solute transport and soil infiltration.
┌─────────────────────────────────┐
│ INPUT DATA │
│ DEM + Boundary Conditions │
│ + Soil Map + Meteo Forcing │
└────────────────┬────────────────┘
│
▼
┌─────────────────────────────────┐
│ 2D SHALLOW WATER EQUATIONS │
│ Roe flux + wetting/drying │
└────────────────┬────────────────┘
│
┌───────┴───────┐
│ │
▼ ▼
┌──────────────┐ ┌────────────────┐
│ ADE SOLVER │ │ SOIL │
│ Solute │ │ INFILTRATION │
│ Transport │ │ Horton model │
└──────────────┘ └────────────────┘
Full dynamic wave solution of the 2D shallow water equations. Roe's approximate Riemann solver with robust wetting/drying and back-flooding capture.
Mass-conservative ADE solver with first-order upwind advection, Fickian diffusion, and monotonicity clamp. Two-pass fully parallel formulation.
Horton decay model with USDA 12-class soil texture classification (Rawls et al. 1983). Per-cell infiltration tracking.
CUDA kernels for both regular and triangular meshes. Typical 10–50x speedup over single-core CPU. Supports Pascal GPUs and newer.
Interactive browser-based visualization with particle tracking, satellite imagery, concentration overlays, and real-time animation controls.
ASCII grids, VTK/ParaView (.vtu + .pvd), KML/KMZ for Google Earth, and WebGL data export for web hosting.
FLUXOS can operate standalone or coupled with other modelling frameworks:
| Integration | Domain | Description |
|---|---|---|
| Standalone | Overland Flow | Event-based 2D flood simulation with precipitation or inflow forcing |
| WINTRA | Nutrient Release | Runoff-soil interaction algorithm for snowmelt-driven nutrient transport |
| Component | Method |
|---|---|
| Flux scheme | Roe's approximate Riemann |
| Time stepping | CFL-adaptive |
| Wetting/drying | Velocity desingularization |
| Friction | Manning's equation |
| Parallelization | OpenMP, MPI, CUDA |
Rosa_2m test domain (859×618 grid, 2 m cells):
~16 minutes on single CPU core
Same Rosa_2m domain:
~3 seconds (vs 16 min on regular mesh!)
| Component | Method |
|---|---|
| Reconstruction | MUSCL with least-squares gradients |
| Limiter | Barth-Jespersen (monotonicity) |
| Well-balanced | Hydrostatic reconstruction |
| Dry fronts | Ritter dry-front solution |
| CFL | Inradius-based adaptive |
Saint-Venant (1871) · Manning (1891) · Elder (1959) after Boussinesq (1877)
Ensures mass and momentum are preserved exactly across shocks, wetting fronts, and hydraulic jumps — essential for flood simulation where discontinuities arise naturally.
Godunov (1959); LeVeque (2002)
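In standard notation (written here for reference, not copied from the FLUXOS source), the 2D conservative form reads:

```latex
\frac{\partial \mathbf{U}}{\partial t}
+ \frac{\partial \mathbf{F}}{\partial x}
+ \frac{\partial \mathbf{G}}{\partial y} = \mathbf{S},
\qquad
\mathbf{U} = \begin{pmatrix} h \\ hu \\ hv \end{pmatrix},\;
\mathbf{F} = \begin{pmatrix} hu \\ hu^2 + \tfrac{1}{2}gh^2 \\ huv \end{pmatrix},\;
\mathbf{G} = \begin{pmatrix} hv \\ huv \\ hv^2 + \tfrac{1}{2}gh^2 \end{pmatrix}
```

with S collecting the bed-slope and friction source terms.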
Courant, Friedrichs & Lewy (1928)
Time step is recomputed every iteration as the global minimum over all wet cells:
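A minimal sketch of that global-minimum search (function and parameter names are illustrative, not the FLUXOS API), using the regular-mesh form CFL · Δx / (|u| + c):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Adaptive time step: dt = CFL * min over wet cells of dx / (|u| + c),
// where c = sqrt(g*h) is the gravity-wave celerity. Dry cells are skipped.
double cfl_timestep(const std::vector<double>& h,
                    const std::vector<double>& u,
                    double dx, double cfl = 0.5,
                    double h_dry = 1e-6, double g = 9.81) {
    double dt = 1e9;  // large sentinel; returned (scaled) if no cell is wet
    for (std::size_t i = 0; i < h.size(); ++i) {
        if (h[i] <= h_dry) continue;           // skip dry cells
        double c = std::sqrt(g * h[i]);        // wave celerity
        dt = std::min(dt, dx / (std::abs(u[i]) + c));
    }
    return cfl * dt;
}
```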
Naturally handles shocks and wetting/drying without artificial viscosity. CFL-adaptive stepping (CFL < 1; typical 0.5) guarantees stability while maximizing computational efficiency.
Roe (1981); Toro (2001)
Audusse et al. (2004) — preserves lake-at-rest (C-property)
Roe’s solver resolves all 3 wave families (shocks, rarefactions, shear) with minimal numerical diffusion. Hydrostatic reconstruction preserves the lake-at-rest state exactly, preventing spurious flows over irregular bathymetry.
Ritter (1892)
At wet-dry interfaces, analytical dam-break solution replaces Roe flux:
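As an illustration of the standard Ritter rarefaction (a sketch, not the exact FLUXOS implementation): for a wet cell of depth hL next to a dry cell, the solution sampled at the initial interface (x/t = 0) gives h* = (4/9)·hL and u* = (2/3)·√(g·hL):

```cpp
#include <cmath>
#include <utility>

// Ritter (1892) dam-break-on-dry-bed solution, sampled at the initial
// interface position x/t = 0. c0 = sqrt(g*hL) is the undisturbed celerity.
// Returns {h*, u*} with the flow directed toward the dry side.
std::pair<double, double> ritter_interface(double hL, double g = 9.81) {
    double c0 = std::sqrt(g * hL);
    return { (4.0 / 9.0) * hL, (2.0 / 3.0) * c0 };
}
```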
Kurganov & Petrova (2007)
Avoids division by zero as h → 0. Smoothly transitions to u = 0 in dry areas.
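The Kurganov & Petrova desingularization can be sketched as follows (ε is a small regularization depth scale; the exact constant used in FLUXOS may differ):

```cpp
#include <algorithm>
#include <cmath>

// Kurganov & Petrova (2007) velocity desingularization: recover u from the
// conserved discharge q = h*u without dividing by a vanishing depth:
//   u = sqrt(2) * h * q / sqrt(h^4 + max(h^4, eps))
// For h^4 >> eps this reduces to q/h; as h -> 0, u -> 0 smoothly.
double desingularized_velocity(double h, double q, double eps = 1e-12) {
    double h4 = h * h * h * h;
    return std::sqrt(2.0) * h * q / std::sqrt(h4 + std::max(h4, eps));
}
```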
Implicit treatment avoids stiff ODE instability at small depths:
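A common semi-implicit form (shown here as an assumed sketch, with |u| lagged at time level n so the update is a stable division rather than a stiff explicit term):

```cpp
#include <cmath>

// Semi-implicit Manning friction update:
//   u^{n+1} = u^n / (1 + dt * g * n^2 * |u^n| / h^(4/3))
// The denominator is always >= 1, so the update only reduces |u| and
// never changes its sign -- unconditionally stable at small depths.
double manning_update(double u, double h, double n_manning,
                      double dt, double g = 9.81) {
    double denom = 1.0 + dt * g * n_manning * n_manning
                       * std::abs(u) / std::pow(h, 4.0 / 3.0);
    return u / denom;
}
```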
Ritter’s analytical dam-break replaces the Riemann solver at wet-dry fronts, avoiding zero-depth singularities. Semi-implicit Manning friction is unconditionally stable — no stiff time-step restriction at shallow depths. Mass balance limiter guarantees h ≥ 0.
Unstructured triangular mesh only — achieves 2nd-order spatial accuracy while preserving monotonicity.
van Leer (1979)
Barth & Jespersen (1989)
MUSCL achieves 2nd-order spatial accuracy on unstructured meshes, critical for resolving flow features on complex terrain. Barth-Jespersen prevents spurious oscillations near discontinuities while preserving maximum accuracy in smooth regions.
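The limiter step can be sketched for a single cell (names are illustrative; the actual code operates on unstructured edge data):

```cpp
#include <algorithm>
#include <vector>

// Barth & Jespersen (1989) limiter for one cell. Given the cell-average
// value uc, the min/max over the cell and its neighbours (umin/umax), and
// the unlimited MUSCL-reconstructed values at each face, return the scalar
// phi in [0,1] that scales the gradient so that no reconstructed face
// value leaves [umin, umax] -- enforcing monotonicity.
double barth_jespersen(double uc, double umin, double umax,
                       const std::vector<double>& face_values) {
    double phi = 1.0;
    for (double uf : face_values) {
        double d = uf - uc;
        if (d > 0.0)      phi = std::min(phi, (umax - uc) / d);
        else if (d < 0.0) phi = std::min(phi, (umin - uc) / d);
        // d == 0: this face needs no limiting
    }
    return std::max(phi, 0.0);
}
```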
Advection-Diffusion Equation ∂(hC)/∂t + ∇·(uhC) = ∇·(Dh∇C) solved with a two-pass explicit scheme:
Fick (1855); Leonard (1991)
Two-pass design eliminates read-write race conditions, enabling fully parallel GPU execution. First-order upwind is unconditionally TVD, preventing unphysical concentration overshoots. Mass-conservative by construction on both mesh types.
Horton (1933); Rawls, Brakensiek & Miller (1983)
| Class | Ks (mm/h) | f0 (mm/h) |
|---|---|---|
| Sand | 210.0 | 630.0 |
| Loamy Sand | 61.0 | 183.0 |
| Sandy Loam | 26.0 | 78.0 |
| Loam | 13.0 | 39.0 |
| Silt Loam | 6.8 | 20.4 |
| Sandy Clay Loam | 4.3 | 12.9 |
| Clay Loam | 2.3 | 6.9 |
| Silty Clay | 0.9 | 2.7 |
| Clay | 0.6 | 1.8 |
Computationally cheap (one exponential per cell per timestep), widely validated for overland flow, requires only standard USDA soil parameters. Per-cell independent — trivially parallelizable. Backward compatible: no config section = module disabled.
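The per-cell capacity is the classic Horton decay, one exponential per evaluation (a sketch; taking the table's Ks as the steady final rate fc is an assumption):

```cpp
#include <cmath>

// Horton (1933) infiltration capacity: decays exponentially from the
// initial rate f0 to the steady rate fc with decay constant k [1/h]:
//   f(t) = fc + (f0 - fc) * exp(-k * t)
double horton_rate(double f0, double fc, double k, double t_hours) {
    return fc + (f0 - fc) * std::exp(-k * t_hours);
}
```

Example with the Sandy Loam row from the table: f0 = 78 mm/h decaying toward fc = Ks = 26 mm/h.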
Both mesh types share the same physics — the triangular mesh adds 2nd-order spatial accuracy via MUSCL reconstruction.
| Feature | Regular Cartesian Mesh | Unstructured Triangular Mesh |
|---|---|---|
| Flux solver | Roe approximate Riemann | Rotated Roe / HLL Riemann |
| Spatial order | 1st order (piecewise constant) | 2nd order (MUSCL + Barth-Jespersen) |
| Reconstruction | None (cell-averaged values) | Least-squares gradient + limiter |
| Wetting/drying | Hydrostatic reconstruction + Ritter | Hydrostatic reconstruction + Ritter |
| Bed friction | Semi-implicit Manning | Semi-implicit Manning |
| Time stepping | CFL · Δx / (|u| + c) | CFL · rin / (|u| + c) |
| Mass limiter | Scale outflow per cell | Each edge ≤ 1/3 cell volume |
| Turbulence | Eddy viscosity (ν_t = c_v·u*·h) | Eddy viscosity (ν_t = c_v·u*·h) |
| ADE transport | Two-pass (upwind + diffusion) | Two-pass (upwind + diffusion) |
| Soil infiltration | Horton decay (per cell) | Horton decay (per cell) |
Key References: Saint-Venant (1871) • Godunov (1959) • Roe (1981) • van Leer (1979) • Barth & Jespersen (1989) • Audusse et al. (2004) • Horton (1933) • Rawls et al. (1983) • Toro (2001) • LeVeque (2002)
| Technology | Memory Model | Use Case | CMake Flag |
|---|---|---|---|
| OpenMP | Shared memory | Multi-core workstations (default) | Always enabled |
| MPI | Distributed memory | HPC clusters, 100+ cores | -DUSE_MPI=ON |
| CUDA | GPU device memory | NVIDIA GPU acceleration | -DUSE_CUDA=ON |
| Hybrid | All combined | GPU clusters with multi-node | All flags combined |
7 specialized kernels for unstructured triangular mesh (edge-based finite volume):
| Speedup | 10–50x over single CPU core |
| Min GPU | Compute Capability 6.0+ |
| Architecture | Pascal, Volta, Turing, Ampere |
| CUDA version | 11.0+ |
atomicAdd is used for flux accumulation on edges shared between triangular cells.
| C++17 | Core simulation engine |
| CUDA C++ | GPU kernels |
| Python | Pre/post-processing tools |
| Armadillo | Linear algebra (regular mesh) |
| nlohmann/json | Configuration parsing |
| WebGL/JS | 3D visualization |
| CMake 3.10+ | Build system |
| OpenMP | Shared-memory parallelism |
| OpenMPI | Distributed parallelism |
| CUDA 11.0+ | GPU acceleration |
| Gmsh | Mesh generation (trimesh) |
-DUSE_TRIMESH=ON -DUSE_CUDA=ON -DUSE_MPI=ON
Interactive browser-based 3D visualization — no plugins or backend required:
GPU-accelerated with CUDA. OpenMP + MPI for HPC. 10–50x speedup on modern GPUs.
Regular or triangular meshes. JSON-driven configuration with optional ADE transport and soil infiltration modules.
WebGL 3D viewer with particle tracking, satellite imagery, and real-time animation.
GitHub: github.com/ue-hydro/FLUXOS_cpp
Documentation: fluxos-cpp.readthedocs.io
Contact: diogo.costa@uevora.pt