# saber.prep

## calculate_fdc(workdir, df, n_steps=41)

Creates the `hindcast_fdc.parquet` and `hindcast_fdc_transformed.parquet` tables in the `workdir/tables` directory.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `workdir` | `str` | path to the working directory for the project | required |
| `df` | `pd.DataFrame` | the hindcast hydrograph DataFrame: one column per stream, one row per timestep, string column names containing the stream's ID, and a datetime index, i.e. shape `(n_timesteps, n_streams)`. If not provided, the function will attempt to load the data from `workdir/tables/hindcast_series_table.parquet` | required |
| `n_steps` | `int` | the number of exceedance probabilities to estimate from 0 to 100%, inclusive. The default of 41 produces 0, 2.5, 5, ..., 97.5, 100. | `41` |
Returns:

| Type | Description |
|---|---|
| `None` | None |
Source code in saber/prep.py
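The flow duration curve (FDC) for each stream can be sketched with plain pandas and NumPy: the flow exceeded p% of the time is the (100 - p)th percentile of that stream's series. The snippet below is a minimal illustration of the idea, not the saber implementation; the stream ID and synthetic data are hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical hindcast table: one column per stream ID, datetime index.
rng = pd.date_range("2000-01-01", periods=365, freq="D")
df = pd.DataFrame(
    {"1001": np.random.default_rng(0).gamma(2.0, 50.0, size=365)},
    index=rng,
)

# n_steps=41 yields exceedance probabilities 0, 2.5, 5, ..., 97.5, 100.
n_steps = 41
exceed_prob = np.linspace(0, 100, n_steps)

# Flow exceeded p% of the time = (100 - p)th percentile of the series.
fdc = pd.DataFrame(
    np.nanpercentile(df.values, 100 - exceed_prob, axis=0),
    index=exceed_prob,
    columns=df.columns,
)
fdc.index.name = "p_exceed"
```

The resulting table has one row per exceedance probability and one column per stream, matching the orientation described above; saber would write such a table to `workdir/tables` as parquet.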
## gis_tables(workdir, gauge_gis=None, drain_gis=None)

Generates parquet copies of the gauge and drainage line attribute tables using the Saber package vocabulary.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `workdir` | `str` | path to the working directory for the project | required |
| `gauge_gis` | `str` | path to the GIS dataset (e.g. a geopackage) of gauge locations (points) | `None` |
| `drain_gis` | `str` | path to the GIS dataset (e.g. a geopackage) of drainage line locations (polylines) | `None` |
Returns:

| Type | Description |
|---|---|
| `None` | None |
Source code in saber/prep.py
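Translating an attribute table into a package vocabulary usually amounts to renaming source-specific columns to standard names before writing the parquet copy. The sketch below shows that step in isolation; the raw column names and the vocabulary mapping (`gauge_id`, `model_id`) are illustrative assumptions, not the actual Saber vocabulary.

```python
import pandas as pd

# Hypothetical raw gauge attributes as they might come from a
# geopackage's attribute table (column names vary by data source).
gauges = pd.DataFrame({
    "station_id": ["G-01", "G-02"],
    "reach_id": [1001, 1002],
})

# Standardize column names to a package vocabulary. This mapping is
# illustrative only; the real Saber vocabulary may differ.
vocab = {"station_id": "gauge_id", "reach_id": "model_id"}
gauge_table = gauges.rename(columns=vocab)

# gis_tables would write tables like this to workdir/tables, e.g.:
# gauge_table.to_parquet(os.path.join(workdir, "tables", "gauge_table.parquet"))
```

Reading the geometry layers themselves would typically be done with `geopandas.read_file` before dropping the geometry column, but the renaming shown here is the vocabulary step the docstring refers to.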