Modules

dataprocessing

geohydroconvert

Created on Thu 16 Dec 2021

@author: Alexandre Kenshilik Coche @contact: alexandre.co@hotmail.fr

This module is a collection of tools for manipulating hydrological space-time data, especially netcdf data. It was originally developed to provide preprocessing tools for CWatM (https://cwatm.iiasa.ac.at/) and HydroModPy (https://gitlab.com/Alex-Gauvain/HydroModPy), but most functions have been designed to be of general use.

geohydroconvert.compute_Erefs_from_Epan(input_file)[source]
geohydroconvert.compute_ldd(dem_path, method)[source]

Example:

dem_path = r"D:- Postdoc- Travaux- Veille- Donnees- MNTIGNMNT_fusionRGEALTI_FXX_1m_0318_6774_0340_6781_MNT_LAMB93_IGN69.tif"

geohydroconvert.compute_relative_humidity(*, dewpoint_input_file, temperature_input_file, pressure_input_file, method='Penman-Monteith')[source]

See the formula at https://en.wikipedia.org/wiki/Dew_point

gc.compute_relative_humidity(
    dewpoint_input_file = r"D:- Postdoc- Travaux- Veille- Donnees8- MeteoERA5Brittany1-2021 Dewpoint temperature.nc",
    temperature_input_file = r"D:- Postdoc- Travaux- Veille- Donnees8- MeteoERA5Brittany1-2021 Temperature.nc",
    pressure_input_file = r"D:- Postdoc- Travaux- Veille- Donnees8- MeteoERA5Brittany1-2021 Surface pressure.nc",
    method = "Sonntag")
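For reference, relative humidity can be derived from the air and dew-point temperatures with a Magnus-type saturation vapour pressure formula (Sonntag constants). The sketch below is only an illustration of that relation, with a hypothetical helper name; it is not necessarily the exact formula implemented by this function:

import numpy as np

def magnus_rh(t_celsius, td_celsius, a=17.62, b=243.12):
    # Saturation vapour pressure [hPa], Magnus formula with Sonntag constants
    e_sat = lambda t: 6.112 * np.exp(a * t / (b + t))
    # Relative humidity [%] = e_sat(dew point) / e_sat(air temperature)
    return 100 * e_sat(td_celsius) / e_sat(t_celsius)

print(magnus_rh(20.0, 10.0))  # roughly 52-53 %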

geohydroconvert.compute_scale_and_offset(min, max, n)[source]

Computes the scale and offset necessary to pack a float32 (or float64?) set of values into an int16 or int8 set of values.

Parameters

min : float

Minimum value from the data

max : float

Maximum value from the data

n : int

Number of bits into which you wish to pack (8 or 16)

Returns

scale_factor : float

Parameter for netCDF's encoding

add_offset : float

Parameter for netCDF’s encoding
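As an illustration of the packing arithmetic, here is a minimal sketch following one common convention for signed n-bit integers (the hypothetical helper below is not necessarily the exact formula used by this function):

def scale_offset_sketch(vmin, vmax, n):
    # Spread the [vmin, vmax] range over the 2**n available integer codes
    scale_factor = (vmax - vmin) / (2 ** n - 1)
    # Shift so that the packed values fit a signed integer range
    add_offset = vmin + 2 ** (n - 1) * scale_factor
    return scale_factor, add_offset

scale, offset = scale_offset_sketch(0.0, 400.0, 16)
# packed = round((unpacked - offset) / scale)  -> fits into an int16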

geohydroconvert.compute_wind_speed(u_wind_data, v_wind_data)[source]

The U-component of wind is parallel to the x-axis; the V-component of wind is parallel to the y-axis.
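The wind speed is simply the norm of the two components. A minimal sketch with xarray (the file name and the u10/v10 variable names are assumptions):

import numpy as np
import xarray as xr

ds = xr.open_dataset("era5_wind.nc")               # hypothetical ERA5 file
wind_speed = np.sqrt(ds["u10"]**2 + ds["v10"]**2)  # magnitude of the (u, v) vector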

geohydroconvert.convert_coord(pointXin, pointYin, inputEPSG=2154, outputEPSG=4326)[source]

There is an issue in this function: X and Y end up swapped. It is better to use the rasterio functions instead (see above):

coords_conv = rasterio.warp.transform(rasterio.crs.CRS.from_epsg(inputEPSG),
                                      rasterio.crs.CRS.from_epsg(outputEPSG),
                                      [pointXin], [pointYin])
pointXout = coords_conv[0][0]
pointYout = coords_conv[1][0]

geohydroconvert.convert_downwards_radiation(input_file, is_dailysum=False)[source]
geohydroconvert.convert_from_h5_newsurfex(*, input_file, mesh_file, scenario='historic', output_format='NetCDF', **kwargs)[source]

% DESCRIPTION: This function converts Ronan's *.h5 files from SURFEX into NetCDF files or GeoTIFF images, in order to make the output files readable by QGIS.

% EXAMPLE:
>> import geoconvert as gc
>> gc.convert_from_h5_newsurfex(input_file = r"D:2- Postdoc2- Travaux1- Veille4- Donnees8- MeteoSurfexBZHREA.h5",
                                mesh_file = r"D:2- Postdoc2- Travaux1- Veille4- Donnees8- MeteoSurfexBZHshapefilemaille_meteo_fr_pr93.shp",
                                output_format = "NetCDF", fields = ["REC", "TAS"])

% ARGUMENTS: >

% OPTIONAL ARGUMENTS: > output_format = 'NetCDF' (default) | 'GeoTIFF' > scenario = 'historic' | 'RCP2.6' | 'RCP4.5' | 'RCP8.5' > kwargs:

> fields = variable(s) to conserve (among all attributes in input file)

= [‘ETP’, ‘PPT’, ‘REC’, ‘RUN’, ‘TAS’]

(One file per variable will be created.)

> dates = for instance : [‘2019-01-01’, ‘2019-01-02’, …] (If no date is specified, all dates from input file are considered)

geohydroconvert.convert_from_h5_oldsurfex(*, output_format='csv', **kwargs)[source]

% DESCRIPTION: This function formats Quentin's Surfex files into *.csv files organized by dates (rows) and mesh cell identifiers (columns).

% EXAMPLES:
>> import surfexconvert as sc  # (NB: the folder must first be added to the PYTHONPATH)
>> sc.convert_from_h5_oldsurfex(output_format = "csv",
                                start_years = list(range(2005, 2011, 1)),
                                variables = ['DRAIN', 'ETR'])

>> sc.convert_from_oldsurfex(output_format = "nc",
                             mesh_file = r"D:2- Postdoc2- Travaux1- Veille4- Donnees8- MeteoSurfexBZHshapefilemaille_meteo_fr_pr93.shp")

% OPTIONAL ARGUMENTS: > output_format = 'csv' (default) | 'NetCDF' | 'GeoTIFF' > kwargs:

> input_folder = folder containing the files to process.

If nothing is specified, the script folder is used.

> variables = variable(s) to process (among DRAIN, ETR, PRCP and RUNOFF)

If not specified, all variables are considered.

> start_years = years to process

(e.g. 2012 corresponds to the file blabla_2012_2013) If nothing is specified, all years are considered.

> mesh_file = path to the correspondence table between tile ids and their coordinates

(only needed for NetCDF and GeoTIFF)

geohydroconvert.convert_to_cwatm(data, data_type, reso_m=None, EPSG_out=None, EPSG_in=None, coords_extent='Bretagne')[source]

Example

# Standard:
ds = gc.prepare_CWatM_input(data = r"",
                            data_type = 'DRIAS', reso_m = 8000, EPSG_out = 3035)

# Crop coefficients:
gc.prepare_CWatM_input(data = r"D:- Postdoc- Travaux_CWatM_EBRdatainput_1km_LeMeulandcovergrasslandcropCoefficientGrassland_10days_refined.nc",
                       data_type = "crop coeff", reso_m = 1000, EPSG_out = 3035, EPSG_in = 4326)

Parameters

data : str

File to convert.

data_type : str

Data type: 'ERA5' | 'mask' | 'soil depth' | 'DRIAS'

reso_m : float

Output resolution [m]: 75 | 1000 | 5000 | 8000

EPSG_out : int

Output coordinate reference system: 2154 | 3035 | 4326…

EPSG_in : int

Input coordinate reference system

coords_extent : list or str

Spatial extent: [x_min, x_max, y_min, y_max] (for Elias: [3202405.9658273868262768, 4314478.7219533622264862, 2668023.2067883270792663, 2972043.0969522628001869]) -> shape = (35, 22)

or keywords: "Bretagne", "from_input"

Returns

None.

geohydroconvert.correct_bias(input_file, correct_factor=1, to_dailysum=True, progressive=False)[source]

This function is rather meant to be used on the final formatted data. correct_factor:

  • for precipitations (hourly) : 1.8

  • for precipitations (daily sum): 0.087

  • for radiations (daily sum): 0.0715 (and progressive = True)

  • for potential evapotranspiration pos (daily sum): 0.04

geohydroconvert.date_to_index(_start_date, _date, _freq)[source]
geohydroconvert.export(data, output_filepath)[source]
geohydroconvert.extract_watershed(d8_path, outlets_file, method)[source]
geohydroconvert.format_xy_resolution(*, resolution=None, bounds=None, shape=None)[source]

Format x_res and y_res from a resolution value/tuple/list, or from bounds and shape.

Parameters

resolution : number | iterable, optional

xy_res or (x_res, y_res). The default is None.

bounds : iterable, optional

(x_min, y_min, x_max, y_max). The default is None.

shape : iterable, optional

(height, width). The default is None.

Returns

x_res and y_res
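When only bounds and shape are provided, the resolutions follow directly from them. A minimal sketch of that arithmetic (hypothetical helper, consistent with the argument conventions above):

def res_from_bounds_shape(bounds, shape):
    # bounds = (x_min, y_min, x_max, y_max); shape = (height, width)
    x_min, y_min, x_max, y_max = bounds
    height, width = shape
    x_res = (x_max - x_min) / width
    y_res = (y_max - y_min) / height
    return x_res, y_res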

geohydroconvert.georef(*, data, data_type='other', include_crs=None, export_opt=False, crs=None)[source]

Description

External-source data frequently show formatting flaws (missing CRS, non-standard coordinates, incompatibility with QGIS…). This function generates a standardized raster or shapefile, in particular regarding its metadata, which eases geoprocessing operations as well as visualization in QGIS.

Example

import geoconvert as gc
gc.georef(data = r"D:CWatMraw_resultstest1modflow_watertable_monthavg.nc",
          data_type = 'CWatM')

Parameters

data : str or xr.Dataset (or xr.DataArray)

Path to the file to modify (the original file will not be altered; a new file '(…)_QGIS.nc' will be created).

data_type : str
Data type:

'modflow' | 'DRIAS-Climat 2020' | 'DRIAS-Eau 2021' 'SIM 2021' | 'DRIAS-Climat 2022' 'Climat 2022' | 'DRIAS-Eau 2024' 'SIM 2024' | 'CWatM' | 'autre' 'other' (case insensitive)

include_crs : bool, optional

DESCRIPTION. The default is True.

export_opt : bool, optional

DESCRIPTION. The default is True. The created NetCDF is saved directly in the same folder as the original file, with 'georef' appended to its name.

crs : int, optional

Destination CRS, only necessary when data_type == 'other'. The default is None.

Returns

xarray.Dataset or geopandas.GeoDataFrame.

geohydroconvert.get_filelist(data, filetype='.nc')[source]

This function converts a folder (or a file) into a list of relevant files.

Parameters

data: str or iterable

Folder, filepath or iterable of filepaths

filetype: str

Extension.

Returns

data_folder : str

Root of the files.

filelist : list of str

List of files.

geohydroconvert.get_shape(x_res, y_res, bounds, x0=0, y0=0)[source]
geohydroconvert.gzip(data, complevel=3, shuffle=False)[source]

Quick tool to apply lossless compression on a NetCDF file using gzip.

examples

gc.gzip(filepath_comp99.8, complevel = 4, shuffle = True)
gc.gzip(filepath_drias2022like, complevel = 5)

Parameters

data : TYPE

DESCRIPTION.

Returns

None.

geohydroconvert.hourly_to_daily(data, mode='sum')[source]
geohydroconvert.hourly_to_daily_old(*, data, mode='mean', **kwargs)[source]

Example

import geoconvert as gc

# full example:
gc.hourly_to_daily(input_file = r"D:/2011-2021_hourly Temperature.nc",
                   mode = 'max', output_path = r"D:/2011-2021_daily Temperature Max.nc",
                   fields = ['t2m', 'tp'])

# input_file can also be a folder:
gc.hourly_to_daily(input_file = r"D:- Postdoc- Travaux- Veille- Donnees8- MeteoERA5Brittany est",
                   mode = 'mean')

Parameters

input_file : str, or list of str

Can be a path to a file (or a list of paths), or a path to a folder, in which case all the files in this folder will be processed.

mode : str, or list of str, optional

= 'mean' (default) | 'max' | 'min' | 'sum'

**kwargs

fields : str or list of str, optional

e.g.: ['t2m', 'tp', 'u10', 'v10', …] (if not specified, all fields are considered)

output_path : str, optional

e.g.: [r"D:/2011-2021_daily Temperature Max.nc"] (if not specified, output_name is made up according to the arguments)

Returns

None. Processed files are created in the output destination folder.
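Such an aggregation essentially boils down to an xarray resampling. A minimal sketch (file names and field are assumptions, not the actual implementation):

import xarray as xr

ds = xr.open_dataset("2011-2021_hourly Temperature.nc")   # hypothetical input
daily_mean = ds.resample(time="1D").mean()                # mode = 'mean'
daily_max = ds["t2m"].resample(time="1D").max()           # mode = 'max', single field
daily_max.to_netcdf("2011-2021_daily Temperature Max.nc")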

geohydroconvert.index_to_date(_start_date, _time_index, _freq)[source]
geohydroconvert.load_any(data, name=None, infer_time=False, **xr_kwargs)[source]

This function loads any common spatio-temporal file or variable, without the need to think about the file or variable type.

import geoconvert as gc
data_ds = gc.load_any(r'D:data.nc', decode_times = True, decode_coords = 'all')

Parameters

data : TYPE

DESCRIPTION.

name : TYPE, optional

DESCRIPTION. The default is None.

**xr_kwargs: keyword args

Arguments passed to the xarray.open_dataset function call. May contain:

decode_coords, decode_times, decode_cf, … (see help(xr.open_dataset))

Returns

data_ds : TYPE

DESCRIPTION.

geohydroconvert.main_space_coords(data_ds)[source]
geohydroconvert.main_time_coord(data_ds)[source]
geohydroconvert.main_var(data_ds)[source]
geohydroconvert.merge_folder(data)[source]

This function merges all the NetCDF files inside a folder.

Parameters

data: str or iterable

Folder, filepath or iterable of filepaths.

Returns

Merged xarray.Dataset.

geohydroconvert.name_xml_attributes(*, output_file, fields)[source]
geohydroconvert.nearest(x=None, y=None, x0=700012.5, y0=6600037.5, res=75)[source]

Example

import geoconvert as gc
gc.nearest(x = 210054)
gc.nearest(y = 6761020)

Parameters

x : float, optional

Value of the x coordinate (or longitude). The default is None.

y : float, optional

Value of the y coordinate (or latitude). The default is None.

Returns

By default, this function returns the nearest value (of x or y) aligned on the grid of the IGN BD ALTI topographic maps. The x0, y0 and res values can be changed to align on other grids.
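The alignment amounts to rounding to the nearest node of a regular grid defined by an origin and a resolution. A minimal sketch of that arithmetic (hypothetical helper, not necessarily the exact implementation):

def nearest_sketch(value, origin=700012.5, res=75):
    # Snap a coordinate onto the grid origin + k*res (BD ALTI defaults)
    return origin + round((value - origin) / res) * res

print(nearest_sketch(210054))  # nearest grid-aligned x value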

geohydroconvert.pack(data, nbits=16)[source]

examples

Parameters

data : TYPE

DESCRIPTION.

Returns

None.

geohydroconvert.pack_value(unpacked_value, scale_factor, add_offset)[source]

Compute the packed value from the original value, a scale factor and an offset.

Parameters

unpacked_value : numeric

Original value.

scale_factor : numeric

Scale factor, multiplied to the original value.

add_offset : numeric

Offset added to the original value.

Returns

numeric

Packed value.
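Assuming the standard netCDF convention (unpacked = packed * scale_factor + add_offset), packing and unpacking are simply inverses of each other. A minimal sketch (hypothetical helpers):

def pack_sketch(unpacked_value, scale_factor, add_offset):
    # Integer value actually stored in the NetCDF file
    return round((unpacked_value - add_offset) / scale_factor)

def unpack_sketch(packed_value, scale_factor, add_offset):
    # Physical value recovered at read time
    return packed_value * scale_factor + add_offset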

geohydroconvert.pick_dates_fields(*, input_file, output_format='NetCDF', **kwargs)[source]

% DESCRIPTION: This function extracts the specified dates or fields from NetCDF files that contain multiple dates or fields, and exports it as a single file.

% EXAMPLE:
import geoconvert as gc
gc.pick_dates_fields(input_file = r"D:/path/test.nc",
                     dates = ['2020-10-15', '2021-10-15'])

% OPTIONAL ARGUMENTS: > output_format = ‘NetCDF’ (default) | ‘GeoTIFF’ > kwargs:

> dates = [‘2021-10-15’, ‘2021-10-19’] > fields = [‘T2M’, ‘PRECIP’, …]

geohydroconvert.process_rht(shp_file, attrs_file, fields='all')[source]

Adds the attributes into the shapefile, so that it can easily be displayed in QGIS.

Parameters

shp_file : str

Path to the shapefile

attrs_file : str

Path to the external attribute table

fields : list

List of the column names to insert into the shapefile

Returns

Create a new file.

geohydroconvert.remove_double(data_folder, data_type)[source]

The data downloaded from the DRIAS website contains some datasets available for different periods (it is the same data, but one version goes from 1950 to 2005 and another from 1970 to 2005). For some models there are no duplicates of that sort. These duplicates are unnecessary. This script moves them to a subfolder named "doublons".

Example

folder_list = [r"Eau-SWIAV_Saisonnier_EXPLORE2-2024_historical",
               r"Eau-SWIAV_Saisonnier_EXPLORE2-2024_rcp45",
               r"Eau-SWIAV_Saisonnier_EXPLORE2-2024_rcp85"]

root_folder = r"D:- Postdoc- Travaux- Veille- Donnees8- MeteoDRIASDRIAS-EauEXPLORE2-SIM2 2024Pays BasqueIndicateurs"

for folder in folder_list:
    base_folder = os.path.join(root_folder, folder)
    subfolder_list = [os.path.join(base_folder, f)
                      for f in os.listdir(base_folder)
                      if os.path.isdir(os.path.join(base_folder, f))]
    for subfolder in subfolder_list:
        gc.remove_double(subfolder, data_type = "indicateurs drias eau 2024")

geohydroconvert.reproject(data, *, src_crs=None, base_template=None, bounds=None, x0=None, y0=None, mask=None, **rio_kwargs)[source]

Reproject space-time data, using rioxarray.reproject().

Parameters

data : str, pathlib.Path, xarray.Dataset or xarray.DataArray

Data to reproject. Supported file formats are .tif and .nc.

src_crs : int, str or rasterio.crs.CRS, optional, default None

Source coordinate reference system. For integers, src_crs refers to the EPSG code. For strings, src_crs can be an OGC WKT string or a Proj.4 string.

base_template : str, pathlib.Path, xarray.DataArray or geopandas.GeoDataFrame, optional, default None

Filepath, used as a template for the spatial profile. Supported file formats are .tif, .nc and .shp.

bounds : tuple (float, float, float, float), optional, default None

Boundaries of the target domain: (x_min, y_min, x_max, y_max)

x0: number, optional, default None

Origin of the X-axis, used to align the reprojection grid.

y0: number, optional, default None

Origin of the Y-axis, used to align the reprojection grid.

mask : str or pathlib.Path, optional, default None

Filepath or geopandas.GeoDataFrame of the mask.

**rio_kwargs : keyword args, optional, defaults are None

Arguments passed to the xarray.Dataset.rio.reproject() function call.

Note: These arguments take priority over base_template attributes.

May contain:
  • dst_crs : str

  • resolution : float or tuple

  • shape : tuple (int, int)

  • transform : Affine

  • resampling

  • nodata : float or None

  • see help(xarray.Dataset.rio.reproject)

Returns

Reprojected xarray.Dataset.
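For reference, the underlying rioxarray call looks like the sketch below (assumed file names and parameters; the wrapper above adds the template, bounds, grid-alignment and mask handling on top of it):

import xarray as xr
import rioxarray  # registers the .rio accessor
from rasterio.enums import Resampling

ds = xr.open_dataset("input.nc", decode_coords="all")
ds = ds.rio.write_crs(4326)                   # only needed if the CRS is missing
ds_3035 = ds.rio.reproject("EPSG:3035",
                           resolution=1000,
                           resampling=Resampling.bilinear)
ds_3035.to_netcdf("input_epsg3035.nc")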

geohydroconvert.river_pct(input_file, value)[source]

Creates artificial modflow_river_percentage inputs (in *.nc) to use for drainage.

Parameters

input_file : str

Original modflow_river_percentage.tif file to duplicate/modify

value : float

Value to impose on cells (from 0 to 1, not in percent!). This value is added to the original values as a fraction of the remaining "non-river" fraction (see the sketch below):

For example, value = 0.3 (30%):
  • cells with 0 are filled with 0.3

  • cells with 1 remain the same

  • cells with 0.8 take the value 0.86, because 30% of what should have been capillary rise becomes baseflow (0.8 + 0.3*(1-0.8))

  • cells with 0.5 take the value 0.65 (0.5 + 0.3*(1-0.5))
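A minimal sketch of that rule (plain arithmetic, hypothetical helper name):

def add_river_fraction(original, value):
    # new = original + value * (1 - original), applied cell by cell
    return original + value * (1 - original)

print(add_river_fraction(0.0, 0.3))  # 0.3
print(add_river_fraction(0.8, 0.3))  # 0.86
print(add_river_fraction(1.0, 0.3))  # 1.0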

Returns

None.

geohydroconvert.secondary_era5_climvar(data)[source]
This function computes the secondary data from ERA5-Land data, such as:
  • crop and water standard ETP from pan evaporation

  • wind speed from U- and V-components

  • relative humidity from T°, P and dewpoint

Parameters

data : filepath or xarray.Dataset (or xarray.DataArray)

Main dataset used to generate secondary quantities.

Returns

None. Generates intended files.

geohydroconvert.shortname(data, data_type, ext='nc')[source]

ext : str

geohydroconvert.standard_fill_value(*, data_ds, attrs, encod)[source]
geohydroconvert.standard_grid_mapping(data, epsg=None)[source]
QGIS needs a standard structure for grid_mapping information:
  • grid_mapping info should be in encodings and not in attrs

  • grid_mapping info should be stored in a coordinate named 'spatial_ref'

In MeteoFrance data, these QGIS standards are not met. This function standardizes the grid_mapping handling, so that it is compatible with QGIS.
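A minimal sketch of such a standardization with rioxarray (assumed file name and EPSG code; the actual function may proceed differently):

import xarray as xr
import rioxarray  # registers the .rio accessor

ds = xr.open_dataset("meteofrance_file.nc")            # hypothetical input
ds = ds.rio.write_crs(2154)                            # adds a 'spatial_ref' coordinate
for var in ds.data_vars:
    ds[var].attrs.pop("grid_mapping", None)            # not in attrs...
    ds[var].encoding["grid_mapping"] = "spatial_ref"   # ...but in encodings
ds.to_netcdf("meteofrance_file_standard.nc")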

Parameters

data : TYPE

DESCRIPTION.

epsg : TYPE

DESCRIPTION.

Returns

data_ds : TYPE

DESCRIPTION.

geohydroconvert.switch_direction_map(input_file, input_mapping, output_mapping)[source]
geohydroconvert.time_series(*, input_file, epsg_coords=None, epsg_data=None, coords=None, mode='mean', dates=None, fields=None, cumul=False)[source]

% DESCRIPTION: This function extracts the temporal data at one location given by its coordinates.

% EXAMPLE:
import geoconvert as gc
era5 = gc.time_series(input_file = r"D:- Postdoc- Travaux- Veille- Donnees8- MeteoERA5Brittanydaily/2011-2021_Temperature_daily_mean.nc",
                      coords = (-2.199337, 48.17824), epsg = 4326, fields = 't2m')

cwatm_ds = gc.time_series(input_file = r"D:- Postdoc- Travaux_CWatM_EBR esults aw_results_calib_groundwater_base_Ronansum_gwRecharge_daily.nc",
                          coords = (3417964, 2858067), epsg = 3035)

% OPTIONAL ARGUMENTS:
> coords = coordinates of one point

/!\ Coordinates should be indicated in (X,Y) or (lon,lat) order (and not (lat,lon)!)

> coords can also indicate a mask:

coords = ‘all’ | filepath to a mask.tiff | filepath to a mask.shp

> epsg_coords = 4326 | 3035 | 2154 | 27572 | etc.

EPSG of the coords ! Useless if coords is a mask that includes a CRS

> epsg_data = same principle, for data without any included information about CRS > mode = ‘mean’ (standard) | ‘sum’ | ‘max’ | ‘min’

> dates = [‘2021-09’, ‘2021-12-01’ …] > fields = [‘T2M’, ‘PRECIP’, …]

geohydroconvert.to_instant(data, derivative=False)[source]
geohydroconvert.transform_nc(*, input_file, x_shift=0, y_shift=0, x_size=1, y_size=1)[source]
EXAMPLE:

import datatransform as dt
dt.transform_nc(input_file = r"D:- Postdoc- Travaux_CWatM_EBRdatainput_1km_LeMeulandsurface opodemmin.nc",
                x_shift = 200, y_shift = 400)

geohydroconvert.transform_tif(*, input_file, x_shift=0, y_shift=0, x_size=1, y_size=1)[source]
EXAMPLE:

import datatransform as dt
dt.transform_tif(input_file = r"D:- Postdoc- Travaux_CWatM_EBRdatainput_1km_LeMeureamapsmask_cwatm_LeMeu_1km.tif",
                 x_shift = 200, y_shift = 300)

geohydroconvert.tss_to_dataframe(*, input_file, skip_rows, start_date, cumul=False)[source]

base = gc.tss_to_dataframe(input_file = r"D:- Postdoc- Travaux_CWatM_EBR esults aw_results_prelim_cotech‚2-03-19_basedischarge_daily.tss",
                           skip_rows = 4, start_date = '1991-08-01')

precip = gc.tss_to_dataframe(input_file = r"D:- Postdoc- Travaux_CWatM_EBR esults aw_results_artif‚2-03-25_basePrecipitation_daily.tss",
                             skip_rows = 4, start_date = '2000-01-01')

precip.val = precip.val*534000000/86400  # (the topographic catchment of the Meu is 471 851 238 m2)
precip['rolling_mean'] = precip['val'].rolling(10).mean()

input_file : str

Path to the input file

skip_rows : int

Number of rows to remove at the top of the file.

start_date : str or datetime

Date of the first value in the file. /!\ If a str, it must be in the "%Y-%m-%d" format.

df : pandas.DataFrame

To do: retrieve the start_date from the settings file referenced at the beginning of the *.tss file, then check the SpinUp.

geohydroconvert.unpack_value(packed_value, scale_factor, add_offset)[source]

Retrieve the original value from a packed value, a scale factor and an offset.

Parameters

packed_value : numeric

Value to unpack.

scale_factor : numeric

Scale factor that was multiplied to the original value to retrieve.

add_offset : numeric

Offset that was added to the original value to retrieve.

Returns

numeric

Original unpacked value.

geohydroconvert.unzip(data)[source]

In some cases, especially for loading in QGIS, it is much quicker to load uncompressed netcdf than compressed netcdf. This function only applies to non-destructive compression.

Parameters

data : TYPE

DESCRIPTION.

Returns

None.

geohydroconvert.use_valid_time(data_ds)[source]

Use 'valid_time' as the temporal coordinate and standardize its name to 'time'. If it is not the main time coordinate, swap it with the main time coordinate.

Parameters

data_ds : xarray.Dataset

Dataset whose temporal coordinate should be renamed.

Returns

data_ds : xarray.Dataset

Dataset with the modified name for the temporal coordinate.
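A minimal sketch of what this standardization amounts to with xarray (assuming 'valid_time' is present; the actual function may handle more cases):

import xarray as xr

def use_valid_time_sketch(data_ds: xr.Dataset) -> xr.Dataset:
    if "valid_time" in data_ds.coords:
        if "valid_time" not in data_ds.dims and "time" in data_ds.dims:
            # make 'valid_time' the indexing dimension instead of 'time'
            data_ds = data_ds.swap_dims({"time": "valid_time"})
            data_ds = data_ds.drop_vars("time", errors="ignore")
        data_ds = data_ds.rename({"valid_time": "time"})
    return data_ds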

geohydroconvert.xr_to_pd(xr_data)[source]

Format xr objects (such as those from gc.time_series) into pandas.DataFrames formatted as in gc.tss_to_dataframe.

Parameters

xr_data : xarray.Dataset or xarray.DataArray

Initial data to convert into pandas.DataFrame NB: xr_data needs to have only one dimension.

Returns

Pandas.DataFrame

PAGAIE_interface

Created on Tue Sep 17 16:58:10 2024

@author: Alexandre Kenshilik Coche @contact: alexandre.co@hotmail.fr

PAGAIE_interface is an interface gathering the geoconvert functions relevant for geoprocessing geographic data within the Eau et Territoire methodology (https://eau-et-territoire.org/). Unlike geoconvert, trajectoire_toolbox offers a selection of the functions available in geoconvert, as well as their translation into French.

PAGAIE_interface.compresser(data, nbits=16)[source]
PAGAIE_interface.convertir_cwatm(data, data_type)[source]
PAGAIE_interface.dezipper(data)[source]

In some cases, especially for loading netcdf files into QGIS as a mesh (MESH), it is much quicker to load an uncompressed netcdf file than a compressed one. This function only applies to non-destructive compression.

PAGAIE_interface.exporter(data, output_filepath)[source]
PAGAIE_interface.formater(data, data_type, *, mask=None, bounds=None, resolution=None, x0=None, y0=None, base_template=None, **rio_kwargs)[source]

# DEM
ttbox.formater(os.path.join(r"D:- Postdoc- Travaux- Veille- Donnees",
                            "0- MNTIGNBDALTIV2_2-0_25M_ASC_LAMB93-IGN69_PAYS-BASQUE"),
               'BD ALTI',
               mask = os.path.join(r"D:- Postdoc- Travaux- Veille- Donnees",
                                   "15- Territoire Pays Basquemasque_zone_etude.shp"),
               resolution = [200, 1000], resampling = 9)  # min

ttbox.formater(os.path.join(r"D:- Postdoc- Travaux- Veille- Donnees",
                            "0- MNTIGNBDALTIV2_2-0_25M_ASC_LAMB93-IGN69_PAYS-BASQUE"),
               'BD ALTI',
               mask = os.path.join(r"D:- Postdoc- Travaux- Veille- Donnees",
                                   "15- Territoire Pays Basquemasque_zone_etude.shp"),
               resolution = [200, 1000], dst_crs = 27572, resampling = 9)  # min

# Climatic
ttbox.formater(os.path.join(r"D:- Postdoc- Travaux- Veille- Donnees",
                            "8- MeteoDRIASDRIAS-ClimatEXPLORE2-Climat2022", "Model1",
                            "evspsblpotAdjust_France_MPI-M-MPI-ESM-LR_historical_r1i1p1_CLMcom-CCLM4-8-17_v1_MF-ADAMONT-SAFRAN-1980-2011_day_19500101-20051231_Hg0175.nc"),
               'DRIAS 2022',
               mask = os.path.join(r"D:- Postdoc- Travaux- Veille- Donnees",
                                   "15- Territoire Pays Basquemasque_zone_etude.shp"),
               resolution = 1000, resampling = 5)  # average

ttbox.formater(data = r"D:- Postdoc- Travaux- Veille- Donnees8- MeteoDRIASDRIAS-ClimatEXPLORE2-Climat2022Model1",
               data_type = 'DRIAS 2022', resolution = 1000, resampling = 5,
               mask = r"D:- Postdoc- Travaux- Veille- Donnees15- Territoire Pays Basquemasque_zone_etude.shp")

ttbox.formater(data = r"D:- Postdoc- Travaux- Veille- Donnees8- MeteoDRIASDRIAS-ClimatEXPLORE2-Climat2022Model1",
               data_type = 'DRIAS 2022',
               mask = r"D:- Postdoc- Travaux- Veille- Donnees15- Territoire Pays Basquemasque_zone_etude.shp")

ttbox.formater(data = r"D:- Postdoc- Travaux- Veille- Donnees8- MeteoDRIASDRIAS-ClimatEXPLORE2-Climat2022Model4",
               data_type = 'DRIAS 2022',
               mask = [r"D:- Postdoc- Travaux- Veille- Donnees15- Territoire Pays Basquemasque_zone_etude.shp",
                       r"D:- Postdoc- Travaux- Veille- Donnees- Territoire Annecymasque_zone_etude.shp"])

data : TYPE

DESCRIPTION.

data_type : TYPE

DESCRIPTION.

* : TYPE

DESCRIPTION.

mask : TYPE, optional

DESCRIPTION. The default is None.

resolution : TYPE, optional

DESCRIPTION. The default is None.

resampling : TYPE, optional

DESCRIPTION. The default is None.

None.

PAGAIE_interface.georeferencer(*, data, data_type='other', include_crs=True, export_opt=False, crs=None)[source]

External-source data frequently show formatting flaws (missing CRS, non-standard coordinates, incompatibility with QGIS…). This function generates a standardized raster or shapefile, in particular regarding its metadata, which eases geoprocessing operations as well as visualization in QGIS.

Parameters

data : str or xarray.Dataset

Path (str) to a .tif or .nc raster, or xarray.Dataset.

scr_source : TYPE, optional

DESCRIPTION. The default is None.

scr_dest : TYPE, optional

DESCRIPTION. The default is None.

Returns

xarray.dataset

DESCRIPTION.

PAGAIE_interface.liste_fichiers(data, filetype)[source]
PAGAIE_interface.ouvrir(data, name=None, infer_time=False, **xr_kwargs)[source]
PAGAIE_interface.reprojeter(data, *, src_crs=None, base_template=None, bounds=None, x0=None, y0=None, mask=None, **rio_kwargs)[source]
PAGAIE_interface.var_principale(data_ds)[source]
PAGAIE_interface.zipper(data, complevel=3, shuffle=False)[source]

Lossless compression of a NetCDF file.

PAGAIE_interface.zones_basses(mnt_raster)[source]

graphic

ncplot

Created on Mon Mar 21 13:16:05 2022

@author: Alexandre Kenshilik Coche @contact: alexandre.co@hotmail.fr

ncplot.plot_time_series(*, figweb=None, data, labels, title='title', linecolors=None, fillcolors=None, cumul=False, date_ini_cumul=None, reference=None, ref_norm=None, mean_norm=False, mean_center=False, legendgroup=None, legendgrouptitle_text=None, stack=False, col=None, row=None, lwidths=None, lstyles=None, yaxis='y1', fill=None, mode='lines', markers=None, showlegend=True, visible=True, bar_widths=None)[source]

Description

This function provides a wrapper to facilitate the use of plotly.graph_objects class. It facilitates the input of several arguments:

  • data can be passed in any format

  • colors can be passed in any format (np.arrays, lists of strings…) which makes it possible to use indifferently plotly or matplotlib colormaps functions.

  • colors can be passed universally as the linecolors and fillcolors arguments, no matter which graphical function plotly then uses (for instance go.Bar normally needs colors to be passed in the marker dict, whereas go.Scatter needs colors to be passed in the line dict as well as through a fillcolor argument)

It also offers additional treatments in an easy way:
  • plot cumulative values

  • normalized values

This function is particularly appropriate to plot time series.

Example

import cwatplot as cwp
[_, _, figweb] = cwp.plot_time_series(data = [dfA, dfV],
                                      labels = ['Altrui', 'Vergogn'])

Parameters

figweb: plotly figure

Can plot on top of a previous plotly figure.

data: array of pandas.DataFrames

Data to plot.

labels: array of strings

Texts for legend.

title: string

Title used for the matplotlib figure (fig1, ax1). Not used for plotly figure (figweb).

linecolors: np.array

Colors are stored in [R, G, B, Alpha]. For instance: linecolors = [[1.0, 0.5, 0.4, 1],[…]].

cumul: bool

Option to plot cumulated curves (True).

date_ini_cumul: string

The string should indicate the date in the format ‘YYYY-MM-DD’. (for instance: date_ini_cumul = ‘2000-07-31’) Only used when cumul = True.

reference: pandas.DataFrame

Used for displaying metrics (NSE, NSElog, VOLerr), computed against the reference data provided here.

ref_norm: pandas.DataFrame (or xarray.DataSet, beta version…)

Used for plotting values normalized against the provided reference data.

mean_norm: bool

Option to normalize each curve against its mean (True).

mean_center: bool

Option to center each curve on its mean (True).

legendgroup: string

To group curves under the provided group identifier. One group at a time.

legendgrouptitle_text:

Text associated with the group identifier.

stack: bool

Option to plot stacked curves (True).

col: int

Column number for subplots.

row: int

Row number for subplots.

visible: {True, False, “legendonly”}, optional, default True

Determines whether or not this trace is visible. If "legendonly", the trace is not drawn, but can appear as a legend item (provided that the legend itself is visible).

mode: {“markers”, “lines”, “lines+markers”, “bar”}, optional, default “lines”

To select the representation mode.

markers : list of dict

Returns

fig1: matplotlib figure

OBSOLETE: recent developments (normalized curves, stacked curves…) have not been implemented in this figure.

ax1: matplotlib axis

Related to the previous figure. OBSOLETE.

figweb: plotly figure

This figure version includes all options.

ncplot.precip_like_discharge(*, input_file)[source]

% EXAMPLE: import cwatplot as cwp cwp.precip_like_discharge(input_file = input_file)

% ARGUMENTS > input_file = precipitation file > freq = 'daily' | 'monthly'

ncplot.tss_to_dataframe(*, input_file, skip_rows, start_date)[source]

# Base
base = cwp.tss_to_dataframe(input_file = r"D:- Postdoc- Travaux_CWatM_EBR esults aw_results_prelim_cotech‚2-03-19_basedischarge_daily.tss",
                            skip_rows = 4, start_date = '1991-08-01')

# Virginie
virg = cwp.tss_to_dataframe(input_file = r"D:- Postdoc- Travaux_CWatM_EBRdata%ARCHIVE fichiers Virginiesimulationssim0discharge_daily.tss",
                            skip_rows = 0, start_date = '1991-01-01')

# New base
base = cwp.tss_to_dataframe(input_file = r"D:- Postdoc- Travaux_CWatM_EBR esults aw_results_artif‚2-03-25_basedischarge_daily.tss",
                            skip_rows = 4, start_date = '2000-01-01')

# Data
data = cwp.tss_to_dataframe(input_file = r"D:- Postdoc- Travaux- Veille- Donnees- Stations et debitsDebitsHydroDataPyStations_Bretagnemeu_montfort.csv",
                            skip_rows = 0, start_date = '1969-01-01')

# Precip
precip = cwp.tss_to_dataframe(input_file = r"D:- Postdoc- Travaux_CWatM_EBR esults aw_results_artif‚2-03-25_basePrecipitation_daily.tss",
                              skip_rows = 4, start_date = '2000-01-01')

precip.val = precip.val*534000000/86400
precip['rolling_mean'] = precip['val'].rolling(10).mean()

* : TYPE

DESCRIPTION.

input_file : TYPE

DESCRIPTION.

skip_rows : TYPE

DESCRIPTION.

start_date : TYPE

DESCRIPTION.

df : TYPE

DESCRIPTION.

trajplot

class trajplot.Figure(var: str, root_folder, scenario: str = 'RCP 8.5', epsg_data=None, coords='all', epsg_coords=None, rolling_days: int = 1, period_years: int = 10, annuality='calendar', plot_type: str = None, repres: (None, <class 'str'>) = 'area', cumul: bool = False, relative: bool = False, language: str = 'fr', color='scale', plotsize='wide', name: str = '', credit: (None, <class 'str'>) = 'auto', showlegend: bool = True, shadow: (None, <class 'str'>) = None, verbose: bool = None)[source]

Bases: object

Main data available as Figure attributes

Main variables

Description

self.model_names_list

List of model names.
For example in EXPLORE2 it corresponds to the identifier of the
climatic experiment: ‘Model1’, ‘Model2’, ‘Model3’…, ‘Model17’.

self.original_data

List of the pandas.Dataframes retrieved from the NetCDF data,
for each model (climatic experiments). NetCDF data are converted
into time series by considering the spatial average over the
coords argument (mask).

self.relative_ref

Equivalent to self.original_data, but contains the time
series used as reference (historic) to compute relative
values (if user-chosen).

self.rea_data

Equivalent to self.original_data, but contains the
reanalysis time series, which are added to the plots in
order to provide a historic reference.

self.all_res

List of pd.Dataframes for each period (according to
period_years argument), containing timeseries averaged
over a year (365 days) for each model (climatic experiments)
(one column per model). The year starts on the month defined by
annuality argument.

self.graph_res

Results formatted for the plots.
Either in the form of a list of pd.Dataframes, one for each
period, each pd.Dataframe containing the aggregated result
(min, mean, sum…) from self.all_res [in case of
plot_type = 'temporality'].
Or in the form of a single pd.Dataframe containing the
aggregated values for each day [in case of plot_type is
a metric].

Examples

from watertrajectories_pytools.src.graphics import trajplot as tjp

mask = r"D:- Postdoc- Travaux- Veille- Donnees- Territoire Annecy\masque_zone_etude.shp"

F = tjp.Figure(
    var = 'T',
    root_folder = r"E:\Inputs\Climat",
    scenario = 'rcp8.5',
    coords = mask,
    rolling_days = 30,
    period_years = 10,
    annuality = 3,
    name = 'Annecy',
    )

for m in mask_dict:
    for scenario in ['SIM2', 'rcp8.5']:
        figweb, _, _ = tjp.temporality(
            var = 'PRETOT', scenario = scenario, root_folder = r"E:\Inputs\Climat",
            coords = mask_dict[m], name = m,
            period_years = 10, rolling_days = 30, cumul = False, 
            plot_type = 'temporality', annuality = 3, relative = False, 
            language = 'fr', plotsize = 'wide', verbose = True)
        figweb, _, _ = tjp.temporality(
            var = 'DRAIN', scenario = scenario, root_folder = r"E:\Inputs\Climat",
            coords = mask_dict[m], name = m,
            period_years = 10, rolling_days = 30, cumul = False, 
            plot_type = 'temporality', annuality = 10, relative = False, 
            language = 'fr', plotsize = 'wide', verbose = True)
        figweb, _, _ = tjp.temporality(
            var = 'SWI', scenario = scenario, root_folder = r"E:\Inputs\Climat",
            coords = mask_dict[m], name = m,
            period_years = 10, rolling_days = 30, cumul = False, 
            plot_type = 'temporality', annuality = 3, relative = False, 
            language = 'fr', plotsize = 'wide', verbose = True)
        figweb, _, _ = tjp.temporality(
            var = 'T', scenario = scenario, root_folder = r"E:\Inputs\Climat",
            coords = mask_dict[m], name = m,
            period_years = 10, rolling_days = 30, cumul = False, 
            plot_type = 'temporality', annuality = 10, relative = False, 
            language = 'fr', plotsize = 'wide', verbose = True)

Parameters

var : str

'PRETOT' | 'PRENEI' | 'ETP' | 'EVAPC' | 'RUNOFFC' | 'DRAINC' | 'T' | 'SWI' | …

scenario : str

'SIM2' | 'historical' | 'RCP 4.5' | 'RCP 8.5'

root_folder : str, path
Path to the folder containing climatic data.

r"D:- Postdoc- Travaux- Veille- Donnees8- Meteo" (on my PC) r"E:InputsClimat" (on the external harddrive)

epsg_data : TYPE, optional

DESCRIPTION. The default is None.

coords : TYPE, optional

DESCRIPTION. The default is 'all'.

epsg_coords : TYPE, optional

DESCRIPTION. The default is None.

language : TYPE, optional

DESCRIPTION. The default is 'fr'.

plotsize : TYPE, optional

DESCRIPTION. The default is 'wide'.

rolling_days : TYPE, optional

DESCRIPTION. The default is 1.

period_years : TYPE, optional

DESCRIPTION. The default is 10.

cumul : TYPE, optional

DESCRIPTION. The default is False.

plot_type : TYPE

DESCRIPTION.

repres : {None, "area", "bar"}, optional, default None

Only used when plot_type is a metric. Defines the type of graphical representation of the metric plot.

  • "area": curves representing rolling averages

  • "bar": rectangles representing period averages

name : str

Suffix to add to the filename. Especially useful to indicate the name of the site.

credit : str, optional, default 'auto'

To display the acknowledgement for data and conception. If 'auto', the standard info about the data source and the conception author will be displayed. To remove all mention of credits, pass credit=''.

color : 'scale', 'discrete', <colormap> (str) or list of colors, optional, default 'scale'

Colors for the plots (for now, only for temporality plots)

relative : bool

Whether the values should be computed as absolute values or relative to the reference period.

showlegend : bool, optional, default True

Whether to display the legend or not.

shadow : {None, 'last', 'first', 'firstlast', 'lastfirst', 'all'}, optional, default None

Whether to display daily values as grey shadows, and for which period.

annuality : int or str
Type of year

'calendar' | 'meteorological' | 'meteo' | 'hydrological' | 'hydro'
1 | 9 | 10

verbose: bool

Whether or not to display memory diagnostics.

agg_labels = {'decrease': ['decrease', 'diminution'], 'increase': ['increase', 'augmentation'], 'max': ['maximum', 'maximum'], 'mean': ['mean', 'moyenne'], 'min': ['minimum', 'minimum'], 'range': ['range', 'amplitude'], 'sum': ['sum', 'somme']}
classmethod get_plot_mode(plot_type)[source]
layout(*, plotsize=None, name=None, credit=None, color=None, language=None, showlegend=None, shadow=None, repres=None, cumul=None)[source]
load()[source]
Results

This method also updates the instance attributes all_res and graph_res, which respectively store the whole results and the final results used for the plot.

Warning

This method can lead to memory size issues. They seem to appear when the garbage collector does not do its job fast enough on the xarray variable data in C:ProgramDataMiniconda3envscwatenvLibsite-packagesxarraycodingvariables.py.

To solve this, you might just need to open this folder on Windows.

Examples (pipeline)

import os
mask_folder = r"D:2- Postdoc2- Travaux1- Veille4- Donnees15- Territoire Pays Basque"
mask_dict = dict()
mask_dict['cotier'] = os.path.join(mask_folder, r"Sage-cotier-basque.shp")
mask_dict['CAPB'] = os.path.join(mask_folder, r"zone_etude_fusion.shp")
mask_dict['Nive-Nivelle'] = os.path.join(mask_folder, r"Nive-Nivelle.shp")
for var in ['PRETOT', 'DRAIN', 'SWI', 'T']:
    for scenario in ['SIM2', 'rcp8.5']:
        for m in mask_dict:
            F = tjp.Figure(
                var = var, root_folder = r"E:InputsClimat", scenario = scenario,
                coords = mask_dict[m], name = m,
                rolling_days = 30, period_years = 10, )
            F.plot(plot_type = 'annual_sum')
            F.plot(plot_type = 'annual_sum', period_years = 30)
            F.plot(plot_type = 'annual_sum', period_years = 1)
            F.plot(plot_type = 'annual_scatter', agg_rule = 'sum')
            F.plot(plot_type = '15/11', period_years = 10)
            F.plot(plot_type = '15/05', period_years = 10)
            if var == 'DRAIN':
                F.plot(plot_type = '> 1.5')

classmethod metric(merge_res, startdate, plot_type)[source]
metric_labels = {'annual': ['annual {}', '{} annuelle'], 'date': ['date of {}', 'date de {}']}
metric_list = ['annual', 'date']
plot(*, rolling_days=None, period_years=None, annuality=None, plot_type=None, plotsize=None, name=None, credit=None, color=None, language=None, showlegend=None, shadow=None, repres=None, cumul=None)[source]
F = tjp.Figure(
    var = 'DRAINC', root_folder = r"E:InputsClimat", scenario = 'rcp8.5',
    coords = r"C: ile.shp",
    rolling_days = 30, annuality = 10, name = 'myCatchment', )

F.plot(plot_type = '> 1.5', period_years = 10)
F.plot(plot_type = 'annual_sum')
F.plot(plot_type = 'annual_sum', period_years = 30)
F.plot(plot_type = 'annual_scatter', period_years = 1)
F.plot(plot_type = '15/11', period_years = 10)

plot_type : str

‘temporality’ | ‘annual_sum’

None.

static timeseries(*, data_type, var, scenario, season, domain='Pays-Basque', epsg_data=None, coords='all', epsg_coords=None, language='fr', plotsize='wide', rolling_window=1, plot_type='2series')[source]
Examples

import climatic_plot as tjp

tjp.timeseries(data_type = "Indicateurs DRIAS-Eau 2024 SWIAV",
               scenario = 'historical', season = 'JJA', domain = 'Pays Basque',
               rolling_window = 10, plot_type = '2series')

# ---- Original and rolled series for SWIAV
for season in ['JJA', 'SON', 'DJF', 'MAM']:
    for scenario in ['historical', 'rcp45', 'rcp85']:
        for domain in ['Pays Basque', 'Annecy']:
            tjp.timeseries(data_type = "Indicateurs DRIAS-Eau 2024",
                           var = 'SWIAV', scenario = scenario, season = season,
                           domain = domain, rolling_window = 10, plot_type = '2series')

# ---- Same for SSWI
for season in ['JJA', 'SON', 'DJF', 'MAM']:
    for scenario in ['rcp45', 'rcp85']:
        for domain in ['Pays Basque', 'Annecy']:
            tjp.timeseries(data_type = "Indicateurs DRIAS-Eau 2024",
                           var = 'SSWI', scenario = scenario, season = season,
                           domain = domain, rolling_window = 10, plot_type = '2series')

# ---- All different colors with SIM2
tjp.timeseries(data_type = "Indicateurs DRIAS-Eau 2024",
               var = 'SWIAV', scenario = 'historical', season = 'JJA',
               domain = 'Pays Basque', rolling_window = 10, plot_type = 'all with sim2')

# ---- Narratifs
for season in ['JJA', 'SON', 'DJF', 'MAM']:
    for domain in ['Pays Basque', 'Annecy']:
        tjp.timeseries(data_type = "Indicateurs DRIAS-Eau 2024",
                       var = 'SWIAV', scenario = 'rcp85', season = season,
                       domain = domain, rolling_window = 10, plot_type = 'narratifs')

Parameters
* : TYPE

DESCRIPTION.

root_folder : TYPE

DESCRIPTION.

scenario : str

'historical' | 'rcp45' | 'rcp85'

season : str

'DJF' | 'MAM' | 'JJA' | 'SON' | 'NDJFMA' | 'MAMJJASO' | 'JJASO' | 'SONDJFM'

data_type : TYPE

DESCRIPTION.

epsg_data : TYPE, optional

DESCRIPTION. The default is None.

coords : TYPE, optional

DESCRIPTION. The default is 'all'.

epsg_coords : TYPE, optional

DESCRIPTION. The default is None.

language : TYPE, optional

DESCRIPTION. The default is 'fr'.

plotsize : TYPE, optional

DESCRIPTION. The default is 'wide'.

rolling_window : TYPE, optional

DESCRIPTION. The default is 10.

plot_type : TYPE, optional

DESCRIPTION. The default is '2series'.

: TYPE

DESCRIPTION.

Returns

None.

update(*, var=None, root_folder=None, scenario=None, epsg_data=None, coords=None, epsg_coords=None, relative=None, rolling_days=None, period_years=None, annuality=None, plot_type=None, repres=None, cumul=None, language=None, plotsize=None, color=None, name=None, credit=None, showlegend=None, shadow=None, verbose=None)[source]
classmethod update_verbose(verbose: bool)[source]
verbose: bool = False

cmapgenerator

Created on Thu Sep 5 15:55:24 2024

@author: Alexandre Kenshilik Coche

cmapgenerator.custom(n_steps, *args)[source]
args:

color1, color2, color3… in [Red Green Blue Alpha] format (values between 0 and 1)

cmapgenerator.custom_2_colors(n_steps, first_color, last_color)[source]
cmapgenerator.discrete(sequence_name='ibm', alpha=1, black=True, alternate=True, color_format='float')[source]

Generate a standardized colorscale, based on predefined color maps.

Parameters

sequence_name : {"trio", "duo", "uno", "ibm", "wong"}, optional, default "ibm"

A flag to choose the colorscale among the available ones.

  • "trio": a 3x9-color scale (+ grays) based on 9 distinct hues.

    • Two other colorscales can be derived from this one:

    • "duo": only the dark and light variations of each hue are returned.

    • "uno": only the middle variation of each hue is returned.

  • "wong": a 9-color scale (+ black) extended from Wong, adapted for colorblindness.

  • "ibm": a 6-color scale (+ black) extended from IBM colorscale, adapted for colorblindness.

alpha : None or float, optional, default 1

Transparency (from 0 to 1). If None, colors are returned without the 4th value.

black : bool, optional, default True

If False, the black color (and related gray variations) are not included in the colorscale.

alternate : bool, optional, default True

If True, the colorscale is not in rainbow order.

color_format : {"float", "rgba_str", "rgba_tuple"}, optional, default "float"

The way to define colors:

  • "float": [0.22, 0.5, 0.99, 0.85]

  • "rgba_str": "rgba(56.1, 127.5, 252.45, 0.82)"

  • "rgba_tuple": (56.1, 127.5, 252.45, 0.82)

Returns

Return a numpy.array where each row is a 1D-array [red, green, blue, alpha], with values between 0 and 1, or corresponding list with values converted to rgba tuples or strings.

cmapgenerator.to_rgba_str(color)[source]

This function can convert a color variable of any format ('float', 'rgba_tuple', 'rgba_str') and any shape (one color or several colors) into a color variable in the 'rgba_str' format, for example:

'rgb(239.95, 227.97, 66.045)'

or:

['rgb(239.95, 227.97, 66.045)',
 'rgb(135.92, 33.915, 84.915)',
 'rgb(188.95, 133.11, 254.49)',
 'rgb(212.92, 94.095, 0.51)',
 'rgb(120.105, 195.08, 236.895)']

Parameters

color : list, numpy.array, tuple or str

Input color variable to convert.

Returns

A color variable similar to the input, but in the 'rgba_str' format.
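For the 'float' input format, the conversion boils down to scaling the RGB channels by 255 and keeping alpha as is. A minimal sketch for a single color (hypothetical helper, not the library function itself):

def float_to_rgba_str(color):
    # [r, g, b, a] with values in [0, 1]  ->  'rgba(...)' string
    r, g, b, a = color
    return f"rgba({round(r * 255, 3)}, {round(g * 255, 3)}, {round(b * 255, 3)}, {a})"

print(float_to_rgba_str([0.22, 0.5, 0.99, 0.85]))
# rgba(56.1, 127.5, 252.45, 0.85)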

SIM2_tools

Created on Fri Apr 5 18:08:57 2024

@author: Alexandre Kenshilik Coche @contact: alexandre.co@hotmail.fr

Based on the work of Ronan Abhervé and Loic Duffar (https://github.com/loicduffar)

SIM2_tools.clip(filepath, maskpath)[source]
SIM2_tools.clip_folder(folder, maskpath)[source]
SIM2_tools.compress(filepath)[source]
SIM2_tools.compress_folder(folder)[source]
SIM2_tools.folder_to_netcdf(folder)[source]

Parameters

folder : str

Folder containing the .csv files.

Returns

None. Creates the .nc files in the folder ‘netcdf’

SIM2_tools.merge(filelist)[source]
SIM2_tools.merge_folder(folder)[source]
SIM2_tools.plot_map(var, *, file_folder=None, mode='sum', timemode='annual')[source]

Generates interactive maps from SIM2 data (html).

Example

import SIM2_tools as smt

smt.plot_map('PRETOT', mode = "sum", timemode = 'annual')

for timemode in ['JJA', 'SON', 'DJF', 'MAM']:
    smt.plot_map('SWI', mode = "mean", timemode = timemode, file_folder = folder)

Parameters

var : str
SIM2 variables:

‘ETP’ | ‘EVAP’ | ‘PRELIQ’ | ‘PRENEI’ | ‘PRETOT’ | ‘DRAINC’ | ‘RUNC’ | ‘T’ | ‘TINF_H’ | ‘TSUP_H’ | ‘WG_RACINE’ | ‘WGI_RACINE’ | ‘SWI’ …

mode : str, optional

‘sum’ | ‘min’ | ‘max’ | ‘mean’ | ‘ratio’ | ‘ratio_precip’ | ‘mean_cumdiff’ | ‘sum_cumdiff’ | ‘min_cumdiff’ | ‘max_cumdiff’ ‘mean_cumdiff_ratio’ | ‘sum_cumdiff_ratio’ ‘mean_deficit’ | ‘sum_deficit’ The default is “sum”.

timemode : str, optional

‘annual’ | ‘ONDJFM’ | ‘AMJJAS’ | ‘DJF’ | ‘MAM’ | ‘JJA’ | ‘SON’. The default is ‘annual’.

Returns

None. Creates the html maps.

SIM2_tools.to_netcdf(csv_file_path)[source]