snail package¶
snail - the spatial networks impact assessment library
Subpackages¶
Submodules¶
snail.cli module¶
snail.damages module¶
Damage assessment
- class snail.damages.DamageCurve[source]¶
Bases:
ABC
A damage curve
Methods
damage_fraction(exposure)
    Evaluate damage fraction for exposure to a given hazard intensity
- class snail.damages.PiecewiseLinearDamageCurve(curve: DataFrame[PiecewiseLinearDamageCurveSchema])[source]¶
Bases:
DamageCurve
A piecewise-linear damage curve
Methods
clip_curve_data(intensity, damage)
    Clip damage curve values to valid 0-1 damage range
damage_fraction(exposure)
    Evaluate damage fraction for exposure to a given hazard intensity
from_csv(fname[, intensity_col, damage_col, ...])
    Read a damage curve from a CSV file.
from_excel(fname[, sheet_name, ...])
    Read a damage curve from an Excel file.
interpolate(a, b, factor)
    Interpolate damage values between two curves
plot([ax])
    Plot a line chart of the damage curve
scale_x(x)
    Scale intensity by a factor, x
scale_y(y)
    Scale damage by a factor, y
translate_x(x)
    Translate intensity by a factor, x
translate_y(y)
    Translate damage by a factor, y
- static clip_curve_data(intensity, damage)[source]¶
Clip damage curve values to valid 0-1 damage range
- damage: Series[float]¶
- damage_fraction(exposure: array) → array [source]¶
Evaluate damage fraction for exposure to a given hazard intensity
- classmethod from_csv(fname, intensity_col='intensity', damage_col='damage_ratio', comment='#', **kwargs)[source]¶
Read a damage curve from a CSV file.
By default, the CSV should have columns named “intensity” and “damage_ratio”, with any additional header lines commented out by “#”.
Any additional keyword arguments are passed through to
pandas.read_csv
- Parameters:
- fname : str, path object or file-like object
- intensity_col : str, default “intensity”
Column name to read hazard intensity values
- damage_col : str, default “damage_ratio”
Column name to read damage values
- comment : str, default “#”
Indicates remainder of the line in the CSV should not be parsed. If found at the beginning of a line, the line will be ignored altogether.
- kwargs
see pandas.read_csv documentation
- Returns:
- PiecewiseLinearDamageCurve
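A minimal usage sketch; the CSV path below is a placeholder, not a file shipped with the package:

    import numpy as np

    from snail.damages import PiecewiseLinearDamageCurve

    # Read a curve from a CSV with "intensity" and "damage_ratio" columns
    curve = PiecewiseLinearDamageCurve.from_csv("flood_depth_damage.csv")

    # Evaluate damage fractions for an array of hazard intensities
    intensities = np.array([0.0, 0.5, 1.0, 2.0])
    fractions = curve.damage_fraction(intensities)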
- classmethod from_excel(fname, sheet_name=0, intensity_col='intensity', damage_col='damage_ratio', comment='#', **kwargs)[source]¶
Read a damage curve from an Excel file.
By default, the file should have columns named “intensity” and “damage_ratio”, with any additional header lines commented out by “#”.
Any additional keyword arguments are passed through to
pandas.read_excel
- Parameters:
- fname : str, path object or file-like object
- sheet_name : str or int
Strings are used for sheet names. Integers are used in zero-indexed sheet positions (chart sheets do not count as a sheet position).
- intensity_col : str, default “intensity”
Column name to read hazard intensity values
- damage_col : str, default “damage_ratio”
Column name to read damage values
- comment : str, default “#”
Indicates remainder of the line in the file should not be parsed. If found at the beginning of a line, the line will be ignored altogether.
- kwargs
see pandas.read_excel documentation
- Returns:
- PiecewiseLinearDamageCurve
- intensity: Series[float]¶
- classmethod interpolate(a, b, factor: float)[source]¶
Interpolate damage values between two curves
new_curve_damage = a_damage + ((b_damage - a_damage) * factor)
- Parameters:
- a : PiecewiseLinearDamageCurve
- b : PiecewiseLinearDamageCurve
- factor : float
Interpolation factor, used to calculate the new curve
- Returns:
- PiecewiseLinearDamageCurve
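For example, a curve halfway between a lower and an upper bound (both file paths are placeholders):

    from snail.damages import PiecewiseLinearDamageCurve

    lower = PiecewiseLinearDamageCurve.from_csv("curve_low.csv")
    upper = PiecewiseLinearDamageCurve.from_csv("curve_high.csv")

    # Damage values halfway between the two curves:
    # new_damage = lower_damage + (upper_damage - lower_damage) * 0.5
    mid = PiecewiseLinearDamageCurve.interpolate(lower, upper, factor=0.5)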
- class snail.damages.PiecewiseLinearDamageCurveSchema(*args, **kwargs)[source]¶
Bases:
DataFrameModel
Methods
example(**kwargs)
    Generate an example of a particular size.
get_metadata()
    Provide metadata for columns and schema level
pydantic_validate(schema_model)
    Verify that the input is a compatible dataframe model.
strategy(**kwargs)
    Create a hypothesis strategy for generating a DataFrame.
to_json_schema()
    Serialize schema metadata into json-schema format.
to_schema()
    Create DataFrameSchema from the DataFrameModel.
to_yaml([stream])
    Convert Schema to yaml using io.to_yaml.
validate(check_obj[, head, tail, sample, ...])
    Validate a DataFrame based on the schema specification.
Config
build_schema_
- class Config¶
Bases:
BaseConfig
- Attributes:
- description
- dtype
- from_format
- from_format_kwargs
- metadata
- multiindex_name
- multiindex_unique
- title
- to_format
- to_format_buffer
- to_format_kwargs
- unique
- name: str | None = 'PiecewiseLinearDamageCurveSchema'¶
name of schema
- damage: Series[float] = 'damage'¶
- intensity: Series[float] = 'intensity'¶
snail.intersection module¶
- class snail.intersection.GridDefinition(crs: str, width: int, height: int, transform: Tuple[float])[source]¶
Bases:
object
Store a raster transform and CRS
A note on transform - these six numbers define the transform from i,j cell index (column/row) coordinates in the rectangular grid to x,y geographic coordinates, in the coordinate reference system of the input and output files. They effectively form the first two rows of a 3x3 matrix:
| x |   | a b c | | i |
| y | = | d e f | | j |
| 1 |   | 0 0 1 | | 1 |
In cases without shear or rotation, a and e define scaling or grid cell size, while c and f define the offset or grid upper-left corner:
| x_scale 0       x_offset |
| 0       y_scale y_offset |
| 0       0       1        |
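As an illustration, the mapping from cell indices to coordinates can be written out directly. The transform values below are placeholders for a hypothetical global grid with 0.1-degree cells, not taken from any particular dataset:

    # Placeholder transform: 0.1-degree cells, upper-left corner at (-180, 90)
    a, b, c, d, e, f = 0.1, 0.0, -180.0, 0.0, -0.1, 90.0

    def cell_to_xy(i, j):
        # Apply the first two rows of the 3x3 matrix to (i, j, 1)
        x = a * i + b * j + c
        y = d * i + e * j + f
        return x, y

    cell_to_xy(0, 0)  # (-180.0, 90.0), the grid's upper-left corner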
Methods
from_extent(xmin, ymin, xmax, ymax, ...)
    GridDefinition for a given extent, cell size and CRS
from_raster(fname)
    GridDefinition for a raster file (readable by rasterio)
from_rasterio_dataset(dataset)
    GridDefinition for a rasterio dataset
- crs: str¶
- classmethod from_extent(xmin: float, ymin: float, xmax: float, ymax: float, cell_width: float, cell_height: float, crs)[source]¶
GridDefinition for a given extent, cell size and CRS
- height: int¶
- transform: Tuple[float]¶
- width: int¶
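A short sketch of constructing grids; the raster path and extent values are illustrative:

    from snail.intersection import GridDefinition

    # From an existing raster file
    grid = GridDefinition.from_raster("hazard.tif")

    # Or explicitly from an extent, cell size and CRS
    grid = GridDefinition.from_extent(
        xmin=-2.0, ymin=50.0, xmax=2.0, ymax=54.0,
        cell_width=0.01, cell_height=0.01, crs="EPSG:4326",
    )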
- snail.intersection.apply_indices(features: GeoDataFrame, grid: GridDefinition, index_i='index_i', index_j='index_j') → GeoDataFrame [source]¶
- snail.intersection.generate_grid_boxes(grid: GridDefinition)[source]¶
Generate all the box polygons for a grid
- snail.intersection.get_indices(geom, grid: GridDefinition, index_i='index_i', index_j='index_j') → Series [source]¶
Given a geometry, find the cell index (i, j) of its midpoint for the enclosing grid.
N.B. There is no checking whether a geometry spans more than one cell.
- snail.intersection.get_raster_values_for_splits(splits: DataFrame, data: ndarray, index_i: str = 'index_i', index_j: str = 'index_j') → Series [source]¶
For each split geometry, lookup the relevant raster value.
Cell indices must have been previously calculated and stored as index_i and index_j.
N.B. This will pass through no data values from the raster (no filtering).
- Parameters:
- splits: pandas.DataFrame
Table of features, each with cell indices to look up raster pixel. Indices must be stored under columns with names referenced by index_i and index_j.
- data: numpy.ndarray
Raster data (2D array)
- index_i: str
Column name for i-indices
- index_j: str
Column name for j-indices
- Returns:
- pd.Series
Series of raster values, with the same row indexing as splits.
- snail.intersection.prepare_points(features: GeoDataFrame) → GeoDataFrame [source]¶
Prepare points for splitting
- snail.intersection.split_features_for_rasters(features: GeoDataFrame, grids: List[GridDefinition], split_func: Callable)[source]¶
- snail.intersection.split_linestrings(linestring_features: GeoDataFrame, grid: GridDefinition) → GeoDataFrame [source]¶
Split linestrings along a grid
- snail.intersection.split_points(points: GeoDataFrame, grid: GridDefinition) → GeoDataFrame [source]¶
Split points along a grid
This is a no-op, written for equivalence when processing multiple geometry types.
- snail.intersection.split_polygons(polygon_features: GeoDataFrame, grid: GridDefinition) → GeoDataFrame [source]¶
Split polygons along a grid
- snail.intersection.split_polygons_experimental(polygon_features: GeoDataFrame, grid: GridDefinition) → GeoDataFrame [source]¶
Split polygons along a grid
Experimental implementation of split_polygons; potentially faster, but may produce incorrect results for some inputs.
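A hedged sketch of the typical splitting workflow for line features, assuming the vector and raster inputs share a CRS; the file names are placeholders:

    import geopandas as gpd
    import rasterio

    from snail.intersection import (
        GridDefinition,
        apply_indices,
        get_raster_values_for_splits,
        split_linestrings,
    )

    roads = gpd.read_file("roads.gpkg")
    grid = GridDefinition.from_raster("hazard.tif")

    # Split each linestring at grid cell boundaries, then attach cell indices
    splits = split_linestrings(roads, grid)
    splits = apply_indices(splits, grid, index_i="index_i", index_j="index_j")

    # Look up the raster value for each split geometry
    with rasterio.open("hazard.tif") as dataset:
        band = dataset.read(1)
    splits["hazard"] = get_raster_values_for_splits(splits, band)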
snail.io module¶
- snail.io.associate_raster_files(splits, rasters)[source]¶
Read values from a list of raster files for a set of indexed split geometries
- Parameters:
- splits: pandas.DataFrame
split geometries with raster indices in columns named “i_{grid_id}”, “j_{grid_id}” for each grid_id in rasters
- rasters: pandas.DataFrame
table of raster metadata with columns: key, grid_id, path, bands
- Returns:
- pandas.DataFrame
split geometries with raster data values at indexed locations
- snail.io.extend_rasters_metadata(rasters: DataFrame) → Tuple[DataFrame, List[GridDefinition]] [source]¶
- snail.io.read_raster_metadata(path) → Tuple[GridDefinition, Tuple[int]] [source]¶
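A sketch of how these helpers might fit together. It assumes (this is not confirmed by the docstrings above) that extend_rasters_metadata reads each file listed under “path” to populate grid_id and band information, and that the first grid is assigned grid_id 0, so the split index columns are named “i_0” and “j_0”:

    import pandas as pd

    from snail.io import associate_raster_files, extend_rasters_metadata

    # Placeholder table of hazard rasters; keys and paths are illustrative
    rasters = pd.DataFrame({
        "key": ["flood_rp100"],
        "path": ["hazard_rp100.tif"],
    })

    # Assumption: this reads each raster to add grid_id and band metadata,
    # and returns the distinct grid definitions it found
    rasters, grids = extend_rasters_metadata(rasters)

    # Placeholder splits table; assumes grid_id 0, hence "i_0"/"j_0" columns
    splits = pd.DataFrame({"i_0": [10, 11], "j_0": [5, 5]})

    splits_with_values = associate_raster_files(splits, rasters)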
snail.routing module¶
- snail.routing.shortest_paths(sources, destinations, graph, weight)[source]¶
Compute all shortest paths from an ensemble of sources to an ensemble of destinations.
- Parameters:
- sources
List of source node ids (string or int)
- destinations
List of destination node ids (string or int)
- graph
igraph.Graph instance representing the network
- weight
Edge attribute according to which paths should be weighted (string)
- Returns:
A list of (source, destination) tuples, and a list of lists of edge ids corresponding to the shortest paths. For each (source, destination) pair there may be zero, one or several shortest paths.
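A minimal sketch using a small igraph network with a “length” edge attribute; the graph, node ids and attribute name are illustrative, and the two documented return values are assumed to unpack as a pair:

    import igraph

    from snail.routing import shortest_paths

    # Four-node toy network with weighted edges
    graph = igraph.Graph(
        edges=[(0, 1), (1, 2), (2, 3), (0, 3)],
        edge_attrs={"length": [1.0, 2.0, 1.5, 5.0]},
    )

    # All shortest paths from nodes 0 and 1 to node 3, weighted by "length"
    pairs, paths = shortest_paths(
        sources=[0, 1], destinations=[3], graph=graph, weight="length"
    )
    # pairs: list of (source, destination) tuples
    # paths: list of lists of edge ids along each shortest path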