wolfhece.PyVertex._model

Author: HECE - University of Liege, Pierre Archambeau

Date: 2024

Copyright (c) 2024 University of Liege. All rights reserved.

This script and its content are protected by copyright law. Unauthorized copying or distribution of this file, via any medium, is strictly prohibited.

Module Contents

wolfhece.PyVertex._model._NUMPY_MISSING[source]
class wolfhece.PyVertex._model.StorageMode[source]

Bases: str, enum.Enum

Storage backend mode for cloud vertices.

DICT = 'dict'[source]
NUMPY = 'numpy'[source]
class wolfhece.PyVertex._model.wolfvertex(x: float, y: float, z: float = -99999.0)[source]

WOLF vertex — 3D point with associated values.

Represents a point in space (x, y, z) with an optional dictionary of named values (e.g. elevation, discharge, concentration…).

Variables:
  • x – X coordinate (Easting)

  • y – Y coordinate (Northing)

  • z – Z coordinate (elevation), -99999. by default (= undefined)

  • in_use – whether the vertex is active

  • values – dictionary {key: value} of associated quantities, None if empty

x: float[source]
y: float[source]
z: float[source]
values: dict[source]
in_use = True[source]
rotate(angle: float, center: tuple)[source]

Rotate the vertex

Parameters:
  • angle – angle in radians (positive for counterclockwise)

  • center – center of the rotation (x, y)
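
The rotation is the standard 2D transform about an arbitrary center. A minimal sketch of that math, using plain tuples instead of wolfvertex (an illustrative simplification; the real method mutates self.x and self.y):

```python
import math

def rotate_xy(x: float, y: float, angle: float, center: tuple) -> tuple:
    """Rotate point (x, y) around `center` by `angle` radians (CCW positive)."""
    cx, cy = center
    dx, dy = x - cx, y - cy
    cos_a, sin_a = math.cos(angle), math.sin(angle)
    # translate to the center, rotate, translate back
    return (cx + dx * cos_a - dy * sin_a,
            cy + dx * sin_a + dy * cos_a)

# Quarter-turn of (2, 1) around (1, 1) lands at (1, 2)
x2, y2 = rotate_xy(2.0, 1.0, math.pi / 2, (1.0, 1.0))
```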

as_shapelypoint() shapely.geometry.Point[source]

Convert the vertex to a shapely.geometry.Point.

Returns:

Shapely Point object (x, y, z)

copy() wolfvertex[source]

Independent copy of the vertex (values are not copied).

Returns:

new wolfvertex with the same coordinates

getcoords() numpy.ndarray[source]

Return coordinates as a NumPy array [x, y, z].

Returns:

np.ndarray of shape (3,)

dist3D(v: wolfvertex) float[source]

Return the 3D distance to another vertex

Parameters:

v – vertex to compare

dist2D(v: wolfvertex) float[source]

Return the 2D distance to another vertex

Parameters:

v – vertex to compare

addvalue(id, value)[source]

Add an associated value to the vertex.

Creates the values dictionary if it does not exist yet.

Parameters:
  • id – key identifying the value (e.g. 'discharge', 'concentration')

  • value – value to associate (numeric, string…)

add_value(id, value)[source]

Alias for addvalue() — add an associated value to the vertex.

Parameters:
  • id – key identifying the value

  • value – value to associate

add_values(values: dict)[source]

Add multiple associated values to the vertex at once.

Parameters:

values – dictionary {key: value} to merge into the vertex

getvalue(id)[source]

Return the value associated with the key id.

Parameters:

id – key of the requested value

Returns:

the value if it exists, None otherwise

get_value(id)[source]

Alias for getvalue() — return an associated value from the vertex.

Parameters:

id – key of the requested value

Returns:

the value if it exists, None otherwise

get_values(ids: list) dict[source]

Return a subset of values associated with the vertex.

Parameters:

ids – list of keys to extract

Returns:

dictionary {key: value} containing only the found keys

limit2bounds(bounds=None)[source]

Clamp the vertex coordinates to the given bounding box.

Modifies self.x and self.y in place.

Parameters:

bounds – bounding box [[xmin, xmax], [ymin, ymax]]. If None, no action is taken.

is_like(v: wolfvertex, tol: float = 1e-06) bool[source]

Test near-equality with another vertex.

Comparison is done component-wise (x, y, z) using absolute differences.

Parameters:
  • v – reference vertex to compare against

  • tol – absolute tolerance on each component (default 1e-6)

Returns:

True if all three differences are below tol
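
The comparison can be sketched with plain tuples (the helper name and tuple representation are illustrative, not the library API):

```python
def near_equal(a: tuple, b: tuple, tol: float = 1e-6) -> bool:
    """Component-wise near-equality on (x, y, z), as documented for
    wolfvertex.is_like(): each absolute difference must be below tol."""
    return all(abs(ca - cb) < tol for ca, cb in zip(a, b))

same = near_equal((1.0, 2.0, 3.0), (1.0, 2.0, 3.0 + 1e-9))
different = near_equal((1.0, 2.0, 3.0), (1.0, 2.0, 3.1))
```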

class wolfhece.PyVertex._model.cloudproperties(lines=[], parent: cloud_vertices = None)[source]

Visual and legend properties for a cloud of vertices.

Stores the display configuration (color, size, style, transparency…) as well as the legend parameters associated with the cloud.

Variables:
  • used – whether these properties are active

  • color – drawing color (RGB integer encoded by getIfromRGB)

  • width – point size in pixels

  • style – rendering style index (see Cloud_Styles in the GUI)

  • alpha – opacity (0 = opaque, 255 = fully transparent)

  • filled – symbol fill (True = filled)

  • legendvisible – legend display (True = visible)

  • transparent – OpenGL transparency toggle

  • animationspeed – animation speed multiplier (cycles per second)

  • animationmode – animation mode (0=none, 1=blink, 2=fade, 3=grow, 4=seasons, 5=pulse)

  • animationamplitude – animation amplitude factor

  • legendtext – text displayed in the legend

  • legendrelpos – relative legend position (1–9, numpad layout)

  • legendx – absolute X coordinate of the legend (when legendrelpos == 0)

  • legendy – absolute Y coordinate of the legend (when legendrelpos == 0)

  • legendbold – bold text

  • legenditalic – italic text

  • legendunderlined – underlined text

  • legendfontname – font name (e.g. 'Arial')

  • legendfontsize – font size in points

  • legendcolor – legend text color (RGB integer)

  • legendpriority – rendering priority used by Font_Priority

  • legendorientation – text orientation angle in degrees

  • legendwidth – legend texture width in pixels

  • legendheight – legend texture height in pixels

  • renderingmode – OpenGL backend for cloud points (0=list, 1=shader)

  • symbolpreset – symbol selected from the bundled wolfhece/symbols library

  • symbolsource – shared symbol image path used when style=SYMBOL

  • symboltintwithcolor – if True, multiplies symbol by draw color

  • symbolrastersize – target SVG rasterization size in pixels

  • symbolrotation – per-cloud symbol rotation in degrees (CCW, Option A)

  • symbolscale – per-cloud symbol scale factor (Option A)

  • highlightselectedpoint – if True, highlights currently selected cloud point in interactive tools

  • highlightselectedpointsizefactor – multiplicative size factor for the selected point highlight marker

  • highlightselectedpointcolor – color used for selected point highlight marker (RGB integer)

  • extrude – if True, the cloud is extruded in 3D (rendering only)

used: bool[source]
color: int[source]
width: int[source]
style: int[source]
alpha: int[source]
filled: bool[source]
legendvisible: bool[source]
transparent: bool[source]
animationspeed: float[source]
animationmode: int[source]
animationamplitude: float[source]
legendtext: str[source]
legendrelpos: int[source]
legendx: float[source]
legendy: float[source]
legendbold: bool[source]
legenditalic: bool[source]
legendunderlined: bool[source]
legendfontname[source]
legendfontsize: int[source]
legendcolor: int[source]
legendpriority: int[source]
legendorientation: int[source]
legendwidth: int[source]
legendheight: int[source]
renderingmode: int[source]
symbolpreset: str[source]
symbolsource: str[source]
symboltintwithcolor: bool[source]
symbolrastersize: int[source]
symbolrotation: float[source]
symbolscale: float[source]
highlightselectedpoint: bool[source]
highlightselectedpointsizefactor: float[source]
highlightselectedpointcolor: int[source]
extrude: bool = False[source]
parent = None[source]
to_dict() dict[source]

Serialize properties to a plain dictionary.

Colors are stored as [R, G, B] lists for readability.

classmethod from_dict(d: dict, parent: cloud_vertices = None) cloudproperties[source]

Create a cloudproperties from a dictionary.

Parameters:
  • d – dictionary as produced by to_dict().

  • parent – owning cloud instance.

Returns:

new cloudproperties instance.

class wolfhece.PyVertex._model.cloud_vertices(fname: str | pathlib.Path = '', fromxls: str = '', header: bool = False, toload=True, idx: str = '', bbox: shapely.geometry.Polygon = None, dxf_imported_elts=['MTEXT', 'INSERT'], **kwargs)[source]

3D point cloud with associated values.

Supported formats: DXF (.dxf), Shapefile (.shp), ASCII (all others).

For ASCII files, the separator is auto-detected among tab, semicolon, comma and space.

DXF format is recognised by the file extension; otherwise an ASCII file is assumed.

If a header exists on the first line, it must be indicated with header=True.

The total number of columns (nb) determines the interpretation:

  • nb > 3: a header is required.

  • if header[2].lower() == 'z', the 3rd column is the Z elevation; otherwise all columns beyond the 1st are values associated with (X, Y).

  • number of values = nb − (2 or 3) depending on whether Z is present.

Data are stored in myvertices (indexed dictionary):

{0: {'vertex': wolfvertex, 'head1': val1, 'head2': val2, ...},
 1: {'vertex': wolfvertex, ...}, ...}

See readfile(), import_from_dxf(), import_from_shapefile().

Variables:
  • filename – source file path (empty string if created in memory)

  • myvertices – dictionary {id: {'vertex': wolfvertex, key: value, ...}}

  • xbounds – tuple (xmin, xmax) of the X extent

  • ybounds – tuple (ymin, ymax) of the Y extent

  • zbounds – tuple (zmin, zmax) of the Z extent

  • myprop – visual properties (cloudproperties instance)

  • mytree – Scipy KDTree, None until create_kdtree() is called

  • loaded – True if data was loaded successfully

  • idx – text identifier of the cloud

filename: str[source]
property myvertices: dict[source]

Legacy row storage accessor.

Reading myvertices guarantees a dict-based view. If the cloud is currently in NumPy backend mode, rows are materialized first.

_myvertices: dict[int, dict['vertex':wolfvertex, str:float]][source]
xbounds: tuple[source]
ybounds: tuple[source]
zbounds: tuple[source]
myprop: cloudproperties[source]
mytree: scipy.spatial.KDTree[source]
_mytree_dim: int | None[source]
AUTO_NUMPY_SWITCH_THRESHOLD = 100000[source]
idx = ''[source]
parent_collection: cloud_of_clouds | None = None[source]
xmin = 0.0[source]
ymin = 0.0[source]
xmax = 0.0[source]
ymax = 0.0[source]
_numpy_xyz = None[source]
_numpy_keys = None[source]
_numpy_values[source]
loaded = False[source]
_header = False[source]
on_changed_vertices()[source]

Hook called after vertices are added/removed/updated.

Base model implementation is a no-op. GUI subclasses can override this method to invalidate OpenGL caches and trigger a redraw.

_make_cloud_vertices(**kwargs) cloud_vertices[source]

Create a sibling cloud_vertices. GUI subclass returns the GUI variant.

_make_cloudproperties(**kwargs) cloudproperties[source]

Create a cloudproperties. GUI subclass returns the GUI variant.

_make_cloudproperties_from_dict(d: dict, **kwargs) cloudproperties[source]

Create a cloudproperties from a dictionary. GUI subclass returns the GUI variant.

property myname: str[source]

Cloud name accessor (alias for idx).

_materialize_numpy_storage()[source]

Convert optional NumPy storage back to legacy dict rows.

_reset_storage_for_reload()[source]

Clear both storage backends before a full data reload.

property storage_mode: StorageMode[source]

Current storage backend for cloud rows.

switch_storage_mode(mode: Literal['dict', 'numpy'] | StorageMode = StorageMode.DICT)[source]

Switch storage backend between legacy dict rows and NumPy arrays.

Parameters:

mode – target backend. 'dict' materializes rows in myvertices; 'numpy' compacts current rows into array storage while preserving row keys.
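
The compaction idea can be sketched as follows; the row layout and column names here are illustrative, not the actual private attributes of the class:

```python
import numpy as np

# Legacy dict rows: non-contiguous integer keys are preserved
rows = {0: {'xyz': (0.0, 0.0, 1.0), 'discharge': 5.0},
        3: {'xyz': (1.0, 2.0, 3.0), 'discharge': 7.5}}

# 'numpy' direction: compact rows into parallel arrays, keeping row keys
keys = np.array(sorted(rows))                       # row identifiers
xyz = np.array([rows[k]['xyz'] for k in keys])      # (n, 3) coordinates
values = np.array([rows[k]['discharge'] for k in keys])

# 'dict' direction: materialize array storage back into dict rows
restored = {int(k): {'xyz': tuple(xyz[i]), 'discharge': float(values[i])}
            for i, k in enumerate(keys)}
```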

create_kdtree()[source]

Build a Scipy KDTree from the current vertex coordinates.

The KDTree is stored in self.mytree and used by find_nearest() for nearest-neighbor queries.

static _is_undefined_z(z: numpy.ndarray | float, atol: float = 1e-09)[source]

Return mask/flag for coordinates considered undefined in Z.

property z_dimension_mode: Literal['2d', '3d', 'mixed'][source]

Describe cloud Z content mode.

  • '2d': all Z are undefined (default sentinel -99999)

  • '3d': all Z are defined

  • 'mixed': both defined and undefined Z values coexist

_normalize_query_xyz(xyz: numpy.ndarray | list) numpy.ndarray[source]

Normalize query coordinates to a 2D float64 array.

_select_kdtree_dim(query_cols: int) int | None[source]

Choose KDTree dimensionality (2 or 3) based on cloud/query context.

_get_query_and_tree(xyz: numpy.ndarray | list)[source]

Return normalized query array, KDTree and row keys for nearest search.

find_nearest(xyz: numpy.ndarray | list, nb: int = 1)[source]

Find the nearest neighbors using a Scipy KDTree built from a copy of the vertex coordinates.

See: https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.KDTree.query.html

Parameters:
  • xyz – coordinates to find nearest neighbors – shape (n, m) - where m is the number of coordinates (2 or 3)

  • nb – number of nearest neighbors to find

Returns:

list of distances, list of wolfvertex, list of elements stored in self.myvertices – or lists of lists if xyz contains several query points
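
Internally the search relies on scipy.spatial.KDTree. A minimal standalone sketch of the same kind of query:

```python
import numpy as np
from scipy.spatial import KDTree

# Cloud coordinates (what create_kdtree() would index)
pts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
tree = KDTree(pts)

# Nearest neighbor of a single query point: returns (distance, row index)
dist, idx = tree.query([1.0, 1.0], k=1)
```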

find_nearest_id(xyz: numpy.ndarray | list, max_distance: float | None = None)[source]

Find nearest row id(s) for one or several query points.

Parameters:
  • xyz – coordinates used for nearest-neighbor search. Accepted shapes: [x, y, z] or [[x, y, z], ...].

  • max_distance – optional upper bound on accepted nearest distance. If provided and the nearest point is farther than this value, None is returned for that query.

Returns:

nearest id (single query), list of nearest ids (multiple queries), or None on error.

init_from_nparray(array: numpy.ndarray, numpy_backend: bool | None = None)[source]

Populate the cloud from a NumPy array.

Existing vertices are overwritten (added with sequential keys starting from 0).

Parameters:
  • array – array of shape (n, 3) with columns X, Y, Z.

  • numpy_backend

    backend selection mode:
    • True: force NumPy backend;

    • False: force legacy dict rows;

    • None (default): auto-switch to NumPy

    when len(array) >= AUTO_NUMPY_SWITCH_THRESHOLD (default threshold: 100_000 points).

readfile(fname: str = '', header: bool = False)[source]

Read an ASCII file, with or without a header.

Parameters:
  • fname – (str) file name

  • header – (bool) header in file (first line with column names)

The separator is automatically detected among: tab, semicolon, space and comma.

The file must contain at least 2 columns (X, Y) and may contain a third one (Z) and more (values). If values are present, they are stored in the dictionary with their header name as key.
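
One plausible auto-detection heuristic can be sketched as follows; the candidate separators come from the documentation, but the "most columns wins" rule is an assumption about the implementation, not a confirmed detail:

```python
def detect_separator(first_line: str) -> str:
    """Pick the delimiter that splits the first line into the most columns."""
    candidates = ['\t', ';', ',', ' ']   # documented candidate separators
    return max(candidates, key=lambda sep: len(first_line.split(sep)))

sep = detect_separator('1.0;2.0;3.0')
```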

import_from_dxf(fn: str = '', imported_elts=['MTEXT', 'INSERT'])[source]

Import points from a DXF file using the ezdxf library.

Supported entity types: MTEXT, INSERT, POLYLINE, LWPOLYLINE, LINE. Only entities on visible (active) layers are imported. For MTEXT/INSERT, points with Z == 0 are skipped.

Parameters:
  • fn – DXF file path. If empty or non-existent, no action is taken.

  • imported_elts – list of DXF entity types to import (e.g. ['MTEXT', 'INSERT', 'POLYLINE', 'LINE']).

Returns:

number of imported points, or None if the file is not found.

_resolve_shapefile_column(gdf, targetcolumn: str) str[source]

Resolve the column to use when targetcolumn is not found.

Called by import_from_shapefile() when neither targetcolumn nor 'geometry' are available. The base implementation logs an error and returns None. The GUI subclass overrides this to present an interactive column chooser.

Parameters:
  • gdf – GeoDataFrame already loaded from the Shapefile.

  • targetcolumn – the originally requested column name.

Returns:

resolved column name, or None to abort the import.

_resolve_value_columns(gdf, value_columns, excluded_columns: list[str] | None = None) list[str][source]

Resolve the list of attributes to import from a GeoDataFrame.

Parameters:
  • gdf – source GeoDataFrame.

  • value_columns – None (disabled), 'all' or an explicit iterable.

  • excluded_columns – columns that must not be imported.

Returns:

list of selected column names.

_import_from_geodataframe(gdf, source_label: str, targetcolumn: str = 'X1_Y1_Z1', value_columns=None)[source]

Import points and optional attributes from an existing GeoDataFrame.

Three extraction strategies are tried in order:

  1. targetcolumn is present: each cell is a 'X,Y,Z' comma-separated string (format used by SPW-ARNE-DCENN).

  2. A geometry column is present: coordinates are read from the Shapely Point geometry of each row.

  3. Neither: _resolve_shapefile_column() is called to let subclasses (e.g. the GUI) choose an alternative column.

After import, find_minmax() is called and self.loaded is set to True.

Parameters:
  • gdf – source GeoDataFrame (already read by the caller).

  • source_label – human-readable file path used in log/error messages.

  • targetcolumn – name of the column containing 'X,Y,Z' coordinate strings. Default: 'X1_Y1_Z1'.

  • value_columns – optional attribute import selector. None (default) imports geometry only; 'all' imports all non-geometry/non-coordinate columns; a list/tuple/set imports the named columns only. If the number of imported rows reaches AUTO_NUMPY_SWITCH_THRESHOLD, storage is automatically switched to the NumPy backend.

Returns:

number of imported points, or None on error.

import_from_shapefile(fn: str = '', targetcolumn: str = 'X1_Y1_Z1', bbox: shapely.geometry.Polygon = None, value_columns=None)[source]

Import points from a Shapefile using geopandas.

Two extraction modes:

  1. If targetcolumn exists in the columns, each row is read as a 'X,Y,Z' string (format used by SPW-ARNE-DCENN).

  2. Otherwise, the geometry column is used (Point or MultiPoint).

If neither is found, _resolve_shapefile_column() is called to allow subclasses (e.g. the GUI) to propose an alternative column.

Parameters:
  • fn – Shapefile path (.shp). If empty or non-existent, no action is taken.

  • targetcolumn – column name containing coordinates as 'X,Y,Z'.

  • bbox – Shapely polygon delimiting the area of interest. Passed to gpd.read_file(fn, bbox=...) to spatially filter features during reading.

  • value_columns – optional attribute import selector. None (default) imports geometry only; 'all' imports all non-geometry/non-XYZ-source columns; explicit list imports selected columns.

Returns:

number of imported points, or None on error.
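
In the targetcolumn mode, each point is stored as one comma-separated 'X,Y,Z' string. A minimal sketch of parsing such a cell (the helper name is illustrative):

```python
def parse_xyz_cell(cell: str) -> tuple:
    """Parse one 'X,Y,Z' comma-separated cell (the SPW-ARNE-DCENN
    format) into float coordinates."""
    x, y, z = (float(v) for v in cell.split(','))
    return x, y, z

coords = parse_xyz_cell('252000.5,135500.25,12.3')
```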

import_from_geopackage(fn: str = '', layer: str = None, targetcolumn: str = 'X1_Y1_Z1', bbox: shapely.geometry.Polygon = None, value_columns=None)[source]

Import points from a GeoPackage using geopandas.

Parameters:
  • fn – GeoPackage path (.gpkg). If empty or non-existent, no action is taken.

  • layer – optional layer name. If None, geopandas default layer is used.

  • targetcolumn – column name containing coordinates as 'X,Y,Z'.

  • bbox – optional spatial filter passed to gpd.read_file.

  • value_columns – optional attribute import selector.

Returns:

number of imported points, or None on error.

_resolve_export_value_columns(value_columns) list[str][source]

Resolve which attributes should be exported.

Parameters:

value_columns – None (no attributes), 'all' or an explicit iterable.

Returns:

ordered list of attribute names to export.

_build_geodataframe_for_export(value_columns='all', include_xyz_column: bool = True, xyz_column: str = 'X1_Y1_Z1', crs=None)[source]

Build a GeoDataFrame representation of the cloud for export.

export_to_shapefile(fn: str, value_columns='all', include_xyz_column: bool = True, xyz_column: str = 'X1_Y1_Z1', crs=None)[source]

Export cloud vertices to a Shapefile using geopandas.

Parameters:
  • fn – destination .shp path.

  • value_columns – attributes to export (None, 'all' or explicit iterable).

  • include_xyz_column – write X,Y,Z CSV string column for roundtrip import.

  • xyz_column – name of the optional XYZ text column.

  • crs – optional CRS forwarded to GeoDataFrame.

Returns:

number of exported points, or None on error.

export_to_geopackage(fn: str, layer: str = 'points', value_columns='all', include_xyz_column: bool = True, xyz_column: str = 'X1_Y1_Z1', crs=None)[source]

Export cloud vertices to a GeoPackage using geopandas.

Parameters:
  • fn – destination .gpkg path.

  • layer – destination layer name.

  • value_columns – attributes to export (None, 'all' or explicit iterable).

  • include_xyz_column – write X,Y,Z CSV string column for roundtrip import.

  • xyz_column – name of the optional XYZ text column.

  • crs – optional CRS forwarded to GeoDataFrame.

Returns:

number of exported points, or None on error.

to_dict() dict[source]

Serialize the cloud to a plain dictionary.

The dictionary contains the cloud identifier, visual properties, column headers, and vertices as a compact 2-D list.

Returns:

dictionary suitable for json.dumps().

classmethod from_dict(d: dict, **kwargs) cloud_vertices[source]

Create a cloud_vertices from a dictionary.

Parameters:
  • d – dictionary as produced by to_dict().

  • kwargs – extra keyword arguments forwarded to the constructor (e.g. mapviewer, plotted for the GUI subclass).

Returns:

new cloud_vertices instance.

save_json(fn: str | pathlib.Path, indent: int = 2) None[source]

Save the cloud to a JSON file.

Parameters:
  • fn – destination file path.

  • indent – JSON indentation level (None for compact output).

classmethod load_json(fn: str | pathlib.Path, **kwargs) cloud_vertices[source]

Load a cloud from a JSON file.

Parameters:
  • fn – source file path.

  • kwargs – forwarded to from_dict() (and then to the constructor, e.g. mapviewer, plotted).

Returns:

new cloud_vertices instance.

Raises:

ValueError – if the file format is not recognized.

duplicate(idx: str | None = None, **kwargs) cloud_vertices[source]

Create a deep copy of this cloud.

All vertices, properties and metadata are duplicated. The new cloud shares no mutable state with the original.

Parameters:
  • idx – identifier for the copy. If None, the original idx is reused.

  • kwargs – extra keyword arguments forwarded to the constructor (e.g. mapviewer for the GUI).

Returns:

independent cloud_vertices copy.

copy(idx: str | None = None, **kwargs) cloud_vertices[source]

Alias for duplicate method.

iter_on_vertices()[source]

Generator over the cloud vertices.

Yields:

wolfvertex instances one by one.

iter_rows()[source]

Yield cloud rows as (row_id, row_dict) for both backends.

row_dict always contains at least {'vertex': wolfvertex(...)}. In NumPy backend mode, additional value columns are attached when present for that row.

property nbvertices: int[source]

Number of vertices in the cloud

property xyz: numpy.ndarray[source]

Alias for get_xyz method

get_xyz(key='vertex') numpy.ndarray[source]

Return the vertices as numpy array

Parameters:

key – key to be used for the third column (Z): 'vertex' or any key in the dictionary. If 'vertex', [[X, Y, Z]] is returned.

property has_values: bool[source]

Check if the cloud has values (other than X,Y,Z)

property has_value_columns: bool[source]

Whether rows include value columns in addition to vertex.

This explicit alias mirrors the historical _header flag while keeping backward compatibility.

property header: list[str][source]

Return the headers of the cloud

get_vertices() list[wolfvertex][source]

Return all vertices as a list.

Returns:

list of wolfvertex instances (references, not copies).

get_multipoint() shapely.geometry.MultiPoint[source]

Convert the cloud to a shapely.geometry.MultiPoint.

Returns:

MultiPoint object containing all vertices.

_updatebounds(newvert: wolfvertex = None, newcloud: dict = None)[source]

Update the bounds of the cloud

Parameters:
  • newvert – (optional) vertex added to the cloud

  • newcloud – (optional) cloud added to the cloud

Either newvert or newcloud can be passed during an add_vertex operation. This way, the bounds are updated without iterating over all the vertices, which is expected to be faster.

find_minmax(force: bool = False)[source]

Compute the spatial bounds of the cloud.

Updates xmin, xmax, ymin, ymax, zmin, zmax as well as xbounds, ybounds, zbounds.

Parameters:

force – if True, recompute from all coordinates. If False, no action is taken (bounds are already up-to-date thanks to incremental updates from _updatebounds()).
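
The incremental idea behind _updatebounds() can be sketched as follows (the dictionary layout is illustrative, not the actual attributes):

```python
def update_bounds(bounds: dict, x: float, y: float) -> dict:
    """Widen the bounding box with one new point, so a full pass over
    the vertices (find_minmax) is only needed when forced."""
    bounds['xmin'] = min(bounds['xmin'], x)
    bounds['xmax'] = max(bounds['xmax'], x)
    bounds['ymin'] = min(bounds['ymin'], y)
    bounds['ymax'] = max(bounds['ymax'], y)
    return bounds

b = {'xmin': 0.0, 'xmax': 1.0, 'ymin': 0.0, 'ymax': 1.0}
b = update_bounds(b, 5.0, -2.0)
```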

add_vertex(vertextoadd: wolfvertex = None, id=None, cloud: dict = None)[source]

Add one or more vertices to the cloud.

Two usage modes:

  • vertextoadd: add a single vertex. If id is not provided, the identifier defaults to len(myvertices).

  • cloud: merge a dictionary {id: {'vertex': wolfvertex, ...}} into myvertices. Existing keys are overwritten.

Spatial bounds are updated incrementally.

Parameters:
  • vertextoadd – single vertex to add.

  • id – integer vertex identifier. None = auto-assigned.

  • cloud – dictionary of vertices to merge. wolfvertex instances are referenced, not copied.

remove_vertex(id: int)[source]

Remove a vertex from the cloud and recompute bounds.

Parameters:

id – integer identifier of the vertex to remove. A warning is logged if the identifier does not exist.

move_vertex(id: int, x: float, y: float, z: float | None = None, invalidate_tree: bool = True, notify: bool = True, recompute_bounds: bool = True) bool[source]

Move an existing vertex while preserving its row identifier.

Parameters:
  • id – row identifier to move.

  • x – new X coordinate.

  • y – new Y coordinate.

  • z – optional new Z coordinate. If None, keeps current Z.

  • invalidate_tree – if True, clears KDTree cache.

  • notify – if True, calls on_changed_vertices().

  • recompute_bounds – if True, recomputes cloud bounds.

Returns:

True if the vertex was moved, False otherwise.

remove_nearest_vertex(x: float, y: float, z: float = 0.0, max_distance: float | None = None)[source]

Remove the vertex closest to the given coordinates and recompute bounds.

remove_last_vertex()[source]

Remove the last added vertex (highest identifier) from the cloud and recompute bounds.

add_vertices(vertices: list[wolfvertex])[source]

Add a list of vertices to the cloud.

Identifiers are assigned sequentially starting from len(myvertices).

Parameters:

vertices – list of wolfvertex instances to add.

add_values_by_id_list(id: str, values: list[float])[source]

Add values to the cloud

Parameters:
  • id – key under which the values are stored

  • values – list of values to add – must have the same length as the number of vertices

split_by_keys(keys: str | list[str], include_missing: bool = False) dict[source]

Split the cloud into sub-clouds grouped by one or several keys.

Grouping keys are read from each row dictionary (same keys as iter_rows()). For a single key, the returned mapping uses the scalar value as dictionary key. For multiple keys, it uses tuples.

Parameters:
  • keys – one key name or a list of key names used for grouping.

  • include_missing – if True, rows missing at least one grouping key are still included with value None for missing entries. If False, such rows are ignored.

Returns:

{group_value: cloud_vertices} where each cloud contains only rows belonging to this group.
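
The grouping behaviour for a single key can be sketched as follows; the row layout mirrors iter_rows() conceptually, and the helper is illustrative:

```python
from collections import defaultdict

rows = {0: {'zone': 'A', 'z': 1.0},
        1: {'zone': 'B', 'z': 2.0},
        2: {'zone': 'A', 'z': 3.0},
        3: {'z': 4.0}}                     # row missing the grouping key

def split_by_key(rows: dict, key: str, include_missing: bool = False) -> dict:
    """Group row dicts by the value of one key: rows missing the key are
    dropped unless include_missing is True (then grouped under None)."""
    groups = defaultdict(dict)
    for rid, row in rows.items():
        if key not in row and not include_missing:
            continue
        groups[row.get(key)][rid] = row
    return dict(groups)

by_zone = split_by_key(rows, 'zone')
```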

split_cloud(splitter, inside_prefix: str = 'inside_', outside_prefix: str = 'outside_')[source]

Split this cloud into inside/outside subsets using an external splitter.

The splitter object is expected to provide either:

  • select_points_inside(cloud_vertices) -> list[bool]

  • or isinside(x, y) -> bool

This duck-typed contract avoids importing vector classes here, preventing circular imports between PyVertex and pyvertexvectors.

Parameters:
  • splitter – Geometry-like object used to classify points.

  • inside_prefix – Prefix for the inside cloud identifier.

  • outside_prefix – Prefix for the outside cloud identifier.

Returns:

(cloud_inside, cloud_outside).
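
A minimal splitter satisfying the second form of the contract; the class itself is hypothetical, only the isinside(x, y) signature comes from the documented duck-typed protocol:

```python
class CircleSplitter:
    """Any object with isinside(x, y) -> bool can serve as a splitter,
    so no vector class needs to be imported."""
    def __init__(self, cx: float, cy: float, radius: float):
        self.cx, self.cy, self.radius = cx, cy, radius

    def isinside(self, x: float, y: float) -> bool:
        return (x - self.cx) ** 2 + (y - self.cy) ** 2 <= self.radius ** 2

splitter = CircleSplitter(0.0, 0.0, 1.5)
points = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
inside = [p for p in points if splitter.isinside(*p)]
outside = [p for p in points if not splitter.isinside(*p)]
```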

split_by_vector(vector_like, inside_prefix: str = 'inside_', outside_prefix: str = 'outside_')[source]

Explicit alias to split this cloud using a vector-like splitter.

This is a convenience wrapper around split_cloud() for the common case where the splitter is a vector object exposing select_points_inside and/or isinside.

Parameters:
  • vector_like – Vector-like object used to classify points.

  • inside_prefix – Prefix for the inside cloud identifier.

  • outside_prefix – Prefix for the outside cloud identifier.

Returns:

(cloud_inside, cloud_outside).

set_legend_column(key: str, visible: bool = True)[source]

Set the legend to display a named value column for each point.

Parameters:
  • key – Column name to display. Use '' for the row identifier, 'ID' for the sequential index, 'X' / 'Y' / 'Z' for coordinates, or any column name previously added with add_values_by_id_list().

  • visible – Whether to make the legend visible. Defaults to True.

interp_on_array(myarray, key: str = 'vertex', method: Literal['linear', 'nearest', 'cubic'] = 'linear')[source]

Interpolation of the cloud on a 2D array

Parameters:
  • myarray – WolfArray instance

  • key – key to be used for the third column (Z): 'vertex' or any key in the dictionary

  • method – interpolation method – ‘linear’, ‘nearest’, ‘cubic’

See the interpolate_on_cloud method of WolfArray for more information.

projectontrace(trace, return_cloud: bool = True, proximity: float = 99999.0)[source]

Project the cloud onto a trace (polyline of type vector).

Each point is orthogonally projected onto the trace; the curvilinear coordinate s (distance along the trace) and the original point’s elevation z are extracted.

Parameters:
  • trace – vector instance (must provide asshapely_ls() and myname attributes).

  • return_cloud – if True, return a new cloud_vertices whose vertices are (s, z). If False, return two lists (s_list, z_list).

  • proximity – search radius around the trace (in map units). Only points within this buffer are kept. The default value (99999) keeps all points.

Returns:

cloud_vertices or tuple (list[float], list[float]) depending on return_cloud.
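
The per-point math reduces to an orthogonal projection. A sketch for a single segment (the real method projects onto a full polyline via Shapely):

```python
import numpy as np

def project_on_segment(p, a, b):
    """Orthogonally project point p onto segment a-b. Returns (s, distance):
    s is the curvilinear coordinate along the segment, distance the
    offset of the original point from the trace."""
    p, a, b = (np.asarray(v, dtype=float) for v in (p, a, b))
    ab = b - a
    # parametric position of the foot of the perpendicular, clamped to [0, 1]
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    foot = a + t * ab
    s = t * np.linalg.norm(ab)
    return float(s), float(np.linalg.norm(p - foot))

s, d = project_on_segment((3.0, 2.0), (0.0, 0.0), (10.0, 0.0))
```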

class wolfhece.PyVertex._model.cloud_of_clouds(idx: str = '', clouds: list[cloud_vertices] | None = None)[source]

Collection of cloud_vertices instances.

Mirrors the Zones → zone → vector hierarchy, but for point clouds: cloud_of_clouds → cloud_vertices → wolfvertex.

Provides:

  • cloud management (add, remove, reorder, access by index or name);

  • bulk display-property propagation (color, width, style, alpha, legend…);

  • value manipulation across all clouds (add, get, colorize);

  • iteration helpers;

  • spatial queries (bounds, nearest).

Variables:
  • myclouds – ordered list of cloud_vertices instances.

  • idx – text identifier for the collection.

myclouds: list[cloud_vertices][source]
idx: str[source]
filename = None[source]
add_cloud(cloud: cloud_vertices) None[source]

Append a cloud to the collection.

Parameters:

cloud – cloud to add.

_make_cloud_vertices(**kwargs) cloud_vertices[source]

Create a cloud_vertices. GUI subclass returns the GUI variant.

_make_cloud_vertices_from_dict(d: dict, **kwargs) cloud_vertices[source]

Create a cloud_vertices from a dictionary. GUI subclass returns the GUI variant.

create_cloud(idx: str = '', **kwargs) cloud_vertices[source]

Create a new empty cloud and add it to the collection.

Parameters:
  • idx – text identifier for the new cloud.

  • kwargs – extra keyword arguments forwarded to the constructor.

Returns:

the newly created cloud.

remove_cloud(key: int | str) cloud_vertices | None[source]

Remove a cloud by index or name.

Parameters:

key – integer index or string idx of the cloud.

Returns:

the removed cloud, or None if not found.

_resolve(key: int | str) cloud_vertices | None[source]

Resolve a cloud by index or name.

Parameters:

key – integer index or string idx.

Returns:

cloud instance, or None if not found.

property nbclouds: int[source]

Number of clouds in the collection.

property cloud_names: list[str][source]

List of cloud identifiers.

property nbvertices: int[source]

Total number of vertices across all clouds.

find_minmax(force: bool = True)[source]

Recompute bounds for all clouds.

Parameters:

force – forwarded to each cloud’s find_minmax().

property xbounds: tuple[float, float][source]

Global X extent across all clouds.

property ybounds: tuple[float, float][source]

Global Y extent across all clouds.

property zbounds: tuple[float, float][source]

Global Z extent across all clouds.
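The global extents above are simple aggregations of the per-cloud bounds: the overall lower bound is the minimum of the per-cloud minima, and the overall upper bound is the maximum of the per-cloud maxima. A minimal standalone sketch of that aggregation (the per-cloud bounds are illustrative values, not wolfhece output):

```python
# Aggregate a global (lo, hi) extent from per-cloud extents.
def global_bounds(per_cloud_bounds):
    """per_cloud_bounds: list of (lo, hi) tuples, one per cloud."""
    los = [b[0] for b in per_cloud_bounds]
    his = [b[1] for b in per_cloud_bounds]
    return (min(los), max(his))

xb = global_bounds([(0.0, 10.0), (5.0, 20.0), (-3.0, 2.0)])
print(xb)  # (-3.0, 20.0)
```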

iter_all_vertices()[source]

Yield every vertex across all clouds.

Yields:

wolfvertex instances.

iter_all_rows()[source]

Yield (cloud_idx, row_id, row_dict) for every row across all clouds.

Yields:

(str, row_id, dict) tuples.
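The row iteration is a flat walk over a nested structure: for each cloud, every row is yielded together with the cloud's identifier. A standalone sketch with plain dicts standing in for clouds (the names and row contents are illustrative):

```python
# Sketch of iter_all_rows(): walk every cloud and yield
# (cloud_idx, row_id, row_dict) triples.
clouds = {
    "banks": {0: {"x": 0.0}, 1: {"x": 1.0}},
    "bed":   {0: {"x": 0.5}},
}

def iter_all_rows(clouds):
    for cloud_idx, rows in clouds.items():
        for row_id, row in rows.items():
            yield cloud_idx, row_id, row

print(sum(1 for _ in iter_all_rows(clouds)))  # 3
```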

add_values(key: str, values: numpy.ndarray | dict)[source]

Add values to the clouds.

Parameters:
  • key – value column identifier.

  • values – either a dict {cloud_idx: list} mapping cloud names to per-vertex value lists, or a flat ndarray whose length must equal nbvertices (values are distributed to the clouds in order).
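The flat-ndarray branch can be sketched standalone: a 1-D array whose length equals the total vertex count is split into per-cloud chunks following collection order. The cloud names and vertex counts below are illustrative, not wolfhece data structures.

```python
import numpy as np

# Split a flat value array into per-cloud chunks, in order.
counts = {"banks": 3, "bed": 2}          # vertices per cloud
values = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
assert len(values) == sum(counts.values())  # mirrors the length check

# cumulative offsets mark where each cloud's slice ends
offsets = np.cumsum(list(counts.values()))[:-1]
chunks = dict(zip(counts, np.split(values, offsets)))
print(chunks["bed"])  # [4. 5.]
```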

get_values(key: str) dict[str, numpy.ndarray][source]

Retrieve values from all clouds.

Parameters:

key – value column identifier.

Returns:

dict {cloud_idx: ndarray} for clouds that have the key.

get_all_xyz() numpy.ndarray[source]

Return all XYZ coordinates as a single array.

Returns:

(N, 3) array with all vertices concatenated.

set_color(color: int) None[source]

Set uniform drawing color for all clouds.

Parameters:

color – RGB integer (see getIfromRGB).

set_width(width: int) None[source]

Set point size for all clouds.

Parameters:

width – size in pixels.

set_style(style: int) None[source]

Set rendering style for all clouds.

Parameters:

style – style index (see Cloud_Styles).

set_alpha(alpha: int) None[source]

Set transparency for all clouds.

Parameters:

alpha – transparency value (0 = opaque, 255 = fully transparent).

set_filled(filled: bool) None[source]

Set symbol fill for all clouds.

Parameters:

filled – True for filled symbols.

set_legend_visible(visible: bool = True) None[source]

Show or hide legends for all clouds.

Parameters:

visible – True to display.

set_legend_text(text: str) None[source]

Set legend text for all clouds.

Parameters:

text – legend text.

set_legend_color(color: int) None[source]

Set legend text color for all clouds.

Parameters:

color – RGB integer.

set_legend_fontsize(size: int) None[source]

Set legend font size for all clouds.

Parameters:

size – font size in points.

set_legend_from_idx(visible: bool = True) None[source]

Set each cloud’s legend text to its own idx.

Parameters:

visible – whether to make legends visible.

find_nearest(xyz: numpy.ndarray | list, nb: int = 1)[source]

Find the nearest vertex across all clouds.

Queries each cloud’s KDTree and returns the overall nearest.

Parameters:
  • xyz – query coordinates [x, y, z] or [[x, y, z], ...].

  • nb – number of nearest neighbors.

Returns:

(distance, wolfvertex, row_dict, cloud_idx) for the closest result, or (None, None, None, None) if empty.
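The multi-cloud search amounts to querying each cloud and keeping the overall closest hit. The real method uses a KDTree per cloud; the standalone sketch below uses brute-force NumPy distances for clarity, with illustrative cloud names and coordinates.

```python
import numpy as np

# Query each cloud's points and keep the overall nearest hit.
clouds = {
    "a": np.array([[0.0, 0.0, 0.0], [5.0, 5.0, 0.0]]),
    "b": np.array([[1.0, 1.0, 1.0]]),
}

def find_nearest(clouds, xyz):
    best = (np.inf, None, None)  # (distance, cloud_idx, row_index)
    for name, pts in clouds.items():
        d = np.linalg.norm(pts - np.asarray(xyz), axis=1)
        i = int(np.argmin(d))
        if d[i] < best[0]:
            best = (float(d[i]), name, i)
    return best

nearest = find_nearest(clouds, [0.9, 0.9, 0.9])
print(nearest[1], nearest[2])  # b 0
```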

merge(idx: str = '') cloud_vertices[source]

Merge all clouds into a single cloud.

Vertex values are preserved. Cloud origin is tracked via a '__source__' value column.

Parameters:

idx – identifier for the merged cloud.

Returns:

new cloud_vertices containing all vertices.
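The merge bookkeeping can be sketched standalone: rows from every cloud are concatenated in collection order, and each row records its origin in a '__source__' value column. Plain dicts stand in for cloud rows; the names and coordinates are illustrative.

```python
# Concatenate rows from all clouds, tagging each with its origin.
clouds = {
    "banks": [{"x": 0.0, "y": 0.0}, {"x": 1.0, "y": 0.0}],
    "bed":   [{"x": 0.5, "y": -1.0}],
}

merged = []
for name, rows in clouds.items():
    for row in rows:
        # '__source__' tracks which cloud the row came from
        merged.append({**row, "__source__": name})

print(len(merged), merged[-1]["__source__"])  # 3 bed
```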

to_dict() dict[source]

Serialize the collection to a plain dictionary.

Each child cloud is serialized via its own cloud_vertices.to_dict().

classmethod from_dict(d: dict, **kwargs) cloud_of_clouds[source]

Create a cloud_of_clouds from a dictionary.

Parameters:
  • d – dictionary produced by to_dict().

  • kwargs – extra keyword arguments forwarded to each child cloud's constructor.

Returns:

new cloud_of_clouds instance.

save_json(fn: str | pathlib.Path, indent: int = 2) None[source]

Save the collection to a JSON file.

Parameters:
  • fn – destination file path.

  • indent – JSON indentation level (None for compact output).

classmethod load_json(fn: str | pathlib.Path, **kwargs) cloud_of_clouds[source]

Load a collection from a JSON file.

Parameters:
  • fn – path to the JSON file.

  • kwargs – extra keyword arguments forwarded to from_dict().

Returns:

new cloud_of_clouds instance.

Raises:

ValueError – if the file format is not cloud_of_clouds.
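The save/load pair is a JSON round trip of the to_dict() payload, with a format check guarding load_json(). The standalone sketch below uses a hypothetical 'format' key and payload layout to illustrate the pattern; it is not the exact wolfhece schema.

```python
import json
import os
import tempfile

# Hypothetical serialized payload, standing in for to_dict() output.
data = {"format": "cloud_of_clouds", "clouds": [{"idx": "banks"}]}

# Save: dump the dictionary to JSON (indent=None gives compact output).
fd, path = tempfile.mkstemp(suffix=".json")
os.close(fd)
with open(path, "w") as f:
    json.dump(data, f, indent=2)

# Load: read back and reject files of the wrong format.
with open(path) as f:
    loaded = json.load(f)
if loaded.get("format") != "cloud_of_clouds":
    raise ValueError("not a cloud_of_clouds file")
os.remove(path)
print(loaded["clouds"][0]["idx"])  # banks
```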

duplicate(idx: str | None = None, **kwargs) cloud_of_clouds[source]

Create a deep copy of this collection and all its clouds.

Every child cloud is duplicated independently; the new collection shares no mutable state with the original.

Parameters:
  • idx – identifier for the copy. If None, the original idx is reused.

  • kwargs – extra keyword arguments forwarded to each child’s constructor (e.g. mapviewer for the GUI).

Returns:

independent cloud_of_clouds copy.
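The independence guarantee of duplicate() is the usual deep-copy contract: mutating the copy (or any of its children) leaves the original untouched. A minimal sketch with plain nested dicts standing in for the collection and its clouds:

```python
import copy

# Deep-copy a nested structure; the copy shares no mutable state.
original = {"idx": "coll", "clouds": [{"idx": "banks", "rows": [1, 2]}]}
dup = copy.deepcopy(original)

# Mutating a child of the copy does not touch the original.
dup["clouds"][0]["rows"].append(3)
print(original["clouds"][0]["rows"])  # [1, 2]
```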

copy(idx: str | None = None, **kwargs) cloud_of_clouds[source]

Alias for duplicate().