Managing collections of point clouds

Note: cloud_of_clouds automatically falls back to its model-only version when wxPython is unavailable. You can also import the model explicitly: from wolfhece.PyVertex._model import cloud_of_clouds. See the Model/GUI architecture tutorial for details.

What is a cloud_of_clouds?

cloud_of_clouds is a container that groups multiple cloud_vertices instances into a single collection. It mirrors the Zones → zone → vector hierarchy, but for point clouds:

cloud_of_clouds            ←→  Zones   (collection)
    └── cloud_vertices     ←→  zone    (individual)
            └── wolfvertex ←→  vector  (single point)

Typical use cases:

  • Group survey points by category (left bank, right bank, river bed…)

  • Keep spatial data organized while computing global bounds or statistics

  • Save and load multi-cloud datasets as a single JSON file

  • Merge all clouds into a flat cloud when needed

Prerequisites

This tutorial assumes you are familiar with cloud_vertices (see cloudpoints).

[17]:
from wolfhece.PyVertex import cloud_vertices, cloud_of_clouds
import numpy as np
import tempfile, os

Creating a cloud_of_clouds

You can create an empty collection and add clouds later, or pass clouds at construction time.

[18]:
# Method 1: create an empty collection, then add clouds
coc = cloud_of_clouds(idx='survey_2024')

# Add a pre-existing cloud
left_bank = cloud_vertices(idx='left_bank')
left_bank.init_from_nparray(np.array([
    [100., 200., 50.],
    [110., 210., 51.],
    [120., 220., 49.],
]))
coc.add_cloud(left_bank)

# Or create a cloud directly inside the collection
right_bank = coc.create_cloud(idx='right_bank')
right_bank.init_from_nparray(np.array([
    [105., 250., 48.],
    [115., 260., 47.],
    [125., 270., 46.],
    [135., 280., 45.],
]))

print(f'Collection "{coc.idx}" has {coc.nbclouds} clouds and {coc.nbvertices} vertices')
print(f'Cloud names: {coc.cloud_names}')
Collection "survey_2024" has 2 clouds and 7 vertices
Cloud names: ['left_bank', 'right_bank']
[19]:
# Method 2: pass clouds at construction time
bed = cloud_vertices(idx='bed')
bed.init_from_nparray(np.array([
    [102., 230., 40.],
    [112., 240., 39.],
    [122., 250., 38.],
]))

coc2 = cloud_of_clouds(idx='survey_quick', clouds=[left_bank, right_bank, bed])
print(f'coc2: {coc2.nbclouds} clouds, {coc2.nbvertices} vertices')
coc2: 3 clouds, 10 vertices

Accessing clouds

cloud_of_clouds supports indexing by integer position or name, as well as standard Python iteration, len(), and in checks.

[20]:
# By integer index
print('First cloud:', coc[0].idx, '—', coc[0].nbvertices, 'vertices')

# By name
print('right_bank:', coc['right_bank'].nbvertices, 'vertices')

# Containment check
print('"left_bank" in coc?', 'left_bank' in coc)
print('"unknown" in coc?', 'unknown' in coc)

# Iterate
for cloud in coc:
    print(f'  {cloud.idx}: {cloud.nbvertices} pts')
First cloud: left_bank — 3 vertices
right_bank: 4 vertices
"left_bank" in coc? True
"unknown" in coc? False
  left_bank: 3 pts
  right_bank: 4 pts
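The dual int/str indexing shown above can be sketched with a minimal pure-Python container (illustrative only, not the actual wolfhece implementation):

```python
class MiniCollection:
    """Toy container illustrating indexing by position or by name,
    as cloud_of_clouds does (sketch only)."""
    def __init__(self):
        self._clouds = []          # list of (name, payload), preserves order

    def add(self, idx, payload):
        self._clouds.append((idx, payload))

    def __getitem__(self, key):
        if isinstance(key, int):   # positional access
            return self._clouds[key][1]
        for idx, payload in self._clouds:   # lookup by name
            if idx == key:
                return payload
        raise KeyError(key)

    def __contains__(self, name):
        return any(idx == name for idx, _ in self._clouds)

    def __len__(self):
        return len(self._clouds)

coll = MiniCollection()
coll.add('left_bank', [1, 2, 3])
coll.add('right_bank', [4, 5, 6, 7])
print(coll[0])             # by position -> [1, 2, 3]
print(coll['right_bank'])  # by name     -> [4, 5, 6, 7]
print('left_bank' in coll, len(coll))
```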

Removing a cloud

[21]:
# Remove by name (returns the removed cloud, or None)
removed = coc.remove_cloud('right_bank')
print(f'Removed: {removed.idx}' if removed else 'Not found')
print(f'Remaining: {coc.cloud_names}')

# Re-add it for the rest of the tutorial
coc.add_cloud(removed)
Removed: right_bank
Remaining: ['left_bank']

Global bounds

The collection computes the global bounding box across all its clouds.

[22]:
coc.find_minmax()

print(f'X bounds: {coc.xbounds}')
print(f'Y bounds: {coc.ybounds}')
print(f'Z bounds: {coc.zbounds}')
X bounds: (100.0, 135.0)
Y bounds: (200.0, 280.0)
Z bounds: (45.0, 51.0)
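Conceptually, the global bounding box is just the column-wise min/max over the concatenated coordinates of all clouds. A standalone numpy sketch using the tutorial's points:

```python
import numpy as np

# Per-cloud coordinate arrays (X, Y, Z columns) from the tutorial
left_bank = np.array([[100., 200., 50.], [110., 210., 51.], [120., 220., 49.]])
right_bank = np.array([[105., 250., 48.], [115., 260., 47.],
                       [125., 270., 46.], [135., 280., 45.]])

# Stack all points, then take the min/max of each column
all_pts = np.vstack([left_bank, right_bank])
mins, maxs = all_pts.min(axis=0), all_pts.max(axis=0)
print('X bounds:', (float(mins[0]), float(maxs[0])))   # (100.0, 135.0)
print('Y bounds:', (float(mins[1]), float(maxs[1])))   # (200.0, 280.0)
print('Z bounds:', (float(mins[2]), float(maxs[2])))   # (45.0, 51.0)
```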

Adding values to clouds

Extra value columns (beyond X, Y, Z) can be assigned to each cloud. add_values accepts either:

  • a dict {cloud_name: array} — values are assigned to matching clouds by name

  • a flat ndarray of length nbvertices — distributed in order across clouds

[23]:
# Assign depths via dict
coc.add_values('depth', {
    'left_bank':  np.array([2.1, 1.8, 2.5]),
    'right_bank': np.array([3.0, 2.7, 3.2, 2.9]),
})

# Retrieve values
depths = coc.get_values('depth')
for name, vals in depths.items():
    print(f'  {name}: {vals}')
  left_bank: [2.1 1.8 2.5]
  right_bank: [3.  2.7 3.2 2.9]
[24]:
# Or assign a flat array (distributed across clouds in order)
velocities = np.array([0.5, 0.6, 0.7,   # left_bank  (3 pts)
                       1.1, 1.2, 1.3, 1.4])  # right_bank (4 pts)
coc.add_values('velocity', velocities)

vels = coc.get_values('velocity')
for name, vals in vels.items():
    print(f'  {name}: {vals}')
  left_bank: [0.5 0.6 0.7]
  right_bank: [1.1 1.2 1.3 1.4]
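The "distributed in order across clouds" behaviour amounts to splitting the flat array at the cumulative per-cloud point counts. A numpy sketch of that logic:

```python
import numpy as np

velocities = np.array([0.5, 0.6, 0.7, 1.1, 1.2, 1.3, 1.4])
counts = [3, 4]                    # points per cloud, in collection order

# np.split expects cumulative offsets, not counts
offsets = np.cumsum(counts)[:-1]   # -> [3]
chunks = np.split(velocities, offsets)
for name, vals in zip(['left_bank', 'right_bank'], chunks):
    print(name, vals)
```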

Getting all coordinates

get_all_xyz() concatenates the XYZ coordinates from every cloud into a single (N, 3) array.

[25]:
all_xyz = coc.get_all_xyz()
print(f'Shape: {all_xyz.shape}')
print(all_xyz)
Shape: (7, 3)
[[100. 200.  50.]
 [110. 210.  51.]
 [120. 220.  49.]
 [105. 250.  48.]
 [115. 260.  47.]
 [125. 270.  46.]
 [135. 280.  45.]]

Iterating over all vertices

Two iteration helpers traverse all clouds:

  • iter_all_vertices() — yields each wolfvertex object

  • iter_all_rows() — yields (cloud_idx, row_id, row_dict) with full row data

[26]:
# Quick example with iter_all_rows
for cloud_name, row_id, row in coc.iter_all_rows():
    v = row['vertex']
    print(f'  [{cloud_name}] #{row_id}: ({v.x:.1f}, {v.y:.1f}, {v.z:.1f})')
    if row_id >= 1:  # limit output
        break
  [left_bank] #0: (100.0, 200.0, 50.0)
  [left_bank] #1: (110.0, 210.0, 51.0)
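Conceptually, these helpers chain the per-cloud iterations into one flat stream. A pure-Python sketch (independent of wolfhece) of both patterns:

```python
from itertools import chain

clouds = {
    'left_bank':  [(100., 200., 50.), (110., 210., 51.), (120., 220., 49.)],
    'right_bank': [(105., 250., 48.), (115., 260., 47.),
                   (125., 270., 46.), (135., 280., 45.)],
}

# Flat iteration over every vertex, cloud by cloud (iter_all_vertices style)
all_vertices = list(chain.from_iterable(clouds.values()))
print(len(all_vertices))   # 7

# Row-style iteration yielding (cloud_name, row_id, vertex) (iter_all_rows style)
def iter_rows(clouds):
    for name, verts in clouds.items():
        for i, v in enumerate(verts):
            yield name, i, v

first = next(iter_rows(clouds))
print(first)   # ('left_bank', 0, (100.0, 200.0, 50.0))
```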

Nearest neighbor query

find_nearest(xyz, nb=1) searches across all clouds and returns the globally closest match. It returns a 4-tuple: (distance, wolfvertex, row_dict, cloud_idx).

[27]:
query = [108., 225., 0.]
dist, vert, row, cloud_idx = coc.find_nearest(query)

print(f'Nearest to {query}:')
print(f'  Cloud: {cloud_idx}')
print(f'  Vertex: ({vert.x:.1f}, {vert.y:.1f}, {vert.z:.1f})')
print(f'  Distance: {dist:.2f}')
WARNING:root:xyz is a list of floats -- converting to a list of lists
WARNING:root:xyz is a list of floats -- converting to a list of lists
Nearest to [108.0, 225.0, 0.0]:
  Cloud: left_bank
  Vertex: (120.0, 220.0, 49.0)
  Distance: 50.70
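A brute-force equivalent of the global query makes the semantics clear: compute every point-to-query distance per cloud, keep the overall minimum. (The library may well use a spatial index internally; this numpy sketch is the concept only.)

```python
import numpy as np

clouds = {
    'left_bank':  np.array([[100., 200., 50.], [110., 210., 51.], [120., 220., 49.]]),
    'right_bank': np.array([[105., 250., 48.], [115., 260., 47.],
                            [125., 270., 46.], [135., 280., 45.]]),
}
query = np.array([108., 225., 0.])

best = None
for name, pts in clouds.items():
    d = np.linalg.norm(pts - query, axis=1)   # Euclidean distance to each point
    i = int(np.argmin(d))
    if best is None or d[i] < best[0]:
        best = (float(d[i]), name, i, pts[i])

dist, cloud, row, vertex = best
print(f'{cloud}[{row}], distance {dist:.2f}')   # matches the result above
```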

Merging all clouds into one

merge() creates a new cloud_vertices that contains every vertex from every cloud. A __source__ column is automatically added to track which cloud each vertex came from.

[12]:
merged = coc.merge(idx='all_points')

print(f'Merged cloud: {merged.nbvertices} vertices')
print(f'Header: {merged.header}')
Merged cloud: 7 vertices
Header: ['vertex', 'depth', 'velocity', '__source__']
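The merge-with-provenance pattern can be sketched in plain Python: each merged row carries a __source__ tag naming its original cloud, which lets you recover any original cloud later (illustrative only; cloud_vertices stores rows differently):

```python
clouds = {
    'left_bank':  [(100., 200., 50.), (110., 210., 51.), (120., 220., 49.)],
    'right_bank': [(105., 250., 48.), (115., 260., 47.),
                   (125., 270., 46.), (135., 280., 45.)],
}

# Flatten while tagging each row with its cloud of origin
merged = [{'vertex': v, '__source__': name}
          for name, verts in clouds.items() for v in verts]

print(len(merged))               # 7
print(merged[0]['__source__'])   # left_bank

# Filter the merged data back down to one original cloud
left = [r['vertex'] for r in merged if r['__source__'] == 'left_bank']
print(len(left))                 # 3
```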

Display properties

Display properties (color, width, alpha, …) can be set on the whole collection. The call is propagated to every child cloud.

[28]:
coc.set_alpha(180)
coc.set_width(3)
coc.set_legend_from_idx()  # use cloud.idx as legend text

# Legends
coc.set_legend_visible(True)
coc.set_legend_fontsize(12)

Saving and loading (JSON)

The collection can be serialized to a JSON file with save_json() and reloaded with load_json(). The file format stores all clouds, their vertices, properties, and value columns.

[29]:
# Save to a temporary file
tmp_dir = tempfile.mkdtemp()
json_path = os.path.join(tmp_dir, 'survey.json')

coc.save_json(json_path)
print(f'Saved to {json_path}')

# Reload
loaded = cloud_of_clouds.load_json(json_path)
print(f'Loaded: "{loaded.idx}" — {loaded.nbclouds} clouds, {loaded.nbvertices} vertices')
print(f'Cloud names: {loaded.cloud_names}')
Saved to C:\Users\pierre\AppData\Local\Temp\tmpowk5e4cv\survey.json
Loaded: "survey_2024" — 2 clouds, 7 vertices
Cloud names: ['left_bank', 'right_bank']

Loading a single cloud_vertices JSON

load_json also accepts files saved by cloud_vertices.save_json() (format "cloud_vertices"). In that case, the single cloud is wrapped in a cloud_of_clouds with one entry.

[15]:
# Save a single cloud
single_path = os.path.join(tmp_dir, 'single.json')
left_bank.save_json(single_path)

# Load it as a cloud_of_clouds (automatic wrapping)
coc_from_single = cloud_of_clouds.load_json(single_path)
print(f'Loaded single as collection: {coc_from_single.nbclouds} cloud, {coc_from_single.nbvertices} vertices')
Loaded single as collection: 1 cloud, 3 vertices

Duplicating a collection

duplicate() (or its alias copy()) creates a deep copy via JSON round-trip. The copy is fully independent — modifying it does not affect the original.

[16]:
coc_copy = coc.duplicate(idx='survey_copy')
print(f'Original: {coc.nbvertices} vertices, Copy: {coc_copy.nbvertices} vertices')
print(f'Copy name: "{coc_copy.idx}"')
Original: 7 vertices, Copy: 7 vertices
Copy name: "survey_copy"
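The JSON round-trip trick behind duplicate() can be illustrated with the standard library alone: serializing and re-parsing rebuilds every container, so the copy shares no mutable state with the original (a sketch; the real serializer also handles vertices, properties, and value columns):

```python
import json

original = {'idx': 'survey_2024',
            'clouds': {'left_bank': [[100., 200., 50.], [110., 210., 51.]]}}

# Dump to a JSON string and parse it back: a structural deep copy
copy = json.loads(json.dumps(original))
copy['clouds']['left_bank'].append([120., 220., 49.])

print(len(original['clouds']['left_bank']))   # 2  (original untouched)
print(len(copy['clouds']['left_bank']))       # 3
```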

Summary

  Operation              Method
  ---------              ------
  Create empty           cloud_of_clouds(idx='...')
  Add existing cloud     coc.add_cloud(cloud)
  Create cloud in-place  coc.create_cloud(idx='...')
  Remove a cloud         coc.remove_cloud('name') or coc.remove_cloud(0)
  Access a cloud         coc['name'] or coc[0]
  Number of clouds       coc.nbclouds or len(coc)
  Total vertices         coc.nbvertices
  Global bounds          coc.xbounds, coc.ybounds, coc.zbounds
  All XYZ                coc.get_all_xyz() → (N, 3) array
  Add values             coc.add_values('key', dict_or_array)
  Get values             coc.get_values('key') → dict[str, ndarray]
  Nearest neighbor       coc.find_nearest(xyz) → (dist, vertex, row, cloud_name)
  Merge all              coc.merge() → single cloud_vertices with __source__ column
  Save                   coc.save_json('file.json')
  Load                   cloud_of_clouds.load_json('file.json')
  Deep copy              coc.duplicate() or coc.copy()
  Set display props      coc.set_color(n), coc.set_width(n), coc.set_alpha(n), …