09. WebSocket Workspace Navigator
Navigate a device's workspace position in real time by streaming set_transform commands over the WebSocket channel. The tutorial uses HTTP to discover the session and device (exactly like Tutorial 08), then opens a WebSocket to push position updates on every tick.
This illustrates the difference between two transform operations:
| Operation | Tutorial | Typical usage |
|---|---|---|
| configure.mount — persistent mount transform | 08 | Set once (or rarely) to describe the physical mounting of the device |
| set_transform — persistent workspace transform | 09 (this) | Also persistent, but can be streamed every tick for real-time workspace navigation |
Both commands are available over HTTP and WebSocket, and both are persistent — the service remembers the last value you sent. The difference is purpose and update rate: configure.mount is optimised for infrequent updates (physical setup), whereas set_transform is optimised for high-frequency streaming (scene navigation).
Use set_transform when the workspace origin must track something that changes at runtime — following a hand, animating a procedural offset, or letting an operator nudge the virtual workspace live.
Applied in isolation, configure.mount and set_transform appear to produce the same effect, but they occupy different slots in a parented chain: the workspace transform is applied on top of the mount in the coordinate pipeline (device → basis → mount → workspace → application). See Mount & Workspace for the full pipeline.
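To make the parenting concrete, here is a hedged sketch that composes translation-only transforms in pipeline order. Real transforms also carry rotation and scale, and apply_pipeline is an illustrative helper, not an SDK function:

```python
# Translation-only sketch of the pipeline device -> mount -> workspace.
# Because the workspace transform is parented on top of the mount, the two
# offsets simply add for pure translations. apply_pipeline is hypothetical.

def apply_pipeline(device_pos, mount_offset, workspace_offset):
    """Compose pure-translation mount and workspace transforms (metres)."""
    return tuple(d + m + w
                 for d, m, w in zip(device_pos, mount_offset, workspace_offset))

# A 10 mm mount offset plus a 20 mm workspace offset shift the scene 30 mm in X.
shifted = apply_pipeline((0.0, 0.0, 0.0), (0.010, 0.0, 0.0), (0.020, 0.0, 0.0))
```

With rotations in play the composition is matrix multiplication rather than addition, which is why the order (mount first, workspace on top) matters.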
Use cases
- Live workspace navigation. Move the device's cursor workspace (the bounding region the end-effector can reach) without stopping the haptic scene.
- Operator-driven alignment. Let someone in a second terminal hold arrow keys to shift the workspace while the main haptic app runs — useful for per-user desk calibration during a live session.
- Procedural workspace motion. Drive the workspace position from a script (audio-reactive, physics-driven, etc.) by sending position-only transform patches each tick.
Prerequisites
Tutorial 09 needs an active WebSocket session — a client that already has an Inverse3 in a running haptic loop. It then attaches to that session and shifts its workspace live.
Open Haply Hub and launch the Orb demo. It opens a persistent haptic session you can target immediately:
```shell
./09-haply-inverse-ws-remote-control --session co.haply.hub::demo-orb
python 09-haply-inverse-ws-remote-control.py --session "co.haply.hub::demo-orb"
```
The Orb scene renders a sphere at the workspace origin — moving the workspace with Tutorial 09 shifts the whole scene in real time, giving you instant visual feedback.
Any other tutorial (01 – 07) that runs a haptic loop works too. Start it in one terminal, then launch Tutorial 09 in a second terminal and let it discover the session interactively.
Usage
```shell
# Discover session interactively, use first detected device
./09-haply-inverse-ws-remote-control
python 09-haply-inverse-ws-remote-control.py

# Target the Haply Hub Orb demo directly
./09-haply-inverse-ws-remote-control --session co.haply.hub::demo-orb
python 09-haply-inverse-ws-remote-control.py --session "co.haply.hub::demo-orb"

# Target a session directly, specify device
./09-haply-inverse-ws-remote-control --session :my_profile:0 --device A14
python 09-haply-inverse-ws-remote-control.py --session "#42" --device A14
```
Controls
Python
Hold a key for continuous movement. Release to stop.
| Key | Action |
|---|---|
| → / ← | +X / −X |
| ↑ / ↓ | +Y / −Y |
| Page Up / Page Down | +Z / −Z |
| = / - | Increase / decrease navigation speed |
| 0 | Reset workspace to origin |
| H | Show help |
| Esc | Exit (Ctrl+C also works) |
C++
Line-based input — type a command, then press ENTER.
| Command | Action |
|---|---|
| x+[N] / x-[N] | Set X velocity to ±N mm/frame (bare x+ uses the current default speed) |
| y+[N] / y-[N] | Set Y velocity to ±N mm/frame |
| z+[N] / z-[N] | Set Z velocity to ±N mm/frame |
| s+[N] / s-[N] | Adjust the default speed by ±N mm/frame |
| stop | Zero all velocity |
| reset | Zero velocity and return to origin |
| h | Show help |
Press Ctrl+C (or Ctrl+D / EOF) to exit.
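The command grammar above is simple enough to parse with one regular expression. The sketch below is an illustrative Python parser — parse_command is not part of the tutorial code — that maps each line to an axis/value pair or a keyword:

```python
import re

def parse_command(line, default_speed):
    """Parse one line of the command language into ('x'|'y'|'z'|'s', value),
    a keyword ('stop', 'reset', 'h'), or None for unrecognised input.
    A bare sign (e.g. 'x+') falls back to the current default speed."""
    line = line.strip().lower()
    if line in ("stop", "reset", "h"):
        return line
    m = re.fullmatch(r"([xyzs])([+-])(\d+(?:\.\d+)?)?", line)
    if not m:
        return None
    axis, sign, num = m.groups()
    value = float(num) if num is not None else default_speed
    return (axis, value if sign == "+" else -value)
```

For example, "x+5" parses to ("x", 5.0), "y-" to ("y", -default_speed), and "stop" is returned verbatim for the caller to zero all velocities.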
How it works
Step 1 — HTTP session & device discovery
The tutorial reuses the same discover_session() / discover_device() helpers as Tutorial 08: GET /sessions (or GET /sessions/<selector> with --session), then GET /devices (or use --device directly). Both helpers accept the same CLI flags with identical SESSION_HELP text — a selector from Tutorial 00 or 08 works unmodified here.
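A minimal sketch of that discovery flow, assuming the service's HTTP API is reachable on localhost (adjust BASE for your install — the port here is an assumption, only the WebSocket port 10001 is given in this tutorial). The GET /sessions and GET /devices endpoints come from Tutorial 08; pick_first is a hypothetical stand-in for the interactive prompt:

```python
import json
import urllib.request

BASE = "http://localhost:10001"  # assumed HTTP base URL; check your service config

def get_json(path, base=BASE):
    """Fetch and decode one JSON endpoint of the service's HTTP API."""
    with urllib.request.urlopen(base + path) as resp:
        return json.load(resp)

def pick_first(items):
    """Stand-in for interactive selection: take the first detected entry."""
    return items[0] if items else None

def discover(session_selector=None, device_id=None):
    """Mimic discover_session()/discover_device(): honour --session/--device
    when given, otherwise fall back to the first detected session/device."""
    session = (get_json("/sessions/" + session_selector) if session_selector
               else pick_first(get_json("/sessions")))
    device = device_id or pick_first(get_json("/devices"))
    return session, device
```

Because the helpers are shared with Tutorial 08, any selector that works there (profile name, ordinal, or # handle) works unchanged here.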
Step 2 — WebSocket session, first-message handshake
The tutorial opens a WebSocket to ws://localhost:10001. On the first received message only, it sends the session profile registration:
```json
{
  "session": {"configure": {"profile": {"name": "co.haply.inverse.tutorials:ws-remote-control"}}},
  "inverse3": [{"device_id": "...", "commands": {"set_transform": {"transform": {"position": {"x": 0, "y": 0, "z": 0}}}}}]
}
```
Subsequent ticks omit session — only the inverse3 command array is sent.
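The two frame shapes can be built with a pair of small helpers. This is a hedged sketch of the wire format shown above — first_message and tick_message are illustrative names, not SDK functions:

```python
PROFILE = "co.haply.inverse.tutorials:ws-remote-control"

def tick_message(device_id, x=0.0, y=0.0, z=0.0):
    """Per-tick frame: only the inverse3 command array, no session key."""
    return {"inverse3": [{
        "device_id": device_id,
        "commands": {"set_transform": {
            "transform": {"position": {"x": x, "y": y, "z": z}}}}}]}

def first_message(device_id):
    """Registration frame, sent once in reply to the first received message:
    a normal tick frame plus the session profile configuration."""
    frame = tick_message(device_id)
    frame["session"] = {"configure": {"profile": {"name": PROFILE}}}
    return frame
```

After json.dumps, first_message goes out exactly once; every later send is tick_message with the current accumulated position.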
Step 3 — Per-tick set_transform
set_transform accepts a partial transform — only the fields you want to change. This tutorial sends position only; rotation and scale remain at their session defaults (the values set by configure.mount, if any).
```json
{
  "inverse3": [{"device_id": "...", "commands": {"set_transform": {"transform": {"position": {"x": 0.02, "y": 0.0, "z": 0.01}}}}}]
}
```
The position accumulates tick-by-tick. At ~1 kHz the service re-applies set_transform on every tick (zero-order hold), so the workspace stays where you last pushed it.
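The accumulation itself is a one-line integration step. The sketch below converts a mm/frame velocity into a metre-denominated position each tick, matching the C++ accumulator shown later; advance is an illustrative name, not tutorial code:

```python
def advance(pos, vel):
    """One tick: integrate vel (mm/frame) into pos (metres)."""
    return [p + v * 0.001 for p, v in zip(pos, vel)]  # mm -> m

pos = [0.0, 0.0, 0.0]
for _ in range(20):                    # 20 ticks at +1 mm/frame in X
    pos = advance(pos, [1.0, 0.0, 0.0])
# pos[0] has accumulated ~0.020 m; resending the same position each tick is
# harmless because the service holds the last value anyway (zero-order hold)
```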
set_transform vs configure.mount
Both commands store a persistent per-session transform, but the workspace transform sits on top of the mount in the pipeline (see Mount & Workspace). While Tutorial 09 streams set_transform every tick, each new WebSocket frame replaces the previously streamed workspace transform — so any other client streaming set_transform to the same device is overridden on the next tick. configure.mount is independent and keeps its value. In practice, pick one channel per concern: mount for the physical setup, set_transform for live navigation.
Threading model (C++)
The C++ tutorial uses two threads sharing a position accumulator:
| Thread | Role |
|---|---|
| Main thread | Reads std::getline(std::cin, ...), parses tokens, updates nav_vel[] and default_step |
| WebSocket callback thread | Advances nav_pos[] using nav_vel[] * dt, serialises set_transform, sends |
std::mutex nav_mutex protects nav_pos[], nav_vel[], and default_step. The last_tick timestamp and first_ws_msg flag are local to the WS callback and need no mutex.
```cpp
ws.onmessage = [&](const std::string &) {
  float pos_snap[3];
  {
    std::lock_guard<std::mutex> lk(nav_mutex);
    for (int i = 0; i < 3; ++i) nav_pos[i] += nav_vel[i] * 0.001f; // mm/frame → m
    for (int i = 0; i < 3; ++i) pos_snap[i] = nav_pos[i];
  }
  // build and send set_transform with pos_snap
};
```
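The Python tutorial can follow the same model with a lock-guarded accumulator. This is a hedged sketch of the pattern, not the tutorial's actual code — NavState and its methods are illustrative names:

```python
import threading

class NavState:
    """One lock guards both the velocity (written by the input thread) and
    the position (advanced and snapshotted inside the WS callback)."""

    def __init__(self):
        self._lock = threading.Lock()
        self.vel = [0.0, 0.0, 0.0]   # mm/frame, set by the input thread
        self.pos = [0.0, 0.0, 0.0]   # metres, advanced once per tick

    def set_velocity(self, axis, mm_per_frame):
        with self._lock:
            self.vel[axis] = mm_per_frame

    def tick(self):
        """Advance the position by one frame and return a snapshot, so the
        set_transform payload is serialised outside the critical section."""
        with self._lock:
            for i in range(3):
                self.pos[i] += self.vel[i] * 0.001  # mm/frame -> m
            return tuple(self.pos)
```

Snapshotting inside the lock and serialising outside it keeps the critical section short, which matters when the callback fires at ~1 kHz.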
Source
Tutorial 09 is installed locally with the SDK — look in tutorials/09-haply-inverse-ws-remote-control/ under the service install directory.
Related: set_transform command · Sessions — first-message handshake · Mount & Workspace · Selectors · Tutorial 08 — HTTP Remote Config · Tutorial 07 — Basis & Mount