# help
python3 -m hapticnet -h
# simulate packet
python3 -m hapticnet simulate --samples 10
# server
python3 -m hapticnet server --bind 0.0.0.0 --port 9000 --buffer 3
# client (local)
python3 -m hapticnet client --host 127.0.0.1 --port 9000 --rate 100 --samples 1000
# client (cross-machine)
python3 -m hapticnet client --host <SERVER_LAN_IP> --port 9000 --rate 100 --samples 1000
# client (auto-discovery)
python3 -m hapticnet client --discover --rate 100 --samples 1000

Ports:
- Data: UDP 9000
- Discovery: UDP 9001
# help
python3 -m grpc -h
# generate proto
python3 -m grpc gen-proto
# server
python3 -m grpc server --host 0.0.0.0 --port 50051
# client (local)
python3 -m grpc client --host 127.0.0.1 --port 50051 --rate 100 --samples 1000
# client (cross-machine)
python3 -m grpc client --host <SERVER_LAN_IP> --port 50051 --rate 100 --samples 1000
# client (auto-discovery)
python3 -m grpc client --discover --rate 100 --samples 1000

Ports:
- Data: TCP 50051
- Discovery: UDP 50052
# start dashboard backend + hapticnet server + grpc server (all in one)
python3 -m dashboard

Open http://127.0.0.1:8080 for the dashboard, then open http://127.0.0.1:8080/simulate for the Web Sim Client (Mouse Drag):
- drag on the pad to stream position data
- choose protocol: hapticnet, grpc, or both
- choose payload mode: position, position + force, full
python3 -m hapticnet server --bind 0.0.0.0 --port 9000 --buffer 3 & \
python3 -m grpc server --host 0.0.0.0 --port 50051 & \
sleep 1 && \
python3 -m hapticnet client --host 127.0.0.1 --port 9000 --rate 100 --samples 300 & \
python3 -m grpc client --host 127.0.0.1 --port 50051 --rate 100 --samples 300

Architectural Review Document - Undergraduate Term Project
| Version | Date | Author | Role | Changes |
|---|---|---|---|---|
| v3.0 | [28/02/2569] | Kittichai Raksawong | Architect | Upgraded to Custom Binary Payload & Dead Reckoning logic |
| Role Name | Assigned To | Primary Responsibilities |
|---|---|---|
| Architect | Kittichai (Honda) | System design, Byte-level payload structure, Algorithm selection |
| Engineer | Aekkarin (Nai) | UDP Socket implementation, Byte Serialization/Deserialization, Jitter Buffer |
| Specialist | Sorawit (Boat) | High-frequency Data Simulator, Dead Reckoning mathematical modeling |
| DevOps | Phatsaporn (Waan) | Local network setup, Environment configuration, Version control |
| Tester/QA | Piyada (Yo) | Packet loss injection, Stress testing, Jitter & Latency metric tracking |
The HapticNet project aims to engineer a highly efficient, low-latency network protocol for transmitting physical interaction data (Haptics) over a Local Area Network. By moving away from text-based payloads like JSON and utilizing raw Byte Array Serialization over UDP, the system minimizes network overhead. Furthermore, the project introduces packet loss compensation algorithms to maintain real-time fluidity.
- Implement deep networking concepts (UDP Sockets, Byte-level Data Serialization).
- Manage network anomalies by building a custom Application-level Jitter Buffer.
- Apply mathematical models (Linear Extrapolation/Dead Reckoning) to solve real-world packet loss issues.
- Understand the trade-offs between processing overhead and network payload size.
| Aspect | In Scope | Out of Scope |
|---|---|---|
| Architecture | UDP Client-Server topology via LAN, Jitter Buffer queueing | TCP fallbacks, Global cloud deployment |
| Payload | Custom 52-Byte Haptic Data Structure (Seq, Timestamp, Pos, Rot, Force) | Text-based formats (JSON/XML) |
| Reliability | Dead Reckoning algorithm for packet loss compensation | Complex AI/Neural Network prediction models |
| HapticNet Protocol Stack | Function |
|---|---|
| Application Layer | Data Simulator & Dead Reckoning Extrapolation |
| Presentation Layer | Custom Byte Array Serialization (Bitwise operations) |
| Transport Layer | Standard UDP with custom Jitter Buffer implementation |
| Network Layer | Local IPv4 Routing |
Design Review Status: Approved
- Concept: Data is packed into a strict 52-byte array to maximize throughput.
  - Sequence (Int, 4 bytes)
  - Timestamp (Long, 8 bytes)
  - Position X/Y/Z (Float x3, 12 bytes)
  - Rotation W/X/Y/Z (Float x4, 16 bytes)
  - Force (Float, 4 bytes)
  - TextureID (Long, 8 bytes)
Design Review Status: Approved
- Concept: UDP guarantees speed but not order. The receiver will implement a small Jitter Buffer (e.g., holding 3 packets) to reorder sequences. Packets arriving too late are dropped to prevent lag buildup.
Design Review Status: Approved
- Concept: When a packet is dropped, the system will not freeze. Instead, it will estimate the missing coordinates using Linear Extrapolation based on the velocity of previous packets.
- Formula: $P_{t} = P_{t-1} + (\vec{v} \cdot \Delta t)$
- Decision 1: Custom Binary over JSON. (Approved) JSON introduces unnecessary string parsing overhead. A byte array forces strict typing and optimizes bandwidth.
- Decision 2: Math over AI. (Approved) Using a straightforward mathematical equation for Dead Reckoning provides predictable, low-latency compensation compared to training a high-overhead machine learning model.
- Architect (Kittichai): **____**
- Engineer (Aekkarin): **____**
- Specialist (Sorawit): **____**
- DevOps (Phatsaporn): **____**
- Tester (Piyada): **____**
This document explains how the code works internally, with emphasis on:
- How packets are created, serialized, transmitted, received, and translated between producers/consumers.
- How packet loss is simulated and handled.
- How the dashboard unifies UDP (HapticNet) and gRPC streams for device-to-device testing.
NetworkYee contains three runnable stacks:
- `hapticnet/`: custom UDP binary protocol.
- `grpc/`: gRPC streaming protocol.
- `dashboard/`: FastAPI server + WebSocket UI hub to control and compare both.
Typical run modes:
- Protocol-only mode
  - Run `hapticnet` server/client directly.
  - Run `grpc` server/client directly.
- Unified dashboard mode
  - Run `python3 -m dashboard`.
  - Dashboard starts both protocol servers through adapters.
  - Browser clients subscribe to one WebSocket feed (`/ws`) for all events.
Core files:
- `hapticnet/config.py`
- `hapticnet/models.py`
- `hapticnet/control.py`
- (a legacy/parallel implementation also exists in `hapticnet/__main__.py`)
HapticNet uses a fixed binary layout (`PAYLOAD_FORMAT = "!Iq3f4ffq"`) defined in `hapticnet/config.py`.
Fields encoded in strict order:
- sequence (4-byte int)
- timestamp_ns (8-byte int)
- pos_x, pos_y, pos_z (3 floats)
- rot_w, rot_x, rot_y, rot_z (4 floats)
- force (float)
- texture_id (8-byte int)
Why this matters:
- Constant payload size means very predictable parsing cost.
- No JSON parsing overhead.
- Network byte order is explicit (big-endian), so sender and receiver interpret data identically.
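The fixed layout above can be exercised directly with Python's `struct` module. The sketch below is illustrative (the helper names `pack_packet`/`unpack_packet` are not from the codebase), but the format string and field order mirror `PAYLOAD_FORMAT` as described:

```python
import struct

# Format string from hapticnet/config.py: network byte order, fixed 52 bytes.
PAYLOAD_FORMAT = "!Iq3f4ffq"
PAYLOAD_SIZE = struct.calcsize(PAYLOAD_FORMAT)  # 52

def pack_packet(sequence, timestamp_ns, pos, rot, force, texture_id):
    """Pack structured haptic fields into the fixed 52-byte wire payload."""
    return struct.pack(PAYLOAD_FORMAT, sequence, timestamp_ns,
                       *pos, *rot, force, texture_id)

def unpack_packet(payload):
    """Reject wrong-size payloads, then unpack in strict field order."""
    if len(payload) != PAYLOAD_SIZE:
        raise ValueError(f"expected {PAYLOAD_SIZE} bytes, got {len(payload)}")
    fields = struct.unpack(PAYLOAD_FORMAT, payload)
    return {
        "sequence": fields[0],
        "timestamp_ns": fields[1],
        "pos": fields[2:5],
        "rot": fields[5:9],
        "force": fields[9],
        "texture_id": fields[10],
    }

raw = pack_packet(7, 1_000_000_000, (1.0, 2.0, 3.0),
                  (1.0, 0.0, 0.0, 0.0), 0.5, 42)
assert len(raw) == 52
```

Because `!` pins both endianness and packing (no alignment padding), `struct.calcsize` yields exactly 52 on every platform, which is what makes the cross-machine contract reliable.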
`hapticnet.models.HapticPacket` is the translation boundary:
- `to_bytes()` translates structured values -> network payload bytes.
- `from_bytes()` translates bytes -> structured packet object.
This is the core "between device" translation for UDP peers: each device only needs to agree on this binary contract.
From `hapticnet/config.py` and `hapticnet/logic.py`, the receiver uses two important offsets:
- `SEQUENCE_OFFSET = 0`
- `TEXTURE_ID_OFFSET = PAYLOAD_SIZE - 8`
Practical layout map (byte indices):
- `0..3` -> `sequence`
- `4..11` -> `timestamp_ns`
- `12..23` -> `pos_x, pos_y, pos_z`
- `24..39` -> `rot_w, rot_x, rot_y, rot_z`
- `40..43` -> `force`
- `44..51` -> `texture_id`
Decode is intentionally two-phase in the UDP receiver for performance:
- Fast path: read only the sequence (`_read_sequence`) to decide ordering and jitter-buffer behavior.
- Full decode path: call `HapticPacket.from_bytes(...)` only when the packet is actually consumed (or when checking special markers).
This reduces unnecessary unpack overhead when packets are buffered, dropped, or skipped.
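The fast path can be sketched with `struct.unpack_from` against the offsets listed above. This is an illustrative stand-in for `_read_sequence`, not the actual implementation:

```python
import struct

SEQUENCE_OFFSET = 0
PAYLOAD_SIZE = 52
TEXTURE_ID_OFFSET = PAYLOAD_SIZE - 8  # 44

def read_sequence(payload: bytes) -> int:
    # Decode only the 4-byte big-endian sequence; skip the other 48 bytes.
    return struct.unpack_from("!I", payload, SEQUENCE_OFFSET)[0]

def is_end_marker(payload: bytes) -> bool:
    # Stream end marker: sequence == 0 and texture_id == -1,
    # both readable without a full-packet decode.
    seq = read_sequence(payload)
    texture_id = struct.unpack_from("!q", payload, TEXTURE_ID_OFFSET)[0]
    return seq == 0 and texture_id == -1
```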
`HapticPacket.from_bytes(...)` rejects malformed payloads by checking the exact byte size (`PAYLOAD_SIZE`).
Reliability implications:
- Wrong-size UDP payloads are ignored early.
- Corrupted or incompatible packet formats fail fast before entering motion logic.
- The stream-end marker is safely recognized by semantic values (`sequence == 0` and `texture_id == -1`) after decode.
In dashboard mode, translation occurs in both directions:
- Browser sends JSON to `POST /api/simulate/send`.
- `dashboard/app.py` maps this JSON into canonical motion fields (`_sim_payload`).
- Depending on the target protocol:
  - HapticNet: instantiate `HapticPacket` -> `to_bytes()` -> UDP send.
  - gRPC: instantiate a `HapticFrame` protobuf -> gRPC stream call.
- On receive side, adapters normalize packets back to common event dict shape and push to WebSocket.
So the dashboard is not only a visual UI; it is also an active protocol translator between web payloads and network-native packet formats.
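The JSON-to-canonical-fields step can be pictured as a small normalizer. Everything here is an assumption for illustration: the field names, defaults, and function name are hypothetical, not the real `_sim_payload` logic in `dashboard/app.py`:

```python
def normalize_sim_payload(body: dict) -> dict:
    """Map a browser JSON body onto canonical motion fields with safe
    defaults. Field names and defaults are illustrative assumptions."""
    return {
        "pos": tuple(float(body.get(k, 0.0)) for k in ("x", "y", "z")),
        "rot": tuple(float(body.get(k, d)) for k, d in
                     (("rw", 1.0), ("rx", 0.0), ("ry", 0.0), ("rz", 0.0))),
        "force": float(body.get("force", 0.0)),
        "texture_id": int(body.get("texture_id", 0)),
    }
```

The value of a canonical shape is that both downstream encoders (binary `HapticPacket` and protobuf `HapticFrame`) can consume the same dict without caring which web widget produced it.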
Core files:
- `grpc/helloworld.proto`
- `grpc/client.py`
- `grpc/server.py`
- `grpc/models.py`
The gRPC path translates motion/force data through protobuf messages (`HapticFrame`) instead of manual struct packing.
Flow:
- `grpc/client.py` generates frames in `_frame_stream(...)`.
- Frames are streamed to `HapticBridge.StreamHaptics`.
- `grpc/server.py` reads each frame and computes latency from `timestamp_ns`.
Compared with HapticNet:
- Translation is schema-driven by protobuf (generated code), not manual `struct`.
- Transport is TCP-based through the gRPC runtime.
- Easier interoperability, at higher protocol overhead than raw UDP bytes.
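The client-side flow can be sketched as a rate-limited generator. This stand-in yields plain dicts so it stays self-contained; the real `_frame_stream(...)` yields `HapticFrame` protobuf messages through the generated stubs, which this sketch deliberately does not assume:

```python
import time

def frame_stream(rate_hz: int, samples: int):
    """Yield frame dicts at a fixed rate. Illustrative stand-in for the
    protobuf frame generator in grpc/client.py (_frame_stream)."""
    interval = 1.0 / rate_hz
    for seq in range(1, samples + 1):
        yield {
            "sequence": seq,
            "timestamp_ns": time.time_ns(),  # server derives latency from this
            "pos": (0.0, 0.0, 0.0),
        }
        time.sleep(interval)  # pace frames to approximate the target rate
```

In a request-streaming RPC, a generator like this is handed directly to the stub call, so pacing the stream is as simple as sleeping inside the generator.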
In `hapticnet/control.py`:
- `HapticSimulator` generates synthetic motion packets.
- `run_sender(...)` sends each packet over UDP at the configured rate.
- An optional stream-end marker is sent (`sequence=0`, `texture_id=-1`) so the receiver can return a stream summary.
In `run_receiver(...)`:
- A UDP datagram arrives.
- Fast sequence extraction (`_read_sequence`) avoids a full decode until needed.
- The packet enters the jitter buffer (`PacketBuffer` in `hapticnet/logic.py`).
- The receiver attempts an in-order consume by `expected_seq`.
- On consume, it decodes the packet, updates the dead reckoner, computes one-way latency, and emits stats/event callbacks.
In `grpc/server.py` (or the dashboard `GrpcAdapter` servicer):
- `StreamHaptics` iterates the incoming `HapticFrame` stream.
- Per frame: count the packet, compute latency, aggregate min/max/avg metrics.
- Emit periodic and final summary stats.
Core file: dashboard/app.py
The dashboard acts as a protocol bridge/control plane between different clients/devices:
- Browser sends control/simulation requests over HTTP.
- The dashboard converts the browser payload to either:
  - UDP `HapticPacket` bytes (`_send_haptic_frame`), or
  - protobuf `HapticFrame` (`_send_grpc_frame`).
- Adapters (`dashboard/haptic_adapter.py`, `dashboard/grpc_adapter.py`) receive packets and push normalized event dictionaries into one async queue.
- `/ws` broadcasts these normalized events to all web clients.
This gives one UI view for both protocols even though underlying transports are very different.
NetworkYee handles packet loss in two layers:
- A) Loss simulation/injection for testing.
- B) Recovery/continuity logic for UDP stream quality.
HapticNet (`hapticnet/control.py`):
- `packet_loss_rate` controls random dropping of received packets.
- If the random threshold is hit, the packet is skipped intentionally.
- The dashboard can update this at runtime through `HapticAdapter.set_packet_loss_rate(...)` and the REST endpoint `/api/hapticnet/packet-loss`.
gRPC (`dashboard/grpc_adapter.py`):
- The adapter servicer also applies a random drop using `_loss_ref[0]` for symmetric comparison testing.
- Controlled via `/api/grpc/packet-loss`.
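Random drop injection of this kind reduces to a single threshold test per packet. A minimal sketch (function name is illustrative, not from the codebase):

```python
import random

def should_drop(loss_rate: float, rng: random.Random) -> bool:
    """Drop a packet with probability loss_rate in [0.0, 1.0]."""
    return rng.random() < loss_rate

# Sanity check: at a 30% configured rate, roughly 30% of packets are skipped.
rng = random.Random(1234)  # seeded so the experiment is repeatable
dropped = sum(should_drop(0.30, rng) for _ in range(10_000))
```

Seeding the generator is useful in the "Chaos Tool" context: the same drop pattern can be replayed against both protocol stacks for a fair comparison.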
The UDP path has no delivery-ordering guarantee, so the receiver adds application-level control:
- A small jitter buffer stores packets by sequence.
- The receiver consumes only `expected_seq`.
- If the head sequence is ahead of expected, it waits briefly (`reorder_wait_s`) to allow the missing packet to arrive.
- If still missing, behavior depends on the dead-reckoning mode:
  - Dead reckoning off: skip the missing gap and advance the expected sequence to prevent lag accumulation.
  - Dead reckoning on: estimate the missing packet(s) to keep motion continuity.
This policy prioritizes real-time smoothness over strict completeness.
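The reorder policy above can be sketched as a small heap-based buffer. This is an illustrative model of the behavior, not the actual `PacketBuffer` in `hapticnet/logic.py`; the reorder-wait timing lives in the receive loop, which decides when to call `skip_gap()`:

```python
import heapq

class JitterBuffer:
    """Minimal reorder buffer keyed by sequence number. Late packets
    (sequence < expected) are dropped; pops release only in order.
    Depth limits and reorder-wait timeouts are enforced by the caller."""

    def __init__(self):
        self._heap = []          # (sequence, payload), min-heap by sequence
        self.expected_seq = 1
        self.dropped_late = 0

    def push(self, sequence: int, payload: bytes) -> None:
        if sequence < self.expected_seq:
            self.dropped_late += 1   # too late: consuming it would rewind time
            return
        heapq.heappush(self._heap, (sequence, payload))

    def pop_ready(self):
        """Release consecutive in-order payloads starting at expected_seq."""
        out = []
        while self._heap and self._heap[0][0] == self.expected_seq:
            _, payload = heapq.heappop(self._heap)
            out.append(payload)
            self.expected_seq += 1
        return out

    def skip_gap(self) -> int:
        """Give up on a missing packet: jump expected_seq to the buffered head.
        Returns how many sequences were skipped."""
        if self._heap:
            skipped = self._heap[0][0] - self.expected_seq
            self.expected_seq = self._heap[0][0]
            return skipped
        return 0
```

`skip_gap()` models the dead-reckoning-off branch: the gap is counted as loss and the stream jumps forward rather than stalling.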
Implemented in `hapticnet/logic.py` (`DeadReckoner`):
- On every real packet, update velocity from the position delta and time delta.
- When packet(s) are missing, estimate the next position via linear extrapolation: `P_t = P_(t-1) + v * dt`
- Keep orientation/force/texture based on the last known packet.
- Emit the estimated packet with source `dead_reckoned`.
Guardrails in the receiver:
- Max no-RX gap before DR stops (`dr_max_gap_s`) to avoid unbounded prediction drift.
- DR emit interval (`dr_emit_interval_s`) to cap synthetic output frequency.
`DeadReckoner` keeps:
- the last real packet: `self._last_packet`
- the estimated velocity vector: `self._velocity = (vx, vy, vz)`
On each real packet update:
- Compute the time delta: `dt = (t_now - t_prev) / 1e9`
- If `dt > 0`, compute per-axis velocity:
  - `vx = (x_now - x_prev) / dt`
  - `vy = (y_now - y_prev) / dt`
  - `vz = (z_now - z_prev) / dt`
- Store the current packet as the new anchor.
On missing sequence estimation:
- Compute the prediction horizon from the last anchor timestamp: `dt_pred = (t_est - t_anchor) / 1e9`
- Extrapolate position:
  - `x_est = x_anchor + vx * dt_pred`
  - `y_est = y_anchor + vy * dt_pred`
  - `z_est = z_anchor + vz * dt_pred`
- Build a synthetic packet with:
  - new `sequence` = the expected missing sequence
  - new `timestamp_ns` = the estimation time
  - orientation/force/texture copied from the anchor packet
This keeps spatial motion smooth during short drop bursts while preserving non-positional fields from the latest trusted sample.
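The velocity-update and extrapolation steps above can be condensed into a small sketch. Method names and internal state are illustrative, not the real `DeadReckoner` class, but the math follows the equations listed above:

```python
class DeadReckoner:
    """Linear extrapolation: P_t = P_anchor + v * dt_pred.
    Illustrative sketch of the logic described above."""

    def __init__(self):
        self._anchor = None               # (timestamp_ns, (x, y, z))
        self._velocity = (0.0, 0.0, 0.0)  # per-axis velocity in units/sec

    def update(self, timestamp_ns: int, pos: tuple) -> None:
        """On every real packet: refresh velocity from deltas, re-anchor."""
        if self._anchor is not None:
            t_prev, p_prev = self._anchor
            dt = (timestamp_ns - t_prev) / 1e9   # ns -> seconds
            if dt > 0:
                self._velocity = tuple(
                    (a - b) / dt for a, b in zip(pos, p_prev))
        self._anchor = (timestamp_ns, pos)

    def estimate(self, t_est_ns: int):
        """Extrapolate position at t_est_ns from the last anchor,
        or None if no real packet has been seen yet."""
        if self._anchor is None:
            return None
        t_anchor, p_anchor = self._anchor
        dt_pred = (t_est_ns - t_anchor) / 1e9
        return tuple(p + v * dt_pred
                     for p, v in zip(p_anchor, self._velocity))
```

Note that `estimate` never mutates state: synthetic packets do not become anchors, so prediction error cannot compound through the velocity estimate itself.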
In `run_receiver(...)`, DR emission is gated by state checks:
- The receiver first waits for the reorder window (`reorder_wait_s`) when the head sequence is ahead.
- If the gap persists and DR mode is enabled, estimate one missing packet.
- Throttle synthetic output by `dr_emit_interval_s`.
- Stop DR if no real packets have arrived for too long (`dr_max_gap_s`).
This avoids uncontrolled free-running prediction and limits drift when the sender disappears.
If DR is disabled and a gap remains after the reorder wait:
- The receiver counts the dropped packets.
- `expected_seq` jumps forward to the available head sequence.
Trade-off:
- Better real-time responsiveness and lower lag buildup.
- Visible motion discontinuity at loss points.
Because the estimator is linear and velocity-based:
- It works best for short gaps and near-linear motion.
- Error grows with longer outages or abrupt acceleration/turn changes.
- Guardrails (`dr_max_gap_s`, wait windows) are essential to keep the error bounded in practice.
For finite runs, sender transmits an end marker and receiver returns summary including:
- received packet count
- average/min/max one-way latency
- duration
Stats classes (`ReceiverStats`, `StreamStats`) also track window-based rates and latency spread for live monitoring.
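The min/avg/max one-way latency aggregation can be sketched as follows. The class name and fields are illustrative, not the actual stats models; note also that one-way latency derived from sender timestamps is only meaningful when the two machines' clocks are synchronized:

```python
class LatencyStats:
    """Aggregate one-way latency (ms) from per-packet timestamp_ns.
    Illustrative sketch; assumes sender/receiver clocks are in sync."""

    def __init__(self):
        self.count = 0
        self.total_ms = 0.0
        self.min_ms = float("inf")
        self.max_ms = 0.0

    def record(self, sent_timestamp_ns: int, recv_timestamp_ns: int) -> float:
        latency_ms = (recv_timestamp_ns - sent_timestamp_ns) / 1e6  # ns -> ms
        self.count += 1
        self.total_ms += latency_ms
        self.min_ms = min(self.min_ms, latency_ms)
        self.max_ms = max(self.max_ms, latency_ms)
        return latency_ms

    def summary(self) -> dict:
        avg = self.total_ms / self.count if self.count else 0.0
        return {"received": self.count, "avg_ms": avg,
                "min_ms": self.min_ms, "max_ms": self.max_ms}
```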
Both protocol stacks support UDP broadcast discovery:
- HapticNet discovery: default UDP 9001.
- gRPC discovery: default UDP 50052.
Mechanism:
- The client sends a discovery request broadcast.
- The server's discovery listener replies with the service port.
- The client forms the target `host:port` and starts the stream.
This removes hard-coded IP dependency and makes device-to-device testing easier in shared LAN.
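The broadcast exchange can be sketched with a plain UDP socket. The wire format here (`HAPTICNET_DISCOVER` request, `HAPTICNET_PORT:<n>` reply) is an invented placeholder, not the actual messages in `hapticnet/control.py` or `grpc/discovery.py`; only the ports and the request/reply shape come from the doc:

```python
import socket

DISCOVERY_PORT = 9001                       # HapticNet discovery default
DISCOVERY_REQUEST = b"HAPTICNET_DISCOVER"   # message format: assumption
REPLY_PREFIX = b"HAPTICNET_PORT:"           # message format: assumption

def build_reply(service_port: int) -> bytes:
    """Server side: advertise the data-plane port in the reply."""
    return REPLY_PREFIX + str(service_port).encode()

def parse_reply(data: bytes):
    """Client side: extract the advertised port, or None if malformed."""
    if not data.startswith(REPLY_PREFIX):
        return None
    try:
        return int(data[len(REPLY_PREFIX):])
    except ValueError:
        return None

def discover(timeout_s: float = 1.0):
    """Broadcast a request and wait for the first valid (host, port) reply."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.settimeout(timeout_s)
        sock.sendto(DISCOVERY_REQUEST, ("255.255.255.255", DISCOVERY_PORT))
        try:
            data, (host, _) = sock.recvfrom(1024)
        except socket.timeout:
            return None
        port = parse_reply(data)
        return (host, port) if port is not None else None
```

The reply's source address doubles as the server host, which is why the client never needs a configured IP.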
When loss/jitter increases:
- HapticNet with jitter buffer + DR attempts to preserve motion continuity at the cost of occasional estimated data.
- gRPC path (in dashboard comparison mode) may also intentionally drop for experiment parity, but does not implement the same application-level DR compensation logic.
So the project demonstrates two philosophies:
- Minimal-overhead custom transport with explicit real-time compensation logic.
- Framework-managed transport with stronger developer ergonomics.
- There is duplicated/parallel logic in `hapticnet/__main__.py` and `hapticnet/control.py`; the current modular path is centered around `control.py` and adapter usage in the dashboard.
- `dashboard/grpc_adapter.py` contains explicit handling for a local package-name collision (the `grpc/` folder vs the installed `grpcio`) to ensure the real `grpcio` is loaded at runtime.
- UDP payload contract and constants: `hapticnet/config.py`
- UDP packet model and stats model: `hapticnet/models.py`
- UDP sender/receiver/discovery/pipeline: `hapticnet/control.py`
- Reorder + dead reckoning primitives: `hapticnet/logic.py`
- gRPC client/server/stream stats: `grpc/client.py`, `grpc/server.py`, `grpc/models.py`
- gRPC discovery: `grpc/discovery.py`
- Dashboard API + simulation bridge: `dashboard/app.py`
- Dashboard protocol adapters: `dashboard/haptic_adapter.py`, `dashboard/grpc_adapter.py`
4-Week Sprint Planning - Undergraduate Term Project
| Version | Date | Author | Role | Changes |
|---|---|---|---|---|
| v3.0 | [28/02/2569] | Phatsaporn Musanthia | DevOps | Revised sprints for Binary Payload & Dead Reckoning |
| Component | Complexity (1-5) | Risk Level | Lead Owner |
|---|---|---|---|
| Custom Byte Serialization | 4 | Medium | Engineer (Aekkarin) |
| UDP Socket + Jitter Buffer | 4 | Medium | Engineer (Aekkarin) |
| Dead Reckoning Algorithm | 4 | High | Specialist (Sorawit) |
| Packet Loss Stress Testing | 3 | Low | Tester/QA (Piyada) |
| Local Networking & Git Setup | 2 | Low | DevOps (Phatsaporn) |
- Bitwise Errors: Incorrect byte shifting during serialization will corrupt the entire payload. Mitigation: Create strict unit tests for the encode/decode functions before attaching them to sockets.
- Buffer Bloat: A Jitter Buffer that is too large will artificially increase latency. Mitigation: Keep the buffer size minimal (e.g., <= 5 packets).
Theme: Binary Packing and Connection
- DevOps (Phatsaporn): Establish Git repository. Configure network environments for cross-machine testing.
- Architect (Kittichai): Finalize the exact byte offsets for the 52-byte Haptic payload.
- Engineer (Aekkarin): Write the Serialization/Deserialization classes using Bitwise operations or `ByteBuffer` in Java/Kotlin.
- Specialist (Sorawit): Develop the Data Simulator to generate random but continuous spatial data (simulating a moving hand).
Theme: UDP and Jitter Management
- Engineer (Aekkarin): Implement UDP Client and Server. Build the Jitter Buffer logic on the Server side to sort incoming packets by Sequence ID and drop delayed packets.
- Specialist (Sorawit): Hook the Data Simulator into the UDP Client to stream data at 60Hz.
- Architect (Kittichai): Review network thread management to ensure the receiving socket does not block the main application thread.
Theme: Dead Reckoning and Stress Testing
- Specialist (Sorawit): Implement the Dead Reckoning logic ($P_{t} = P_{t-1} + (\vec{v} \cdot \Delta t)$). If the Jitter Buffer is empty (packet loss), trigger the algorithm to generate the missing coordinate.
- Tester (Piyada): Develop a "Chaos Tool" to intentionally drop packets at specific rates (e.g., 5%, 15%, 30%) and observe if the Dead Reckoning can smoothly cover the gaps.
Theme: Metrics and Presentation
- Tester (Piyada): Compile final metrics (Bandwidth saved using Binary vs JSON, Latency measurements, Error rates before and after Dead Reckoning).
- All: Code freeze and repository cleanup.
- Architect & Engineer: Finalize architectural diagrams and code snippets for the slide deck.
- DevOps: Ensure the live demo environment is stable for the final presentation.
- Weekly Sync: 30-minute code review every week focusing on algorithm efficiency and memory leaks.
- Testing Gate: Code cannot be merged into `main` until the Serialization unit tests pass 100%.
- System successfully packs and unpacks the 52-byte custom binary payload without data corruption.
- Jitter Buffer correctly reorders out-of-sequence packets and drops late arrivals.
- Dead Reckoning algorithm successfully estimates coordinates during simulated 15% packet loss.
- Project demonstrates a clear understanding of low-level networking and data structure optimization.
(Signatures required from all 5 members prior to Sprint 1 commencement)