
RuView: WiFi DensePose turns commodity WiFi signals into real-time human pose estimation, vital sign monitoring, and presence detection — all without a single pixel of video. (ruvnet/RuView)

Perceive the world through signals. No cameras. No wearables. No Internet. Just physics. Instead of relying on cameras or cloud models, RuView observes whatever signals already exist in a space (WiFi, radio waves across the spectrum, motion patterns, vibration, sound, and other sensory inputs) and builds an understanding of what is happening locally. Built on top of RuVector, the project became widely known for its implementation of WiFi DensePose, a sensing technique first explored in academic research such as Carnegie Mellon University's "DensePose From WiFi" work.

That research demonstrated that WiFi signals can be used to reconstruct human pose. RuView extends that concept into a practical edge system. By analyzing Channel State Information (CSI) disturbances caused by human movement, RuView reconstructs body position, breathing rate, heart rate, and presence in real time using physics-based signal processing and machine learning. Unlike research systems that rely on synchronized cameras for training, RuView is designed to operate entirely from radio signals and self-learned embeddings at the edge.
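The core intuition behind CSI-based sensing can be sketched in a few lines. This is a toy illustration, not RuView's actual pipeline, and the function names are hypothetical: human motion perturbs per-subcarrier CSI amplitudes from frame to frame, so a simple variance over a short window of frames already separates an empty room from a moving person.

```rust
// Toy sketch (assumptions, not RuView's real code): motion perturbs
// per-subcarrier CSI amplitudes, so frame-to-frame amplitude variance
// is a crude motion indicator.

/// Mean of a slice of samples.
fn mean(xs: &[f64]) -> f64 {
    xs.iter().sum::<f64>() / xs.len() as f64
}

/// Variance of the per-frame mean amplitude across a window of CSI frames.
/// Each frame is one vector of subcarrier amplitudes.
fn motion_score(frames: &[Vec<f64>]) -> f64 {
    let per_frame: Vec<f64> = frames.iter().map(|f| mean(f)).collect();
    let m = mean(&per_frame);
    per_frame.iter().map(|a| (a - m) * (a - m)).sum::<f64>() / per_frame.len() as f64
}

fn main() {
    // Static room: amplitudes barely change between frames.
    let still: Vec<Vec<f64>> = (0..20).map(|_| vec![1.0; 56]).collect();
    // Movement: amplitudes swing between frames.
    let moving: Vec<Vec<f64>> = (0..20)
        .map(|i| vec![1.0 + 0.5 * ((i % 2) as f64); 56])
        .collect();
    assert!(motion_score(&still) < motion_score(&moving));
    println!("still={:.4} moving={:.4}", motion_score(&still), motion_score(&moving));
}
```

Real systems layer phase correction, filtering, and learned models on top, but the same signal-disturbance principle drives them.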

The system runs entirely on inexpensive hardware such as an ESP32 sensor mesh (as low as ~$1 per node). Small programmable edge modules analyze signals locally and learn the RF signature of a room over time, allowing the system to separate the environment from the activity happening inside it. Because RuView learns in proximity to the signals it observes, it improves as it operates. Each deployment develops a local model of its surroundings and continuously adapts without requiring cameras, labeled data, or cloud infrastructure.

In practice this means ordinary environments gain a new kind of spatial awareness. Rooms, buildings, and devices begin to sense presence, movement, and vital activity using the signals that already fill the space. Note that CSI-capable hardware is required: pose estimation, vital signs, and through-wall sensing rely on Channel State Information (CSI), the per-subcarrier amplitude and phase data that standard consumer WiFi does not expose.

You need CSI-capable hardware (an ESP32-S3 or a research NIC) for full functionality. Consumer WiFi laptops can only provide RSSI-based presence detection, which is significantly less capable. No hardware? Verify the signal processing pipeline against the deterministic reference signal by running python v1/data/proof/verify.py. The server is optional, used for visualization and aggregation; the ESP32 runs independently for presence detection, vital signs, and fall alerts.
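The RSSI fallback mentioned above is simple enough to sketch. RSSI is a single dBm value per received packet, and a person moving near the link raises its short-term variance. The threshold and function names below are illustrative assumptions, not RuView's implementation.

```rust
// Sketch of RSSI-only presence detection (the consumer-hardware fallback).
// Thresholds and names are illustrative, not taken from RuView.

/// Variance of a window of RSSI samples (in dBm).
fn rssi_variance(samples: &[f64]) -> f64 {
    let m = samples.iter().sum::<f64>() / samples.len() as f64;
    samples.iter().map(|s| (s - m) * (s - m)).sum::<f64>() / samples.len() as f64
}

/// Declare presence when RSSI variance exceeds an empirically chosen threshold.
fn presence_detected(samples: &[f64], threshold: f64) -> bool {
    rssi_variance(samples) > threshold
}

fn main() {
    let empty_room = [-52.0, -52.5, -51.8, -52.1, -52.3];
    let person_moving = [-52.0, -47.0, -55.0, -49.0, -58.0];
    assert!(!presence_detected(&empty_room, 2.0));
    assert!(presence_detected(&person_moving, 2.0));
}
```

Because RSSI collapses the whole channel into one number, this can only say "something moved", which is why pose and vital-sign features require CSI.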

See people, breathing, and heartbeats through walls — using only WiFi signals already in the room. The system learns on its own and gets smarter over time — no hand-tuning, no labeled data required. Fast enough for real-time use, small enough for edge devices, simple enough for one-command setup. WiFi routers flood every room with radio waves. When a person moves — or even breathes — those waves scatter differently.

WiFi DensePose reads that scattering pattern and reconstructs what happened. No training cameras are required: the Self-Learning system (ADR-024) bootstraps from raw WiFi data alone. MERIDIAN (ADR-027) ensures the model works in any room, not just the one it trained in. WiFi sensing works anywhere WiFi exists, and in most cases needs no new hardware, just software on existing access points or an $8 ESP32 add-on.

Because there are no cameras, deployments avoid video-specific privacy regulations (GDPR video provisions, HIPAA imaging rules) by design. Scaling: each AP distinguishes roughly 3-5 people (56 subcarriers), and multiple APs multiply that linearly — a 4-AP retail mesh covers roughly 15-20 occupants. There is no hard software limit; the practical ceiling is signal physics. WiFi sensing also gives robots and autonomous systems a spatial awareness layer that works where LIDAR and cameras fail — through dust, smoke, fog, and around corners.

The CSI signal field acts as a "sixth sense" for detecting humans in the environment without requiring line-of-sight. These scenarios exploit WiFi's ability to penetrate solid materials — concrete, rubble, earth — where no optical or infrared sensor can reach. The WiFi-Mat disaster module (ADR-001) is specifically designed for this tier. Edge modules are small programs that run directly on the ESP32 sensor: no internet needed, no cloud fees, instant response.

Each module is a tiny WASM file (5-30 KB) that you upload to the device over-the-air. It reads WiFi signal data and makes decisions locally in under 10 ms. ADR-041 defines 60 modules across 13 categories — all 60 are implemented with 609 tests passing. All implemented modules are no_std Rust, share a common utility library, and talk to the host through a 12-function API. Full documentation: Edge Modules Guide.

See the complete implemented module list below. The modules compile to wasm32-unknown-unknown, run on ESP32-S3 via WASM3, and are ready to deploy. Source: crates/wifi-densepose-wasm-edge/src/. Every WiFi signal that passes through a room creates a unique fingerprint of that space. WiFi-DensePose already reads these fingerprints to track people, but until now it threw away the internal "understanding" after each reading.

The Self-Learning WiFi AI captures and preserves that understanding as compact, reusable vectors — and continuously optimizes itself for each new environment. The self-learning system builds on the AI Backbone (RuVector) signal-processing layer — attention, graph algorithms, and compression — adding contrastive learning on top. See docs/adr/ADR-024-contrastive-csi-embedding-model.md for full architectural details.

The installer walks through 7 steps: system detection, toolchain check, WiFi hardware scan, profile recommendation, dependency install, build, and verification. rUv Neural is a separate 12-crate workspace for brain network topology analysis, neural decoding, and medical sensing; see rUv Neural in Models & Training. The signal processing stack transforms raw WiFi Channel State Information into actionable human sensing data.

Starting from 56-192 subcarrier complex values captured at 20 Hz, the pipeline applies research-grade algorithms (SpotFi phase correction, Hampel outlier rejection, Fresnel zone modeling) to extract breathing rate, heart rate, motion level, and multi-person body pose — all in pure Rust with zero external ML dependencies. The neural pipeline uses a graph transformer with cross-attention to map CSI feature matrices to 17 COCO body keypoints and DensePose UV coordinates.
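One of the named algorithms, Hampel outlier rejection, is compact enough to show. A sample is replaced by its window median when it deviates from that median by more than k scaled MADs (median absolute deviations). This is a generic textbook implementation; the window size and k = 3.0 are conventional defaults, not RuView's settings.

```rust
// Generic Hampel filter sketch (conventional defaults, not RuView's config).
// Rejects impulsive spikes in a CSI amplitude stream while leaving the
// underlying slow variation (e.g. breathing) intact.

/// Median of a mutable buffer (sorted in place).
fn median(xs: &mut Vec<f64>) -> f64 {
    xs.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let n = xs.len();
    if n % 2 == 1 { xs[n / 2] } else { 0.5 * (xs[n / 2 - 1] + xs[n / 2]) }
}

/// Hampel filter with half-window `half` and threshold multiplier `k`.
fn hampel(signal: &[f64], half: usize, k: f64) -> Vec<f64> {
    let n = signal.len();
    let mut out = signal.to_vec();
    for i in 0..n {
        let lo = i.saturating_sub(half);
        let hi = (i + half + 1).min(n);
        let mut window: Vec<f64> = signal[lo..hi].to_vec();
        let med = median(&mut window);
        let mut devs: Vec<f64> = signal[lo..hi].iter().map(|x| (x - med).abs()).collect();
        let mad = median(&mut devs);
        // 1.4826 makes the MAD a consistent estimator of the std deviation.
        if (signal[i] - med).abs() > k * 1.4826 * mad {
            out[i] = med; // outlier: replace with the local median
        }
    }
    out
}

fn main() {
    // A smooth amplitude ramp with one impulsive spike at index 4.
    let csi = [1.0, 1.1, 1.2, 1.1, 9.0, 1.2, 1.1, 1.0];
    let cleaned = hampel(&csi, 3, 3.0);
    assert!(cleaned[4] < 2.0);   // spike replaced by the local median
    assert_eq!(cleaned[0], 1.0); // clean samples pass through unchanged
}
```

Spike rejection like this matters upstream of vital-sign extraction: a single corrupted frame would otherwise dominate a breathing-rate spectrum.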

Models are packaged as single-file .rvf containers with progressive loading (Layer A instant, Layer B warm, Layer C full). SONA (Self-Optimizing Neural Architecture) enables continuous on-device adaptation via micro-LoRA + EWC++ without catastrophic forgetting. Signal processing is powered by 5 RuVector crates (v2.0.4) with 7 integration points across the Rust workspace, plus 6 additional vendored crates for inference and graph intelligence.

The Rust sensing server is the primary interface, offering a comprehensive CLI with flags for data source selection, model loading, training, benchmarking, and RVF export. A REST API (Axum) and WebSocket server provide real-time data access. The Python v1 CLI remains available for legacy workflows. The project maintains 542+ pure-Rust tests across 7 crate suites with zero mocks — every test runs against real algorithm implementations.

Hardware-free simulation mode (--source simulate) enables full-stack testing without physical devices. Docker images are published on Docker Hub for zero-setup deployment. All benchmarks are measured on the Rust sensing server using cargo bench and the built-in --benchmark CLI flag. The Rust v2 implementation delivers an 810x end-to-end speedup over the Python v1 baseline, with motion detection reaching a 5,400x improvement.

The vital sign detector processes 11,665 frames/second in a single-threaded benchmark. WiFi DensePose is MIT-licensed open source, developed by ruvnet. The project has been in active development since March 2025, with 3 major releases delivering the Rust port, SOTA signal processing, disaster response module, and end-to-end training pipeline. See docs/adr/ADR-027-cross-environment-domain-generalization.md for full architectural details.

A 3-agent parallel audit independently verified every claim in this repository — ESP32 hardware, signal processing, neural networks, training pipeline, deployment, and security. In the resulting 33-row attestation matrix, 31 capabilities were verified YES and 2 were not measured at audit time (benchmark throughput, Kubernetes deploy). A single WiFi receiver can track people, but it has blind spots: limbs behind the torso are invisible, depth is ambiguous, and two people at similar range create overlapping signals.

RuvSense solves this by coordinating multiple ESP32 nodes into a multistatic mesh where every node acts as both transmitter and receiver, creating N×(N-1) measurement links from N devices. DDD Domain Model — 6 bounded contexts: Multistatic Sensing, Coherence, Pose Tracking, Field Model, Cross-Room Identity, Adversarial Detection. Full specification: docs/ddd/ruvsense-domain-model.md. See the ADR documents for full architectural details, GOAP integration plans, and research references.
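The N×(N-1) link count comes from treating every ordered transmitter/receiver pair as a distinct measurement path. A minimal sketch (illustrative only, not RuvSense's scheduler):

```rust
// Illustration of multistatic mesh geometry: with N nodes that each act as
// both transmitter and receiver, every ordered (tx, rx) pair with tx != rx
// is a distinct measurement link, giving N * (N - 1) links in total.

fn measurement_links(n: usize) -> Vec<(usize, usize)> {
    let mut links = Vec::new();
    for tx in 0..n {
        for rx in 0..n {
            if tx != rx {
                links.push((tx, rx)); // directed link: tx illuminates, rx listens
            }
        }
    }
    links
}

fn main() {
    // 4 ESP32 nodes yield 12 directed links, versus 1 with a single receiver.
    let links = measurement_links(4);
    assert_eq!(links.len(), 4 * 3);
    assert!(links.contains(&(0, 3)) && links.contains(&(3, 0)));
}
```

More links mean more independent views of the same body, which is what resolves the depth ambiguity and person-overlap problems a single receiver suffers from.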

Cross-Session Convergence: When multiple AP clusters observe the same person, CRV convergence analysis finds agreement in their signal embeddings — directly mapping to cross-room identity continuity. A single ESP32-S3 board (~$9) captures WiFi signal data 28 times per second and streams it over UDP. A host server can visualize and record the data, but the ESP32 can also run on its own — detecting presence, measuring breathing and heart rate, and alerting on falls without any server at all.

Nodes can also hop across WiFi channels (1, 6, 11) to increase sensing bandwidth, configured via ADR-029 channel hopping. The alpha firmware can analyze signals locally and send compact results instead of raw data, which means the ESP32 works standalone: no server is needed for basic sensing. This Tier 2 mode is disabled by default for backward compatibility. When Tier 2 is active, the node sends a 32-byte vitals packet once per second containing presence, motion level, breathing BPM, heart rate BPM, confidence scores, a fall alert flag, and an occupancy count.
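To make the 32-byte budget concrete, here is one possible encoding of the listed fields. The byte layout, field widths, and struct name are assumptions for illustration; the real wire format is defined by the firmware. The point is that all listed fields fit in well under 32 bytes, leaving reserved space.

```rust
// Hypothetical layout for a 32-byte vitals packet (NOT the firmware's actual
// wire format). Demonstrates that the fields listed in the text fit easily.

#[derive(Debug, PartialEq)]
struct VitalsPacket {
    presence: bool,
    motion_level: u8,   // coarse motion, 0-255
    breathing_bpm: f32,
    heart_rate_bpm: f32,
    breathing_conf: u8, // confidence 0-100
    heart_conf: u8,
    fall_alert: bool,
    occupancy: u8,
}

fn encode(p: &VitalsPacket) -> [u8; 32] {
    let mut b = [0u8; 32];
    b[0] = p.presence as u8;
    b[1] = p.motion_level;
    b[2..6].copy_from_slice(&p.breathing_bpm.to_le_bytes());
    b[6..10].copy_from_slice(&p.heart_rate_bpm.to_le_bytes());
    b[10] = p.breathing_conf;
    b[11] = p.heart_conf;
    b[12] = p.fall_alert as u8;
    b[13] = p.occupancy;
    b // bytes 14..32 reserved
}

fn decode(b: &[u8; 32]) -> VitalsPacket {
    VitalsPacket {
        presence: b[0] != 0,
        motion_level: b[1],
        breathing_bpm: f32::from_le_bytes([b[2], b[3], b[4], b[5]]),
        heart_rate_bpm: f32::from_le_bytes([b[6], b[7], b[8], b[9]]),
        breathing_conf: b[10],
        heart_conf: b[11],
        fall_alert: b[12] != 0,
        occupancy: b[13],
    }
}

fn main() {
    let p = VitalsPacket {
        presence: true, motion_level: 40, breathing_bpm: 14.5,
        heart_rate_bpm: 68.0, breathing_conf: 90, heart_conf: 75,
        fall_alert: false, occupancy: 1,
    };
    assert_eq!(decode(&encode(&p)), p); // round-trips losslessly
}
```

A fixed-size packet once per second is also why Tier 2 slashes bandwidth versus streaming raw CSI frames at 28 Hz.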

WiFi signals penetrate non-metallic debris (concrete, wood, drywall) where cameras and thermal sensors cannot reach. The WiFi-Mat module (wifi-densepose-mat, 139 tests) uses CSI analysis to detect survivors trapped under rubble, classify their condition using the START triage protocol, and estimate their 3D position — giving rescue teams actionable intelligence within seconds of deployment. Deployment modes: portable (single TX/RX handheld), distributed (multiple APs around collapse site), drone-mounted (UAV scanning), vehicle-mounted (mobile command post).

Safety guarantees: fail-safe defaults (assume life present on ambiguous signals), redundant multi-algorithm voting, complete audit trail, offline-capable (no network required). The signal processing layer bridges the gap between raw commodity WiFi hardware output and research-grade sensing accuracy. Each algorithm addresses a specific limitation of naive CSI processing — from hardware-induced phase corruption to environment-dependent multipath interference.

All six are implemented in wifi-densepose-signal/src/ with deterministic tests and no mock data. Raw WiFi signals are noisy, redundant, and environment-dependent. RuVector is the AI intelligence layer that transforms them into clean, structured input for the DensePose neural network. It uses attention mechanisms to learn which signals to trust, graph algorithms that automatically discover which WiFi channels are sensitive to body motion, and compressed representations that make edge inference possible on an $8 microcontroller.
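The "learn which signals to trust" idea can be sketched with plain softmax attention over subcarriers. In this toy version the sensitivity scores are hand-written stand-ins; RuVector learns them. Names and numbers are illustrative assumptions.

```rust
// Minimal sketch of attention-weighted subcarrier pooling. RuVector's actual
// attention is learned; the sensitivity scores here are hard-coded stand-ins.

/// Numerically stable softmax over a score vector.
fn softmax(scores: &[f64]) -> Vec<f64> {
    let max = scores.iter().cloned().fold(f64::MIN, f64::max);
    let exps: Vec<f64> = scores.iter().map(|s| (s - max).exp()).collect();
    let sum: f64 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}

/// Weighted pooling of per-subcarrier features by attention weights.
fn attend(features: &[f64], scores: &[f64]) -> f64 {
    let w = softmax(scores);
    features.iter().zip(w.iter()).map(|(f, wi)| f * wi).sum()
}

fn main() {
    // Subcarrier 2 is far more motion-sensitive, so it dominates the pool.
    let features = [0.1, 0.2, 5.0, 0.3];
    let scores = [0.0, 0.1, 4.0, 0.2];
    let w = softmax(&scores);
    assert!((w.iter().sum::<f64>() - 1.0).abs() < 1e-12); // weights sum to 1
    assert!(w[2] > 0.9);                                  // sensitive subcarrier wins
    assert!(attend(&features, &scores) > 4.0);
}
```

Down-weighting insensitive subcarriers this way is what replaces the hand-tuned thresholds the next paragraph warns about.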

Without RuVector, WiFi DensePose would need hand-tuned thresholds, brute-force matrix math, and 4x more memory — making real-time edge inference impossible. See issue #67 for a deep dive with code examples, or cargo add wifi-densepose-ruvector to use it directly. The RuVector Format (RVF) packages an entire trained model — weights, HNSW indexes, quantization codebooks, SONA adaptation deltas, and WASM inference runtime — into a single self-contained binary file.

No external dependencies are needed at deployment time. Built on the rvf crate family (rvf-types, rvf-wire, rvf-manifest, rvf-index, rvf-quant, rvf-crypto, rvf-runtime). See ADR-023. The training pipeline implements 8 phases in pure Rust (7,832 lines, zero external ML dependencies). It trains a graph transformer with cross-attention to map CSI feature matrices to 17 COCO body keypoints and DensePose UV coordinates — following the approach from the CMU "DensePose From WiFi" paper (arXiv:2301.00250).

RuVector crates provide the core building blocks: ruvector-attention for cross-attention layers, ruvector-mincut for multi-person matching, and ruvector-temporal-tensor for CSI buffer compression. The full RuVector ecosystem includes 90+ crates. See github.com/ruvnet/ruvector for the complete library, and vendor/ruvector/ for the vendored source in this project. rUv Neural is a 12-crate Rust ecosystem that extends RuView's signal processing into brain network topology analysis.

It transforms neural magnetic field measurements from quantum sensors (NV diamond magnetometers, optically pumped magnetometers) into dynamic connectivity graphs, using minimum cut algorithms to detect cognitive state transitions in real time. The ecosystem includes crates for signal processing (ruv-neural-signal), graph construction (ruv-neural-graph), HNSW-indexed pattern memory (ruv-neural-memory), graph embeddings (ruv-neural-embed), cognitive state decoding (ruv-neural-decoder), and ESP32/WASM edge targets.

Medical and research applications include early neurological disease detection via topology signatures, brain-computer interfaces, clinical neurofeedback, and non-invasive biomedical sensing — bridging RuView's RF sensing architecture with the emerging field of quantum biomedical diagnostics. Default ports (Docker): HTTP 3000, WS 3001. Binary defaults: HTTP 8080, WS 8765. Override with --http-port / --ws-port.

Release highlights: edge intelligence (24 hot-loadable WASM modules for on-device CSI processing on ESP32-S3); multistatic sensing, a persistent field model, and cross-viewpoint fusion — the biggest capability jump since v2.0; the AETHER contrastive embedding model, AI signal processing backbone, cross-platform adapters, Docker Hub images, and a comprehensive README overhaul; and the complete Rust sensing server, SOTA signal processing, WiFi-Mat disaster response, ESP32 hardware, RuVector integration, guided installer, and security hardening.

WiFi DensePose — Privacy-preserving human pose estimation through WiFi signals.


Original Source: Github.com | Author: fpinzn | Published: March 10, 2026, 5:47 am
