SimulTrain — Solution

where $\sigma^2$ is the gradient noise variance. This matches the rate of synchronous SGD when the staleness $\tau$ is bounded.

SimulTrain reduces latency by 78% on 4G and 71% on 5G compared to SyncSGD. FedAvg hides latency via local steps but suffers from model drift.

| Method      | Upload per step (KB) | Download per step (KB) |
|-------------|----------------------|------------------------|
| Centralized | 7,500 (video frame)  | 75 (weights)           |
| SyncSGD     | 75 (gradients)       | 75 (weights)           |
| SimulTrain  | 30 (activations)     | 75 (delta weights)     |
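A back-of-envelope sketch of why overlap helps, using the payload sizes from the table and the emulated 4G link described below (50 Mbps, 30 ms RTT). The serialized-vs-overlapped model and the `transfer_ms` helper are illustrative assumptions, not the paper's simulator, so the numbers will not reproduce the reported 78% exactly:

```python
# Illustrative per-step communication model (not the paper's simulator).
# Payload sizes come from the table above; link parameters from the
# emulated 4G setup (50 Mbps, 30 ms RTT) described in this document.

def transfer_ms(kb: float, mbps: float, rtt_ms: float) -> float:
    """Time to move `kb` kilobytes over a link, including one RTT."""
    return (kb * 8) / (mbps * 1000) * 1000 + rtt_ms

# SyncSGD: gradient upload, then weight download, back to back.
sync = transfer_ms(75, 50, 30) + transfer_ms(75, 50, 30)

# SimulTrain: activation upload and delta-weight download overlap,
# so the slower of the two directions dominates the step.
simul = max(transfer_ms(30, 50, 30), transfer_ms(75, 50, 30))

print(f"SyncSGD:    {sync:.1f} ms/step")
print(f"SimulTrain: {simul:.1f} ms/step")
```

Even this crude model shows the overlapped schedule cutting per-step communication time roughly in half; the larger measured gains come from overlapping communication with computation as well.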

$$ w_{t+1} = w_t - \eta \nabla \ell(w_t; x_t, y_t) $$
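The update above is plain SGD. A minimal runnable sketch on a one-dimensional quadratic loss; the objective, step size, and step count are illustrative choices, not values from the experiments:

```python
# Toy loss l(w) = 0.5 * (w - 3)^2, whose gradient is (w - 3).
# eta is the learning rate from the update rule above.
def grad(w: float) -> float:
    return w - 3.0

w, eta = 0.0, 0.1
for _ in range(100):
    w = w - eta * grad(w)  # w_{t+1} = w_t - eta * grad l(w_t)

print(w)  # converges toward the minimizer w* = 3
```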

Proof sketch: the forecast term cancels the first-order bias introduced by staleness, weight reconciliation prevents error accumulation, and the pipeline yields the same effective number of gradient steps per unit time.

Hardware: Edge = Raspberry Pi 4 (4 GB RAM), Cloud = AWS g4dn.xlarge (NVIDIA T4). Network: emulated 4G (50 Mbps, 30 ms RTT) and 5G (300 Mbps, 10 ms RTT).

$$ \tilde\nabla_k = \nabla \ell(w^{(e)}_k; x_k) + \alpha \, (w^{(c)}_k - w^{(e)}_k) $$
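A minimal sketch of this corrected edge gradient, assuming NumPy vectors for the edge weights ($w^{(e)}_k$), the forecast cloud weights ($w^{(c)}_k$), and the local gradient; all names and numbers below are made up for illustration:

```python
import numpy as np

def corrected_gradient(grad_edge, w_cloud, w_edge, alpha):
    """tilde_grad_k = grad l(w_e; x_k) + alpha * (w_c - w_e).
    The second term nudges the edge step toward the forecast cloud
    weights, cancelling first-order staleness bias (per the proof sketch)."""
    return grad_edge + alpha * (w_cloud - w_edge)

# Toy example with made-up values.
w_edge = np.array([1.0, 2.0])
w_cloud = np.array([1.1, 1.9])     # forecast of the cloud copy
grad_edge = np.array([0.5, -0.2])  # local gradient at w_edge
tilde = corrected_gradient(grad_edge, w_cloud, w_edge, alpha=0.5)
print(tilde)
```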

SimulTrain sends split-layer activations, which are far smaller than raw data and, in this setup, smaller per step than full gradients (30 KB vs. 75 KB; see the bandwidth table above). Moreover, it enables bidirectional overlap, reducing the total bandwidth-time product by 65% compared to SyncSGD.

| Dataset | Centralized | SyncSGD | FedAvg (5 local steps) | SimulTrain |
|---------|-------------|---------|------------------------|------------|
| UCF-101 | 84.2%       | 83.9%   | 81.1%                  | 83.7%      |
| WISDM   | 91.5%       | 91.3%   | 88.9%                  | 91.1%      |