arxiv:2603.26610

Think over Trajectories: Leveraging Video Generation to Reconstruct GPS Trajectories from Cellular Signaling

Published on Mar 27 · Submitted by Rising0321 on Mar 31

Abstract

Cellular signaling records are transformed into GPS trajectories through map-visual video generation, achieving superior performance over traditional methods while maintaining scalability and cross-city applicability.

AI-generated summary

Mobile devices continuously interact with cellular base stations, generating massive volumes of signaling records that provide broad coverage for understanding human mobility. However, such records offer only coarse location cues (e.g., serving-cell identifiers), which limits their direct use in applications that require high-precision GPS trajectories. This paper studies the Sig2GPS problem: reconstructing GPS trajectories from cellular signaling. Inspired by the way domain experts often lay a signaling trace on a map and sketch the corresponding GPS route, and unlike conventional solutions that rely on complex multi-stage engineering pipelines or directly regress coordinates, Sig2GPS is reframed as an image-to-video generation task that operates directly in the map-visual domain: signaling traces are rendered on a map, and a video generation model is trained to draw a continuous GPS path. To support this paradigm, a paired signaling-to-trajectory video dataset is constructed to fine-tune an open-source video model, and a trajectory-aware reinforcement-learning optimization method is introduced to improve generation fidelity via rewards. Experiments on large-scale real-world datasets show substantial improvements over strong engineered and learning-based baselines, while additional results on next-GPS prediction indicate scalability and cross-city transferability. Overall, these results suggest that map-visual video generation provides a practical interface for trajectory data mining by enabling direct generation and refinement of continuous paths under map constraints.
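The paper's actual rendering pipeline is not specified in the abstract; as a rough illustration of the map-visual idea, here is a minimal sketch (all names, the bounding-box convention, and the image size are assumptions, not from the paper) that rasterizes a coarse signaling trace into an image a video model could condition on:

```python
import numpy as np

def rasterize_trace(points, bbox, size=256):
    """Rasterize a sequence of (lat, lon) points into a binary image,
    mimicking how a signaling trace might be drawn on a map canvas.

    points: iterable of (lat, lon) tuples (e.g., serving-cell positions)
    bbox:   (lat_min, lat_max, lon_min, lon_max) of the map tile
    """
    lat_min, lat_max, lon_min, lon_max = bbox
    img = np.zeros((size, size), dtype=np.uint8)
    for lat, lon in points:
        # Map geographic coordinates to pixel coordinates
        # (row 0 corresponds to the northern edge of the tile).
        r = int((lat_max - lat) / (lat_max - lat_min) * (size - 1))
        c = int((lon - lon_min) / (lon_max - lon_min) * (size - 1))
        if 0 <= r < size and 0 <= c < size:
            img[r, c] = 255
    return img
```

In a paradigm like the one described, such a rendered frame (typically composited over an actual map tile) would serve as the conditioning image, and the model would be trained to "draw" the continuous GPS path across subsequent video frames.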

Community

Paper author Paper submitter

πŸ”₯ What if trajectory data mining could be redefined by video generation models?

Traditional methods depend on complicated road-segment and temporal encodings, suffer from long-tail / imbalanced segment distributions, and often generalize poorly across scenarios.

But humans do something much more intuitive:
we plot trajectories on maps and draw the expected patterns visually. πŸ—ΊοΈβœοΈ

So we asked:
Can a video generation model learn to do the same? πŸŽ₯

The answer is yes.
Our approach achieves SOTA on both Signaling2GPS and Next GPS Prediction tasks. πŸ†πŸš€

We believe this points to a next-generation paradigm for trajectory data mining.
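The post does not quote its evaluation metric, but trajectory-reconstruction quality is commonly scored by the mean point-wise distance between the generated and ground-truth GPS paths. A minimal sketch of such a metric (function names assumed, not from the paper):

```python
import math

def haversine_m(p, q):
    """Great-circle distance in meters between two (lat, lon) points."""
    earth_radius_m = 6371000.0
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * earth_radius_m * math.asin(math.sqrt(a))

def mean_pointwise_error_m(pred, truth):
    """Average haversine distance (meters) between aligned point pairs
    of a predicted and a ground-truth GPS trajectory."""
    assert len(pred) == len(truth) and len(truth) > 0
    return sum(haversine_m(p, q) for p, q in zip(pred, truth)) / len(truth)
```

A distance-based score like this could also plausibly serve as a reward signal for the trajectory-aware reinforcement-learning optimization the abstract mentions, though the paper's actual reward design is not stated here.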


Get this paper in your agent:

hf papers read 2603.26610
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash
