
Spatial ID is a Japanese government-defined standard that represents physical space as a hierarchical grid of three-dimensional voxels, optionally extended along the time axis. Each voxel is assigned a computable identifier of the form {z}/{f}/{x}/{y} (zoom level, vertical index, and horizontal tile indices derived from longitude and latitude), enabling consistent addressing from planetary scale down to fine spatial resolutions, including sub-meter voxels (approximately 15 cm at high zoom levels such as ZL28). By abstracting space into a unified identifier system, Spatial ID allows heterogeneous datasets, whether static or dynamic, spatial or temporal, to be indexed, queried, and integrated using conventional database mechanisms rather than application-specific GIS pipelines. In this sense, Spatial ID functions not merely as a spatial representation but as an addressable, queryable framework for large-scale spatial computing.
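As a rough illustration of how such an identifier can be computed, the sketch below assumes the commonly described encoding: horizontal x/y follow standard Web Mercator slippy-map tiling, and the vertical index f divides a 2^25 m altitude range into 2^z slices (so voxels are 1 m tall at ZL25). The exact encoding is defined by the specification; this is a simplified sketch, not a reference implementation.

```python
import math

def spatial_id(lat: float, lon: float, alt_m: float, z: int) -> str:
    """Sketch of a {z}/{f}/{x}/{y} Spatial ID for one point."""
    n = 2 ** z
    # Horizontal indices: standard Web Mercator (slippy-map) tiling.
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    # Vertical index: 2^25 m total range split into 2^z slices,
    # i.e. voxel height 2^(25 - z) m (1 m at ZL25) -- an assumption
    # based on the commonly described scheme.
    f = math.floor(alt_m / 2 ** (25 - z))
    return f"{z}/{f}/{x}/{y}"
```

At ZL25 a point 10 m above the datum therefore gets vertical index 10, and neighboring voxels differ by one in x, y, or f.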
One line of our research examines Spatial ID as a spatial authoring and referencing mechanism, with Mixed Reality (MR) serving as the interface for interacting with physical space. Spatial IDs function as persistent anchors that bind digital content (annotations, attributes, and sensor-derived data) to real-world locations, and these associations are managed through a decoupled database architecture that enables flexible retrieval and cross-platform interoperability. By integrating cloud-based perception pipelines and visualizing their outputs in MR at the corresponding physical locations, this work demonstrates that Spatial ID supports accurate spatial alignment, persistent authoring, and real-time inspection, allowing Spatial ID-indexed data to be created, validated, and explored directly in physical space.
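The decoupled model described above can be sketched minimally as two stores: a queryable Spatial ID index that holds only voxel identifiers, and a separate content store keyed by opaque content ids. All names here are illustrative, not the platform's actual API.

```python
from collections import defaultdict

# Hypothetical in-memory stand-ins for the two decoupled stores:
# the Spatial ID index maps a voxel id to content ids, while the
# external database maps content ids to application payloads.
sid_index: dict[str, list[str]] = defaultdict(list)
content_db: dict[str, dict] = {}

def author(sid: str, content_id: str, payload: dict) -> None:
    """Bind a piece of authored content to a Spatial ID anchor."""
    sid_index[sid].append(content_id)
    content_db[content_id] = payload

def lookup(sid: str) -> list[dict]:
    """Resolve all content anchored at a given Spatial ID."""
    return [content_db[c] for c in sid_index[sid]]
```

Keeping the index free of payloads is what lets different applications attach their own attribute databases to the same anchors.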
At a larger scale, our research focuses on operating Spatial ID as the core indexing layer of real-time urban data infrastructures. Static city models derived from Project PLATEAU are encoded into Spatial IDs and distributed via geospatial databases and vector tiles, while dynamic IoT streams are processed at the edge, encoded into Spatial IDs in real time, and synchronized with cloud services using publish–subscribe mechanisms. This edge–cloud architecture enables database-style queries and subscriptions over four-dimensional data (space and time), supporting low-latency access from web-based and immersive clients. City-scale deployments in Tokyo demonstrate that Spatial ID can unify static and dynamic urban data within a single operational framework, rather than as loosely coupled, application-specific pipelines.
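One way the publish–subscribe step can be sketched is to derive one topic per voxel from the Spatial ID, so that clients subscribe by spatial extent using topic wildcards. The topic scheme and field names below are illustrative assumptions, not the deployed system's wire format.

```python
import json
import time

def to_topic(sid: str) -> str:
    # Hypothetical topic scheme: "spatialid/{z}/{f}/{x}/{y}", so an
    # MQTT-style subscriber could use wildcards such as
    # "spatialid/25/+/29803808/#" to watch a spatial extent.
    return "spatialid/" + sid

def encode_reading(sid: str, sensor_id: str, value: float):
    """Encode one edge-side sensor reading for publication."""
    payload = json.dumps({
        "sensor": sensor_id,
        "value": value,
        "ts": time.time(),  # the time axis of the 4D identifier
    })
    return to_topic(sid), payload
```

An edge node would call `encode_reading` for each incoming measurement and hand the (topic, payload) pair to its broker client; the cloud backend subscribes and updates the queryable index.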
Our future focus is the deployment of a high-performance Spatial ID database architecture capable of supporting near-instantaneous queries at metropolitan scale, including public operation of the database as an openly accessible infrastructure. Several challenges become dominant at this scale. First, the number of Spatial IDs grows exponentially with zoom level (roughly eightfold per level for 3D voxels) and proportionally with query volume, so at high zoom levels even modest increases in query area translate into large jumps in query time and I/O, making large-area, high-resolution queries impractical unless strict area constraints and query policies are enforced. Second, dataset size is a fundamental constraint: a full Spatial ID dataset for Tokyo’s 23 wards at ZL25 already exceeds 150 GB in raw form, effectively ruling out single-server deployments and requiring distributed compute, indexing strategies, and infrastructure-level design to support even basic queries. The most difficult problem, however, is deciding which data should be queryable at which zoom level. Fine-grained data such as individual sensor locations or detailed metadata are appropriate at building or street scale, but exposing the same data to city- or region-scale queries would require aggregating enormous numbers of records and would not scale. The data ingest model itself must therefore incorporate zoom-level-dependent data availability, aggregation, and compaction policies. Addressing these issues is essential to establishing Spatial ID as a practical, trustworthy indexing backbone compatible with existing computing systems and future spatial computing workloads.
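A back-of-envelope estimate makes the scaling pressure concrete. Assuming the ~1 m ZL25 voxel size implied above and a simple latitude correction (both illustrative assumptions), the voxel count for even a small high-resolution query is enormous:

```python
import math

EQUATOR_M = 40_075_016.686  # Earth's equatorial circumference in meters

def voxel_count(area_km2: float, height_m: float, z: int,
                lat_deg: float = 35.7) -> int:
    """Rough number of ZL-z voxels covering a column of space."""
    # Approximate horizontal edge of one voxel at this latitude.
    edge = EQUATOR_M * math.cos(math.radians(lat_deg)) / 2 ** z
    columns = (area_km2 * 1e6) / edge ** 2
    # Vertical slices, assuming 2^(25 - z) m voxel height.
    slices = max(1, math.ceil(height_m / 2 ** (25 - z)))
    return int(columns * slices)
```

Under these assumptions, 1 km² to a height of 100 m at ZL25 is already on the order of 10^8 voxels, and the count scales linearly with area and height but eightfold per additional zoom level, which is why area limits and zoom-dependent aggregation policies are unavoidable.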
@inproceedings{Djelloul2026,
title = {Spatial ID Authoring in Mixed Reality: A Unified Platform Integrating Anchoring and Visualization},
author = {Sami Brahim Djelloul and Yanru Chen and Alex Orsholits and Manabu Tsukada},
year = {2026},
date = {2026-01-26},
booktitle = {IEEE International Conference on Artificial Intelligence and Virtual Reality (AIxVR), Work in Progress (WiP)},
address = {Osaka, Japan},
abstract = {We present a Mixed Reality (MR) authoring platform that leverages Spatial IDs as a standardized reference to align digital content with the physical environment. The platform enables users to tag and annotate space through Spatial IDs, with annotations managed through a decoupled architecture where a queryable Spatial ID index links spatial entities to external databases containing attributes and application-specific content. To support this, we integrate two enabling components. First, anchoring: Spatial IDs provide a hierarchical four-dimensional indexing system that ensures consistent alignment across datasets and localization methods. Second, visualization: anchored Spatial IDs allow both real-world data streams and authored datasets to be rendered in MR. As a proof of concept, we integrate cloud-based object detection from an external depth camera, streaming recognized objects into MR and visualizing them at their true physical locations. Together, these components demonstrate how Spatial IDs enable accurate visualization and persistent authoring, offering a unified pipeline for next-generation spatial computing.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@inproceedings{Chen2026,
title = {Spatial ID-Driven Edge-Cloud Architecture for Real-Time Urban Digital Twins},
author = {Yanru Chen and Sami Brahim Djelloul and Alex Orsholits and Manabu Tsukada and Hiroshi Esaki},
doi = {10.1109/CCNC65079.2026.11366402},
year = {2026},
date = {2026-01-08},
urldate = {2026-01-08},
booktitle = {IEEE Consumer Communications \& Networking Conference (CCNC2026)},
address = {Las Vegas, USA},
abstract = {The integration of static geospatial datasets and real-time IoT streams is essential for responsive and scalable urban Digital Twins (DTs). However, current infrastructures remain fragmented across domains, formats, and reference systems, limiting interoperability and city-scale deployment. This paper presents the first city-scale implementation of a Spatial ID-driven edge-cloud architecture that unifies heterogeneous static and dynamic urban data under a hierarchical four-dimensional identifier. Unlike prior DT systems that rely on ad hoc tiling or local schemas, our design operationalizes Spatial ID as a universal indexing layer across batch and streaming pipelines, enabling multi-resolution queries, real-time synchronization, and cross-domain interoperability. A prototype deployment in Tokyo's Chiyoda and Bunkyo wards demonstrates the approach, integrating 3D city models with live IoT streams. Static data are encoded into Spatial IDs and distributed via a geospatial database and vector tiles, while dynamic streams are processed at the edge and synchronized with a cloud backend using a publish/subscribe model. The system supports real-time encoding, querying, distribution, and web/Mixed Reality (MR)-based visualization. Evaluation shows millisecond-to-second query performance over 148 million records, sub-100 ms vector tile delivery, and real-time IoT stream processing at 30 fps. These results establish Spatial ID not only as a conceptual framework but as a practical, deployable foundation for interoperable, low-latency, and scalable Digital Twin infrastructures aligned with the vision of Society 5.0.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
We are part of the University of Tokyo’s Graduate School of Information Science and Technology, Department of Creative Informatics, and our work focuses on computer networks and cyber-physical systems.
Address
4F, I-REF building, Graduate School of Information Science and Technology, The University of Tokyo, 1-1-1, Yayoi, Bunkyo-ku, Tokyo, 113-8657 Japan
Room 91B1, Bld 2 of Engineering Department, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan