Experiments

Experiment 1: Content creation and rediscovery with another client

We tested not only the functional readiness of the new Unity client but also the interoperability between different clients. The first screenshot below shows the opening screen (after login) of the Unity client. It lists all services discovered at the current location. The coarse location is determined by the client via GPS, but it is converted to a much coarser H3 index, and only the H3 index is shared with the OSCP. The SSD (spatial service discovery) returned 3 available services: AugmentedCity for localization, and AugmentedCity and OrbitSCD for content discovery. The user selected AC for localization and OrbitSCD (spatial content discovery) for content. Next, when the user pressed the “Localize” button, a photo was captured and uploaded to the AC VPS, which returned the GeoPose of the user. With this GeoPose, after conversion to H3, the client app queried the available content from OrbitSCD. The space already contained 3 models: a duck, a humanoid robot, and a sculpture consisting of a set of colourful ellipses.

The user then pressed the Plus sign, indicating that a new object should be created. The client app offers four different models. This list is currently fixed, but it could be generated dynamically based on what is available in the user’s file storage. The user selected the fox model and filled in the optional title and description; the model was then added to the scene, and the corresponding spatial content record (SCR) was written into OrbitSCD.
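To illustrate this flow at the protocol level, the sketch below shows how a client could derive the coarse H3 index from a GPS fix and query the discovery services. The endpoint URLs, query parameters, and response fields are placeholders rather than the actual OSCP API; the h3 (v4) and requests packages are assumed.

```python
# Minimal sketch of the discovery flow (hypothetical endpoints and field names).
import h3        # pip install h3 (v4 API assumed)
import requests

def coarse_h3_index(lat: float, lon: float, resolution: int = 8) -> str:
    """Convert a GPS fix to a coarse H3 cell; only this index leaves the device."""
    return h3.latlng_to_cell(lat, lon, resolution)

def discover_services(ssd_url: str, h3_index: str) -> list:
    """Ask the SSD which localization / content discovery services cover this cell."""
    r = requests.get(ssd_url, params={"h3Index": h3_index})
    r.raise_for_status()
    return r.json()  # e.g. a list of service records (GeoPose VPS, SCD, ...)

def discover_content(scd_url: str, h3_index: str) -> list:
    """Ask the selected SCD for spatial content records (SCRs) in this cell."""
    r = requests.get(scd_url, params={"h3Index": h3_index})
    r.raise_for_status()
    return r.json()

if __name__ == "__main__":
    cell = coarse_h3_index(40.5, -74.4)   # example GPS fix
    services = discover_services("https://ssd.example.org/services", cell)
    content = discover_content("https://scd.example.org/scrs", cell)
    print(cell, len(services), len(content))
```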

Starting the Unity app on a different phone, another user could discover 4 pieces of content, including the newly created fox, all of which appeared at the same locations as on the first client.

To verify that the SCR was indeed added to the database in an interoperable manner, another user started the WebXR-based OSCP client (which runs in Android Chrome 92+). The opening dashboard showed similar information to what was seen in the Unity client: the H3 index of the device’s location, the country code, and the available services. The user localized with the AC VPS, and 4 content entries were automatically retrieved. The new fox appeared next to the robot model, where it should be.

 

While the locations of the objects are correct, their orientations are not yet consistent between the creator session and the viewing session. The reason is that we currently write the identity orientation (the [0,0,0,1] quaternion) into the spatial content record. In the future, we plan to let the user tap on the floor to select the new object’s position and to use our simple model editor to adjust the new object’s orientation during the creator session.
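For illustration, the snippet below sketches how such a record could be assembled before it is posted to the content discovery service. The field names loosely follow the GeoPose position/quaternion structure but are simplified placeholders, not the exact OSCP SCR schema; writing the identity quaternion is what causes the orientation mismatch described above.

```python
# Simplified sketch of a spatial content record (field names are illustrative).
def build_scr(title: str, description: str, asset_url: str,
              lat: float, lon: float, h: float) -> dict:
    return {
        "type": "scr",
        "content": {
            "title": title,
            "description": description,
            "refs": [{"contentType": "model/gltf-binary", "url": asset_url}],
            "geopose": {
                "position": {"lat": lat, "lon": lon, "h": h},
                # Identity orientation: this is why viewers currently see
                # inconsistent rotations between creator and viewing sessions.
                "quaternion": {"x": 0.0, "y": 0.0, "z": 0.0, "w": 1.0},
            },
        },
    }
```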

 

Experiment 2: Live IoT sensor stream visualization

In this project, we implemented live sensor stream visualization in augmented reality. In particular, we implemented a new content type in OSCP that links to a Unity AssetBundle, which may contain not only 3D models but also scripts and other metadata that allow for customization and dynamic generation of content. We used such an asset bundle to describe an MQTT pipe and to generate AR widgets on the fly that can connect to an MQTT topic, retrieve sensor values, and show them as floating bubbles above a physical object.
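The actual widget logic lives in the Unity asset bundle, but the data path it implements is plain MQTT. As a rough, protocol-level illustration (not the Unity code itself), the sketch below subscribes to a hypothetical sensor topic with the paho-mqtt package (1.x callback API assumed) and hands each incoming value to a display callback; the broker address and topic name are placeholders.

```python
# Protocol-level sketch of the AR widget's data path (not the Unity implementation).
import paho.mqtt.client as mqtt  # pip install paho-mqtt (1.x callback API assumed)

BROKER = "broker.example.org"        # placeholder public broker
TOPIC = "winlab/orbit/node1/rssi"    # hypothetical sensor topic

def update_widget(node: str, value: float) -> None:
    """Stand-in for updating the floating AR bubble above the physical object."""
    print(f"{node}: {value:.1f} dBm")

def on_message(client, userdata, msg):
    # Each message carries one sensor reading as a plain-text number.
    update_widget(msg.topic, float(msg.payload.decode()))

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(TOPIC)
client.loop_forever()
```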

This use case requires real-time streaming of radio frequency data from software-defined radios to AR applications. As a proof of concept, we implemented a floating display of received signal strength, which demonstrates the utility of AR visualization for viewing and interpreting real-time RF experiment data. Our sample experiment consisted of one transmitter and four receiver nodes in the ORBIT wireless testbed.

In the experiment, the strength of the transmitted signal was varied over time by a controller running on the experimenter’s computer, and the signal was received on four antennas arranged in a spatial grid. The signal strength measured at each antenna was streamed to a publicly available MQTT broker. The records in the MQTT broker were then accessed by the Aurora Unity application and used to update the values in the AR overlay displayed next to the rack holding the antennas.
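On the testbed side, the publishing step can be sketched in a few lines. The broker address, topic layout, and the way RSSI values are obtained from the receivers are placeholders here; the actual experiment control scripts are not shown in this report.

```python
# Sketch of the testbed-side publisher: one RSSI value per receiver node and interval.
import time
import random
import paho.mqtt.client as mqtt  # paho-mqtt 1.x callback API assumed

BROKER = "broker.example.org"    # placeholder for the public broker
NODES = ["node1", "node2", "node3", "node4"]

def read_rssi(node: str) -> float:
    """Placeholder for querying the SDR receiver; here we just fake a reading."""
    return -40.0 - random.random() * 30.0

client = mqtt.Client()
client.connect(BROKER, 1883)
client.loop_start()

while True:
    for node in NODES:
        # One topic per antenna; the AR widget subscribes to the same topics.
        client.publish(f"winlab/orbit/{node}/rssi", f"{read_rssi(node):.1f}")
    time.sleep(1.0)
```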

Note that by simply changing the MQTT topic names in the asset bundle and switching from the radios to any other MQTT source (such as https://play.google.com/store/apps/details?id=com.lapetov.mqtt), one could implement a whole line of compelling demonstrators.

We also planned a collaborative drawing application in which multiple users can draw scribbles in the air in 3D by waving their client devices. This use case requires real-time sharing of the users' poses and interactions, which would be distributed to all participants via MQTT. However, due to time constraints, we were not able to finish this application.

 

Experiment 3: Reconstruction from capture dataset

While the new visual positioning system of GMU is not yet finalized and therefore not fully integrated with our system, we performed partial reconstruction tests. In particular, we tested whether our new image and metadata collector app simplifies the map creation process. Indeed, we found that the output of our new Android capture app can be used directly as input to GMU’s mapping pipeline.

The SfM (Structure from Motion) pipeline first extracts features from the images taken by the image collector and then performs 2D feature matching among the images to find corresponding features in overlapping images. Finally, SfM triangulates the 2D matches to calculate the 3D coordinates of the features and the poses of the images. The figure below shows the 3D reconstruction of the laboratory with the reconstruction tool COLMAP, using our capture dataset as input.
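For reference, the same three stages can be driven programmatically through COLMAP's Python bindings. The sketch below follows the pycolmap interface as we understand it (function names and signatures may differ between pycolmap versions), with placeholder paths for the capture dataset.

```python
# Sketch of the feature extraction -> matching -> mapping stages with pycolmap.
# (pycolmap API assumed; exact function names may vary between versions.)
from pathlib import Path
import pycolmap

image_dir = Path("capture_dataset/images")   # images from the Android capture app
work_dir = Path("reconstruction")
work_dir.mkdir(exist_ok=True)
database = work_dir / "database.db"

# 1. Detect and describe 2D features in every captured image.
pycolmap.extract_features(database, image_dir)

# 2. Match features between image pairs to find correspondences in overlapping views.
pycolmap.match_exhaustive(database)

# 3. Triangulate matches into 3D points and estimate the camera pose of each image.
maps = pycolmap.incremental_mapping(database, image_dir, work_dir)
maps[0].write(work_dir)                      # save the first reconstructed model
```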

 

AuroraViewer (Unity client) in WINLAB: discovering services, discovering the existing content, and creating a new fox object.

 

Sparcl (WebXR client) in WINLAB: discovering the available services, the existing content, and the newly created fox.

 

Overview of the interaction flow in our live IoT data visualization scenario.

 

3D reconstruction of the laboratory with open-source reconstruction tools, using as input the capture dataset we collected with our app. The coloured dots are the feature points of the room, and the red rectangles represent the individual images (captured in panorama circles).