
VR CHROMA: Prototyping Wonderment & Care

 

Authors

Arthur Clay, HSLU

Dario Lanfranconi, HSLU

Anh Nguyen, HSLU

 

Abstract:

 

The VR CHROMA project is a virtual reality (VR) exhibit that blends digital and traditional artworks in a fully immersive environment. This paper provides essential information about the exhibition, including how visitors interact with key elements such as the Chroma Tower, the AR portals, and the polychrome sculptures. The exhibit features AR artworks by the artists John Craig Freeman (USA), Lily & Honglei (CHN), and Arthur Clay (CHE), alongside digital twins of sculptures from the collection of the Museo Archeologico Nazionale di Napoli (MANN). Users can explore the virtual space and experience how color changes character as light shifts through an artificial day-to-night cycle, creating dynamic visual interactions. The paper also explains the technical processes behind the preservation of vintage AR artworks, ensuring that the audience understands both the visual experience and the historical significance of the exhibit.

 



Fig. 1. The VR headset used for viewing the VR Chroma Demonstrator and its hybrid light-projection stand.

 

 

Introduction

 

The VR CHROMA Demonstrator is a prototype virtual reality (VR) application that brings the concept of the Open Space Museum to life by recreating it in a virtual environment featuring two key elements: the Chroma Tower and the AR portals. The demonstrator also includes a series of Augmented Reality (AR) artworks, imported into the VR environment as part of a campaign to migrate vintage AR pieces from their original, now outdated, technical framework into a contemporary setting so that they can be shared with the public. Another key feature of the work is the presentation of case study objects from Scenario 2 of the PERCEIVE project, in which polychrome sculptures are integrated into the virtual environment. This allows users to experience the contrast between digitally created and traditional, non-digital artworks. Ultimately, the VR headset enables users to explore architectural structures, engage with AR artworks, and immerse themselves in a virtual representation of the Open Space Museum.

 


Fig. 2. QR code link to the video teaser for the VR Chroma Project. Scan to view. VR Chroma Trailer: Dario Lanfranconi, 2024.

 

Navigation

 

The VR CHROMA experience is designed primarily as a sit-and-spin application. Users can explore the environment by casting a light beam from the controller to any point they wish to move to, allowing them to navigate through all available locations and levels. While users can move freely within a level by directing the light beam onto the ground, to access a different level, they must cast the beam onto one of the three AR portals. To return to the base level, users simply cast the light beam onto the AR portal once more.
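
As a rough illustration of this scheme, the following Unity C# sketch casts a ray from the controller and either teleports the user within the level or loads the level behind a portal. The tag names, the scene-loading call, and the rig wiring are assumptions for the sketch, not the project's actual code.

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Minimal sketch of the beam navigation described above. The "Ground"
// and "ARPortal" tags and the controller/rig references are illustrative.
public class BeamTeleport : MonoBehaviour
{
    public Transform controller;  // tracked VR controller casting the beam
    public Transform rig;         // root of the user's play space

    void Update()
    {
        if (!Input.GetButtonDown("Fire1")) return; // controller trigger
        if (Physics.Raycast(controller.position, controller.forward,
                            out RaycastHit hit, 100f))
        {
            if (hit.collider.CompareTag("ARPortal"))
                SceneManager.LoadScene(hit.collider.name); // enter the portal's level
            else if (hit.collider.CompareTag("Ground"))
                rig.position = hit.point; // move the user to the beam's target
        }
    }
}
```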

 


 


Fig. 3. The illustration depicts the navigation strategy for the VR Chroma Project.

 

Virtual and Real-World Color

 

As a simulated environment, the VR CHROMA Demonstrator visualizes the spatial arrangement of the Chroma Tower and surrounding AR portals, showcasing the dynamic interplay of color, light transmission, and reflection under various times of day and lighting conditions.

 

By blending tangible and digital components, the CHROMA demonstrator offers viewers a captivating and immersive experience. It highlights how real-world color and its virtual counterparts can interact, showcasing a dynamic interplay of hues across both physical and digital spaces.

 

To enhance the perception of color shifts, the CHROMA demonstrator enriches the visual experience through variations in lighting between day and night, ensuring that viewers constantly engage with evolving color interactions shaped by the shifting quality of light.
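
As a sketch of how such a cycle can be driven, the Unity C# below rotates a directional "sun" light over a configurable loop while shifting its color and intensity with the time of day; the parameter values and the gradient are illustrative, not the demonstrator's actual settings.

```csharp
using UnityEngine;

// Minimal sketch of an artificial day-to-night cycle: the sun rotates
// once per loop while its color and intensity track the time of day.
public class DayNightCycle : MonoBehaviour
{
    public Light sun;                 // the scene's directional light
    public float cycleSeconds = 120f; // length of one full day-night loop
    public Gradient sunColor;         // warm at dawn/dusk, neutral at noon

    void Update()
    {
        float t = (Time.time % cycleSeconds) / cycleSeconds; // 0..1
        // Sweep the sun across the sky: t = 0 dawn, t = 0.5 dusk.
        sun.transform.rotation = Quaternion.Euler(t * 360f - 90f, 170f, 0f);
        sun.color = sunColor.Evaluate(t);
        // Fade intensity to zero during the night half of the cycle.
        sun.intensity = Mathf.Clamp01(Mathf.Sin(t * Mathf.PI));
    }
}
```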

 

Overall, the VR demonstrator merges cutting-edge technology with artistic innovation, offering a multisensory exploration of color in both real and virtual contexts. It provides a unique glimpse into the design of the Open Space Museum, illustrating how it presents Born Digital Art through the lens of color and light dynamics.

 

Methodology

 

The VR CHROMA demonstrator is built using VR technologies, with content drawn from digital twins of historical polychrome sculptures and from vintage Augmented Reality artworks.

 

Although alternative approaches, including hybrid methods that combine AI with manual intervention, are being tested, the textures of the digital twins of the polychrome sculptures are at present generated using AI-based techniques, specifically style transfer. These textures allow for flexibility in presenting alternate coloring possibilities for the digital twins.

 

The implementation of the glTF KHR_materials_variants extension further enhances the user experience by enabling seamless changes in lighting and material conditions, providing insights into how the sculptures might have appeared in different historical contexts.
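
The extension stores the list of named variants at the root of the glTF document and, for each mesh primitive, a mapping from variants to materials. The fragment below is a minimal sketch of that structure; the variant names and indices are illustrative.

```json
{
  "extensions": {
    "KHR_materials_variants": {
      "variants": [
        { "name": "currentState" },
        { "name": "reconstructedPolychromy" }
      ]
    }
  },
  "meshes": [{
    "primitives": [{
      "material": 0,
      "extensions": {
        "KHR_materials_variants": {
          "mappings": [
            { "material": 1, "variants": [ 1 ] }
          ]
        }
      }
    }]
  }]
}
```

A viewer that supports the extension can then switch the whole model between variants at runtime without reloading geometry.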

 



Fig. 4. Screenshots of the Chroma Tower and the AR portals of the entry scenario; the Space Art Gallery artwork by Arthur Clay; the Orators and Propaganda Stands by John Craig Freeman; and the Crystal Coffin by artists Lily & Honglei.

 

 

Migration from Past to Present

 

With the AR artworks used in the VR Chroma project, we were confronted with the .l3d file format, developed for the now-deprecated Layar app. The format was primarily designed to convert the widely used Wavefront .obj format for use within Layar, optimizing performance and compatibility within the app's ecosystem. It allowed for faster loading and better handling of 3D data while giving Layar control over the integration of custom features and content security. Given its proprietary nature, all assets created in this earlier format were likely developed as OBJ assets.

 

However, the need for a special converter to translate the assets back into OBJ format, when they were only available as .l3d files, proved to be a challenge. This process involved locating the program, finding machines capable of running it, and dealing with the imperfect migration of those assets back into OBJ files, which often resulted in some data loss or misalignment during conversion. To fix the textures and misalignments in the AR artworks, we used AI to upscale the original low-resolution textures and replaced the incompatible UV mapping with new, high-resolution textures using texture tiling. The misalignments were mostly corrected manually to ensure proper material projection and alignment, enhancing the visual quality and adapting the assets for modern VR environments.

 

An OBJ asset typically consists of three essential resource files: 1. the .obj file (the geometry of the object); 2. the .mtl file (the material and UV mapping information); and 3. the texture files (commonly stored in .jpg or .png formats). When these dependent files were made available by the conversion process, we were able to migrate them accurately by importing them into Unity, a modern development tool that fully supports OBJ and allowed the original asset's geometry and texture to be reproduced.
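
To illustrate how the three files reference one another, here is a minimal, hypothetical fragment; the file and material names are invented.

```
# sculpture.obj -- geometry; declares its material library and material
mtllib sculpture.mtl
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
vt 0.0 0.0
vt 1.0 0.0
vt 0.0 1.0
usemtl PaintedMarble
f 1/1 2/2 3/3

# sculpture.mtl -- material definition and UV-mapped texture reference
newmtl PaintedMarble
Kd 1.0 1.0 1.0
map_Kd sculpture_diffuse.png
```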

 

Because the OBJ format is simple and widely adopted, it is expected to remain compatible with AR and 3D modeling software, including Unity, well into the future. By converting the .l3d files back into OBJ format, we ensured the continued viability and adaptability of these older digital assets, allowing us to successfully integrate them into Unity, even as file formats and platforms evolve.

 

MySQL Use

 

The scale, orientation, and location of artworks created with Layar were managed by a MySQL database. Since the MySQL tables were available for the works integrated into VR Chroma, we were able to consider their contents during the migration. Without the original database attribute values, however, accurately reproducing the behavior of the artworks would be challenging, as these values would need to be inferred from documentation or through collaboration with the artist who created the work.
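
The sketch below suggests the kind of attributes involved; it is a hypothetical schema with illustrative column names, not Layar's actual one.

```sql
-- Hypothetical sketch of per-object placement attributes of the kind
-- Layar kept in MySQL; names and types are illustrative.
CREATE TABLE ar_objects (
  id         INT PRIMARY KEY,
  title      VARCHAR(255),
  lat        DECIMAL(10, 7),  -- geolocation of the anchor point
  lon        DECIMAL(10, 7),
  altitude   FLOAT,           -- meters above ground level
  scale      FLOAT,           -- uniform scale factor
  angle      FLOAT,           -- rotation around the vertical axis
  object_url VARCHAR(512)     -- link to the .l3d asset
);
```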

 

Exceptions arise when animated textures or audio tracks are used: such animated media consisted solely of a series of up to 12 images shown over a freely chosen loop duration, which was the sum of the time allotted to each individual image in the animation.

 

In Layar, audio was achieved simply by adding a file as a URL in the database; it would then be played back either at a specific time after the project opened or when a particular action was taken by the user, but without any possibility of synchronizing it with animated textures.

 

In the Orators work, for example, the movie textures seen on the stands were originally created from such a series of looping images, and the work handled audio in a similar fashion. Neither approach translated well to Unity during the migration, as the media needed to be better aligned in time as well as mapped to the assets.
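
In Unity, such a flipbook can be reproduced with a simple coroutine that shows each frame for its allotted time, which also provides a hook for starting audio in step with the loop. The following is a minimal sketch under those assumptions, not the migration's actual code.

```csharp
using System.Collections;
using UnityEngine;

// Minimal sketch of a Layar-style flipbook in Unity: up to 12 frames,
// each displayed for its own duration; the loop length is their sum.
public class FlipbookTexture : MonoBehaviour
{
    public Texture2D[] frames;      // the image series (up to 12 in Layar)
    public float[] frameDurations;  // seconds allotted to each frame
    public AudioSource audioTrack;  // optional audio, restarted each loop

    IEnumerator Start()
    {
        var rend = GetComponent<Renderer>();
        while (true)
        {
            if (audioTrack != null) audioTrack.Play();
            for (int i = 0; i < frames.Length; i++)
            {
                rend.material.mainTexture = frames[i];
                yield return new WaitForSeconds(frameDurations[i]);
            }
        }
    }
}
```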

Geofencing / Geolocation

 

One of the most important issues is that Layar used true GPS geolocation. In contrast, most recently developed AR authoring tools rely on newer mobile phones' plane detection and spatial recognition capabilities, meaning that most AR experiences now use geofencing rather than true GPS geolocation.

 

Since geolocation is only as accurate as a mobile phone's ability to determine its position—which can be affected by satellite line of sight and other disruptions—most development platforms do not use true geolocation methods. Instead, the experience is confined within a geofenced area. Layar, however, allowed objects to be viewed at any distance permitted by the scale, a feature that artists took advantage of. For example, by scaling an object upward and placing it farther away, they could stabilize it and prevent it from "jumping" due to the inherent limitations of GPS accuracy.
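
For context, true geolocation placement amounts to turning the latitude/longitude difference between the user and the object into a local offset, as in the hedged sketch below (an equirectangular approximation, adequate over short distances). Any jitter in the GPS fix feeds directly into this offset, which is exactly the instability that the scaling trick worked around.

```csharp
using UnityEngine;

// Hedged sketch of GPS-based placement: converts the lat/lon difference
// between user and object into a local offset in meters
// (x = east, z = north) using an equirectangular approximation.
public static class GeoPlacement
{
    const double EarthRadius = 6378137.0; // WGS84 equatorial radius, meters
    const double Deg2Rad = System.Math.PI / 180.0;

    public static Vector3 Offset(double userLat, double userLon,
                                 double objLat, double objLon)
    {
        double north = (objLat - userLat) * Deg2Rad * EarthRadius;
        double east  = (objLon - userLon) * Deg2Rad * EarthRadius *
                       System.Math.Cos(userLat * Deg2Rad);
        return new Vector3((float)east, 0f, (float)north);
    }
}
```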

 

Although frameworks such as Hoverlay have implemented a true north orientation feature, true geolocation would need to be developed and implemented separately to recapture the intentions of artists who used GPS geolocation as a tool for stabilizing objects and bringing life to assets in their work by creating movement across the areas where they were installed.

 

Color in AR

 

As the VR Chroma Project is a tool for studying the needs of an exhibition of AR artworks, an interest arose in providing some form of color calibration that would work across viewing devices. Although still in an early phase of enquiry, our approach to AR color calibration attempts to introduce a user-driven solution that shifts the process from a technical challenge to an interactive experience.

 

By incorporating comparison elements, such as virtual leaves, and providing a calibration panel in AR scenes, users can fine-tune color settings themselves by comparing virtual elements to real-world references. This empowers users to harmonize virtual and physical colors, facilitating more accurate color blending even though perfect calibration is out of reach. The concept can be extended to larger AR exhibitions by using consistent real-world motifs to synchronize colors across virtual assets. While absolute color accuracy remains elusive, this method offers a practical, engaging way for users to adjust color in real time, moving the discussion towards creative, user-focused strategies.
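
A minimal sketch of such a user-driven control, assuming per-channel gain sliders whose result is applied as a tint to the calibrated assets (component and property names are illustrative):

```csharp
using UnityEngine;

// Minimal sketch of a user-driven calibration panel: the user adjusts
// RGB gains until a virtual reference patch matches its real-world
// counterpart; the resulting tint is applied to all calibrated assets.
public class ColorCalibrationPanel : MonoBehaviour
{
    [Range(0.5f, 1.5f)] public float gainR = 1f; // slider-driven gains
    [Range(0.5f, 1.5f)] public float gainG = 1f;
    [Range(0.5f, 1.5f)] public float gainB = 1f;

    public Material[] calibratedMaterials; // virtual assets to tint

    void Update()
    {
        var tint = new Color(gainR, gainG, gainB);
        foreach (var m in calibratedMaterials)
            m.SetColor("_Color", tint); // Standard shader's base tint
    }
}
```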

 

There are similar features built into the Hoverlay framework. For instance, Hoverlay employs real-time light estimation in place of baked lighting: when you place a 3D model into Hoverlay, the app estimates the current scene's light intensity and warmth (based on the video feed) and applies similar lighting to the 3D content. Additionally, Hoverlay uses real-time environment probes, dynamically creating an environment map based on the camera's video feed. This map is then used to light reflective objects more realistically, such as a virtual metal ball that "reflects" the room it's in. As a result, the same 3D objects with materials that interact with light (rather than unlit materials) will render colors differently depending on external conditions such as light and location.
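
Hoverlay's internals are not public, but a comparable effect can be approximated with Unity's ARFoundation light estimation, as in this hedged sketch: the camera's per-frame estimates drive a directional light that illuminates the AR content.

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Hedged sketch of real-time light estimation in the spirit of the
// Hoverlay behavior described above, using ARFoundation.
public class EstimatedSceneLight : MonoBehaviour
{
    public ARCameraManager cameraManager; // supplies per-frame estimates
    public Light sceneLight;              // light applied to AR content

    void OnEnable()
    {
        sceneLight.useColorTemperature = true;
        cameraManager.frameReceived += OnFrame;
    }

    void OnDisable() { cameraManager.frameReceived -= OnFrame; }

    void OnFrame(ARCameraFrameEventArgs args)
    {
        var est = args.lightEstimation;
        if (est.averageBrightness.HasValue)
            sceneLight.intensity = est.averageBrightness.Value;
        if (est.averageColorTemperature.HasValue)
            sceneLight.colorTemperature = est.averageColorTemperature.Value;
    }
}
```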

 

This capability is unlike any other way to experience 3D models, even when compared to traditional VR. In VR environments, light intensity and reflections are typically pre-set and cannot adapt to the real-world environment. In contrast, Hoverlay dynamically adjusts both lighting and reflections in real time, based on the actual surroundings captured through the camera. While not directly tied to color calibration, these features offer an innovative approach to interacting with digital assets, further highlighting the dynamic relationship between color, light, and environment in AR.

 

Results

 

The VR CHROMA project demonstrates how digital and traditional artworks can coexist and be explored in immersive VR environments. By allowing users to engage with both digitally born and historically significant artifacts in a unified virtual space, the project opens new possibilities for virtual museum experiences. Future work includes refining the AI-based texture generation process for greater accuracy and exploring additional use cases for the glTF KHR_materials_variants specification in AR applications.

 

Furthermore, the VR CHROMA application successfully delivers a dynamic, immersive experience of the Open Space Museum, allowing users to explore multiple levels of AR artworks. Key features include:

 

Chroma Tower: A central architectural structure in the virtual environment that interacts with AR artworks.
AR Portals: Entryways connecting users to different artworks, such as "Propaganda Stands and Orators" by John Craig Freeman, "The Crystal Coffin" by Lily & Honglei, and the "Space Art Gallery" by Arthur Clay.
Polychrome Sculptures: Digital representations of traditional sculptures, displayed within the virtual environment, that allow users to explore different coloring possibilities using AI-generated textures.

 

Conclusion

 

The VR CHROMA project highlights the potential for preserving and revitalizing vintage AR artworks within a modern VR environment. By converting outdated file formats, addressing texture misalignments, and leveraging AI-based upscaling, these digital assets can be adapted for contemporary platforms like Unity. This approach not only ensures the continued accessibility of historically significant AR works but also enhances their visual quality for immersive exploration in virtual museum settings, while maintaining their authenticity.

 

References

 

Hoverlay.
https://www.hoverlay.com

glTF KHR_materials_variants specification, Khronos Group.
https://github.com/KhronosGroup/glTF/tree/main/extensions/2.0/Khronos/KHR_materials_variants

Farnese Collection, MANN Museum.
https://mann-napoli.it/en/farnese-collection

John Craig Freeman, participating artist.
https://johncraigfreeman.wordpress.com

Lily & Honglei, participating artist.
https://lilyhonglei.net

Arthur Clay, participating artist.
https://artclayportfolioonline.blogspot.com/

 

Acknowledgments

 

We would like to thank the MANN Museum for providing access to the Farnese Collection, as well as the participating artists: John Craig Freeman, Lily & Honglei, and Arthur Clay. Special thanks to the PERCEIVE project team for their ongoing support.

 

Contact Information

Arthur Clay (Lead Contact), HSLU

Email: arthur.clay@hslu.ch