Prosthetic Memory: Week 8 — Mechanical Eye

Joey Reich
Published in The Mechanical Eye
Nov 29, 2020

The concept is still a latent prosthesis, lingering in its adaptation of assembled materials and memories working toward an engageable whole. It is still fine-tuning, fitting, and establishing its complexities. At the moment, it holds its digital formwork and has started acknowledging the captured visual memory of the spaces it holds. This process of virtual embodiment is now accessible, although its moments are destabilized, having been redefined in terms of the fallacious nature of memory, which similarly destabilizes the physical hierarchy of the spaces engaged. In this way, the transition between spaces is likewise unmoored from its physical realities in favor of something novel to these memories yet completely familiar to our collection of memories, as we adapt our behaviors to promote engagement within this cognitive enclave.

As it stands, the space is a combination of visual and digital embodiments of physical spaces that have been fitted for the purposes of the prosthetic. Engagement within it still awaits further manipulation of the unassigned surfaces in order to fully encompass the utility of the amalgam of these spaces, and more specifically the redefinition of those spaces as they pertain to engagement within the window as a machine for viewing. As discussed in the last post, this mechanism is deeply entrenched in the gaze and in the hijacking of that faculty to alter and explore what is, what is before, and what is beyond the frame. Manipulating that which is unnoticed is what comes next.

My Technik for this step relied largely on translating captured imagery and textures into meshes that are assembled into the digital scene and then brought into Unity. For this to work, I took my Agisoft models, built them all the way through to embedding textures, imported them as .fbx files into Rhino, arranged the spaces while accounting for future variations, and imported the resulting asset into Unity. Once in Unity, I extracted the textures and materials, inserted the asset into the scene, and added point lights to make the scene more inhabitable. I also went through the developer setup process for the Oculus Quest 2, imported Oculus Integration from the Unity Asset Store, and configured the Android SDK build settings. After this, I was able to Build and Run onto the Quest 2 and inhabit the scene. During this process, I adjusted the mechanics and size of the OVRPlayerController and added Mesh Colliders to the scene in order to have a more constrained environment.
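For reference, the Agisoft side of this pipeline can also be scripted rather than clicked through. The sketch below is a rough outline using the Metashape Python API, with placeholder photo paths and near-default settings; parameter names vary between Metashape versions, and the embed-texture option on export (assumed here) is the piece that matters most for the later Rhino-to-Unity handoff.

```python
# Rough sketch: build a textured mesh from captured photos in Agisoft Metashape
# and export it as .fbx for Rhino. Paths, photo lists, and parameter values are
# placeholders, not the exact settings used in this project.
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()

# Load the captured imagery of the space (placeholder file names)
chunk.addPhotos(["photos/space_01.jpg", "photos/space_02.jpg", "photos/space_03.jpg"])

# Align cameras, then build geometry, UVs, and the texture
chunk.matchPhotos()
chunk.alignCameras()
chunk.buildDepthMaps()
chunk.buildModel(source_data=Metashape.DepthMapsData)
chunk.buildUV()
chunk.buildTexture(texture_size=4096)

# Export the model with its texture embedded so it survives the trip through
# Rhino and into Unity (keyword name assumed; check your Metashape API version)
chunk.exportModel("exports/space.fbx", embed_texture=True)

doc.save("project/space.psx")
```

The same sequence can be rerun whenever a space is re-captured, which keeps the exported .fbx consistent across iterations of the scene.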

Fig. 1: Isometric View of Scene — Latent Prosthetic
Fig. 2: Bird’s eye of Unity Scene — Environment Lighting and OVR Camera

The most challenging part of this process was getting the textures to embed when moving from Rhino to Unity. I hadn't noticed the embedded-textures toggle, which only showed up on one of my computers because that was the only machine with a Pro license. Once Rishab looked over the computer that had the option, he pointed out the toggle and the project was able to progress again. After that, I ran into issues stemming from the fact that the Quest 2 is not Mac-friendly, and after trying a few workarounds I decided to run Windows on my computer via Boot Camp and finished this step fully by the end of the same day. Once integrated with the headset, working in Unity became much more fluid, iterative, and adaptive, since I could quickly test ideas and move straight into adjusting components.

Fig. 3: Video of Virtual Environment — Inhabiting the Prosthetic

Further Reading:

VU Token. Building Memories in VR. 18 May 2018, www.medium.com/vutoken/building-memories-in-vr-934c83c17ebd.

Wigley, Mark. Prosthetic Theory: The Disciplining of Architecture. Aug. 1991, www.jstor.org/stable/3171122.
