Interactive project

Brief

"In this second project of semester two, you will have an opportunity to explore interactive systems for the displaying or relaying of information and ideas to an audience. In small teams you will design and develop a scheme that will offer visitors an engaging experience through the development of an interactive exhibit that really adds to their understanding".

"It is important that you design and develop the theme and style for the delivery of the information as well as pushing the boundaries of the way in which the content is delivered, technically and physically. You will work in teams to develop a design and proposal for this project."
The target audience we chose is educators, as we wanted to educate the younger generation using an interactive element.

Contribution to the group project

As the lead developer, my role was to construct the entire project in Unity, piecing together the 3D designers' assets and writing a variety of C# scripts to build the complete interactive Happy Planet model.

AR Inspiration

The team and I looked at a variety of previous projects from other developers, to get a sense of the kinds of projects that had been constructed in the past.

The first part of the inspiration for ‘Happy Planet' came from this incredible visual replication of flight data.

This visually appealing method allows the user to physically fly through the data and circle around the map to see the most common flight path directions, rather than reading the data from a spreadsheet (which is far less appealing!).

This AR map gave me the inspiration for how an AR application could be produced with interactivity in mind.
The 3D map is able to rotate as well as produce information about any place the user wants to learn about. The signs are also able to follow the camera from whichever direction the user approaches. This is ideal for the design and functionality of Happy Planet.

Vuforia tests



To start my journey with augmented reality development, I used a basic image target with Vuforia as the focus point for the cube.

Image targets are images that the Vuforia engine can detect and track through the camera. I therefore used an image target to augment the 3D cube.

This gave me an insight into how I could utilize the image target to augment the 3D globe.

After learning how the image targets worked, I set about using a 3D globe and wrapping a satellite image around it.

Using Unity's animation and text options on the globe gave me an idea of what I could use in the future.

Integration of the design

One of the 3D designers in our group produced this brilliant low-poly design of the globe. This is the perfect style that we were initially going for.

After it was sent to me as an .FBX file, I opened it in Unity and assigned it in the image target hierarchy, replacing the previous Earth model.

After adding the rotation animation to the globe, I entered Play mode in Unity, which enables the camera to function, allowing the globe to be augmented on the target.
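The same rotation could also be driven by a small script rather than an animation clip. A minimal sketch (the class name and speed value are illustrative, not the project's actual code):

```csharp
using UnityEngine;

// Hypothetical sketch: spins the globe around its Y axis each frame.
public class GlobeRotator : MonoBehaviour
{
    // Degrees per second; tune in the Inspector.
    public float speed = 10f;

    void Update()
    {
        // Multiply by Time.deltaTime so the rotation is frame-rate independent.
        transform.Rotate(Vector3.up, speed * Time.deltaTime, Space.World);
    }
}
```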

Image targets upload.

Sami, the group's designer, then sent me the image target for the project, which I uploaded to the Vuforia website.
The image target is incredibly important for the project, as it enables the object to be augmented from the initial image target, which is key to the application's success.

1. I created a development key using Vuforia's developer portal, which enabled me to begin customising.

2. After the account was created, I copied the license key (which Vuforia provides for any AR project) into Unity, linking the image target with Unity.

3. I uploaded the image to be graded on its level of detail as an image target.

Our target was graded 5 stars, which is the highest rating an image target can achieve. The higher the detail, the more accurate the AR detection.

4. I downloaded the image target in order to assign it to the image target object in Unity.

Interactivity with C# coding

In order to understand how I wanted the models to interact with the AR camera, I had to experiment with cubes to see how that would be made possible.

After learning some C# coding skills, and with help from online resources by other developers, I wrote three C# scripts.

InfoBehaviour: The InfoBehaviour script attaches itself to the object; in this case the cube has a box collider that interacts with the Gaze script.
From here, the "open info" and "close info" code runs whenever the AR camera faces the cube, because the Gaze script is the main script on the AR camera.
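The project's source isn't reproduced here, but a minimal sketch of an InfoBehaviour along these lines (the field and method names are assumptions for illustration) might look like:

```csharp
using UnityEngine;

// Sketch: shows or hides the info sign that sits above this object.
public class InfoBehaviour : MonoBehaviour
{
    // The sign-post parent object to toggle on and off.
    public GameObject infoParent;

    // Called by the Gaze script when the camera looks at this object.
    public void OpenInfo()
    {
        infoParent.SetActive(true);
    }

    // Called by the Gaze script when the camera looks away.
    public void CloseInfo()
    {
        infoParent.SetActive(false);
    }
}
```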

FaceCamera: The FaceCamera script enables the sign-post information to turn and look at the main AR camera on the Y axis, wherever the camera is positioned in the world.
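A minimal sketch of that behaviour, assuming Unity's main camera is tagged as usual (again, illustrative rather than the project's exact code):

```csharp
using UnityEngine;

// Sketch: keeps the sign facing the AR camera, rotating only around Y
// so the sign yaws towards the viewer but never tilts.
public class FaceCamera : MonoBehaviour
{
    void Update()
    {
        Camera cam = Camera.main;
        if (cam == null) return;

        // Direction from the sign to the camera, flattened onto the ground
        // plane so only the Y rotation changes.
        Vector3 toCamera = cam.transform.position - transform.position;
        toCamera.y = 0f;

        if (toCamera.sqrMagnitude > 0.0001f)
        {
            // Flip the sign of toCamera if the quad's visible face points
            // the other way in your model.
            transform.rotation = Quaternion.LookRotation(toCamera);
        }
    }
}
```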

Gaze: The Gaze script performs the raycasting whenever the forward vector of the camera (the blue arrow in the editor) hits the cube.
The InfoBehaviour script then links with the Gaze script, allowing the info to open and close depending on whether the camera is facing the cube.
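That gaze logic can be sketched as a raycast from the camera, using the "hasinfo" tag described later (a sketch under those assumptions, not the project's actual script):

```csharp
using UnityEngine;

// Sketch: attached to the AR camera. Raycasts along the camera's forward
// vector and toggles the InfoBehaviour on whatever tagged object it hits.
public class Gaze : MonoBehaviour
{
    private InfoBehaviour current;

    void Update()
    {
        InfoBehaviour target = null;
        RaycastHit hit;

        // transform.forward is the blue arrow seen in the editor.
        if (Physics.Raycast(transform.position, transform.forward, out hit)
            && hit.collider.CompareTag("hasinfo"))
        {
            target = hit.collider.GetComponent<InfoBehaviour>();
        }

        // Only open/close when the gazed-at object actually changes.
        if (target != current)
        {
            if (current != null) current.CloseInfo();
            if (target != null) target.OpenInfo();
            current = target;
        }
    }
}
```

Note that this depends on the object having a box collider, which is exactly the bug described in the troubleshooting section below.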

All three scripts enable the camera, the object and the information to work in unison.


On the right is a demonstration of how the InfoBehaviour and FaceCamera scripts, the cube and the AR camera all work together.


As you can see, 'hasinfo' is written into the Gaze script (on the far left) for the AR camera, which links it with the asset.

Afterwards I tagged the cube with the value 'hasinfo', enabling the camera to trigger the data (information about the asset) to appear.

Below is the completed version of the cube, with 'Here' being displayed whenever the camera's Y rotational axis looks towards the face of the cube.


This shows that the Gaze script that identifies the object is in full working order, and is therefore ready for the final step.

Troubleshooting



Throughout the coding of the scripts, I often ran into issues with the C# code, which I wrote using Visual Studio.

This required me to analyse the C# code for what I could have done wrong, and amend any potential bugs. The console would notify me of any major errors by producing a large red '!' symbol.

Often the majority of the bugs were simple fixes, such as a closing bracket not aligning with the opening bracket above it, which would break that line of code.

In this clip, the three cubes were not displaying their information despite having all of the appropriate scripts attached. After a while of debugging, I realised I had accidentally removed the box collider on the cubes.

Without the collider, the InfoBehaviour was never triggered, and so it never linked to the Gaze script.

This is the final version of the three cubes working with the Gaze script on the AR camera, fully functioning.


Assets



The seven assets were created by the team's two 3D designers, who set out to construct unique assets that would be visually appealing for a younger audience as well as engaging for older users.

The assets come from five of the world's seven continents, varying in size and popularity.

    The list of assets is:

  • Sky Tree, Tokyo, Japan

  • Statue of Liberty, New York, USA

  • Arc de Triomphe, Paris, France

  • Taj Mahal, Agra, India

  • Pyramids of Giza, Cairo, Egypt

  • Tower Bridge, London, England

  • Sydney Opera House, New South Wales, Australia

Assets merging with C# scripts


In order to construct the 'sign post' on top of each asset, I had to build a Quad, which I renamed 'Infoparent', for the backing of the sign, as well as a lengthened cube to form the post.

The TextMeshPro (TMP) text's height and width were adjusted to fit onto the backing of the sign, as shown below.


After enabling gizmos, I was able to scale the text on the sign, which gave me a good indication of the dimensions I wanted the text to be.

Because the TextMeshPro object is a child of 'Infoparent', I could easily add and remove text, as well as change its dimensions.

The hierarchy is where the individual assets are placed in order to construct the full model. For example, the content parent is the main parent for all of the individual models and assets.
This means that all of the assets are children of the content parent, which determines how the model is interacted with.

The 'arcdetromphe' object has a series of children, which are the individual building blocks of the entire model's construction. This means that without one of the cubes or the foot, the model would not be complete.

The section info then contains the C# FaceCamera script mentioned earlier, plus the quad, TextMeshPro and cube that build the info sign. This is identical for all of the assets/models.

To allow the gaze interaction from the AR camera, I had to add a box collider to all of the assets.

If this wasn't added, the gaze raycast would have nothing to hit, and the sign post wouldn't pop up when the camera faced the asset.


After adding the FaceCamera script to the assets, the sign posts were able to track the AR camera.
The bottom post of the sign rotates on top of the asset.

Both of the assets could be tracked by the AR camera.

Here I'm testing whether the gaze interaction works with the pyramids in Play mode.

When the camera's ray hits the Pyramid of Giza, the top of the sign emerges from the pyramidion (the peak of the highest capstone).

The positioning of the assets on the Earth is key to the model's aesthetic success.

Using the X, Y and Z axes, I was able to move the assets around and position them with great success.

Apple Xcode

In order to build for iOS, I had to switch the build settings from PC and Mac to iOS.


If I wanted to build for Android in the future, I would first have to install the Android support module into Unity, as well as download the Android developer tooling.

Xcode is an IDE (integrated development environment) used by developers to build Apple applications for the App Store.

These applications can be developed for any Apple device, e.g. Apple Watch, iPad, iPhone or Mac, to name a few. To get it functioning, I had to build out from Unity, open Xcode and connect my iPad.

Once I had signed into my account, I pressed play, which builds the app onto my iPad. Once this was built, I could tap to place the model anywhere in the room.

The code on the lower right runs for as long as the program is augmenting.

Augmented Reality Testing

This clip is where I began to test my first AR run with the three cubes.
This test enabled me to understand how I could use this for the actual Happy Earth model, as well as what potential bugs I would have to overcome.

This gave me great confidence to start integrating the AR features into the Happy Earth model.


Here is my first AR attempt using the Happy Earth model. I was very surprised that this fully functioned, as I had to use a Vuforia camera, rather than an XR camera, to enable the world and the raycasting to augment from the image target.

I realised that I had to work on the scale of the model, because the box collider was far too small for the camera to keep it on target for long enough.

Final Touches

I also added particle effects on top of the trees and pyramids to produce a more immersive experience for the Earth.

After adding the information posts on top of the assets, I began placing them on the Earth.

With some of the models, I could not attach the information post on top of the asset: the post would immediately go horizontal whenever the InfoBehaviour script was added, due to the way the model had been created.

I overcame this by adding a 3D cube on top of the assets, which enabled me to produce and scale the quad for the back panel of the sign.

After all of the assets were added onto the globe, I turned on the Earth's rotation to test whether all of the signs would lock onto the main camera as if they were augmenting in real life.

Here you can clearly see the info panels facing the main camera as they rotate around the globe.

I positioned the camera to face the curvature of the Earth, in order to get a cinematic view of the planet rotating.
This gives a very impressive view of the assets placed north of the equator.

Happy Planet final AR model

This is the final AR model of Happy Planet in full augmentation.