Twittersphere So Far

After many weeks of development, the Twittersphere project has come a long way. In my original conceptual post, I described how the final Twittersphere might function, but after so much progress it is worth a recap. The initial concept was a visual representation of the debates raging on Twitter, with individual tweets’ sentiment adding to a global “weather map” of opinion. Well, right now, it’s great to say that this visualisation is functional!

Twittersphere

As you can see, the weather map is a lot more three-dimensional than a usual weather map. We have given each orb and trail (which together represent a single tweet) a number of characteristics based on the tweet’s data. Speed is tied to follower count, visualising the “weight” of the tweeter’s influence, while the colour and jitter of the orb are tied to the tweet’s sentiment and contents.
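To make that mapping concrete, here is a rough sketch of how a single orb could translate follower count and sentiment into speed, colour and jitter in Unity. The component name, value ranges and curves are illustrative assumptions rather than our final tuning.

```csharp
using UnityEngine;

// Illustrative sketch: one orb translating tweet data into its visual traits.
// The mapping ranges here are placeholder assumptions, not the project's
// final tuning.
public class TweetOrb : MonoBehaviour
{
    public float speed;         // how fast the orb travels across the scene
    public float jitterAmount;  // how much the orb wobbles along its trail

    public void Apply(int followerCount, float sentiment)
    {
        // Follower count -> speed: more followers, more "weight" and pace.
        speed = Mathf.Lerp(0.5f, 3f, Mathf.Clamp01(followerCount / 100000f));

        // Sentiment (assumed -1..1) -> colour: red negative, green positive.
        float t = (sentiment + 1f) * 0.5f;
        GetComponent<Renderer>().material.color = Color.Lerp(Color.red, Color.green, t);

        // Stronger sentiment in either direction -> more jitter.
        jitterAmount = Mathf.Abs(sentiment);
    }
}
```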

Sound design is going a bit slower, but progress is being made! In a recent workshop we looked at many different aspects of sound design, which was extremely useful in shaping how the Twittersphere will sound. We have proximity-triggered speech of the tweets as they fly past the camera, surrounding you in the audible conversation while you discern its tone from the visuals. The next step is live ambient sound mixed in real time based on Twitter data.
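The proximity trigger itself is simple in principle: each orb carries a spatialised AudioSource holding its speech clip and plays it once when it comes within range of the camera. The sketch below is a minimal version of that idea, with the trigger distance and component name as illustrative assumptions.

```csharp
using UnityEngine;

// Sketch of the proximity trigger: when a tweet orb passes close to the
// camera, its speech clip plays once. The trigger distance is an assumption.
[RequireComponent(typeof(AudioSource))]
public class ProximitySpeech : MonoBehaviour
{
    public float triggerDistance = 5f;

    private AudioSource source;
    private bool hasPlayed;

    void Start()
    {
        source = GetComponent<AudioSource>();
        source.spatialBlend = 1f; // fully 3D, so the voice comes from the orb
    }

    void Update()
    {
        float distance = Vector3.Distance(transform.position, Camera.main.transform.position);
        if (!hasPlayed && distance < triggerDistance)
        {
            source.Play();
            hasPlayed = true;
        }
    }
}
```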

So, the audiovisual experience is actually quite a bit ahead of where we planned to be at this stage, albeit mostly in a technical sense – fine-tuning the audiovisuals themselves to be as impactful as possible will still take some time. We therefore began discussing how else we could improve the experience, and agreed that control of the installation was severely lacking. Our development up until this point had involved controlling the Twittersphere with a mouse and keyboard.

We wanted to give control of the experience to the people inside it, so this had to change. Our first experiment was with the Leap Motion gesture controller.

However, while the integration was quite a nice achievement, the interaction left much to be desired. The newest Leap Motion driver software (codenamed Orion) is still in beta and only available on Windows, whereas our development environment is almost exclusively macOS. This left us using Leap’s old, outdated version 2 SDK. After some time spent getting the SDK working with the antiquated Unity example projects, I was able to get the Leap Motion controlling the Twittersphere globe. However, the outdated API proved difficult to use, and gestures weren’t detected accurately. In the end, our Leap controls were no more interesting to use than a joystick, and a lot less intuitive.

So we made the decision to explore other methods of control. We wanted to investigate creating a physical controller, in the shape of a globe, that can be manipulated in real space with the Twittersphere echoing the movement in its own virtual space. To that end, we have been using Android’s built-in sensors to detect the orientation of a physical object and replicate it in our virtual environment.

TwittersphereOSCControl

We have used the phone’s accelerometer and magnetometer to calculate the device’s rotation in a Processing application, which then transmits the orientation to our Unity scene over OSC. This allows us to rotate the Earth in our Unity scene in real time, meaning we can place the phone inside a sphere and create a real-world clone of our Earth. I recently read a piece of writing by Elisa Giaccardi about the three pillars of commensurability, through which she aims to create a world where we interact with technology without screens, and in my mind this extends to all generic input methods, i.e. mice and keyboards. With our physical globe, we can create a hyperreal experience where our consciousness is unable to distinguish reality from a simulation of reality (Tiffin and Terashima, 2001: 1).
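On the Unity side, the idea is to take the orientation values as they arrive over OSC and ease the globe towards them. The sketch below assumes the OSC receiver hands the phone’s azimuth, pitch and roll (in degrees) to a callback; the component name and smoothing factor are assumptions, and the OSC plumbing itself is left out.

```csharp
using UnityEngine;

// Minimal sketch: applies orientation values received over OSC to the Earth.
// Assumes whatever OSC receiver is in use calls OnOrientationMessage with the
// phone's azimuth/pitch/roll in degrees.
public class EarthRotator : MonoBehaviour
{
    // Latest orientation reported by the phone, in degrees.
    private Vector3 targetEuler;

    // Hook this up to the OSC receiver's message callback.
    public void OnOrientationMessage(float azimuth, float pitch, float roll)
    {
        targetEuler = new Vector3(pitch, azimuth, roll);
    }

    void Update()
    {
        // Ease towards the latest reading so sensor jitter doesn't make the
        // globe stutter.
        Quaternion target = Quaternion.Euler(targetEuler);
        transform.rotation = Quaternion.Slerp(transform.rotation, target, 10f * Time.deltaTime);
    }
}
```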

However, the control is not entirely responsive yet. There is still work to be done to make the Earth in Unity rotate consistently with the rotation of the phone. This is proving slightly more difficult because it is not actually the Earth itself that rotates, but the camera rig rotating around it, so by default all of the input from the phone translates into reversed movement in the scene.
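One likely fix is to apply the inverse of the phone’s rotation to the camera rig, so that turning the physical globe one way makes the virtual Earth appear to turn the same way. The sketch below extends the previous one under the same assumptions.

```csharp
using UnityEngine;

// Sketch of the camera-rig variant: the rig orbits a stationary Earth, so the
// phone's rotation is inverted before being applied, otherwise the scene
// appears to move the "wrong way". Names here are illustrative.
public class CameraRigRotator : MonoBehaviour
{
    public Transform earth;  // pivot the rig orbits around

    private Quaternion phoneRotation = Quaternion.identity;

    public void OnOrientationMessage(float azimuth, float pitch, float roll)
    {
        phoneRotation = Quaternion.Euler(pitch, azimuth, roll);
    }

    void Update()
    {
        // Keep the rig centred on the Earth and apply the inverted rotation.
        transform.position = earth.position;
        Quaternion target = Quaternion.Inverse(phoneRotation);
        transform.rotation = Quaternion.Slerp(transform.rotation, target, 10f * Time.deltaTime);
    }
}
```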

OSCControl

 

Technologies in use

So, as of right now, there is a lot of technology in use in the Twittersphere:

  1. We first get raw tweet data from Twitter’s Streaming API using Node-RED.
  2. This raw data is piped through sentiment analysis on IBM Bluemix, and the user’s geolocation coordinates are fetched using Google’s Geolocation API.
  3. These tweet objects are then broadcast over MQTT as JSON and received by Unity, which displays them in our audiovisual experience as described above (see the MQTT sketch after this list).
  4. Once the data is in Unity, we faced an issue turning our tweet strings into voice. We originally attempted to do this before sending the data into Unity, but containing the WAV byte streams in our JSON required Base64-encoding them at the server end and decoding them again in our Unity environment. This proved extremely demanding on our hardware and caused excessive delays while the application attempted to decode every string that came in. To resolve the issue, we now send the text back to Node-RED via HTTP request, which does the text-to-speech conversion (again using IBM Bluemix) and returns pure WAV bytes to our Unity application through a dedicated websocket (see the audio sketch after this list). This approach works extremely well: despite the complexity of sending and receiving data over the internet multiple times, the delay is practically unnoticeable.
  5. We also have an OSC channel open between an Android device and our Unity scene. On the device, a Processing sketch calculates the phone’s orientation and transmits it to Unity, which uses it to control the orientation of the camera rig in the scene (as sketched earlier).
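For reference, the Unity end of step 3 looks roughly like the sketch below: subscribe to an MQTT topic, queue incoming JSON payloads on the background thread, and spawn orbs from them on the main thread. The client library (M2Mqtt here), broker host, topic name and TweetData fields are all assumptions standing in for our actual configuration.

```csharp
using System.Collections.Generic;
using System.Text;
using UnityEngine;
using uPLibrary.Networking.M2Mqtt;
using uPLibrary.Networking.M2Mqtt.Messages;

// Hypothetical shape of the tweet JSON produced by Node-RED; the field names
// are assumptions based on the pipeline described above.
[System.Serializable]
public class TweetData
{
    public string text;
    public float sentiment;
    public int followers;
    public float lat;
    public float lon;
}

public class TweetReceiver : MonoBehaviour
{
    public string brokerHost = "localhost";        // assumption
    public string topic = "twittersphere/tweets";  // assumption

    private MqttClient client;
    private readonly Queue<TweetData> incoming = new Queue<TweetData>();

    void Start()
    {
        client = new MqttClient(brokerHost);

        // MQTT messages arrive on a background thread, so just queue them here.
        client.MqttMsgPublishReceived += (sender, e) =>
        {
            string json = Encoding.UTF8.GetString(e.Message);
            TweetData tweet = JsonUtility.FromJson<TweetData>(json);
            lock (incoming) { incoming.Enqueue(tweet); }
        };

        client.Connect(System.Guid.NewGuid().ToString());
        client.Subscribe(new[] { topic }, new[] { MqttMsgBase.QOS_LEVEL_AT_MOST_ONCE });
    }

    void Update()
    {
        // Unity objects can only be created on the main thread, so drain the
        // queue here and spawn an orb per tweet.
        while (true)
        {
            TweetData tweet;
            lock (incoming)
            {
                if (incoming.Count == 0) break;
                tweet = incoming.Dequeue();
            }
            SpawnOrb(tweet);
        }
    }

    void SpawnOrb(TweetData tweet)
    {
        // Instantiate a tweet orb prefab here and apply the tweet's data to it.
    }
}
```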
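Step 4 hinges on turning the raw WAV bytes arriving over the websocket into something Unity can play. A minimal way to do that, assuming the text-to-speech service returns standard 16-bit PCM with a 44-byte header, is sketched below; the helper name and those format assumptions are ours rather than a description of our actual decoder.

```csharp
using UnityEngine;

// Minimal sketch of turning raw WAV bytes into a playable AudioClip.
// Assumes 16-bit PCM with a standard 44-byte header, which is an assumption
// about what the text-to-speech service returns.
public static class WavUtility
{
    public static AudioClip ToAudioClip(byte[] wav, string name = "tweet-speech")
    {
        // Pull the format details out of the WAV header.
        int channels = System.BitConverter.ToInt16(wav, 22);
        int sampleRate = System.BitConverter.ToInt32(wav, 24);

        // Convert the 16-bit samples after the header into floats in [-1, 1].
        const int headerSize = 44;
        int sampleCount = (wav.Length - headerSize) / 2;
        float[] samples = new float[sampleCount];
        for (int i = 0; i < sampleCount; i++)
        {
            short s = System.BitConverter.ToInt16(wav, headerSize + i * 2);
            samples[i] = s / 32768f;
        }

        AudioClip clip = AudioClip.Create(name, sampleCount / channels, channels, sampleRate, false);
        clip.SetData(samples, 0);
        return clip;
    }
}
```

An orb’s AudioSource can then be handed the result of WavUtility.ToAudioClip(bytes) as its clip before the proximity trigger fires.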

 

Sources

Tiffin, J. and Terashima, N. (2001) HyperReality: Paradigm for the Third Millennium. London: Routledge.