Final product

The final outcome of the Twittersphere installation went above and beyond what I envisioned at the beginning of the project, and it was great to see people enjoying the experience. During the presentation of the Twittersphere project, we searched for and audiovisualised a number of popular and controversial topics, including the war in Syria, Donald Trump, and race issues.

The final iteration of the installation looked amazing. The weather map of opinion that we aimed to create was extremely effective at showing regional and global differences in sentiment towards specific topics. This was perhaps in part due to the changes we made to the colours we were using: we decided to make the colours connected to each emotion (red for anger, etc.) much more vivid than before, in order to create a higher contrast between the individual colours. A great example was the different compositions of colour coming from Western Europe and the continental United States when visualising the latest Trump news. And although it was not the purpose of the project to form conclusions from the data, it was apparent that regional differences in opinion exist.

It was also a source of pride for the team knowing that we had created something different out of a very common data source. Twitter visualisations, as I mentioned in a previous post, are not new or unique. But by focussing purely on the opinions and emotions that people are expressing online instead of the words that they’re using to express them, with Twitter/Tweets acting simply as a medium, we were able to create a much more abstract and emotion-evoking installation.

The audio also worked really well with the visual aspect of the project. The combination of the stem sounds with the speech clips created an audio composition that was unique to every subject, region, and time, which we felt fit the concept of realtime perfectly.

[Image: Screen Shot 2016-12-16 at 17.12.57]

Finally, our gyroscopic, voice-controlled input device (which was simply a second Unity application running on an Android phone inside a spray-painted foam ball) proved to be exactly what we had hoped it would be – an interesting and effective means of interacting with the installation. We knew early on that a mouse and keyboard as the primary means of input, controlled by us and not the audience, would ruin the experience for the observer. The control device allowed the audience to have complete control over their own experience, searching for the topics that they wanted insight into and navigating to the regions that they wanted to observe.

Originally this control device was written in Processing, as I described in my previous post on the project. But access to the Android device's hardware in Processing was difficult at best, with complicated code needed to combine accelerometer and magnetometer data to calculate the real-world orientation of the device. Unity, however, we found has its own functions for accessing gyroscope readings directly, so I was able to replace this Processing code for detecting and calculating orientation:

[Screenshot: the Processing orientation code (Screen Shot 2016-12-16 at 14.49.08)]

with this code in C# as part of a Unity application which collects both gyroscope and accelerometer readings:

[Screenshot: the C# sensor-reading code (Screen Shot 2016-12-16 at 14.57.55)]
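
In essence, the Unity side boils down to something like the following sketch (a rough re-creation rather than the exact code in the screenshot):

```csharp
using UnityEngine;

// Rough sketch of reading orientation and acceleration in Unity.
// Not the exact code from the screenshot above.
public class MotionReader : MonoBehaviour
{
    void Start()
    {
        // The gyroscope must be enabled explicitly on mobile devices.
        Input.gyro.enabled = true;
    }

    void Update()
    {
        // Real-world orientation of the device as a single quaternion,
        // with the sensor fusion handled for us.
        Quaternion attitude = Input.gyro.attitude;

        // Raw accelerometer reading as a Vector3.
        Vector3 acceleration = Input.acceleration;

        Debug.Log(attitude.eulerAngles + " / " + acceleration);
    }
}
```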

From there we just needed to send the readings over OSC to the main application, something that took another 20 or so lines in Processing but was this simple in C# thanks to Unity:

[Screenshot: the C# OSC-sending code (Screen Shot 2016-12-16 at 14.55.21)]
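
To give an idea of what goes into an OSC message, here is a dependency-free sketch that hand-rolls the packet over UDP. The OSC address, IP, and port are placeholders, and the actual code in the screenshot was shorter than this:

```csharp
using System;
using System.Collections.Generic;
using System.Net.Sockets;
using System.Text;
using UnityEngine;

// Sketch of streaming orientation readings to the main application as
// OSC messages over UDP, without relying on any OSC library.
public class OscSender : MonoBehaviour
{
    UdpClient udp;

    void Start()
    {
        // Placeholder IP/port of the machine running the main Unity app.
        udp = new UdpClient("192.168.0.10", 8000);
        Input.gyro.enabled = true;
    }

    void Update()
    {
        Quaternion q = Input.gyro.attitude;
        Send("/controller/attitude", q.x, q.y, q.z, q.w); // placeholder address
    }

    void OnDestroy()
    {
        udp.Close();
    }

    void Send(string address, params float[] args)
    {
        var bytes = new List<byte>();
        bytes.AddRange(OscString(address));
        bytes.AddRange(OscString("," + new string('f', args.Length))); // type tags
        foreach (float f in args)
        {
            byte[] b = BitConverter.GetBytes(f);
            if (BitConverter.IsLittleEndian) Array.Reverse(b); // OSC is big-endian
            bytes.AddRange(b);
        }
        udp.Send(bytes.ToArray(), bytes.Count);
    }

    // OSC strings are null-terminated and padded to a multiple of 4 bytes.
    static byte[] OscString(string s)
    {
        byte[] ascii = Encoding.ASCII.GetBytes(s);
        byte[] padded = new byte[(ascii.Length / 4 + 1) * 4];
        Array.Copy(ascii, padded, ascii.Length);
        return padded;
    }
}
```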


Presentation

Going back to the presentation of the Twittersphere installation, my personal favourite insight was the way in which the topic of Syria changed in composition over time. The day before the presentation, Tuesday 13th December 2016, there were hopes that the Syrian city of Aleppo was to be evacuated and its besieged civilians were to be allowed to leave. The sentiment that we were witnessing that day, during testing, was quite strongly positive, even joyful in some instances. On presentation day, however, reports were beginning to emerge that the ceasefire that was to allow the civilians to leave the besieged part of the city had failed, and the composition was much more sad, fearful, and angry. This perfectly highlighted to us how our project related to the idea of realtime, as the exact same code, assets, and topic produced two very different compositions and emotional weather maps as the story changed over the course of 24 hours. My only regret is not highlighting this more during the presentation itself.

However, there were also flaws in the product. Although our gyroscopic control device was, we feel, a unique and interesting method of interacting with a dome installation, it could definitely have been more refined. We decided to use an Android device because of the accessibility of its built-in sensors and the availability of a microphone. However, if I were to create this device again, I believe I would avoid this route and instead go down the micro-controller route, in order to pick the sensors myself. An issue that we could not overcome with the Android device was the low reliability of its gyroscope readings, a result of the low quality of the embedded sensor. Picking the hardware ourselves would have allowed us to use a much more accurate and reliable gyroscope.

Technical Challenges

Node-RED

During the development of the Twittersphere installation, as we became more and more ambitious about what we wanted to achieve and what we wanted the installation to be, we added much more complexity to the project than I would have originally liked. We had two Unity applications running on two different operating systems on two different devices. One was reading gyroscope and accelerometer data and streaming it to the other Unity application over OSC, whilst also doing realtime voice-to-text recognition and transmitting the results over HTTP to a Node-RED app hosted on IBM's cloud platform. The Node-RED app then used what it received over HTTP as the search criteria for a Twitter Streaming API connection, from which we received each tweet along with the user's location. We then sent that string location (e.g. "Plymouth, England") through the Google Geocoding API to find usable lat-long coordinates, stored them in a NoSQL database for future use, packed it all together with the rest of the tweet data, and sent it through our websocket connection back into the main Unity application. Unity then audio-visualised each tweet as it came in, but we also wanted each tweet to be a spoken audio clip. That required sending the tweet text back to Node-RED over HTTP, where we used the service's text-to-speech plugin to create WAV byte data of the tweet and return it in the HTTP response, which we then converted back into a WAV audio clip in Unity.
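
To make the Unity end of that chain a little more concrete, receiving one of those tweet packets over the websocket looks roughly like the sketch below. The websocket-sharp library, the URL, and the field names are illustrative assumptions rather than our exact setup:

```csharp
using System;
using UnityEngine;
using WebSocketSharp; // assumed websocket library for this sketch

// Illustrative shape of a tweet packet coming back from the Node-RED flow.
// Field names are placeholders, not our exact schema.
[Serializable]
public class TweetPacket
{
    public string text;
    public string location;   // e.g. "Plymouth, England"
    public float latitude;
    public float longitude;
    public string sentiment;  // dominant emotion, e.g. "anger" or "joy"
}

public class TweetReceiver : MonoBehaviour
{
    WebSocket socket;

    void Start()
    {
        // Placeholder URL for the websocket exposed by the Node-RED app.
        socket = new WebSocket("ws://your-node-red-host/tweets");

        socket.OnMessage += (sender, e) =>
        {
            // Unity's JsonUtility maps the JSON straight onto the class above.
            // (In practice this callback runs off the main thread, so the packet
            // would be queued and handled in Update rather than used directly.)
            TweetPacket packet = JsonUtility.FromJson<TweetPacket>(e.Data);
            Debug.Log(packet.location + ": " + packet.sentiment);
        };

        socket.Connect();
    }
}
```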

Confusing? I know. The point I’m trying to get across is that there were lots of points in this process where things could break or get caught up, and this happened more often than we would have liked. Fortunately we had some creative solutions in mind for some of the issues.

The first big issue we had with this process was the conversion of the tweet into a WAV audio clip. We initially did this in the same flow as the location finding, so that we would receive the audio in the same data packet as the rest of the tweet's data. The problem was that we could not store the WAV byte data directly in our JSON without first Base64 encoding it, which then took extremely long to decode at the other end. So we came up with the second, HTTP-only flow purely out of necessity, but the solution turned out great: there is very little delay between the initial tweet being generated and the audio clip being assigned to it, despite the extra round trip the data has to make over the internet.
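
On the Unity side, that second flow boils down to something like the sketch below: request a WAV of the tweet text from the Node-RED endpoint and let Unity decode the response straight into an AudioClip. The URL and query parameter are placeholders, and this sketch uses the current UnityWebRequest API, which may differ from what we used at the time:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Sketch of the text-to-speech round trip: fetch a WAV of the tweet text
// from the Node-RED endpoint and play it back as an AudioClip.
public class TweetSpeech : MonoBehaviour
{
    public AudioSource source;

    public IEnumerator Speak(string tweetText)
    {
        // Placeholder endpoint and query parameter.
        string url = "https://your-node-red-host/tts?text="
                     + UnityWebRequest.EscapeURL(tweetText);

        using (UnityWebRequest request = UnityWebRequestMultimedia.GetAudioClip(url, AudioType.WAV))
        {
            yield return request.SendWebRequest();

            if (request.result == UnityWebRequest.Result.Success)
            {
                // The download handler decodes the WAV bytes into a clip for us,
                // so no Base64 step is needed anywhere in the chain.
                AudioClip clip = DownloadHandlerAudioClip.GetContent(request);
                source.PlayOneShot(clip);
            }
        }
    }
}
```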

Another big issue we had was with Google's Geocoding API. The structure of our Node flow meant that every incoming tweet whose location was not already stored in our NoSQL database got sent to the API. But for hot topics like Trump or Syria we were making far more requests than Google's limit of 2,500 queries per day, and often even exceeding their limit of 50 queries per second. For damage control, we created a number of API keys that would automatically be swapped in and out whenever one hit its limit, thus extending our quota.
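
The swapping itself is just a round-robin over a pool of keys. Here is the idea sketched in C# for illustration, although the real logic lived in the Node-RED flow rather than in Unity:

```csharp
using System.Collections.Generic;

// Illustrative sketch of the key-swapping idea. The keys are obviously
// placeholders, and in the project this logic sat in a Node-RED function
// node rather than in C#.
public class GeocodingKeyPool
{
    readonly Queue<string> keys = new Queue<string>(new[] { "KEY_A", "KEY_B", "KEY_C" });

    // The key currently in use sits at the front of the queue.
    public string CurrentKey => keys.Peek();

    // When a key hits its quota, rotate it to the back and carry on with
    // the next one, effectively multiplying the daily limit.
    public string RotateKey()
    {
        keys.Enqueue(keys.Dequeue());
        return keys.Peek();
    }
}
```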

Websockets also proved difficult. Many times during development we would have confusing moments where data wasn't being sent or received properly, only to find that a websocket connection had dropped, with no way in Node-RED to reopen it from that end. We ended up creating a reconnection control in the Unity app instead, which would send a new connection request at the press of the space bar. It was simple, but it worked instantly!
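
The reconnection control itself only took a few lines, along these lines (assuming a websocket-sharp-style socket like the one in the earlier sketch):

```csharp
using UnityEngine;
using WebSocketSharp; // same assumed library as in the earlier sketch

// Sketch of the spacebar reconnection control, written here as its own
// component holding a reference to the socket.
public class WebsocketReconnector : MonoBehaviour
{
    public WebSocket socket; // assigned by whichever script opened the connection

    void Update()
    {
        // If the connection has dropped, a tap of the space bar brings it back.
        if (Input.GetKeyDown(KeyCode.Space) && socket != null && !socket.IsAlive)
        {
            Debug.Log("Websocket down, reconnecting...");
            socket.Connect();
        }
    }
}
```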

There were also lots of issues with the gyroscope in the Android device, though as I addressed above, I believe that was down to the simplicity of the embedded sensor and might have been overcome with a more sophisticated sensor system.


Work split

[Image: Screen Shot 2016-12-16 at 17.11.17]

In such a large project, it is of course important to split the work evenly between the team and play to each member's strengths. This was something that I feel our team did really well. We all have interests in and experience with different technologies, so applying our individual skills to the project proved fruitful.

I initially worked mainly on the code that provided the link between the Unity application and the IBM-hosted Node-RED application. This meant working on both applications simultaneously, which was important in ensuring that data was sent and received in the correct formats. I also worked quite extensively with Josh on the Node application throughout the project, building up the database connection that we used to store location data and cut down on calls to Google. Later in the project, I worked mainly on the input device, initially in Processing and later in C#/Unity, as well as on how this controller affected the camera object in our main Unity scene.

Meanwhile, Phil worked mostly on the visual elements of the project, including the game engine assets. This involved researching effective uses of colour and applying them to the application, as well as more code-heavy tasks like mapping tweets' lat-long coordinates to Unity XYZ coordinates.
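
That coordinate mapping is essentially the standard spherical-to-Cartesian conversion. A minimal sketch is below (not Phil's actual code, and the globe radius is an arbitrary assumption):

```csharp
using UnityEngine;

// Sketch of mapping a tweet's latitude/longitude onto a point on the globe
// in Unity world space. The radius, and any offset needed to line up with
// the globe model, are assumptions.
public static class GeoMapping
{
    public static Vector3 LatLongToXYZ(float latitude, float longitude, float radius = 5f)
    {
        float latRad = latitude * Mathf.Deg2Rad;
        float lonRad = longitude * Mathf.Deg2Rad;

        // Standard spherical-to-Cartesian conversion, with Y as "up"
        // to match Unity's coordinate system.
        float x = radius * Mathf.Cos(latRad) * Mathf.Cos(lonRad);
        float y = radius * Mathf.Sin(latRad);
        float z = radius * Mathf.Cos(latRad) * Mathf.Sin(lonRad);

        return new Vector3(x, y, z);
    }
}
```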

As mentioned above, Josh worked extensively in the back-end system, ensuring that the Node flow was robust and achieved all of the technical requirements that we defined, including the use of IBM Watson’s sentiment analysis and voice/text conversion/translation.

Finally, Elliot's area of expertise lay firmly in audio design, so he was tasked with creating the audio stem clips that would make up our composition. He also worked on the Unity project, mixing the audio stems and voice clips in C# script.
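
To give a very rough idea of what that mixing involves, the sketch below drives the volume of one looping AudioSource per emotion stem towards target levels set as tweets arrive. The emotion names and the easing are assumptions rather than Elliot's actual mixing script:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of the stem-mixing idea: one looping AudioSource per emotion stem,
// with its volume driven by how much of that emotion is currently present
// in the incoming tweets.
public class StemMixer : MonoBehaviour
{
    public AudioSource angerStem;
    public AudioSource joyStem;
    public AudioSource sadnessStem;

    // Target levels (0..1 per emotion), updated elsewhere as tweets arrive.
    readonly Dictionary<string, float> targets = new Dictionary<string, float>
    {
        { "anger", 0f }, { "joy", 0f }, { "sadness", 0f }
    };

    public void SetLevel(string emotion, float level)
    {
        targets[emotion] = Mathf.Clamp01(level);
    }

    void Update()
    {
        // Ease each stem towards its target so the composition shifts smoothly.
        angerStem.volume = Mathf.Lerp(angerStem.volume, targets["anger"], Time.deltaTime);
        joyStem.volume = Mathf.Lerp(joyStem.volume, targets["joy"], Time.deltaTime);
        sadnessStem.volume = Mathf.Lerp(sadnessStem.volume, targets["sadness"], Time.deltaTime);
    }
}
```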

Future Potential

As well as the different approach to the gyroscopic input described above, there are definitely further improvements that could be made to the project.

Although lots of time was spent designing effective audio stems, things like that will never be perfect. With more time I feel we could have created stems that more effectively built on each other to create more distinguished compositions. Although not an audio designer myself, I was inspired recently by the work Rockstar does on their games, specifically Red Dead Redemption, in which the use of audio stems to create multi-faceted compositions has been achieved to an outstanding level. That level of audio detail would, I feel, improve the Twittersphere experience significantly.