Animating 2D Bubble Charts Through Time
Posted: August 9, 2012 Filed under: Human and Computer Interaction, User Interface | Tags: JavaScript, Visualization, Web

Information Visualization is the field concerned with finding better ways to portray data visually. Popular visualizations show up in practically every discipline, since data is everywhere and needs to be analyzed. In computer science, for example, trees and graphs are used to model everything from inheritance to data structures and even software engineering processes.
One of the most successful examples of Information Visualization today is the bubble chart. Bubble charts are easy to understand because their mapping is similar to other common visualizations like bar and line charts, but they add an extra dimension: the size (radius) of each bubble represents a value as well.
There are many good examples of bubble charts, but probably the most popular is Hans Rosling's Gapminder. It has been featured in several documentaries and public talks around the world, because it provides a lot of insight when analyzing data through time.
It shows bubbles in a meaningful manner, and it supports animations that let us see how all the variables move over time. Few things convey the passage of time better than watching something literally move through it.
Inspired by this, the purpose of this project is to emulate that kind of chart engine and make it accessible on different platforms.
One of the few platforms that offers this kind of compatibility is web technology: HTML, CSS and JavaScript. Nearly every device now has access to the internet and ships with a web browser that supports most of the features in the most recent versions of these languages. The W3C does a pretty good job keeping everything standardized, and while new technologies keep challenging that standardization, the core setup keeps working on every browser.
Nowadays, the web is constantly being developed by hundreds of programmers and designers. HTML5 is getting closer to becoming a standard, and with it the web becomes more reliable and feature-rich, relying less every day on applets and other proprietary extensions that compromise out-of-the-box compatibility and execution speed.
Research led to a huge number of web sites featuring visualization projects, from 2D to 3D, scientific to informational. One that caught my attention was MooChart. It is built on MooTools, a general-purpose library that mainly provides interactive widgets and abstractions for the most common tasks in a web app, and it uses MooTools' rendering capabilities to draw circles in pure JavaScript and plot them on two axes.
This was a good opportunity for experimenting, so I adopted the library and in very little time I had the bubble chart running on a dummy web site. Of course, that is only part of the job: to create an animated, time-bound bubble chart, I still needed to add the time dimension to the chart, so that the position of each bubble updates as time is manipulated.
Each bubble in the chart was recorded as X and Y coordinates plus a radius. The data structure also allowed multiple values of this kind, storing each position through time. The index that enumerates each moment in time was bound to a slider, one of the best widgets for mapping time. It is very intuitive to use: drag the slider to the right and time advances, drag it to the left and time moves backwards. Whenever the slider position was updated, the chart was redrawn to reflect any change. In this implementation, the chart only changes when the slider lands on a time value that is present in the data structure.
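As a rough sketch of the idea (the structure and names below are illustrative, not the actual MooChart or MooTools API; plotBubble and the plain HTML range input stand in for whatever the library and the slider widget actually expose):

```js
// Each series stores one {x, y, r} entry per recorded moment in time,
// and a slider picks which moment gets drawn.
var series = [
  {
    label: 'Android',
    frames: {                       // keyed by time index
      0: { x: 10, y: 5,  r: 8 },
      1: { x: 14, y: 9,  r: 12 },
      2: { x: 20, y: 15, r: 18 }
    }
  }
  // ...more bubbles
];

function drawFrame(t) {
  series.forEach(function (bubble) {
    var p = bubble.frames[t];
    if (p) {
      // plotBubble is a placeholder for whatever the chart library exposes
      plotBubble(bubble.label, p.x, p.y, p.r);
    }
  });
}

// A plain range input stands in for the slider widget used in the project.
var slider = document.getElementById('timeSlider');
slider.addEventListener('change', function () {
  drawFrame(parseInt(slider.value, 10));
});
```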
After making this work, the chart was ready to be used, but it was still not close to Rosling's approach. It lacked a fundamental feature that is not only insightful and novel, but quite impressive: the animation.
In digital graphics, animation is done by calculating where an object should be at each frame, based on information like physics or, in this case, the behavior of the data. The animation cannot be perfect all the time: we simply do not have a data sample for every millisecond of animation, and we never will. If the animation were played on the implementation described so far, it would be jumpy, undramatic and unintuitive due to the lack of perceptual continuity, rendering it meaningless. Hence the need for a data approximation function: the implementation needed a way to create the missing frames, in other words, the points between one moment in time and the next.
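A simple way to fill those gaps is to interpolate linearly between two recorded moments. The helper below is just an illustration (the function name and the 0-to-1 parameter are mine, not from the project code):

```js
// Linearly interpolate a bubble's position and radius between two
// recorded moments a and b, with t going from 0 (at a) to 1 (at b).
function lerpBubble(a, b, t) {
  return {
    x: a.x + (b.x - a.x) * t,
    y: a.y + (b.y - a.y) * t,
    r: a.r + (b.r - a.r) * t
  };
}
```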
Knowing that, the approach taken was a recursive timer: by calculating the slope of the line between one point and the next, it can sample intermediate points for each movement. The timer repeats this action incrementally, not as fast as possible like a regular loop would, but in a timed, adjustable and perceivable fashion.
Since the animation runs through several points, each timer ends when its goal is reached and recursively starts another timer that samples the path over the following period of time, until the last value in the chart is reached.
By linking this function to a button press event, the user can click to start the animation from any point in time on the chart. The user can also pause the animation: the function simply stops advancing to the next sample point and stays on the current one until the user either resumes the animation, letting the movement across the samples continue, or stops it altogether.
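Putting it together, a rough sketch of that recursive timer could look like the following. It reuses the series, lerpBubble and plotBubble names from the earlier sketches; the element ids and constants are made up, and the actual project code differs.

```js
// Illustrative sketch of the recursive timer, not the exact project code.
// STEPS intermediate frames are drawn between each pair of recorded moments.
var STEPS = 20, FRAME_MS = 50, LAST_T = 2;   // LAST_T: last recorded time index
var playing = false;

function drawInterpolatedFrame(t, fraction) {
  series.forEach(function (bubble) {
    var a = bubble.frames[t], b = bubble.frames[t + 1];
    if (a && b) {
      var p = lerpBubble(a, b, fraction);
      plotBubble(bubble.label, p.x, p.y, p.r);
    }
  });
}

function animateSegment(t, step) {
  if (!playing || t >= LAST_T) return;       // paused, stopped, or finished
  drawInterpolatedFrame(t, step / STEPS);
  setTimeout(function () {
    if (step < STEPS) {
      animateSegment(t, step + 1);           // keep sampling this segment
    } else {
      animateSegment(t + 1, 0);              // recurse into the next segment
    }
  }, FRAME_MS);
}

// Wiring to play/pause buttons (element ids are made up for the example).
document.getElementById('play').addEventListener('click', function () {
  playing = true;
  animateSegment(parseInt(slider.value, 10), 0);  // start from the slider's position
});
document.getElementById('pause').addEventListener('click', function () {
  playing = false;                            // stay on the current sample
});
```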
You can try the complete prototype, which is preloaded with mobile OS market share data over time. This version accomplishes basically everything I set out to do for the project. As expected, the visualization runs on several devices; so far it has been tested on PCs, iPhones and iPads without problems.
In the future, the system could be improved by making the animation more resource-efficient and bound to real time (i.e. a frame-skipping algorithm), so it runs at the same speed on every system. That, plus a bit of code cleanup, could potentially be released as a plugin to the MooChart project, making it available to anyone who wants to use it.
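As a very rough idea of what binding the animation to real time could look like (a possible approach, not something the prototype implements): advance the animation by the wall-clock time elapsed since the last frame, so slower systems skip intermediate frames instead of slowing down.

```js
// Possible real-time-bound stepping (an idea only, not part of the prototype).
// Reuses drawInterpolatedFrame, LAST_T and FRAME_MS from the earlier sketch.
var SEGMENT_MS = 1000;            // how long each recorded interval should take

function playRealTime(t) {
  var lastTick = Date.now();
  var progress = 0;               // fractional position inside the current segment

  function tick() {
    var now = Date.now();
    progress += (now - lastTick) / SEGMENT_MS;   // advance by elapsed wall-clock time
    lastTick = now;
    while (progress >= 1 && t < LAST_T) {        // slow frames simply skip ahead
      progress -= 1;
      t += 1;
    }
    if (t < LAST_T) {
      drawInterpolatedFrame(t, progress);
      setTimeout(tick, FRAME_MS);
    }
  }
  tick();
}
```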
From Boids to Documents – Part 2
Posted: July 20, 2012 Filed under: Human and Computer Interaction, Research | Tags: App, Boids, C++, HCI, Independent Elements, Publication, Research, Visualization, WinAPI

If you haven't read Part 1, head there first!
This year we started development on a new version of the boids program. The idea: each element is a document, and documents can be grouped by similarity.
To create such a thing we needed a parser module that reads documents like PDF files and builds a data structure with the similarity between each pair of them. That part was done entirely by a partner while I focused on the visualization itself.
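The post does not cover how that similarity is computed (the details are in the paper and in my partner's module), but as a generic illustration, such a structure is often a matrix of pairwise scores, for example cosine similarity over term counts:

```js
// Generic illustration only: a pairwise similarity matrix built from
// term-count vectors using cosine similarity. The actual parser module
// and measure used in the project are described in the paper.
function cosine(a, b) {
  var dot = 0, na = 0, nb = 0;
  Object.keys(a).forEach(function (term) {
    na += a[term] * a[term];
    if (b[term]) dot += a[term] * b[term];
  });
  Object.keys(b).forEach(function (term) { nb += b[term] * b[term]; });
  return (na && nb) ? dot / (Math.sqrt(na) * Math.sqrt(nb)) : 0;
}

function similarityMatrix(docs) {          // docs: array of {term: count} maps
  return docs.map(function (a) {
    return docs.map(function (b) { return cosine(a, b); });
  });
}
```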
As the paper describes, the idea was to add a fourth rule to the boids algorithm that directs each element to move closer to similar elements. There are several ways to do this, and they are described in the publication. We applied that idea along with a modified algorithm I designed that folds the similarity values into a cohesion-like behavior. The results were pretty impressive after some tweaking of the other rules.
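The exact formulation is in the paper; the sketch below only illustrates the general shape of a similarity-weighted, cohesion-like rule, where more similar neighbors pull harder (written in JavaScript for readability, even though the actual project is C++):

```js
// Sketch of a similarity-weighted cohesion rule (illustrative only; the
// project's actual rule is specified in the paper). Each boid steers toward
// a centroid of its neighbors, weighted by document similarity.
function similarityRule(boid, neighbors, sim, strength) {
  var cx = 0, cy = 0, cz = 0, total = 0;
  neighbors.forEach(function (other) {
    var w = sim[boid.doc][other.doc];      // similarity between the two documents
    cx += other.x * w; cy += other.y * w; cz += other.z * w;
    total += w;
  });
  if (total === 0) return { x: 0, y: 0, z: 0 };
  // Steering vector toward the weighted centroid, scaled by the rule strength.
  return {
    x: (cx / total - boid.x) * strength,
    y: (cy / total - boid.y) * strength,
    z: (cz / total - boid.z) * strength
  };
}
```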
To provide the user with more insight, we developed a color coding scheme: the user selects a keyword, and the documents containing that keyword are highlighted with a user-selected color. Beyond that, the user can select an element to show the title of that particular document and even open it from that interface.
We also added freeform movement and other interactions that let the user move around the 3D space.
In other words, we developed a whole different way to browse documents, and the application served as a proof of concept for this take on document visualization. And we didn't stop there.
Tune in next time to find out how we added Kinect support to the whole thing!
From Boids to Documents – Part 1
Posted: July 19, 2012 Filed under: Human and Computer Interaction, Research | Tags: Boids, C++, HCI, Independent Elements, OpenGL, Stereoscopy, Visualization, WinAPI

Here's some information on the projects, as promised.
In the summer of 2010 I was approached by my HCI instructor; I had already told him about my interest in the area and that I was ready to start working on some projects. My instructor's expertise is in data and scientific visualization, and he had something cooking at the moment. A fellow student was working on an independent-agents program that visualizes many objects in a 3D space with a very dynamic behavior inspired by nature: they group and navigate the space just like flocks of birds, swarms of insects and schools of fish do, and they can react to their environment in different ways.
This “Boids” algorithm is very popular and is used in many graphical applications, like video games and movies, because it approximates the real behavior of these animals without simulating exactly how nature does it, all with a very simple algorithm that is not computationally heavy. Whether for real-time or pre-rendered presentations, the technique is outstanding.
Sort of like that, get the idea?
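For anyone unfamiliar with it, the classic algorithm combines three steering rules per boid: separation, alignment and cohesion. Here is a minimal sketch with arbitrary weights (JavaScript for readability; the real project is C++/OpenGL and tuned very differently):

```js
// Minimal sketch of the three classic boid rules. Each boid has a
// position {x, y, z} and a velocity {vx, vy, vz}; the weights are arbitrary.
function steer(boid, neighbors) {
  var sep = { x: 0, y: 0, z: 0 };   // separation: avoid crowding neighbors
  var ali = { x: 0, y: 0, z: 0 };   // alignment: match neighbors' heading
  var coh = { x: 0, y: 0, z: 0 };   // cohesion: move toward neighbors' center
  neighbors.forEach(function (o) {
    sep.x += boid.x - o.x; sep.y += boid.y - o.y; sep.z += boid.z - o.z;
    ali.x += o.vx;         ali.y += o.vy;         ali.z += o.vz;
    coh.x += o.x;          coh.y += o.y;          coh.z += o.z;
  });
  var n = neighbors.length || 1;
  return {
    x: 0.05 * sep.x + 0.1 * (ali.x / n - boid.vx) + 0.01 * (coh.x / n - boid.x),
    y: 0.05 * sep.y + 0.1 * (ali.y / n - boid.vy) + 0.01 * (coh.y / n - boid.y),
    z: 0.05 * sep.z + 0.1 * (ali.z / n - boid.vz) + 0.01 * (coh.z / n - boid.z)
  };
}
// Each frame: add steer(boid, nearbyBoids) to the boid's velocity,
// then add the velocity to its position.
```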
Well, I jumped into the project, optimized some of the code and developed the interactions needed to use the application in experimental setups. Later we added 3D stereo to it using active shutter glasses, and let me tell you, watching thousands of elements float out of the screen can be pretty amazing.
The previous picture shows about five thousand elements forming groups in 3D space. You can add other objects that either attract (food) or repel (predators) the boids, and you will see them react just as animals with similar behavior would. Everything was done using C++, WinAPI and OpenGL. The project was finished and ready to go.
Fast-forward to 2012: theoretical research was going on based on this technique. What if each element metaphorically represented a document? Yes, it would be a mess at first, but what if we could actually sort them out by some measure of similarity? Now we are talking: we could visualize large collections of documents grouped together in clusters according to their similarity.
That’s what the newest paper is all about, and I’ll tell you how we developed a prototype in the next post.
New publications
Posted: July 9, 2012 Filed under: Human and Computer Interaction | Tags: HCI, Publication, Research, Visualization, VR

Next week, some of the work done during my graduate research will be published at the 9th International Conference on Modeling, Simulation and Visualization Methods (MSV'12) and the 16th International Conference on Computer Graphics and Virtual Reality (CGVR'12). We have been working hard on some visualization techniques and some VR practices that are pretty exciting:
- Visualization and Clustering of Document Collections Using a Flock-based Swarm Intelligence Technique.
- Designing a Low Cost Immersive Environment System Twenty Years After the First CAVE.
My plan is that after the conference I’ll publish some related material on the site and the blog about the implementation of those ideas, so stay tuned for that.
Also, if you haven't noticed, my web page is finally up. I still have to upload more portfolio material; that will be fixed in the coming weeks.