Minority Report-like gesture control for PCs

Remember Tom Cruise in Minority Report? Remember when he magically moved photos, videos, etc. from one place to another with just a simple flick of his wrist? Well, that power can be yours too. This is not science fiction – this is Mgestyk gesture-based control from Mgestyk Technologies.

Wow! This looks awesome.

Contextual User Interfaces and beyond

The new interfaces are winning people over because they are based on usage patterns instead of choices. The key thing about new UIs is that they are contextual – presenting the user with minimal components and then changing in reaction to user gestures. Thanks to Apple, we have seen a liberating movement towards simplistic, contextual interfaces. But can these UIs become the norm?

Over on his blog, Alex Iskold has written a wonderful piece on The Rise of the Contextual User Interfaces. In it he contrasts the old-school traditional user interfaces, from the days when Microsoft Windows dominated everyone's interaction with a computer, with the new generation of contextual user interfaces that the likes of Flickr, 37signals and Shelfari all seem to have embraced.

I think contextual UIs are something we all subconsciously appreciate but don’t really think about: they just seem to work, they just seem to let us do what we want to – and therein lies their beauty. It’s their elegance and simplicity that makes them a pleasure to use. They only tell us what we need to know, when we need to know it; they don’t confuse us by presenting a plethora of options that we have to decipher before we can continue, or hide important functionality behind an “advanced” setting somewhere. It’s for this reason I completely agree with Alex when he describes how one of the philosophies of the old UI approach was entrenched in the idea of presenting the user with all the information all of the time, which was overwhelming. The move towards Contextual User Interfaces is really about building user interfaces that respond to the way users interact with them – and ideally to the individual user themselves.

As hardware and processors have become more powerful we have become better able to pre-process and analyse information for users in order to give them exactly what they need when they need it.
This transition towards being context aware isn’t something that happened overnight; it’s happened gradually as technologies have matured and improved to allow us to do things that weren’t necessarily possible before. One often-touted example is the spell checker: I remember when you had to explicitly invoke it in Microsoft Word, whereas now it’s done automatically in the background as you type. So I do wonder how long it will be before processing becomes cheap enough for us to process entire databases for users in order to derive better context.
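To make the spell-checker contrast concrete, here is a toy sketch of the two models: an explicit whole-document pass the user has to invoke, versus an incremental check that runs on each word as it arrives. The tiny dictionary and function names are made up purely for illustration.

```python
# Hypothetical illustration: explicit batch spell check vs.
# background, as-you-type checking. The dictionary is made up.
DICTIONARY = {"the", "quick", "brown", "fox"}

def batch_check(document):
    """Old model: the user invokes one full pass over the document."""
    return [w for w in document.split() if w not in DICTIONARY]

def on_word_typed(word, misspellings):
    """New model: each word is checked in the background as it is typed."""
    if word not in DICTIONARY:
        misspellings.append(word)

doc = "the quikc brown fox"
print(batch_check(doc))  # one explicit pass

live = []
for word in doc.split():  # simulate typing word by word
    on_word_typed(word, live)
print(live)  # same result, but available continuously
```

Both models find the same misspelling; the difference is when the work happens, which is exactly the shift towards context awareness described above.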

But one of the stumbling blocks is that whilst we can derive or assume some context within an individual application, we still don’t have the tools to computationally describe and communicate context where reasoning and inference is distributed. Why is that important? Well, to me my context, as an individual, is in some ways predictable and in others highly temporal. In an ideal world there would be a way to describe who I am and what my interests are in general, but also what my interests are at a given point in time. If we could formalise that description, using a standardised ontology, then we could provide it as an input into any application we used. That’s where a lot of the work my friend Alan has been doing is focussed, and it’s also one of my areas of interest.
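As a rough sketch of what such a formalised description might look like, here is a minimal Python model of a user-context profile split into stable and temporal interests. The class, field names and TTL mechanism are all hypothetical, not a real standard or ontology – they just illustrate the predictable-versus-temporal split described above.

```python
from dataclasses import dataclass, field
import time

@dataclass
class ContextProfile:
    """Hypothetical user-context profile an application could accept as input."""
    user: str
    stable_interests: set = field(default_factory=set)      # long-lived, predictable
    temporal_interests: dict = field(default_factory=dict)  # interest -> expiry time

    def add_temporal(self, interest, ttl_seconds, now=None):
        """Record an interest that only holds for a window of time."""
        now = time.time() if now is None else now
        self.temporal_interests[interest] = now + ttl_seconds

    def current_interests(self, now=None):
        """Stable interests plus whichever temporal ones are still live."""
        now = time.time() if now is None else now
        live = {i for i, exp in self.temporal_interests.items() if exp > now}
        return self.stable_interests | live

profile = ContextProfile("rob", stable_interests={"semantic web", "ui design"})
profile.add_temporal("gesture control", ttl_seconds=3600, now=0)
print(sorted(profile.current_interests(now=10)))    # temporal interest still live
print(sorted(profile.current_interests(now=4000)))  # temporal interest has expired
```

A real solution would express this in a shared, standardised ontology rather than an ad-hoc class, so that any application – not just one – could consume it.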

That’s why Alex’s post was so wonderful: it resonates with and articulates many of the things I’ve been thinking about for a while.

YouTube adds Visual Browser



YouTube have added a cool visual browser that lets you find videos related to the one you are watching. To access the feature, view a video and then go full screen. You’ll notice a new icon next to the play button (represented by three dots); click on it and the Visual Browser appears, showing the current video as a node in the centre with related videos around it. If you click on any related video, more nodes appear representing further related videos. As an exploratory interface it’s really simple and intuitive to use, and it uses a similar metaphor to an interface I’ve been working on at Talis for exploring data that is structured semantically.
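The expand-on-click model behind the Visual Browser can be sketched as a simple graph traversal: each click reveals the chosen node plus its related videos. The related-videos data and function names below are invented for illustration – this is just the metaphor, not YouTube’s implementation.

```python
# Hypothetical related-videos graph; the titles are made up.
RELATED = {
    "cat video": ["kitten compilation", "dog video"],
    "kitten compilation": ["cat video", "funny animals"],
    "dog video": ["puppy tricks"],
}

def expand(node, visible):
    """Simulate clicking a node: reveal it plus its related videos."""
    visible.add(node)
    for neighbour in RELATED.get(node, []):
        visible.add(neighbour)
    return visible

visible = expand("cat video", set())           # first video goes full screen
visible = expand("kitten compilation", visible)  # user clicks a related node
print(sorted(visible))
```

Each click grows the visible neighbourhood one hop at a time, so the user – not a ranking algorithm – decides which direction the exploration takes.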

For a while now I’ve believed that discovery is more important than search. If you think about it, traditional searches that ask users to enter keywords don’t use context, which is why they are so hit and miss – relevance rankings are based on external influences and have nothing to do with you as an individual. What’s worse, it’s never clear to the user why the results that do appear are there – we have to accept the relevance or ranking system because we are never told why.

A discovery tool like the Visual Browser helps us to see how things are related and in doing so provides us with context. It also gives us a sense of control, because we choose how we explore and find things of interest … and that’s empowerment.