Monthly Archives: May 2008

A possible future of Software Development

This talk begins with an overview of software development at Adobe and a look at industry trends towards systems built around object oriented frameworks; why they “work”, and why they ultimately fail to deliver quality, scalable software. We’ll look at a possible alternative to this future, combining generic programming with declarative programming to build high quality, scalable systems.

… a very interesting talk that raises some important questions about the very nature of software development.

Is Facebook really a black hole?

A number of us at Talis have been thinking and talking a lot about DataPortability; my colleague Danny even went as far as recording the YouTube video above, which I think is excellent. When Google launched their Friend Connect service earlier this month, it seemed like a step in the right direction. Finally I’d be able to move my social graph from one service to another … I mean it’s my data after all, right?

I had been wondering, as I’m sure many others have, how established services like Facebook would react to Google’s initiative, and indeed to any initiative by the more open dataportability movement in general, especially when you consider that the only real value Facebook actually has is all the data we, as users, have entered into it. I wasn’t too surprised to read this article by Scott Gilbertson over on Wired Blog, which describes pretty much the kind of reaction that I had expected from Facebook … but what did surprise me was that Facebook’s terms and conditions do actually state the following:

You may not store any Facebook Properties in any Data Repository
which enables any third party (other than the Applicable Facebook
User for such Facebook Properties) to access or share the Facebook
Properties without our prior written consent.

Scott sums it up quite succinctly when he says “Facebook’s TOS make no bones about who controls your data. The answer is: not you”. He is also right to point out that Google’s motives are far from altruistic:

But don’t go getting the idea that Google is really all that concerned with freeing up your data. Google, like every other site, wants a slice of the pie. If Google helps you gain a little control at the same time, consider it a happy coincidence, not a motivating factor.

Yet what does frustrate me about Facebook is that they are using the tired old excuse that they are trying to keep their users safe; that their users’ privacy is paramount. That’s laughable, as Scott also quite rightly points out:

Facebook’s own failed Beacon ad platform effectively showed that, deep down, Facebook doesn’t care about your privacy, it cares about making money off your data. And to do that it has to make sure it keeps that data locked up on the site. Letting Google siphon your info off to other social sites isn’t going to help line Facebook’s coffers.

There’s something deeply wrong with the idea that I can create data about myself, and my relationship with other people, but then that data doesn’t belong to me. For me this situation highlights the importance of DataPortability, as Danny so vividly puts it, to Get Your Data Out. Scott is probably right when he observes:

If we want an open social web, we’re going to have to build it ourselves, using technologies that no one company controls.

Nodalities Magazine Issue 2

The second issue of our Nodalities Magazine is out today. It’s free to subscribe to if you want a printed version, or you can view it online by clicking on the image below or download the pdf here:

  • Blue Oceans – Ian Davis and Zach Beauvais discuss the ‘Blue Ocean’ opportunity facing those who embrace the Semantic Web
  • Social Networking – Garlik CEO Tom Ilube introduces the notion of ‘social verification’
  • Environment – David Peterson puts semantic technologies to work in the fight against Climate Change
  • Predictable Mavericks – Talis CEO Dave Errington looks back at the company’s past, and forward to a semantically powered future
  • Open World Thinking – by me! In it I offer my thoughts on how Semantic Web developers need to see the world differently
  • Dow Jones and Thomson Reuters – Read transcripts of recent conversations with these factual information powerhouses, and learn how the Semantic Web is being put to work.

Talis has launched a magazine called Nodalities that bridges the divide between those building the Semantic Web and those interested in applying it to their business requirements. Supplementing our blogs, podcasts, and Semantic Web development work, Nodalities Magazine is available – free – online and in print, and offers an accessible means to keep up with this rapidly evolving area.

Does the net need an upgrade?

As the Internet is being overrun with video traffic, many wonder if it can survive. With challenges being thrown down over the imbalances that have been created and their impact on the viability of monopolistic business models, the Internet is under constant scrutiny. Will it survive? Or will it succumb to the burden of the billion plus community that is constantly demanding more and more?

Download here

Just finished listening to this really interesting panel discussion, made available over at IT Conversations, on whether the net needs an upgrade, given the changing and ever-increasing usage patterns and demand from an ever-growing base of users. The panel comprises Van Jacobson, Rick Hutley, Norman Lewis and David S. Isenberg.

We have seen rising demand for video online, increases in peer-to-peer data, and the emergence of entire virtual worlds. The way we are using the internet today is radically different from what it was originally conceived for. The internet has been evolving from the moment it emerged, and perhaps part of the problem is that it is evolving so quickly: it represents a myriad of innovations, but has also accumulated a number of complications. When the net was first conceived and used it was a closed system; at any point in time, every point in the system was a trusted point. Contrast that with what we have today, where the open nature of the internet has abrogated this principle. Yet the very openness of the internet is what has driven its massive growth; we have seen entire industries emerge during this growth, entire businesses formed around the pervasiveness of this infrastructure.

It was interesting listening to the panel, most of whom agreed that some kind of upgrade was needed. Yet for me the point Van made seems profound, even though I don’t fully grasp it: he argued that it isn’t the net that needs upgrading, it’s our point of view that needs to change. Instead of looking at the network we need to look at the content, because that’s what we are now using the internet for; we need to think in terms of that content moving around, what the best way to move it is, and how to secure it based on what it is.

I think I need to reflect a little longer on this discussion and better understand the arguments presented by the various panelists. It’s difficult to switch from being a consumer, in essence a user of services and abstractions built upon this vast infrastructure, to actually trying to think about the net as a physical infrastructure … but it’s given me great food for thought.

Contextual User Interfaces and beyond

The new interfaces are winning people over because they are based on usage patterns instead of choices. The key thing about new UIs is that they are contextual – presenting the user with minimal components and then changing in reaction to user gestures. Thanks to Apple, we have seen a liberating movement towards simplistic, contextual interfaces. But can these UIs become the norm?

Over on his blog, Alex Iskold has written a wonderful piece on The Rise of the Contextual User Interfaces. In it he contrasts old-school traditional user interfaces, from the days when Microsoft Windows dominated everyone’s interaction with a computer, with the new generation of contextual user interfaces that the likes of Flickr, 37 Signals, Shelfari etc. all seem to have embraced.

I think contextual UIs are something that we all subconsciously appreciate but don’t really think about: they just seem to work, they just seem to let us do what we want to – and therein lies their beauty. It’s their elegance and their simplicity that make them a pleasure to use. They only tell us what we need to know, when we need to know it; they don’t confuse us by presenting a plethora of options that we then have to decipher before we can continue, or hide important functionality behind an “advanced” setting somewhere. It’s for this reason I completely agree with Alex when he describes how one of the philosophies of the old UI approach was entrenched in the idea of presenting the user with all the information all of the time, which was overwhelming. The move towards contextual user interfaces is really about building user interfaces that respond to the way that users interact with them – and ideally to the individual user him/herself.

As hardware and processors have become more powerful we have become better able to pre-process and analyse information for users in order to give them exactly what they need when they need it.
This transition towards being context-aware isn’t something that happened overnight; it has happened gradually, as technologies have matured and improved to allow us to do things that weren’t necessarily possible before. One often-touted example of this is the spell checker: I remember when you had to explicitly invoke the spell checker in Microsoft Word, whereas now it’s done automatically in the background as you type. So I do wonder how long it will be before processing becomes cheap enough for us to process entire databases for users in order to derive better context.

But one of the stumbling blocks is that whilst we can derive or assume some context within an individual application, we still don’t have the tools to computationally describe and communicate context where reasoning and inference is distributed. Why is that important? Well, to me, my context as an individual is in some ways predictable and in others highly temporal. In an ideal world there would be a way to describe who I am and what my interests are in general, but also what my interests are at a given point in time. If we could formalise that description, using a standardised ontology, then we could provide that ontology as an input into any application we used. That’s where a lot of the work that my friend Alan has been doing has been focussed, and it’s also one of my areas of interest.

That’s why Alex’s post was so wonderful: it resonates with and articulates many of the things that I’ve been thinking about for a while.

BBC Opening Up

The BBC is opening up and making its data accessible to development teams outside the beeb – they are also following the Linked Data approach …

We have been following the Linked Data approach – namely thinking of URIs as more than just locations for documents. Instead using them to identify anything, from a particular person to a particular programme. These resources in-turn have representations, which can be machine-processable (through the use of RDF, Microformats, RDFa, etc.), and these representations can hold links towards further web resources, allowing agents to jump from one dataset to another.

They have designed and published a simple but versatile ontology for describing programme data, which can be accessed here.
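The Linked Data approach the BBC describe above boils down to one mechanical trick: a single URI identifies the thing (a programme, a person), and an agent asks that same URI for a machine-processable representation via HTTP content negotiation. Here is a minimal sketch of what that request looks like in JavaScript; the `rdfRequest` helper and the programme URI are my own illustrative inventions, not part of the BBC’s actual service.

```javascript
// Sketch: dereferencing a Linked Data URI as RDF rather than HTML.
// The same URI serves different representations depending on what the
// agent says it Accepts. The URI below is illustrative only.

function rdfRequest(resourceUri) {
  const url = new URL(resourceUri);
  return {
    hostname: url.hostname,
    path: url.pathname,
    // Ask for RDF/XML instead of the default HTML page. A browser
    // sending "text/html" here would get the human-readable page back.
    headers: { Accept: 'application/rdf+xml' },
  };
}

const req = rdfRequest('http://www.bbc.co.uk/programmes/example');
console.log(req.hostname, req.path, req.headers.Accept);
```

The agent can then follow links found in the returned RDF to hop from one dataset to another, which is the “jump from one dataset to another” behaviour the quote describes.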

The amazing intelligence of Crows

Hacker and writer Joshua Klein is fascinated by crows. (Notice the gleam of intelligence in their little black eyes?) After a long amateur study of corvid behavior, he’s come up with an elegant machine that may form a new bond between animal and human.

I was amazed watching this fascinating TED Talk; you have to see it to believe just how much crows as a species have adapted to human beings.

Powerset

I’ve been playing around with Powerset, a new Semantic Search Engine, which uses natural language search technology that is based on patents licensed exclusively from Palo Alto Research Center (formerly Xerox) and its own proprietary indexing.

Instead of being limited to keywords, Powerset allows you to enter keywords, phrases, or questions. Instead of just showing you a list of blue links, Powerset gives you more accurate search results, often answering questions directly, and aggregates information from across multiple articles.

I have to confess I am very impressed with it – I used it for a few hours late last night to help find some references for a blog piece I’m writing, and loved the way it seemed to understand the concepts I was asking about rather than giving me matches based on keywords. Currently it only searches Wikipedia, but it does also provide results from Freebase. I really find the MiniViewer useful: it allows you to view a short result snippet on the results screen within the original context it was extracted from, which also makes the search results feel interactive.

Here’s the demonstration video that explains how Powerset is different using some cool examples:


Powerset Demo Video from officialpowerset on Vimeo.

I’m really interested in seeing where Powerset goes next …

A Dream Within A Dream

       A Dream Within a Dream

Take this kiss upon the brow!
And, in parting from you now,
Thus much let me avow --
You are not wrong, who deem
That my days have been a dream;
Yet if hope has flown away
In a night, or in a day,
In a vision, or in none,
Is it therefore the less gone?
All that we see or seem
Is but a dream within a dream.

I stand amid the roar
Of a surf-tormented shore,
And I hold within my hand
Grains of the golden sand --
How few! yet how they creep
Through my fingers to the deep,
While I weep -- while I weep!
O God! can I not grasp
Them with a tighter clasp?
O God! can I not save
One from the pitiless wave?
Is all that we see or seem
But a dream within a dream?

        --by Edgar Allan Poe

Processing.js

Processing is an open-source data visualization programming language. I first played around with it about a year ago, and was recently reminded of it by Rob, so I have started playing with it again. I just discovered that earlier in the week John Resig released his JavaScript port, Processing.js. So far it looks amazing: virtually all the demo/example applications that ship with Processing run using the canvas element in JavaScript. I’m going to have a lot of fun with this.
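To get a feel for what Processing.js is doing under the hood, here is a hand-rolled sketch of the kind of canvas draw loop it generates from Processing code. This is not Processing.js’s actual API; the `bouncePosition` helper and the `sketch` canvas id are made up for illustration, and the canvas part is guarded so the animation logic also runs outside a browser.

```javascript
// Pure animation logic: where a bouncing dot is after n frames,
// reversing direction when it passes either edge.
function bouncePosition(start, velocity, frames, max) {
  let x = start;
  let v = velocity;
  for (let i = 0; i < frames; i++) {
    x += v;
    if (x < 0 || x > max) {
      v = -v;        // reverse direction at the edge
      x += 2 * v;    // step back inside the bounds
    }
  }
  return x;
}

// Browser-only part: the draw loop Processing.js would generate,
// clearing and redrawing the canvas once per animation frame.
if (typeof document !== 'undefined') {
  const canvas = document.getElementById('sketch'); // hypothetical canvas id
  const ctx = canvas.getContext('2d');
  let frame = 0;
  function draw() {
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    ctx.beginPath();
    ctx.arc(bouncePosition(0, 2, frame, canvas.width), 50, 10, 0, 2 * Math.PI);
    ctx.fill();
    frame += 1;
    requestAnimationFrame(draw);
  }
  draw();
}
```

The interesting part is how little browser-specific machinery there is: a Processing `draw()` body maps almost directly onto a `requestAnimationFrame` loop over a 2D canvas context, which is why so many of the shipped examples port cleanly.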

John deserves a huge amount of credit for this contribution.