Making Tea and the Semantic Web

Abstract. Making Tea is a design elicitation method developed specifically to deal with situations in which (1) the designers do not share domain or artefact knowledge with design-domain experts, (2) the processes in the space are semi-structured and (3) the processes to be modeled can last for periods exceeding the availability of most ethnographers. We propose a set of criteria in order to understand why Making Tea worked. Through these criteria we also reflect upon the relation of Making Tea to other design elicitation methods in order to propose a kind of method framework from which other designers may be assisted in choosing elicitation methods and in developing new methods.

Download paper here, by Monica Schraefel and Alan Dix.

I periodically check up on Alan’s homepage at Lancaster University and have a read through any papers he has made available. Earlier today I found an interesting looking paper entitled: Within Bounds and Between Domains: Reflecting on Making Tea within the Context of Design Elicitation Methods – the abstract for which I have transcribed above.

Just reading the title made me smile as I recalled many an evening spent listening to Alan talking to me about an idea, or helping me understand something I was struggling with, all over a cup of tea (actually, usually over several cups of tea!). I wasn’t sure what to expect from this paper, but I’m glad I read it. It proved invaluable for a number of reasons, but primarily because it actually led me back to the Semantic Web and some of the work we are doing at Talis. On the face of it that assertion might sound somewhat tenuous, but maybe it isn’t … as I’ll try to illustrate briefly(ish) …

The paper describes some of the history behind an attempt by a group of computer scientists to design a digital version of a synthetic chemist’s lab book. However, because the computer scientists were not experts in the domain and had very little experience of chemistry, they struggled to understand the process that the chemists followed. Whilst they could observe the chemists doing their job and glean some information through interviewing them, they simply could not understand the critical issue with reference to the lab books – when, how and why certain things were recorded and others were not. If you’re trying to create a digital replacement it’s absolutely imperative that you understand what it is the user is doing and why. It’s at this point that Making Tea became so important …

In frustration, among the team of Canadian and British computer scientists and chemists, the group made tea, a process embraced by both nations for restoring the soul. It was at this point that the chemist-turned-software-engineer on the team said “Making tea is much like doing an experiment.” The rest is history. The design team took up the observation and used making tea to model the process of both carrying out an experiment and recording it. To wit, the team’s chemist made tea multiple times: first using well understood kitchen implements, where questions were asked like “why did you not record that?” “You just did 20 steps to get the tea ready to pour, yet you’ve only written down “reflux.” Why?” From kitchen implements, the team moved to chemistry apparatus set up in the team’s design space. From there the team moved to the chemistry lab. The results of the exchanges in these spaces informed the design process. Indeed, they also informed the validation process: design reviews with chemists in various positions, from researchers to managers to supervisors, were carried out by making tea, and demonstrating how the artefact worked in the tea-making experimental process. This time it was the chemists’ turn to interrupt the presentation with questions about the artefact and their process.

The paper goes on to describe why ‘Making Tea’ worked so well as a design elicitation/validation method. I won’t provide a summary comparing it to other methodologies (you can read the paper for that) but I will summarise the four criteria that Monica and Alan identified that made it so successful in this example.

  1. Neutral Territory: Making Tea created a neutral space that was not owned by either the system designers or the domain experts – the intended users of the application. In a similar vein (although not exactly the same), having a neutral space you can go to in order to carry out design elicitation activities has proven hugely beneficial in our own experience at Talis. I have seen that removing ourselves from our offices or normal environment to spend time as part of a multidisciplinary team, focused on understanding and designing a solution to a problem, both helps to concentrate us and places everyone on an equal footing in an alien environment. It also forces us to come together … that’s important.
  2. Boundary Representation: When the problem domain is understood by both designers and users it forms a point of contact or reference that both groups feel comfortable with and can relate to each other through – it not only offered a way for domain experts to describe their tasks and activities, but also one where software engineers could offer back potential new designs.
  3. Disruption: By being similar yet different from the actual process being represented, Making Tea forced the users to reflect on their tacit activities. To my mind this should be simple to appreciate for anyone who has ever pair programmed. When you constantly have someone asking you what you’re doing, asking you to talk through your thought processes, as disruptive as it might seem it actually forces you to reflect on what you’re doing. I’ve lost count of the number of times I’ll start to explain a piece of code I’m writing to someone, and then find that the actual process of having to articulate what I’m doing reveals very quickly that I’m doing something wrong or failing to see the bigger picture.
  4. Time Compression: Making Tea reduced the time taken for a normal experiment to a period that could be completed in a participatory session. The net effect of this is that it facilitates rapid iteration of both observation of processes and each design. This sits quite well with the Agile mantra of rapid iteration and constant feedback.

I guess I like Making Tea; this entire metaphor feels very comfortable. It also got me thinking about some of the work I’ve been doing at Talis. I’ve been spending a fair amount of time looking at ontologies to represent different kinds of knowledge. Most recently Rob and I have been looking at how to model workflows in RDF … it doesn’t sound particularly tea-like, yet I can’t help but think that our current efforts to get closer to our users, and to others working in this domain, are going to help us understand what we are trying to build far better than a purely academic approach to researching this area would.
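Just to make that idea a little more concrete (it’s easier to point at than to describe), here is a rough sketch of the kind of thing I mean, using Python’s rdflib and an entirely made-up workflow vocabulary – the example.org namespace and property names below are hypothetical placeholders for illustration, not the model Rob and I have actually been working on:

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

# Hypothetical vocabulary, purely for illustration -- not a real ontology.
WF = Namespace("http://example.org/workflow#")

g = Graph()
g.bind("wf", WF)

# A tiny two-step workflow: the workflow and its steps are all resources,
# and the ordering between steps is expressed with a wf:next property.
g.add((WF.ReviewRequest, RDF.type, WF.Workflow))
g.add((WF.ReviewRequest, RDFS.label, Literal("Review request")))

g.add((WF.Submit, RDF.type, WF.Step))
g.add((WF.Submit, RDFS.label, Literal("Submit item for review")))
g.add((WF.Approve, RDF.type, WF.Step))
g.add((WF.Approve, RDFS.label, Literal("Approve or reject the item")))

g.add((WF.ReviewRequest, WF.firstStep, WF.Submit))
g.add((WF.Submit, WF.next, WF.Approve))

# Serialise as Turtle so the description can be published and linked to.
print(g.serialize(format="turtle"))
```

The appeal of doing it this way is that the steps become first-class, addressable resources that other people’s data can point at – which is exactly the kind of linking that makes the effort worthwhile.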

Now back to the tenuous Semantic Web link I mentioned earlier. Monica is working at Southampton University on a number of their Semantic Web projects. She was/is the Project Lead on MyTea, which tried to re-imagine the original work that the Smart Tea Project team mentioned in the paper did in building a digital lab book. What the MyTea project attempted to do was enhance the original work by integrating the tool with semantic technologies and what the folks down in Southampton refer to as the Semantic Grid (they also run an active project called myGrid which appears to bring all this together):

The Semantic Web and Semantic Grid, however, are motivating a possible sea change in the way scientists make their work available. With the Semantic Grid, a Web-based technology for sharing data and computation, scientists can share information in richer forms than traditional lab books and publishing has allowed. They will be able to make rafts of data generated in experiments available to other scientists, and to the public, for comparison, exploration and study; they can share analyses of information and collaborate in new ways.

Now I’m not sure what the current status of either of the projects is, since the paper was originally written in 2005 and the sites, other than myGrid’s, don’t look like they have been updated in a while. From my perspective I’m interested in the workflow modelling they talk about. Yet in addition to that there is something that does touch on what we are trying to do at Talis in building a platform that facilitates this notion of a Web of Linked Data – how to find ways of enriching existing applications by providing the means to link data together in ways that have never really been possible before. We have already seen the amazing things we can do with data and applications when you fundamentally accept that what we are talking about is not a technology change as such, but rather a complete paradigm shift.

This post does feel a bit strange due to the somewhat tenuous links and a bit of tangential reasoning but it’s forcing me to reflect on something I’m struggling to articulate at the moment … but that’s not a bad thing.

Adaptive Algorithms for Online Optimisation

ABSTRACT

The online learning framework captures a wide variety of learning problems. The setting is as follows – in each round, we have to choose a point from some fixed convex domain. Then, we are presented a convex loss function, according to which we incur a loss. The loss over T rounds is simply the sum of all the losses. The aim of most online learning algorithms is to minimize *regret*: the difference between the algorithm’s loss and the loss of the best fixed decision in hindsight. Unfortunately, in situations where the loss function may vary a lot, the regret is not a good measure of performance. We define *adaptive regret*, a notion that is a much better measure of how well our algorithm is adapting to the changing loss functions. We provide a procedure that converts any standard low-regret algorithm to one that provides low adaptive regret. We use an interesting mix of techniques, and use streaming ideas to make our algorithm efficient. This technique can be applied in many scenarios, such as portfolio management, online shortest paths, and the tree update problem, to name a few.

Pretty interesting tech talk. I found the notion of minimising regret quite interesting, but only really because I had heard of it before without ever seeing a real world implementation of it. I first heard of the significance of regret in learning from Alan, who captured it vividly in an essay called The Adaptive Significance of Regret which he wrote back in 2005. In fact he even showed me some PHP code he wrote that modelled regret, which at the time I remember finding somewhat amusing … but right now it feels far more significant.
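To make the notion a bit more concrete for myself I knocked together a little sketch afterwards – nothing to do with the talk’s actual algorithm, and not Alan’s PHP either, just plain online gradient descent on a sequence of simple quadratic losses, with the regret measured against the best fixed point in hindsight:

```python
import numpy as np

rng = np.random.default_rng(0)
T, d, radius = 200, 3, 1.0  # rounds, dimension, domain = Euclidean ball

def project(x):
    """Project a point back onto the ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

# Round-t loss f_t(x) = ||x - z_t||^2 for a slowly drifting target z_t.
targets = np.array([project(z) for z in
                    np.cumsum(rng.normal(scale=0.05, size=(T, d)), axis=0)])

x, losses = np.zeros(d), []
for t in range(T):
    z = targets[t]
    losses.append(float(np.sum((x - z) ** 2)))  # incur loss f_t(x_t)
    grad = 2 * (x - z)                          # gradient of f_t at x_t
    eta = 1.0 / np.sqrt(t + 1)                  # standard decaying step size
    x = project(x - eta * grad)                 # gradient step + projection

# For a sum of squared distances the best fixed point in hindsight is the
# mean of the targets (which already lies inside the convex domain).
x_star = targets.mean(axis=0)
best_fixed = float(np.sum((targets - x_star) ** 2))
print(f"cumulative loss {sum(losses):.1f}, regret {sum(losses) - best_fixed:.1f}")
```

What the paper’s adaptive regret adds, as I understand it, is that you measure regret against the best fixed decision on every contiguous interval rather than on the whole sequence, which is why it copes so much better when the losses drift.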

Lecturing – Usability and Web2.0

I had a lot of fun yesterday; my good friend Alan invited me to come up to Lancaster to do a special guest lecture on Usability and Web2.0 – I was asked to talk about the demands Web2.0 puts on real world development, and the usability issues we now face. The lecture was intended mainly for his undergraduates, but he invited the MSc, MRes and PhD students to attend as well.

I must confess I was very nervous; it’s been a long time since I’ve had to stand up and talk for ninety minutes. I had also spent much of the weekend trying to prepare my slides and work out how I was going to talk, intelligently, on a subject area that encompasses so much. I have to thank Richard Wallis and Rob Styles, two of my friends at Talis, who both gave me some great advice last week when I approached them and said “arrghhhh, I’m panicking! I know what I want to say, I’m just not sure how to structure it”. Thanks to them I spent the weekend organising my thoughts.

In the end it was fine. I really enjoyed the session and Alan did his best not to embarrass me (too much 😉). I started by talking a little bit about Web1.0 and the sorts of usability mistakes that were common back then (and perhaps still are now), then went on to talk about the differences between Web1.0 and Web2.0. I then focused on Web2.0 and the kinds of usability problems that we are having to consider and find solutions to at the moment, and tried to cover a broad range – technology, accessibility, identity, authority, privacy etc. I also talked about search as a usability problem, and how we still can’t find what we’re looking for, and explained why this leads me to believe that Google is broken. This flowed nicely into the final part of my talk, which focused on the Semantic Web and some of the work we’re doing at Talis.

The slides for my presentation are now available online here.

Semantic Desktop, PIMs and Personal Ontologies

Abstract
A Semantic Desktop is a means to manage all personal information across application borders based on Semantic Web standards. It acts as an extended personal memory assisting users to file, relate, share, and access all digital information like documents, multimedia, and messages through a Personal Information Model (PIMO). This PIMO is built on ontological knowledge generated through user observations and interactions and may be seen as a formal and semi-formal complement of the user’s mental models. Thus it reflects experience and typical user behavior and may be processed by a computer in order to provide proactive and adaptive information support or personalized semantic search. The Semantic Desktop is built on a middleware platform allowing it to combine information and native applications like the file-system, Mozilla, Thunderbird or MS-Outlook. In this talk I will show how machine learning techniques may be used to support the generation of a PIMO. I will further introduce the main concepts, components, and functionalities of the Semantic Desktop, and give examples which show how the Semantic Desktop may become …

I was very interested, and a little amused, when I came across this tech talk earlier this week. The talk echoed many of the ideas and points that Alan has been talking to me about recently around the whole idea of using personal ontologies to provide context for applications. It’s a research area he’s particularly interested in, and I’m very, very excited about the prospect of working with him to develop some of his ideas using the Semantic Web platform we’ve been building at Talis.

Alan has collaborated on papers on this subject, which you can find here; the paper on Task Centered Information Management resonates the most with the ideas presented in the tech talk.

Alan joins the Talis Platform Advisory Group

Last week Alan agreed to join our Talis Platform Advisory Group. Here’s the official announcement over on our Nodalities blog.

I was over the moon when I discovered that Alan had agreed to join our advisory group. I wasn’t sure whether he would, due to his other commitments, but after spending a wonderful weekend with him and his wife Fiona up in Kendal following HCI 2007 (some pics), I knew that there was so much he could help us with. We spent a long time talking about some of the work we’re doing here at Talis, and Alan kept offering me his insight and sharing his ideas with me; it became apparent that he could offer our team a unique perspective, which is something they all seem to agree with … it was Paul, one of our resident evangelists (all round nice guy … and the keeper of Cadbury’s Creme Eggs), who suggested asking Alan to join the group.

Alan has been more than just a wonderful friend to me over the years, he’s been a mentor, a muse, a confidant, in many ways he’s been like a father to me … the idea of collaborating with him again to build something special, like we did at aQtive, feels inspirational … 🙂

Whilst we’re on the subject of inspiration … consider for a moment who the other members of the advisory group are …

That’s a pretty special group of people, each of whom brings a wealth of experience and knowledge that will be invaluable in helping us grow the platform; they’ll tell us when they think we’re right and tell us when they think we’re making mistakes. I know we’re all looking forward to working with them.

HCI2007 … Day One Summary

 The last couple of weeks have been extremely busy and consequently I haven’t had much of a chance to post up about the conference. I thought I’d write a summary of some of the presentations that stood out the most for me.

For anyone interested, all the papers presented during the conference are available from the BCS website on this page.

Day One

View the schedule for the day.

The day was divided into three sessions, and during each session there were three concurrent tracks; this was essentially the format for all three days of the conference. I decided to stick with the Creative and Aesthetic Experiences track for most of the day. During the first session the presentation that captured my imagination the most was Dharani Perera’s short paper on investigating paralinguistic voice as a mode of interaction to create visual art1. In the paper Dharani and her colleagues reported on their research into how people can use the volume of their voice to control cursor movement to create drawings. The research is especially hopeful for artists with upper limb disabilities, who show remarkable endurance, patience and determination to create art with whatever means are available to them. Listening to her presentation and watching one of the videos they had recorded of an artist using the system was quite inspirational.

Another interesting paper presented during the first session was by Jennifer Sheridan from www.bigdoginteractive.com on Encouraging Witting Participation and Performance in Digital Live Art2. Jennifer and her colleagues have worked on developing a framework for characterising people’s behaviour with Digital Live Art. They identified three key categories of behaviour with respect to a performance frame – these are defined as performing, participating and spectating. They used iPoi to illustrate their framework. Imagine spinning or throwing a tiny computer round your body to create your own visual projections, music or light show. Poi is a traditional Maori art form; iPoi is a sensor-packed upgrade of the original that can trigger visual and audio soundscapes in real time using wireless technology. The goal of this peer-to-peer, exertion interface is to draw people into the performance frame and support transitions from audience to participant to performer. The presentation was really about describing how evaluating and measuring interaction in public performance is very different from the frameworks and measures currently employed to understand interaction in traditional HCI. I think what Jennifer and her team showed was that traditional HCI, like it or not, has focused on understanding interactions in desktop computing; however, as computing moves away from the desktop into more ubiquitous, mobile devices we will see a shift to non task-based uses of computing, and as such we need new ways to understand this new kind of interaction.

The first session was followed by the first of the conference keynotes, by Professor Stephen Payne, on the subject of deciding how to spend your time. The keynote was an excellent presentation of Stephen’s current research into cognitive strategies and design heuristics for dealing with information overload on the web3. He began the keynote with the simple and yet insightful observation that …

time is nature’s way of preventing us doing everything at once

The talk was about the very real problem that many of us face: in a world where the internet and search engines can provide individuals with relevant texts on any given subject, how can readers allocate time effectively across a set of relevant documents, and how can they be helped to do so? In other words, when faced with more relevant texts than time to read them all, what strategies do you use when you have a specific learning goal?

Stephen presented the results of an experiment where a number of test subjects were presented with four texts to read in 6 minutes, after which they would have to take a test. Whilst the four texts covered the same subject (the functioning of the human heart), the complexity of the texts varied: for example Text A was a primary school text on the topic, whilst Text D was a postgraduate medical text; B and C were both somewhere in the middle.

Now, before seeing Stephen’s presentation, I would have assumed that people in this situation might have attempted to sample each of the texts – in other words, review each one briefly. What Stephen’s research has begun to demonstrate is that individuals, when faced with a specific learning goal, don’t sample but rather satisfice: in their minds they set a threshold of acceptability and, if it’s met by a particular text, they will settle on it. It’s sometimes referred to as Information Foraging, which is analogous to the problem faced by animals foraging for food in nature: how long do they stay in one patch before moving on to another?
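If it helps, here’s a toy way of expressing the difference between the two strategies – the relevance scores are entirely made up, the point is just that satisficing commits as soon as something clears the bar, whereas sampling has to look at everything first:

```python
# Made-up "relevance to my learning goal" scores for the four texts.
texts = {"Text A": 0.30, "Text B": 0.60, "Text C": 0.75, "Text D": 0.40}

def sample_then_pick(texts):
    """Skim every text, then commit to the best one."""
    return max(texts, key=texts.get)

def satisfice(texts, threshold=0.55):
    """Commit to the first text that clears the acceptability threshold."""
    for name, score in texts.items():
        if score >= threshold:
            return name
    return None  # nothing was good enough

print(sample_then_pick(texts))  # Text C, but only after reading all four
print(satisfice(texts))         # Text B, found after reading just two
```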

What I found most interesting in Stephen’s presentation was the idea that even when skimming documents we build a mental representation of the mapping between the physical structure of the text and its meaningful content. If this is true then it suggests it might be possible to construct documents so that they take advantage of this structure map. It was an excellent keynote, and I must admit I wasn’t at all surprised to learn that Alan has worked with Stephen4 in this area, since much of Stephen’s talk reminded me of ideas and theories of the mind that Alan had related to me in the past.

After lunch the second session of talks began. Again I opted to start off in the Creative and Aesthetic Experiences track. The first paper presented was by Shaowen Bardzell, and had the intriguing title Docile Avatars: Aesthetics, Experience, and Sexual Interaction in Second Life5. It’s no secret that I’m not a huge fan of Second Life, but listening to this talk made me finally accept that, even though I don’t understand why people in Second Life seem to take it so seriously, the fact remains that many do, and because they do it’s important to understand their needs as users and the communities that are emerging within Second Life. I did find this talk somewhat surreal; the whole idea of sexual interaction, and in particular BDSM, as something that’s possible in a virtual environment felt kind of bizarre. I did laugh about it with Alan at the bar later when I recounted a conversation I’d had with someone from the Second Life team who couldn’t understand why I wasn’t willing to spend more time in it, and who asked me what the real world had that Second Life didn’t; I had replied “real women” (stop laughing Alan!). What did emerge from Shaowen’s talk was that it is possible to construct powerful aesthetic experiences in Second Life, and that we’re only now beginning, through virtual ethnography and HCI theories of experience design, to understand how and why this complex phenomenon has emerged.

The most interesting of the talks from this session, under the Communication and Sharing Experiences track, was entitled Thanks for the Memory6. This was a joint project between Manchester Met University, the BBC and Microsoft. It was based around a piece of life-logging technology developed by Microsoft called the SenseCam, which they describe as a piece of memory prosthesis. The SenseCam is a passive device that users wear; it is designed to take photographs, at regular intervals of around 30 seconds, without user intervention whilst it is being worn. Unlike normal cameras it doesn’t have a viewfinder; it’s a simple camera fitted with a special fisheye lens that ensures the field of view is maximised, so that almost everything in the wearer’s field of view is captured. Whilst it sounds a bit strange, the effect on the six test subjects seemed to be quite profound; the authors of the paper put it in these terms …

What we have seen is that the relationship between things-as-remembered-by-the-subjects-in-ordinary-ways and things-as-presented-by-the-SenseCams is complex. For one thing, SenseCam data captured things-that-might-have-been-remembered-but-not-intentionally and things-that-were-beyond-the-possibility-of-being-recalled-by-the-user-but-which,-when-presented-to-the-same-user,-somehow-provoked-a-recollection … This awkward language alludes to the difficult and complex relationship between human memory and digital traces of action. We have seen that SenseCam data makes lived experience, in various ways and in varying degrees, strange to the persons who had the relevant experiences in question. Strangeness here is not a negative thing, as we saw. Strangeness brings values of various kinds. The crux, it seems to us, is that in creating discongruent experiences to the ones imagined or recollected, SenseCams brought to bear ways of seeing that were not obviously the subject’s own, but which were nevertheless empirically related to those experiences, though in complex ways.

For the final session of the afternoon I decided to stay with the Communication and Sharing Experiences track. There was a very interesting paper presented, The Devil You Know Knows Best: How Online Recommendations Can Benefit From Social Networking7, which I thought was quite relevant to some of the stuff we’re looking at here at Talis. The content of the paper seemed to me to be fairly obvious, but nevertheless it was interesting to see some empirical research done to prove some of the points. In short, their thesis was that the defining characteristic of the internet is an abundance of information, and this is problematic: how do we know which information to pay attention to and which we can just ignore? Recommender systems were envisaged to try to solve this problem but have for the most part been unsuccessful; research would suggest that this is due to a lack of social context and inter-personal trust. What the researchers presenting this paper discovered was that participants overwhelmingly favoured recommendations made by people familiar to them, and if the recommender was not someone familiar then recommendations were favoured if the recommender could be identified as having the same interests. Consequently the conclusion offered – and this all felt terribly obvious to me – was that recommender systems should be integrated with social networking systems.
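None of this code is from the paper itself, but as a back-of-the-envelope sketch of that conclusion – give full weight to recommendations from people you know, and weight strangers by how much their declared interests overlap with yours – something like the following (with entirely made-up people and data) captures the idea:

```python
from collections import defaultdict

# Hypothetical social graph, interest profiles and recommendations.
friends = {"me": {"alice", "bob"}}
interests = {
    "me":    {"semantic web", "hci"},
    "alice": {"semantic web", "agile"},
    "bob":   {"photography"},
    "carol": {"hci", "semantic web"},
}
recommendations = {
    "alice": ["paper-a", "paper-b"],
    "bob":   ["paper-c"],
    "carol": ["paper-b", "paper-d"],
}

def interest_overlap(user, other):
    """Fraction of the user's interests that the recommender shares."""
    mine = interests[user]
    return len(mine & interests.get(other, set())) / len(mine) if mine else 0.0

def score_items(user):
    scores = defaultdict(float)
    for recommender, items in recommendations.items():
        # Familiar recommenders get full weight; strangers are discounted
        # according to how similar their interests are to ours.
        weight = 1.0 if recommender in friends[user] else interest_overlap(user, recommender)
        for item in items:
            scores[item] += weight
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(score_items("me"))  # paper-b comes out top: a friend *and* a like-minded stranger
```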

The next paper from Day One that I want to mention was the presentation by my HCI tutor and aQtive colleague Russell Beale, Blogs, reflective practice and student-centered learning8. Russell argued that blogging can be used to enhance education by encouraging reflective practice. Russell defines reflective practice as

an approach to learning that encourages thought about what has been experienced and seen, which can then drive new theories and investigations to test those theories, leading to new experiences that may, or may not, validate the original ideas. This leads to them being modified, extended, and refined, and the cycle continues.

Whilst the notion of reflective practice should be familiar to everyone, this is the first time I’ve come across any real research into using blogging as a medium to encourage and enhance reflective practice in education. Russell was keen to point out that from a social and pedagogical perspective blogging can support a sense of community amongst students – they can interact with each other and post comments on each other’s blogs. And because of the semi-public nature of the content they are generating, students can see the work done by other students and so gain an insight into how much work they themselves need to do, since others can see the level of their activity. This creates a kind of peer pressure that exerts influence over students to at least maintain some acceptable level of activity. Russell has always been a good presenter and I enjoyed his presentation – even though I have some misgivings about the kind of peer pressure this creates, that doesn’t mean it’s a bad idea.

The last paper of the day was also very interesting; the researchers sought to compare traditional and novel user interfaces for exploring a blog. The paper is entitled Contextualizing the Blogosphere: A Comparison of Traditional and Novel User Interfaces for the Web9. They investigated how contextual user interfaces affected the blog reading experience, and how novel contextual user interfaces can increase user performance and satisfaction. They compared a standard blog interface (WordPress) with StarTree and the Focus Metaphor Interface. StarTree uses a dynamic navigation tree that presents all nodes in the navigation concurrently; each node corresponds to a category on the blog, and when a user selects a node it displays the associated article in a content pane. The presenters argued that by providing the entire structure of the information space concurrently this could result in superior orientation within the information space, making it much easier to find what you’re looking for. As it turned out, StarTree outscored both the standard blog and the FMI interface in terms of task performance.

What this demonstrated to me was that when you have a large data set that you need to navigate or traverse in order to discover key bits of information, traditional interface metaphors like faceted browsing and keyword search don’t provide enough context, especially when the information space is large or complicated. What you need is a way to intuitively navigate to the bit that you’re interested in. It’s an idea some of us here at Talis have been playing around with and we think we have come up with some interesting results. 😉

At the end of the day most of the delegates attended the reception, where we had some nice food, tonnes of drinks, and I managed to have a lot of conversations with some very interesting people. After the reception many of us ended up in the bar, where I camped out with Alan and Russell and spent the evening being introduced to tonnes of people, talking to them about their research and interests as well as what we do at Talis. The evening was a lot of fun and a great way to round off a very intense first day. All in all the first day seemed pretty fast paced; there was a lot to absorb (some of which I’m still absorbing …).

I’m going to start writing up the summary of day two ….

  1. D. Perera, J. Eales, K. Blashki, Voice Art: Investigating Paralinguistic Voice as a Mode of Interaction to Create Visual Art
  2. J. Sheridan, N. Bryan-Kinns, A. Bayliss, Encouraging Witting Participation and Performance in Digital Live Art
  3. S. Payne, IGR Report
  4. A. Dix, A. Howes, S. Payne, Post-web cognition: evolving knowledge strategies for global information environments
  5. S. Bardzell, J. Bardzell, Docile Avatars: Aesthetics, Experience, and Sexual Interaction in Second Life
  6. R. Harper, D. Randall, Thanks for the Memory
  7. P. Bonhard, M. A. Sasse, C. Harries, The Devil You Know Knows Best: How Online Recommendations Can Benefit From Social Networking
  8. R. Beale, Blogs, reflective practice and student-centered learning
  9. S. Laqua, N. Ogbechie, A. Sasse, Contextualizing the Blogosphere: A Comparison of Traditional and Novel User Interfaces for the Web