Sunday, May 25, 2008

Facebook will take over the World (AOL 2.0)

Saving a comment to the lifestream: "basically it's about
metaphors. what is the metaphor one uses for "being on/in the Web"? it
could be "search", or "conversation", or "me & my friends", or even
"research", or "writing oneself into existence" (eg. via Twitter). or
some combination, of course. and yes, most people define themselves as
"social" in a quite simple way (to avoid the term "primitive"). like
"e-mail" (another social metaphor) still being the killer app of the
Internet, at least here in Germany, which is admittedly a late adopting
country. so i fear Josh may turn out to be right."

What does the Web 2.0 Office look like?

According to Brian Lamb: "Email | Office Suite | FileStore/Share | VPN | Synchronous Voice | Wiki | Blogs | Backups | Calendaring | Project management [GTD]"

To add my view: The Office environment can be described systematically along these lines:

(1) Main focus (email, phone, face2face, workspace on desk/screen) -- semi-focus ("in between" & multitasking, with some focus - like link-blogging, delicious-tagging with comment) -- Continuous Partial Attention (partial, peripheral, intermittent; like status & twitter) -- background (the sea one is swimming in, without noticing it, both in realspace and in mediaspace)

(2) communicating - mailing - presence (sensing & making felt) - scanning information - reading texts - writing texts

(3) voice - text

(4) synchronous - asynchronous

(5) active - passive - neither (semi-active/semi-passive)

maybe there is more.

Tuesday, May 20, 2008

Information Work: 3 Types

adapted from Mark Bower:

Knowledge Work
Creating, consuming, analysing, transforming and managing information
Managing ideas, projects and teams
Starting with ideas, which are then built into a new document/report/form/business process
Working in an unstructured, free-form way

Structured Task Work
Creating, consuming and processing information, but not transforming or managing it
Finding facts quickly, creating and editing documents, writing & processing information
Working mainly within structured, pre-defined workflows

Data Entry Work
Creating and consuming data within pre-defined systems
Working with standardised documents, files, lists and forms
Working strictly within structured, pre-defined workflows

Friday, May 16, 2008

Anil Dash Copy & Paste

http://www.dashes.com/anil/2008/03/embedded-journalism.html

Monday, May 5, 2008

Matt Webb Ripped: Getting Things Done As a finite-state machine

(sorry for ripping, just to think it over for myself; here's the original:)

Computer programmes are something else that have to not halt unintentionally. The way this is done is to model the application as a collection of finite-state machines. Each machine can exist in a number of different states, and for each state there are a number of conditions to pass to one or another of the other states. Each time the clock ticks, the machine sees which conditions are matched, and updates the state accordingly. It is the job of the programmer to make sure the machine never gets into a state out of which there is no exit, or faces a condition for which there is no handling state. There are also more complex failure modes.

Getting Things Done, by David Allen, describes a finite-state machine for dealing with tasks. Each task has a state ('in,' 'do it,' 'delegate it,' 'defer it,' 'trash' and more) and actions to perform and conditions to be met to move between the states. The human operator is the clock in this case, providing the ticks. This machine does have exit points, where tasks stop circulating and fall off.
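To think this through in code: here is a minimal sketch of such a task machine. The state names loosely follow the GTD labels above, but the transition conditions and the tick function are my own illustration, not Allen's exact workflow.

```python
# Minimal finite-state machine for GTD-style tasks.
# States follow the labels above; conditions are illustrative.
TRANSITIONS = {
    "in":          {"actionable": "do it", "delegate": "delegate it",
                    "later": "defer it", "useless": "trash"},
    "do it":       {"done": "done"},
    "delegate it": {"returned": "in", "done": "done"},
    "defer it":    {"tickled": "in"},   # the tickle file loops it back
    # "trash" and "done" are exit points: tasks stop circulating
}

EXIT_STATES = {"trash", "done"}

def tick(state: str, condition: str) -> str:
    """One clock tick: the human operator checks conditions and moves the task."""
    if state in EXIT_STATES:
        return state                                     # already fallen off
    return TRANSITIONS[state].get(condition, state)      # no match: stay put

# Walk one task through the machine: deferred, tickled back in, done.
state = "in"
for condition in ["later", "tickled", "actionable", "done"]:
    state = tick(state, condition)
print(state)  # -> done
```

Note that the human operator supplies both the ticks and the conditions; the machine itself only guarantees that every state has a way out.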

The cleverness of Getting Things Done is to wrap this finite-state machine in another finite-state machine which instead of running on the tasks, runs on the human operator itself, the same operator who provides the ticks. The book is set up to define and launch the state machine which will keep the human in the mode of running the task machine. If they run out of tasks, the GTD machine has a way of looping them back in with tickle files and starting again the next day. If they get into an overwhelmed state, the GTD machine has a way of pruning the tasks. If they get demotivated and stop running the task machine, the GTD machine has ways of dealing with that. Alcoholics Anonymous has to deal with this state too, and it's called getting back on the wagon. The GTD machine even has a machine wrapped around it, one comprising a community to provide external pressure. Getting Things Done is a finite-state machine that runs on people; a network of states connected by motivations, rationale and excuses, comprising a programme whose job it is to run the task machine.


Websites can also be seen as finite-state machines that run on people. Successful websites must be well-designed machines that run on people, that don't crash, don't halt, and have the side-effect of bringing more people in. Websites that don't do this will disappear.

Instead of a finite-state machine, think of a website as a flowchart of motivations. For every state the user is in, there are motivations: it's fun; it's the next action; it saves money; it's intriguing; I'm in flow; I need to crop the photo and I remember there's a tool to do it on that other page; it's pretty.

If you think about iPhoto as its flowchart of motivations, the diagram has to include cameras, sharing, printers, Flickr, using pictures in documents, pictures online and so on. Apple are pretty good at including iPhoto all over Mac OS X, to fill out the flowchart. But it'd make more sense if I could also see Flickr as a mounted drive on my computer, or in iPhoto as a source library just as I can listen to other people's music on the same LAN in iTunes. This is an experience approach to service design.

Users should always know their next state, how they can reach it, and why they should want to.

If I were to build a radio in this way, it would not have an 'off' button. It would have only a 'mute for X hours' button because it always has to be in a state that will eventually provoke more interaction.

Designing like this means we need new metrics drawn from ecology design. Measurements like closure ratio become important. We'll talk about growth patterns, and how much fertiliser should be applied. We'll look at entropy and population dynamics.

Maybe we'll look at marketing too. Alex Jacobson told me about someone from old-school marketing he met who told him there are four reasons people buy your product: hope, fear, despair and greed. Hope is when you go for a meal out at a restaurant because it's going to be awesome. Fear is because you'll get flu and lose your job unless you take the pills every day. Despair is needs not wants: buying a doormat, or toilet paper, or a ready-meal for one. Greed gets you more options to do any of the above, like investing. Yeah, perhaps. Typologies aren't true, but they're as true as words, which also aren't true but give us handholds on the world and can springboard us to greater understanding. We can kick the words away from underneath ourselves once we reach enlightenment.

Matt Webb Ripped: Micro/Macro Structures

(one of his astounding notes from the 2007 notebook:)

Micro/macro structure is the first of the challenges facing the Web: micro pattern recognition.

What microformats and other forms of structure do is increase the resolution of the Web: each page becomes a complex surface of many kinds of wrinkles, and by looking at many pages next to each other it becomes apparent that certain of these wrinkles are repeated patterns. These are microformats, lists, blog archives, and any other repeating elements. Now this reminds me of proteins, which have surfaces, part of which have characteristics shared between proteins. And that in turn takes me back to Jaron Lanier and phenotropics, which is his approach to programming based on pattern recognition.

So what does phenotropics mean for the Web? Firstly it means that our browsers should become pattern recognition machines. They should look at the structure of every page they render, and develop artificial proteins to bind to common features. Once features are found (say, an hCalendar microformat), scripting can occur. And other features will be deduced: plain text dates 'upgraded' to microformats on the fly. By giving the browser better senses - say, a copy of WordNet and the capability of term extraction - other structures can be detected and bound to (I've talked about what kind of structures before).
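A toy version of this binding idea, assuming nothing beyond the standard library: scan a page's class attributes and fire a handler when a known pattern is found. The class name `vevent` is the real hCalendar microformat root; the "protein" registry and handler machinery here are purely my own illustration of the idea, not any existing browser API.

```python
from html.parser import HTMLParser

# Toy "protein" registry: a surface feature (a class-name signature)
# bound to a handler that fires when the feature is recognised.
# "vevent" is the real hCalendar root class; the registry is illustrative.
BINDINGS = {"vevent": lambda attrs: f"event block found: {attrs}"}

class PatternBinder(HTMLParser):
    """Renders nothing; just binds handlers to recognised page features."""
    def __init__(self):
        super().__init__()
        self.matches = []

    def handle_starttag(self, tag, attrs):
        attr_dict = dict(attrs)
        for cls in attr_dict.get("class", "").split():
            handler = BINDINGS.get(cls)
            if handler:
                self.matches.append(handler(attr_dict))

# A fragment marked up with the hCalendar microformat.
page = ('<div class="vevent"><span class="summary">Talk</span>'
        '<abbr class="dtstart" title="2008-05-05">May 5</abbr></div>')
binder = PatternBinder()
binder.feed(page)
print(binder.matches)  # the vevent "protein" has bound once
```

The 'upgrading' step Webb describes - detecting plain-text dates and promoting them to microformats on the fly - would be a second pass over the unmatched text, which is where the WordNet-style senses would come in.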

The technological future of the Web is in micro and macro structure. The approach to the micro is akin to proteins and surface binding--or, to put it another way, phenotropics and pattern matching. Massively parallel agents need to be evolved to discover how to bind onto something that looks like a blog post; a crumb-trail; a right-hand nav; a top 10 list; a review; an event description; search boxes.

The macro investigation is like chemistry. If pages are atoms, what are the molecules to which they belong? What kind of molecules are there? How do they interact over time? We need a recombinant chemistry of web pages, where we can see multiple conversation molecules, with chemical bonds via their blog post pattern matchers, stringing together into larger scale filaments. What are the long-chain hydrocarbons of the Web? I want Google, Yahoo and Microsoft to be mining the Web for these molecules, discovering and name them.