The death of the mouse?

Photo: ‘all wireless again…’ © 2010 macreloaded.com, licensed under CC BY-ND 2.0 (http://creativecommons.org/licenses/by-nd/2.0/)

I’ve seen the future, and there are no mice in it.

Apple certainly have a history of setting the trends in popular computing: from the mass-market appeal of the Apple II, through to the revolution in mobile computing brought about by the iPhone & iPad; not forgetting, of course, the mouse’s arrival in the mainstream with the Macintosh in 1984…

Since then, the graphical user interface (GUI) has become truly ubiquitous. Whether you favour Windows, Mac, Linux or a mobile device, the GUI is the way we interface with our computers today. Even the hard-core geeks who value the speed and power of the command-line interface generally use it from a terminal application inside a GUI. No-one short of a masochist would choose anything else.

With the ubiquity of the GUI has come the near-ubiquity of the mouse. Laptop users will often make do without one (especially when on the move), and a few desktop users will choose the trackball ahead of the mouse. But the mouse has been the predominant interface device for the best part of 25 years.

But I think that’s about to change.

There are two trends driving this, I think.

The first is that users increasingly favour the GUI’s menus, toolbars, and context-sensitive pop-ups over the more traditional keyboard shortcuts. This is inevitable really, since keyboard shortcuts are a throwback to the pre-GUI days (the days of DOS, and applications like WordPerfect 5.1 – the PC word-processor of choice before Microsoft’s Word became the dominant force). There has been a gradual decline in the prominence of keyboard shortcuts in user interfaces ever since the GUI was invented, and this is true of both Windows and Apple’s operating systems. Whereas once a “power user” would expect to be able to do everything without taking his or her hands off the keyboard, today’s applications generally have too many complex features to make this a reality.

This is a good thing. Anyone who actually used the old-school word-processors will remember the lengthy cheat sheets they’d keep taped up next to the screen, to remind them of the keystroke sequences required for all but the most routine actions. Today’s applications, with their context-sensitive menus, make it easier than ever to apply the correct formatting or make the required changes – and to use those menus, the user has to use a pointing device. More importantly, it makes the experience of using applications far less daunting for the vast majority of users who aren’t expert power users. And of course, since the advent of the web, with its inherently visual presentation, navigating hyperlinks makes little sense with a keyboard.

The second major change in technology is multitouch. The touch screens and trackpads of the past were all single-touch devices. With multitouch devices (now near universal in touch screens – though surprisingly not yet in trackpads) the experience changes from simply driving a pointer on the screen to using a fully fledged vocabulary of gestures to interact with the system (pinching to zoom, swiping, twisting, and so on).
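By way of illustration, here’s a minimal sketch of what that “vocabulary” looks like from an application’s point of view. It happens to use the Win32 WM_GESTURE message (real API, available since Windows 7) simply because it’s a well-documented example – this is not how OS X does it, and the window name and structure are just scaffolding for the sketch. The point is that the system hands the program decoded gestures (zoom, pan, rotate) rather than raw pointer coordinates:

```c
/* Sketch only: a bare Win32 window that receives decoded multitouch
   gestures via WM_GESTURE instead of raw pointer positions. */
#define _WIN32_WINNT 0x0601   /* Windows 7 or later, for WM_GESTURE */
#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    switch (msg) {
    case WM_GESTURE: {
        GESTUREINFO gi = { sizeof(GESTUREINFO) };
        if (GetGestureInfo((HGESTUREINFO)lp, &gi)) {
            switch (gi.dwID) {
            case GID_ZOOM:   /* pinch: finger spread arrives in gi.ullArguments */ break;
            case GID_PAN:    /* two-finger swipe / scroll */                       break;
            case GID_ROTATE: /* twist (off by default; enable via SetGestureConfig) */ break;
            }
            CloseGestureInfoHandle((HGESTUREINFO)lp);
        }
        return 0;
    }
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProcA(hwnd, msg, wp, lp);
}

int WINAPI WinMain(HINSTANCE inst, HINSTANCE prev, LPSTR cmd, int show)
{
    WNDCLASSA wc = {0};
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = inst;
    wc.lpszClassName = "GestureDemo";   /* hypothetical name for this sketch */
    RegisterClassA(&wc);

    HWND hwnd = CreateWindowA("GestureDemo", "Gesture demo", WS_OVERLAPPEDWINDOW,
                              CW_USEDEFAULT, CW_USEDEFAULT, 400, 300,
                              NULL, NULL, inst, NULL);
    ShowWindow(hwnd, show);

    MSG msg;
    while (GetMessageA(&msg, NULL, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessageA(&msg);
    }
    return 0;
}
```

Notice that the application never sees x/y coordinates for these gestures at all – the operating system has already decided that the user meant “zoom” or “pan”. That’s the shift from pointer-driving to vocabulary.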

Mobile devices are very much at the forefront of this revolution. Once someone has used a tablet, and seen how naturally you can interact with it, the obvious question follows: “why can’t all computers work like this?”

Apple seem to have been asking exactly this question since the introduction of iOS. With each new version of OS X, more gestures have been introduced, and Apple’s laptop users (of whom there are many) have been able to take full advantage of them since multitouch trackpads arrived on MacBooks a few years ago. But what about the desktop users?

Until fairly recently, they were left a little out in the cold. Although Apple’s innovative Magic Mouse was (and is) a very clever piece of technology – combining, as it does, a multitouch sensor with the body of a mouse – it’s a bit limiting. It’s too small for many multi-fingered gestures, and because a mouse user typically keeps a hand on the mouse for extended periods, it’s too easy to register an unintended gesture accidentally.

With the introduction of the Magic Trackpad last year, Apple effectively signed the mouse’s death warrant. Not that things changed overnight: it’s really only now, with OS X 10.7 (Lion) and all the additional gestures it introduces, that the case becomes compelling.

I’ve had a Magic Trackpad for less than a week now, and already I’m totally sold. The gestures give so much control over the system that, for the first time ever, a user can do pretty much everything more conveniently, and more quickly, without touching the keyboard. Apple have created a user experience in which, rather than keeping your hands on the keyboard, you keep your hand on the trackpad.

So will the mouse put up a fight, or will it go quietly? Given the natural propensity of most humans to fear change and to like what they know, I think it’ll be a good few years before the mouse goes anywhere; but I really believe that once users try a large multitouch trackpad, they, like me, won’t want to go back.

It’ll also need rather more multitouch trackpads to become available (the likes of Wacom have introduced an excellent product with their Bamboo Touch tablet, but such products are still few and far between), and it’ll need Microsoft to introduce “Mac-like” multitouch gestures into the heart of Windows (although if the last few years are anything to judge by, where Apple lead, everyone else follows). I think this is likely to accelerate over time, as the simplicity of mobile OS interfaces starts to cross back onto the desktop. So I think we have hit the high-water mark for the mouse, and its days are now numbered.

Of course, the multitouch trackpad isn’t the only device competing with the mouse. There’s also the question of cutting out the middle-man entirely and using a touchscreen display – after all, the tablet market is doing exactly that… The difference, I think, is twofold. Firstly, I genuinely don’t think that (for most purposes) a 30″ touchscreen is very convenient. Dragging something from one side of the screen to the other would be just too much like hard work. On a 10″ iPad it’s fine; but on something three times that size?

Then there’s the question of ergonomics. If I have to hold my arms out in front of me all day to interact with my touchscreen, my arms are going to get tired… (As an aside, that’s why I doubt the “Minority Report”-style gestural interface will ever catch on – it’s just too energetic!) On the other hand, if I mount my display horizontally, flat on my desktop (as I might a tablet), then I’m going to have to sit with terrible posture. It’s bad enough using an iPad on your lap; sitting over a table display all day is bound to give you a stiff neck.

Whilst I can’t predict that someone won’t solve these problems, or come up with a new, as-yet-unthought-of user-interface device, I really do think that the mouse is heading the way of the light pen…

iPad delays…

Wired is reporting that Apple have now said that the iPad won’t get an “international” (i.e. European) release until the end of May 2010.

This is somewhat disappointing news for Apple fans on this side of the Atlantic.

Apple cite spectacular sales figures in the US as the reason for the delay.  Keeping people waiting doesn’t seem like a terribly clever marketing strategy – so I can only presume that this is down to a real shortage.

I suppose the only upside is that it gives Apple a little longer to develop a firmware fix for the Wi-Fi problems that some users have experienced.

Why iPad isn’t a laptop killer…

There’s been a lot of comment about Apple’s new iPad – much of it very positive, but not universally so: Cory Doctorow, for one, has been notably critical.

It’s Cory’s point that I want to talk about here.

Before I start, I have to “declare an interest” (as it were): I am an Apple fanboy, I admit that – but that doesn’t (or at least, I hope it doesn’t) make me unable to take a dispassionate and reasoned view.

I think that Cory is wrong – because he’s looking at the iPad as a computer…  It’s not; rather it’s the first serious & mass-market “web appliance”.

For as long as I can remember, the editorials in computing magazines have heralded the end of “computing” as an activity for the chosen few, and its becoming something for “the masses”. Even long before my day, there were those who viewed innovations such as “high-level languages” as opening up computers for the man in the street.

Sadly, and also for as long as I can remember, those same editorial columns have been full of stories telling us that computers are just too hard to use. Even the advent of the GUI, and the mass-market sales that have come with the uptake of internet use over the last fifteen years, haven’t really helped. You still read columns pointing out that washing machines don’t need you to reinstall their operating system every eighteen months because they’ve got a bit slow…

Apple have a great track record of spotting, and filling, niches in the market (albeit not generally by being the first to do something, but by being the first to do it right). There were laptops before the PowerBook, MP3 players before the iPod, smartphones before the iPhone, and so on… With the iPad, the niche is the one currently filled by netbooks – except that (in my opinion) it isn’t quite that simple. I don’t think Apple are going after the “ultra-portable” aspect of the market as such (though clearly the iPad fits in there), but rather the “portable web browsing” aspect…

And that’s the clever bit – and that’s why Cory is wrong. The iPad simply isn’t trying to be a “computer” in the sense that we think of them today. It’s not a laptop without a keyboard – it’s something else entirely. It’s an information delivery platform, not an information generation platform. It really isn’t a laptop, but an overblown iPod Touch. In short, it’s a web appliance.

People think about their computers and their phones differently. Until the iPhone (with a few minor exceptions) people didn’t ever think about updating the system software in their phone. It was what it was… The iPhone changed that, and made the phone more like a computing platform. The iPad’s form factor accentuates this still further; but to think of it that way is to miss the point.

I believe that it’s been designed to be an appliance, in the way that a microwave oven is an appliance.  Yes, you still have to know how to cook to do anything useful with it – but you don’t need to be a radar engineer…

It’s the same with the iPad.  By controlling all of the variables – Apple have produced a device that will “just work”.

Now, none of this is to say that I don’t agree with Cory’s central tenet: that openness of systems is a “good thing”. Of course it is. But to apply that to the iPad is to miss the point. He writes:

“The way you improve your iPad isn’t to figure out how it works and making it better…”

Well, no, okay.  That’s true.  But neither do I improve my fridge, or my TV, by “figuring out how it works and making it better”…

The truth is that over the last twenty-five years the complexity of computers has increased exponentially (literally). Someone keen and interested (with the right background and skills) could – and can – understand how a late-1970s-vintage computer works, at the lowest possible level. Indeed, you can even buy a replica of the Apple I in kit form, for home assembly. But, with the best will in the world, no hobbyist can hope to build a 2010-vintage iMac (for example) from scratch. Miniaturisation and complexity have put paid to that.

This isn’t something that’s unique to computing. Go back to the 1940s or ’50s, and you’ll see that it wasn’t at all uncommon for enthusiasts to build their own radios. Sixty or seventy years later, radio has given way to television as the mass-broadcast technology – and I don’t remember the last time I saw someone building their own plasma-screen TV from scratch…

The same is true of software. For all the myriad frameworks & libraries that exist today to make things “simpler”, they also (undeniably) add complexity. Yes, using .NET to write a Windows program is easier than writing that same program in C and calling all the APIs by hand; but easier still is writing for the command line… GUIs make it easier to use computers, but harder to program them.
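To make that gap concrete, here’s a minimal, hedged illustration in plain C – the Win32 calls are the real API, but both programs are just sketches, and the file names are hypothetical. First, a complete command-line “hello world”; then the boilerplate the bare GUI equivalent needs before it can put the same words on screen:

```c
/* hello_cli.c -- the complete command-line program. */
#include <stdio.h>

int main(void)
{
    printf("Hello, world!\n");
    return 0;
}

/* hello_gui.c -- a minimal Win32 equivalent: before it can draw
   anything, it must register a window class, create a window, run a
   message loop, and handle paint messages. */
#include <windows.h>

static LRESULT CALLBACK WndProc(HWND h, UINT m, WPARAM w, LPARAM l)
{
    switch (m) {
    case WM_PAINT: {
        PAINTSTRUCT ps;
        HDC dc = BeginPaint(h, &ps);
        TextOutA(dc, 10, 10, "Hello, world!", 13);  /* the actual output */
        EndPaint(h, &ps);
        return 0;
    }
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProcA(h, m, w, l);
}

int WINAPI WinMain(HINSTANCE inst, HINSTANCE prev, LPSTR cmd, int show)
{
    WNDCLASSA wc = {0};
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = inst;
    wc.lpszClassName = "HelloGui";
    RegisterClassA(&wc);

    HWND h = CreateWindowA("HelloGui", "Hello", WS_OVERLAPPEDWINDOW,
                           CW_USEDEFAULT, CW_USEDEFAULT, 320, 240,
                           NULL, NULL, inst, NULL);
    ShowWindow(h, show);

    MSG msg;
    while (GetMessageA(&msg, NULL, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessageA(&msg);
    }
    return 0;
}
```

The console version has three meaningful lines; the GUI version needs a window class, a window, a message loop and a paint handler before it can say hello – and that’s before a single menu or toolbar. Easier to use; harder to program.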

At the end of the day – most people don’t want to write their own code, but rather they want someone else to do it for them.

And, of course, for those that do want to write their own code, there will always be “fully fledged” computers… My suspicion, though, is that the folks who buy an iPad instead of a computer aren’t going to be the folks who want to write their own code. The App Store makes distribution easy – and safe: the installation process couldn’t be simpler, and there’s no need to worry about viruses and malware. The lack of the ability to run just “any” code cuts both ways: it makes things safer, but it also limits what can be run to those apps “blessed” by Apple… Is this a price worth paying? For many people, honestly, I think it is.

And it’s not as if Apple are the first people to do this. Take games consoles: they are (arguably) the first home “computer-based” appliances. Is there an outcry bemoaning individuals’ inability to write their own games for the PS3? No. It’s just an accepted part of the way these “appliances” work. The iPad will be the same.

I think that the iPad has the potential to become the pattern for the future of computing for “the masses” (in the least pejorative sense of that phrase).  Yes, people will always want fully-fledged computers for “heavyweight” applications – but for mail, web browsing, and the like – I believe that the appliance approach is the way forward: and that iPad is at the vanguard of this coming trend.