The MintyBoost is an open-source, small form-factor USB charger that runs on AA batteries – letting you charge your mobile phone, tablet or other USB device without the need for mains power or a computer. It was designed by one of the leading proponents of the open-source hardware movement, LadyAda – aka MIT engineering graduate Limor Fried.
BeagleBone is a small, low(ish) cost, open-source Linux computer on a board – using an ARM Cortex-A8 processor running at 720 MHz, with 256 MB of RAM. Unlike an Arduino – this is a fully-fledged computer. This makes it extremely powerful – and makes it a nice device to play with, to learn about embedded Linux.
This added power and sophistication does come at a cost though: this is not (or at least not currently) a tool for complete beginners in the way that an Arduino can be. This is a Linux computer – so if you want to use it to its full potential, then you’d better know your way around the Linux command-line.
It runs Ångström Linux – which is a flavour of Linux specifically produced for embedded devices. It’s used as the embedded OS on a number of commercial products. Whilst this is great, because it means that there are a large number of people working on keeping Ångström really up-to-date, it’s also a challenge for the hobbyist – because it is very strongly focussed on the embedded device community.
BeagleBone itself is quite a new product (it only came out a few months ago) – so although there’s quite a community around its big brother (the BeagleBoard), some of the documentation for the BeagleBone is a bit patchy.
The first instruction in the getting started manual is to update the software on the board to the latest version of Ångström Linux. This posed quite a challenge, even for me – and I’ve quite a few years of Linux experience (and I’m a some-time sysadmin) – so it’s definitely not something for someone completely new to the command-line.
If you’re struggling with this, however, I do have a couple of tips. The instructions on the various web pages aren’t really all that well written – unless you’re already familiar with the process. Telling novice users to “Learn to use ‘dd‘ and ‘gzip‘”, or to “know which /dev/sdX or /dev/diskX is your SD card through use of ‘dmesg‘”, isn’t very helpful. If you’re already familiar with these tools then this instruction adds nothing – and if you’re not, then the provided links aren’t really going to help…
The first thing that I’d recommend is that you do the update using another Linux computer. Whilst Mac OS X is just a pretty front-end for UNIX, and therefore everything should be just as easy under OS X, it didn’t really turn out to be quite that simple. I could only find the latest version of Ångström as an img.xz archive, and out-of-the-box OS X doesn’t have any tools to let you work with that format – and although adding a suitable command-line tool is quite trivial, it’s one extra step that you (almost certainly) wouldn’t have to make with a Linux box.
More of a problem is the way that OS X mounts the SD card. Despite spending several hours digging about, I couldn’t actually find the device representing the SD card. Because the download is (essentially) a disk image, you need to point the archive tools at the low-level device rather than its mount-point in the file-system.
Anyway, the long and the short of it is that if you download the image file on a Linux system, and then run the following incantation (obviously replacing the filename with the name of the image file you just downloaded), it’ll probably work just fine. The only caveat is that if you have more than one physical drive on your computer, you may need to change the /dev/sdb to something else.
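My incantation looked something along these lines – note that the filename and /dev/sdb here are placeholders (yours may well differ), and that it’s xz’s --verbose flag which provides the progress output:

```shell
# Decompress the Ångström image and write it straight to the SD card.
# CAUTION: /dev/sdb and the filename below are placeholders - check
# dmesg to find which device is really your SD card, because dd will
# happily overwrite whatever you point it at.
xz --decompress --stdout --verbose Angstrom-image.img.xz | \
    sudo dd of=/dev/sdb bs=4M
sync  # flush everything to the card before you remove it
```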
Note the addition of the --verbose flag to the command. This is going to take quite a while (~20 minutes) to run – so it’s nice to be able to see that something is indeed happening!
Once you’re up and running the latest version of Ångström, then what?
That said, Bonescript & Cloud9 do let you very easily run some classic blink light code (the hardware / embedded equivalent of “Hello World!”)
This blinks one of the user LEDs on the board itself – and will blink an external LED driven from pin 3 on header 8 (the bottom header, if you have the board facing up, and have the ethernet port on the right).
As I say, not particularly useful – but at least it shows that everything is working!
The real power of BeagleBone is using it to control hardware from Python or C code: and coupling it with the device’s on-board web server.
So look out for a follow-up post, looking at some more interesting things to do with a BeagleBone, in the near future (just as soon as I’ve worked it all out myself!)… 🙂
In my last few posts I’ve been writing about using an Arduino to generate waveforms. Part 3 of that series is still in the works (I’ve not forgotten, honestly!) – but this time I want to write about an alternative way to make a simple waveform: using good old analogue electronics, without the need for a microcontroller.
To do this we’ll use the ubiquitous NE555 timer chip (or just a 555 for short). The 555 is almost certainly the first IC you’ll play with if you’re learning analogue electronics at school or college – and for good reason, you can do all kinds of things with 555 timers – and whole books have been written on the subject. I’m not going to compete with any of those (Google, Amazon and Wikipedia will be your friends if you want to find out more). Or have a look at the datasheet…
Using a 555 to produce a square wave is a really nice easy circuit to have a go at if you’re new to electronics: as it’s not too complicated to build – and it’s pretty easy to understand what’s going on.
There are a handful of different basic ways to use a 555 – known as monostable, bistable, and astable. In a monostable configuration, the 555 will generate a single pulse (of a length determined by the configuration of the rest of the circuit), so it can be used, for example, as the basis of a simple electronic timer (such a circuit was my first ever electronics project – about twenty years ago!). In a bistable circuit, the 555 can be used as a latched flip-flop – turning on or off an output, depending on the state of two pulse inputs.
We’re interested in the astable configuration though – as that will let us generate a series of pulses in the form of our square wave.
Let’s start by looking at the circuit.
As you can see it’s a pretty simple circuit – with just the 555 IC, two resistors, and two capacitors. Additionally if we want to be able to vary the frequency, we can add a potentiometer (variable resistor) in series with R2, and to drive larger loads we can use a transistor switched by the output from pin 3 (as I’ve done here)…
So how does it work?
First off, we need to understand an RC (resistor–capacitor) network. A capacitor charges and discharges through a resistor at a rate determined by the product of the resistance and the capacitance (the time constant). This means that there’s an exponential relationship between the capacitor voltage and time… Specifically:
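For a capacitor C charging through a resistance R towards the supply voltage, the standard charging curve is:

```latex
V_C(t) = V_{cc}\left(1 - e^{-t/RC}\right)
```

and discharge follows the mirror-image curve, $V_C(t) = V_0\,e^{-t/RC}$ – which is where the exponential relationship between voltage and time comes from.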
When the voltage on the timing capacitor (C1) reaches 2/3 of the supply voltage (Vcc), the 555 triggers its internal flip-flop (via the threshold input, pin 6) and the capacitor starts to discharge. When it drops to 1/3 Vcc, the flip-flop flips again (via the trigger input, pin 2), and the capacitor starts charging again. When C1 is charging, the time constant depends on both R1 and R2; when it’s discharging, only R2 comes into play – because of the connection to pin 7 between the two resistors.
Don’t worry if you don’t understand exactly how the 555 flip-flop works (it’s slightly outside the scope of this post – but maybe I’ll write something about that if people are interested): the important thing is that the time taken to perform these charge / discharge cycles is what determines the frequency of the square wave generated.
Specifically, the frequency is the reciprocal of the sum of the time the output spends high and the time it spends low…
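As a quick sanity check, the standard astable timing equations (from the 555 datasheet) can be worked through in a few lines of Python – the component values here are hypothetical examples, not the ones from my circuit:

```python
# Standard 555 astable timing: the capacitor charges through R1 + R2
# and discharges through R2 alone (the 0.693 factor is ln 2, which
# comes from the 1/3 Vcc and 2/3 Vcc switching thresholds).
R1 = 1_000    # ohms   (hypothetical example value)
R2 = 10_000   # ohms   (hypothetical example value)
C1 = 100e-9   # farads (hypothetical example value)

t_high = 0.693 * (R1 + R2) * C1   # time the output spends HIGH
t_low = 0.693 * R2 * C1           # time the output spends LOW
freq = 1 / (t_high + t_low)       # equivalently ~1.44 / ((R1 + 2*R2) * C1)

print(f"{freq:.0f} Hz")  # about 687 Hz for these values
```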
So in our case the frequency should be:
And, sure enough – here’s what the oscilloscope gives us…
I’ve seen the future, and there are no mice in it.
Apple certainly have a history of setting the trends in popular computing. From the mass-market appeal of the Apple II, through to the revolution in mobile computing brought about with the iPhone & iPad; not forgetting, of course, the introduction of the mouse with the Macintosh in 1984…
Since then, the graphical user-interface (GUI) has become truly ubiquitous. Whether you favour Windows, Mac, Linux or a mobile device the GUI is the way we interface with our computers today. Even the hard-core geeks who often value the speed and power of the command-line interface, do so from within a terminal application within a GUI. No-one short of a masochist would choose anything else.
With the ubiquity of the GUI has come the near-ubiquity of the mouse. Laptop users will often make do without one (especially when on the move), and a few desktop users will choose the trackball ahead of the mouse. But the mouse has been the predominant interface device for the best part of 25-years.
But I think that’s about to change.
There are two trends driving this, I think.
The first is the fact that increasingly users are favouring the GUI’s menus, toolbars, and context-sensitive pop-ups ahead of the more traditional keyboard shortcuts. This is inevitable really, since keyboard shortcuts are a throwback to the pre-GUI days (the days of DOS, and applications like Word Perfect 5.1 – the PC word-processor of choice before Microsoft’s Word became the dominant force). There has been a gradual decline in the prominence of keyboard shortcuts in user-interfaces since the GUI was invented. This is true of both Windows and Apple’s operating systems. Whereas once a “power user” would expect to be able to do everything without taking his or her hands off the keyboard, today’s applications generally have too many complex features to make this a reality.
This is a good thing. Anyone who actually used the old-school word-processor applications will remember the lengthy cheat sheets that they’d need to keep taped up nearby the screen to remind them of the keystroke sequences required to achieve all but the most routine actions. Today’s applications, with context-sensitive menus, make it easier than ever to apply the correct formatting, or make the required changes – and to use them the user has to use their pointing device. More importantly, it makes the experience of using applications far less daunting for the vast majority of users who aren’t expert power users. And of course, since the advent of the web, and its inherently visual presentation, navigating hyperlinks makes no sense with a keyboard.
The second major change in technology is multitouch. Touch screens, and trackpads of the past were all single touch devices. With multitouch devices (now near universal in touch screens – but surprisingly not yet so for trackpads) the experience is changed from just being a way to drive a pointer on the screen – to having a fully fledged vocabulary of gestures to interact with the system (pinching to zoom, swiping, twisting, etc).
Mobile devices are very much at the forefront of driving this revolution. Once someone has used a tablet device, and seen how naturally you can interact with it, it becomes obvious to ask the question “why can’t all computers work like this?”
Apple seem to have asked exactly this question, since the introduction of iOS. With each new version of OS X, more gestures have been introduced. Apple’s laptop users (of whom there are many) can take full advantage of this, since the introduction of the multitouch trackpads on MacBooks a few years ago. But what about the desktop users?
Until fairly recently, they were left a little out in the cold. Although Apple’s innovative Magic Mouse was (is) a very clever piece of technology (combining, as it does, a multitouch sensor with the body of a mouse), it’s a bit limiting. It’s too small for many multi-fingered gestures, and because a mouse user typically keeps their mouse hand on the mouse for extended periods, it’s too easy to accidentally register an unintended gesture.
With the introduction of the Magic Trackpad last year, Apple effectively signed the death warrant for the mouse. Not that things changed overnight. It’s really only now – with OS X 10.7 (Lion), and all of the additional gestures it introduced – that the compelling case is made.
I’ve had a Magic Trackpad for less than a week now, and already I’m totally sold. There is so much control over the system using all of the gestural controls, that (for the first time ever) a user can do pretty much everything more conveniently, and more quickly, without touching the keyboard. Apple have created a user experience whereby rather than keeping your hands on the keyboard, you keep your hand on the trackpad.
So will the mouse put up a fight or will it go quietly? I think, given the natural propensity for most humans to fear change, and like what they know – it’ll be a good few years before the mouse goes anywhere; but I really believe that once users try a large, multitouch trackpad – they, like me, won’t want to go back.
It’ll also need more multitouch trackpads to become available (although the likes of Wacom have introduced an excellent product with their Bamboo touch tablet – such products are still few and far between), and it’ll need Microsoft to introduce “Mac-like” multitouch gestures into the heart of Windows (although if the last few years are anything to judge by, where Apple lead, everyone else follows). This is something that I think is likely to accelerate over time, as the simplicity of mobile OS interfaces starts to cross back onto the desktop. So I think we have hit a high-water mark for the mouse; and its days are now numbered.
Of course the multitouch trackpad isn’t the only device competing with the mouse. There’s also the question of cutting out the middle-man entirely – and using a touchscreen display. After all the tablet market is doing exactly that… The difference, I think is twofold though. Firstly I genuinely don’t think that (for most purposes) a 30″ touchscreen is very convenient. Dragging something from one side of the screen to the other would be just too much like hard work. On a 10″ iPad it’s fine; but something three times that size?
Then there’s the question of ergonomics. If I have to hold my arms out in front of me all day to interact with my touchscreen, then my arms are going to get tired… And, as an aside, that’s why I doubt that the “Minority Report” style gestural interface will ever catch on – it’s just too energetic! On the other hand, if I mount my display (as I might a tablet) horizontally flat on my desktop then I’m going to have to sit with a terrible posture – it’s bad enough using an iPad on your lap: sitting over a table display all day is bound to give you a stiff neck.
Whilst I can’t predict that someone won’t solve these problems, or come up with a new, as yet unthought of user interface device – I really do think that the mouse is heading the way of the light pen…
Last time, we looked at some Arduino code that we could use to generate some square waves.
The problem with the setup we’ve been looking at so far, is that we can only produce signals of one amplitude – equivalent to the HIGH logic level.
In order to be able to produce any other waveforms we’ll need to be able to produce a variety of different output voltages. Although the PWM method we looked at last time gives us a way to do this, it’s not suitable for producing variable waveforms – as it’s time-based. We can see this, if we try to use PWM to produce a triangular waveform: and view the output with an oscilloscope.
If we hook that up to our oscilloscope – it looks like this:
It’s not too easy to see that as a still image – so here’s a video clip of the same thing…
That’s clearly not really what we wanted: so we need to do something differently.
The solution to this problem, is to use some kind of digital-to-analogue converter (or DAC). To begin with, let’s build our own…
Or, if you prefer, in schematic…
This circuit is an 8-bit DAC known as an R-2R resistor ladder network. Each of our eight bits contributes to the resultant output voltage. If all 8 bits are HIGH, then the output is approximately equal to the reference voltage… If we switch the most-significant bit to LOW, then we get approximately half of that voltage.
More precisely, for an 8-bit DAC, if all 8 bits are HIGH then we get 255/256th of the reference voltage. Switching the most-significant bit to LOW gives us 127/256th of the reference voltage. More generally, for any bit value x between 0 & 255, our DAC will give us x/256th of the reference voltage.
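That arithmetic is easy to check – this is just the ideal-ladder maths, in Python, assuming a 5V reference (the Arduino’s supply; the sums are the same for any reference):

```python
VREF = 5.0  # assumed reference voltage - the Arduino's nominal 5V supply

def dac_output(value, vref=VREF):
    """Ideal output voltage of an 8-bit R-2R ladder for an input of 0-255."""
    assert 0 <= value <= 255
    return (value / 256) * vref

print(dac_output(255))  # all bits HIGH: 255/256 of Vref (~4.98 V)
print(dac_output(127))  # MSB LOW, the rest HIGH: 127/256 of Vref (~2.48 V)
```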
Note that here we’re using PORTD instead of setting the state of the individual output pins one at a time. This is much faster, and it (critically) ensures that all of the pins change at the same time – and that’s important, given we want to switch smoothly through digital values to create a smoothly changing analogue voltage.
PORTD is port register D. Our use of it here works because it maps to digital pins 0–7 (giving us 8 bits); by writing either a binary or decimal value to the register, we control all 8 pins in one operation. For example, assigning it the decimal value 123 (which equates to B01111011) would set pins 6, 5, 4, 3, 1, & 0 to HIGH… Now the problem is that pin 0 is used for communicating serial data – so using PORTD means that we can’t use any serial communications whilst the code is running. It is, however, the only way to manipulate 8 output pins simultaneously (and we didn’t need serial communications for this application anyway).
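The bit-to-pin mapping is easy to verify – this little Python snippet mirrors what the register write does (it’s an illustration of the mapping, not Arduino code):

```python
value = 123  # == 0b01111011, the example value from the text

# Bit n of the byte written to PORTD drives digital pin n,
# so the set bits tell us exactly which pins go HIGH.
high_pins = [pin for pin in range(8) if value & (1 << pin)]
print(high_pins)  # [0, 1, 3, 4, 5, 6] - i.e. pins 6, 5, 4, 3, 1 & 0 HIGH
```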
We can modify our code very easily, to produce a saw-tooth waveform – by simply commenting out (or otherwise removing) the second of the two for loops.
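To see why that works, here’s a Python simulation of the value sequences the two versions feed to the DAC (a sketch of the idea, not the Arduino code itself):

```python
# One cycle of each waveform, as the sequence of 8-bit values written
# to the DAC. The triangle counts up then back down; commenting out
# the second loop leaves just the ramp up, which wraps back to zero -
# a saw-tooth.
triangle = list(range(256)) + list(range(255, -1, -1))
sawtooth = list(range(256))

print(len(triangle), len(sawtooth))  # 512 256
```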
Note that if we want to drive anything significant (even something as lowly as an LED) we need to add a transistor into the circuit, like this:
Finally, on to a sine wave. We’ll need to pre-compute some values for the output voltages over time (the arduino just isn’t quite fast enough to be able to do this in real-time, in a situation quite as time-sensitive as producing a waveform).
A sine wave has a cycle of 2π radians – so to produce a sine wave with 256 time-steps per cycle, we just need to calculate y=sin((x/256)*2π) for each value of x. Since the sine function gives us values between -1 and 1, and we want values between 0 and 255, the simplest way to do this is to multiply the float value by 127.5 – giving us a value between -127.5 & +127.5 – and then add 127.5, to give us the correct range (scaling by a full 128 would overflow to 256 at the peak).
We could do this off-line; but that’s a bit tedious – so we’ll get the arduino to do this for us, in the setup() phase, storing the results in a global array. Then all we need to do in our main loop, is cycle through the array.
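If you did want to do it off-line, the table is only a few lines of ordinary Python (I’ve scaled by 127.5 here so the values stay within the 0–255 range of our 8-bit DAC):

```python
import math

# 256-entry sine lookup table: one full cycle, centred on 128,
# spanning the full 0-255 range of the 8-bit DAC.
table = [int(round(127.5 + 127.5 * math.sin(2 * math.pi * x / 256)))
         for x in range(256)]

print(min(table), max(table))           # 0 255
print(table[0], table[64], table[192])  # 128 255 0
```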
If we run that code, with our DAC, and have a look at that on our oscilloscope we see that we do indeed have something that looks quite a lot like a sine wave.
That’s all for this part, next time we’ll have a look at varying the frequency of our waveforms; and look at a more compact and accurate way to do the digital-to-analogue conversion, using a dedicated IC…
I was looking around for an interesting Arduino project, and I came up with the idea of making a function generator (also called a signal generator). The reason I picked a function generator is that it gives us the chance of playing with some interesting circuits – and some interesting code…
Before we start with that – what is a function generator?
A function generator is a circuit that generates some kind of waveform. There are four main types of waveform – the square wave, triangular & saw-tooth waves, and the sine wave.
A dedicated function generator will cost a hundred pounds or more, and would be very much more capable than anything we’ll build here; but this will give us a chance to look at a few interesting things.
The simplest waveform to get an Arduino to produce is a square wave. The square wave (as the name suggests) simply cycles between the HIGH and LOW logical levels.
(Actually it’s not really that simple at all – a square wave produced by an analogue oscillator is actually made up of a complex mixture of multiple harmonics – wikipedia has a description of this; but we won’t concern ourselves with this…)
The circuit we need to produce a square wave, is pretty much the most trivial circuit imaginable.
In fact all we need to do is connect whatever it is we’re driving, to one of the digital output pins of the arduino (I’m using pin 8 here); and the arduino’s ground.
The code is very simple, too: in loop() we just write pin 8 HIGH, delay for 2 ms, write it LOW, and delay for another 2 ms – a total delay of 4 ms = 1/0.004 = 250 Hz…
If we want to drive something like a piezoelectric speaker we can connect it to the pin, and the arduino’s ground; and we’ll be able to hear a tone. Alternatively if we pick a low enough frequency we could hook up an LED to see it blink. In fact, if you do that, you’ll note that this setup and code is actually just the blink demo.
Note that we can also use delayMicroseconds() to enter smaller delay values – giving us the ability to produce higher frequencies than the 500Hz that we’d otherwise be limited to…
In fact it’s even easier than this – as Arduino has a function to generate a square wave. For example, to produce the same 250Hz square wave as we had before, we can use: tone(8, 250); in place of the digital writes…
Let’s see exactly what this square wave looks like – by hooking it up to an oscilloscope…
Apart from the frequency of the wave, there are a couple of other main characteristics that we might want to modify: the amplitude (a measure of how much higher the HIGH level is, compared to the LOW), and the symmetry of the wave – the so-called duty cycle. We’ll leave the amplitude aside until next time – as that’ll require us to do a little more work; but let’s look at the duty cycle of the wave.
The duty cycle is simply a measure of how much of the time our wave is at the HIGH level – compared with the total time of the cycle. So far all of our square waves have been symmetrical: with an equal amount of time spent in both states. This is known as a 50% duty cycle.
If we want to change that, we can’t use the tone() function – as that’s designed to produce a symmetrical wave. Instead we need to use the digitalWrite() and delay() functions. So to create a 25% duty cycle square wave at 250Hz, we’d use delays of 1 ms after the HIGH, and 3 ms after the LOW.
If we use an oscilloscope we can see the shape of the resulting wave…
With the timebase set to 2ms per division, we can clearly see the 1ms width of the pulse, and the 3ms gap between the pulses.
What’s interesting is what happens if we try measuring the voltage with an instrument that’s less time-sensitive – such as a multimeter. A 250Hz signal with a 50% duty cycle, yields an average of 2.4V (which is exactly half of the 4.8V my meter shows the +5V output of the arduino to be). With a 25% duty cycle – we get 1.2V (one quarter of the HIGH voltage); and so on…
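The relationship the meter is showing is simply average voltage = duty cycle × HIGH level. In Python (the 4.8V is the HIGH level my meter measured – yours may differ slightly):

```python
V_HIGH = 4.8  # measured HIGH level in volts (your board may differ)

def average_voltage(duty_cycle, v_high=V_HIGH):
    """DC average of a square wave with the given duty cycle (0-1)."""
    return duty_cycle * v_high

print(round(average_voltage(0.50), 2))  # 2.4 V - matching the meter
print(round(average_voltage(0.25), 2))  # 1.2 V
```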
In fact, this concept may already be familiar to you – if you’ve used the analogWrite() function – we have just reimplemented the pulse width modulation (PWM) technique that Arduino uses to provide analog (or analog-like) voltages. We can show this in a couple of ways. Firstly we’ll measure the voltage across an LED during an analogue write with our oscilloscope; and then we’ll show that by adjusting the duty cycle of our code, we can dim an LED.
First let’s look at a PWM voltage, giving us a 50% duty cycle (or an average output of around 2.4V).
Now let’s try implementing our own version of PWM…
First here’s a schematic of the new circuit…
As you can see we’re simply driving an LED via pin 8 (not forgetting the current limiting resistor, of course); and we’re getting a voltage (between approximately 0V and the reference voltage) from the potentiometer, and feeding it into analog pin 2.
As you can see this is also very simple code. We call analogRead() to get the voltage – and then (for simplicity) we map it onto a range between 1 & 20 – and then use that as our millisecond delay value on the LOW part of the cycle – giving us a (theoretical) duty cycle of between 50% and 5%. I say a theoretical duty cycle, because in reality the time it takes the arduino to do the read and the map will throw off our timings. My multimeter shows a voltage of 2.16V rather than the expected 2.4V – dividing the measured voltage (2.16V) by the reference voltage (4.8V) gives us a duty cycle of around 45%…
A more accurate measurement of the duty cycle shows it’s around 46% – which implies that the time spent in the LOW state is actually around 1.08ms: hence approximately 80µs of additional time in the cycle spent at LOW whilst the arduino executes the analog read and map functions. So if we were going to use this in reality, we’d probably need to find a more efficient way of determining the value for the second delay.
Or, more realistically, we could just use the analogWrite() function (which uses its own timer – independent of the main processing loop)…
If we adjust the ‘scope to show the average voltage – we see that we get approximately the same voltage as the volt meter gave us…
Okay; that’s all for now. But be sure to come back soon for part two – when we’ll start to look at some ways to implement one of the more interesting waveforms: the sine wave.
Google & Samsung have announced the details of the first generation of Chromebooks – thin-client laptops running Google’s Chrome OS & reliant on a web connection for all of their capabilities. But can they succeed in a competitive market-place?
The Chromebook is a nice idea. Thin-client (cloud) computing is becoming (if not yet the norm) common-place in many corporate data centres; but (to date) it’s not really been a viable option for the home-user. Chrome OS changes that – by putting a cloud-based OS (and thin-client laptop) into the hands of home users.
The question is – are home users ready for cloud computing?
Google claim that the Chromebook can still be used without a constant network connection, because off-line versions of Google Docs, GMail, and Google Calendar are available. This will address one of the major concerns – especially in the UK, where wifi isn’t nearly as prevalent as it could be. Even a 3G-based Chromebook won’t guarantee a network connection – especially when travelling by train.
The other problem with cloud-computing is the question of whether people trust the cloud… Web-based email has been around for a long time now, and (in fact) lots of people use webmail as their primary means of reading their email. This is the first example of cloud-services that most people will have experienced. We trust the internet with our email – since it’s intended to travel over the internet anyway… But do we trust the cloud – be it Google or Amazon, or anyone else for that matter – with our data? Our spreadsheets? Our banking records? Our letters?
And that’s a real question. It’s not at all clear if people do… Yes, there are quite a few advantages to using a cloud (your data is accessible from multiple locations, it’s backed up, and it’s not dependent on any specific user’s hardware); but are these outweighed by the dangers – and (perhaps more importantly) the perception of danger that comes from not owning one’s own data?
And then there’s the price…
At £350 for the wifi-only Samsung Chromebook, these machines aren’t cheap. In fact, apart from the battery-life (claimed to be around 8 hours) they stack up pretty unfavourably when compared to cheap laptops. £350 will buy you a Windows 7 laptop with hundreds of GB of local storage, and the ability to run real applications.
Alternatively it’ll go a long way towards buying an iPad or an Android tablet; and it’ll be these, I think, that’ll be the main competitors for the Chromebooks (at least as far as home users go). Given the choice between a portable tablet with all-day battery life, and a laptop – with less capability & (arguably worse usability for many tasks) – are people going to choose the Chromebook?
The US prices for the Chromebooks translate rather nearer to £250 – and at that price (less than £300) the Chromebook becomes a very different prospect; but at £350, it’s a far harder choice.
Now none of this means that Chromebooks won’t be successful. For business users (especially small to medium sized businesses) already using Google Apps, Chromebook can be a great platform; but for home users – where there’s so much competition – I think it’s going to come down to the price…
I’ve just taken delivery of a device which, I believe, will mark the end of the paperback book as we know it: the Amazon Kindle.
Electronic books aren’t exactly new. E-readers of various types have been around for some time now – but never before have they had quite the flexibility & performance of the latest Kindle.
The single most important, and most striking, aspect of the Kindle is its fantastic eInk display. For the presentation of text (importantly – and I’ll return to that thought later on) I can honestly say I’ve seen nothing better on any device. It genuinely looks as if it’s a fake, a display model – where the screen has been replaced by someone pasting on a sheet of paper with some words laser printed on it! Simply put, it is the screen that made me fall in love with the Kindle.
Now, I have to say, before we move on that I am not exactly what you’d call an avid reader – or at least, not of fiction. I very much suspect that most of the kindle sales are (still) to people with creaking bookshelves, and an appetite to devour novels on a daily basis. That doesn’t describe me. I like to read fairly often: but not on a daily basis. I have a fairly full bookcase; but few fiction titles on it. So given this: why have I bought a Kindle?
Mostly it’s because I’m about to take two ten-hour flights in the course of a week – and I very much like to read when travelling (especially at airports!). Also, I’m fed up of either carrying multiple books onto the plane, or running out of reading material halfway through the trip. Would I have bought the Kindle to use for general reading at home? No; but now I have one – and have used it – I expect to use it that way too…
So, given that I’m outside what I expect to be the main demographic for Kindle buyers, why do I think that it’s going to be such a fundamental change to the way that we read?
The best description that I can come up with for the Kindle is that it is the iPod of books: and I mean that in more than one way. Amazon have taken some excellent design, and cutting edge technology – and have produced a top quality product; much as Apple did with the iPod. But as with the iPod, they’ve done more than that. They’ve done what Apple did with the iTunes Music Store – they’ve made it possible, and easy, to buy content for and on the device. In fact, Amazon have taken it one step further than Apple have with the iPod – in that the synchronization is wireless – and done (for free) over global 3G… Of course, Amazon have it easy compared to Apple in that regard, because books are pretty small compared to music files & video… But this sort of wireless synchronisation of content is a great selling point. In fact Amazon even let you read their content on multiple devices – not just the Kindle. I can have a Kindle reader app for iPhone, iPad, Android devices, PC & Mac; and all of them automatically synchronise content.
So given that the device is a sure winner: is it the end of books? Well, no.
There are a few problems with the device becoming the one and only way that one reads. The greatest of these is the fact that people (myself included) like physical books. They like the experience of holding, and owning, books. They like the smell and the feel of books; and they like the timeless permanence of the book. A 500-year-old book is still every bit as readable today as it was when it was made … I fear the same will not be true for an eBook made today.
The other major problem with the Kindle is the fact that its screen (as great as it is) is black and white. Now, depending on what you do with the device, this is perfectly good. If I’m reading a novel, or an essay, then it’s the words (and perhaps a few diagrams) that I’m interested in. Most books to this day (especially paperbacks) are monochromatic: printed in simple black ink. For those sorts of books, Kindle is ideal; but for all the other books – the kind with photographs (colour or otherwise), the kind with elaborate graphical typesetting, the kind where the use of colour is an integral part of the book … then really Kindle is not suitable. But then those generally aren’t the sorts of books that people want to carry with them anyway…
So I fully expect (and hope!) that hardback books will be with us for a very long time to come.
But people treat hardback books, and paperbacks, very differently. Many people treat paperbacks as essentially disposable (read it, throw it away when you’re done) – even if I find that somewhat abhorrent! Reading a paperback book isn’t about the experience – it’s about the content. The book, the physical pages and ink, is just a means to that end. Although I’ve only read a few chapters on my Kindle so far, I am already convinced that the experience of reading with a Kindle is better than that of reading many paperbacks! (Especially the really cheap ones!)
There’s one other problem with the concept of eBooks in general: lending…
One cannot lend a friend an eBook (unless you lend them the whole reader) – because the digital rights management (DRM) on the books prevents you from doing so. The difficulty is that, at present, electronic files & DRM don’t really work quite like the real world… If I buy a physical book, and then lend it to a friend, then whilst it is in their possession I don’t have it. That’s obvious. Of course, my friend could photocopy the book and then return it to me – and then we’d both have a copy – but that’s not really practical: photocopies of books are of poor quality & hard to use. The same is not true for an electronic book, though. As a mere datafile, I can copy it endlessly & perfectly – every copy identical to the original. I don’t need to lend a datafile; I can copy it – that way we both have it. DRM means that, using clever encryption technologies, these data files can only be opened by the person who bought them legitimately. Preventing piracy is important – I don’t think many would argue against that; but DRM does mean that lending books is impossible; and that’s something which I think needs to change before the eBook will become truly ubiquitous…
For the paperback book to truly vanish, there needs to be a way to facilitate lending books (and selling second-hand books). A way to ensure that each book sold can be read once (and only once) at any given time – but without limiting who is doing that reading…
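The “read once, and only once, at any given time” idea above is really just a lending ledger. Here’s a minimal sketch of what I mean (in Python; the class and method names are my own invention – no real DRM system works like this toy):

```python
class LendableBook:
    """Toy model of a licence that allows exactly one active reader at a time.
    Purely illustrative - real DRM schemes are vastly more complicated."""

    def __init__(self, title, owner):
        self.title = title
        self.owner = owner
        self.current_reader = owner  # whoever may open the book right now

    def lend(self, borrower):
        # Hand the book on: the borrower becomes the one active reader.
        self.current_reader = borrower

    def return_to_owner(self):
        # The loan ends; the owner can read it again.
        self.current_reader = self.owner

    def can_read(self, person):
        # Everyone except the current holder is locked out - just like
        # a paper book that is out on loan.
        return person == self.current_reader


book = LendableBook("Nineteen Eighty-Four", owner="me")
book.lend("my friend")
print(book.can_read("my friend"))  # True: the borrower can read it...
print(book.can_read("me"))         # False: ...and I cannot, while it's on loan
book.return_to_owner()
print(book.can_read("me"))         # True again once it's returned
```

The point of the sketch is that nothing here stops the book being read by anyone – it only stops it being read by two people at once, which is exactly how physical lending works.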
This challenge notwithstanding, I really do think that the end is nigh for the paperback. And on the basis of my experience with the Kindle – it won’t be missed…
I’m about to buy a new car; and with a bit of luck it’ll be the last oil-burning vehicle I ever own… But what it won’t be is an all-electric car.
Before I go on, a few disclaimers. Firstly I’m not any kind of tree-hugging hippie – in fact I am what might be described as a technophile: or (less charitably – but perhaps more accurately) as a geek… That said, I am concerned about the environment; and I do think that we probably oughtn’t to go polluting the place (whether man-made climate change is real or not 1).
More important than that – I think the fact that it’s now economical to go to quite such extreme lengths to extract oil shows that (fossil) oil is approaching the end of its 100-year reign as the dominant fuel. Alternative power sources will be needed for personal vehicles (I just cannot subscribe to the suggestion that the solution is to curtail individuals’ freedom of self-transportation in motor vehicles); and of the alternatives, electric vehicles appear (to me) to be the most logical choice.
So given all of this – why am I not going to be signing up to the order book for the new Nissan Leaf tomorrow?
It certainly appears to be a great little car. There’s a whole load of really clever technology in there – which really appeals to my inner geek; but sadly as great as it is, it’s just too soon (for me) to go all-electric.
The Leaf is meant to be able to do about 100 miles on a full charge. That’s quite a long way, and it’s certainly true that the majority of my driving is on journeys of less than 100 miles – but they’re not all less than 100 miles. This is the first problem for me. I don’t have room for more than one car (and, let’s be honest, nor can I afford to buy more than one car) – so I need a car that can take me as far as I need to go.
The second problem is the one that really prevents me buying an electric car today.
Simply put, I have nowhere to charge one. I live in a village, in a small house without a garage or a driveway. Unless I run an extension cord out of the kitchen window, and across the pavement – there’s no way that I could get electricity to my car. Worse is the fact that (today) there are only a handful of public-use electric vehicle charging points: and most of these appear to be in central London.
Given the state of the art today – I have to have a petrol car; an all-electric vehicle (EV) just isn’t an option.
I’ve decided to buy a petrol-electric hybrid car: specifically a Toyota Prius. This is another great car – with some very clever engineering. It appeals to me both as a very cool piece of technology, and because of its environmental credentials. And, with petrol prices being what they are – having a car that can do 60-70 MPG is quite appealing!
The Prius, for me, will be the half-way house between traditional fossil-oil-based motoring and the brave new world of the all-electric future. I have no doubt that in seven or eight years’ time – when it’s time for me to replace it with a new car – things will be different.
Battery technology has advanced tremendously in the last ten years – so it doesn’t seem unreasonable to project that continuing. Even if the pace of advance slows, it seems plausible (if not likely) that by 2018 the range of a typical electric car could be of the order of 200 miles.
But even if battery range doesn’t change, one thing that I can be certain will is the availability of places to recharge such a vehicle. Today I fairly often (monthly?) drive a round trip of about 140-150 miles in a single day: too far for today’s EVs. But that’s not one single continuous journey. I drive somewhere – go and do whatever it was that I needed to do there – and then drive home. It’s two journeys – each comfortably less than 100 miles. So if I could recharge my car in between the two trips – even a 2010 EV would do the job for probably 95% (or more) of my driving needs.
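That back-of-envelope reasoning is easy to write down explicitly. A quick sketch (the function and its figures are my own, assuming a 100-mile range like the Leaf’s):

```python
def trip_feasible(legs, range_miles=100, charge_between_legs=True):
    """Can an EV with the given range cover a day's driving?
    legs: list of individual journey distances in miles.
    If we can recharge between legs, only the longest single leg matters;
    otherwise the whole day's driving must fit on one charge."""
    if charge_between_legs:
        return max(legs) <= range_miles
    return sum(legs) <= range_miles


day_out = [75, 75]  # a ~150-mile round trip, split into two legs
print(trip_feasible(day_out, charge_between_legs=False))  # False: too far on one charge
print(trip_feasible(day_out, charge_between_legs=True))   # True: fine with a top-up in between
```

The same 150-mile day flips from impossible to easy the moment there is somewhere to plug in at the destination – which is why charging infrastructure, not battery range, is the real bottleneck for me.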
Hopefully, with the publicity surrounding (and apparently high interest in) the Leaf, local authorities, companies, and housing developers will start to think about the availability of electric vehicle charging stations. After all, if they sell you the electricity at cost plus a little profit margin, then when EVs take off they’ll be poised to make money (and if there’s one thing that will change the world faster than anything else, it’s the prospect of someone being able to make money by doing it!)
Add in technologies like wireless inductive charging (which would be very convenient for the consumer), fast-charging, or even battery-swapping stations; and you suddenly find that the calculus about EV ownership has changed significantly.
Also, as the market expands, prices will (by simple application of economic theory) come down – and despite somewhat sensationalist news headlines to the contrary, the minute price per mile travelled for electric cars will start to encourage more and more people over to EVs.
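To put some rough numbers on that per-mile comparison – every figure below is an illustrative assumption of mine (circa-2010 UK ballpark prices, not quotes from anywhere):

```python
# Illustrative assumptions only - not real quotes.
PETROL_PRICE_PER_LITRE = 1.30     # GBP, assumed pump price
MPG = 65                          # Prius-like economy (UK gallons)
LITRES_PER_GALLON = 4.546         # one imperial gallon

ELECTRICITY_PRICE_PER_KWH = 0.12  # GBP, assumed domestic tariff
MILES_PER_KWH = 3.5               # assumed typical EV efficiency

# Cost of petrol per mile: price of a gallon, spread over MPG miles.
petrol_cost_per_mile = PETROL_PRICE_PER_LITRE * LITRES_PER_GALLON / MPG
# Cost of electricity per mile: price of a kWh, spread over the miles it buys.
ev_cost_per_mile = ELECTRICITY_PRICE_PER_KWH / MILES_PER_KWH

print(f"Petrol: £{petrol_cost_per_mile:.3f}/mile")  # ≈ £0.091/mile
print(f"EV:     £{ev_cost_per_mile:.3f}/mile")      # ≈ £0.034/mile
```

On those (assumed) figures the EV costs roughly a third as much per mile as an efficient hybrid – and against an ordinary 35 MPG petrol car the gap would be wider still.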
I think it’ll be a long time before EVs have the majority market share – that won’t happen overnight: and we may be twenty or thirty years away from seeing that; but I don’t think it’d be crazy to suggest that 10-15% of all cars sold by 2020 might be EVs…
For now however, I’ll have to make do with a Prius (and to be fair, it is a great car); but I am confident that this really will be the last petrol-driven car I ever buy: at least, I hope so…
1 For the record, I think the evidence speaks for itself…