How smartphones killed off boredom – and ushered in an age of distraction


From The Irish Times, May 11th, 2017

Next month sees the 10th anniversary of the release of the first iPhone and the start of the smartphone revolution that has shaped the intervening decade in many ways, both obvious and subtle.
But the birth of the smartphone also presaged the death of something else, something whose absence we hardly remark on but which is of momentous importance – the smartphone has in many ways eliminated boredom in the developed world.
Those moments of idle reflection and those minutes of frustrating inactivity have been rendered a thing of the past by the devices in our pockets, which provide short bursts of diversion whenever we have a moment to fill.
Very soon after getting my first iPhone, I realised that boredom was over – I had an infinite supply of articles and books to read at any moment. And this was before Twitter and Facebook became finely honed mobile experiences, cunningly engineered to hog users’ attention.
And sure enough, I haven’t experienced boredom in the classic sense in years – it exists for me now more as a vivid childhood memory than as a human emotion I might experience in daily life. And I’m certainly not alone in not remotely missing it.
Like most people, I had a strong aversion to boredom, and felt its imminent demise would be one of the great achievements of the smartphone, both on the individual level and in the aggregate. I found persuasive the prediction of the US public intellectual Clay Shirky in his 2010 book Cognitive Surplus that the spread of the internet would unleash a huge wave of previously untapped human potential – the “cognitive surplus” of the title – leading to a burst of extra productivity and creativity.
Needless to say, it will be a very long time before we comprehensively reckon with how significant the extinction of boredom will be for our species, but it is already apparent that we can’t just dispense with a basic human experience like boredom and not have some unintended consequences to grapple with.
A decade on, and it’s clear that we are already facing many of those unintended consequences. That is because we have not substituted boredom with constant engagement, but rather with distraction. And it turns out constant distraction, with frequent hits of dopamine at every hint of novelty, isn’t necessarily a healthy state of mind.
Around the same time as Shirky’s book came out, Nicholas Carr was selling an altogether more pessimistic vision of how we were going to be affected by immersing ourselves in technology in his book “The Shallows: What the Internet Is Doing to Our Brains”. In Carr’s view, the internet was doing nothing good for our brains, eroding our ability to focus and think deeply.
Carr seemed to echo the warnings of previous generations about the dangers of television, or even Socrates’ famous warning about the potential damage of writing – it epitomised the predictable moral panic that greets every new technology. But while Carr’s polemic was a little histrionic for my taste, the intervening years have proven him rather prescient, and his thesis has been elaborated on by a multitude of writers, catering to an audience who are clearly feeling the ill effects of constant distraction.
Books such as Deep Work: Rules for Focused Success in a Distracted World by Cal Newport, Mindful Tech: How to Bring Balance to Our Digital Lives by David Levy, and The Distracted Mind: Ancient Brains in a High-Tech World by psychologist Larry Rosen and neuroscientist Adam Gazzaley are just some of the high-profile examples of the genre in recent years, the sense of techno-induced anxiety palpable in each of those subtitles.
Indeed, our compulsive need for smartphone distraction is drawing parallels with another form of mass addiction with harmful consequences. In a recent essay on the corrosive effect of smartphone dependence, the millennial self-help writer Mark Manson made the explicit comparison with smoking: “It’s attention pollution when somebody else’s inability to focus or control themselves then interferes with the attention and focus of those around them…The same way second-hand smoke harms the lungs of people around the smoker, smartphones harm the attention and focus of people around the smartphone user. It hijacks our senses.”
Perhaps these are all just growing pains, and the smartphone era is a mere phase, a temporarily jarring transition until we reach a point when we realise our brains are actually perfectly capable of diving in and out of different streams of information, and that it is society that needs to catch up with our diversion-filled environment.
But I suspect the conclusion will be something quite different. Constant distraction, obviously enough, is bad for us. But less intuitively, perhaps we also need to admit that doses of boredom are good for us.

The iWatch Cometh…


The most highly anticipated Apple launch event since the arrival of the iPad in 2010 is rumoured to see the long-awaited unveiling of the iWatch – Apple’s bid to truly launch the era of wearable computing. I’m fortunate enough to be attending the event and will post thoughts on what is unveiled, but in the meantime, here is a column I wrote on smartwatches early last year. It will be interesting to see how prescient, or otherwise, I was.

iWatch this space: will Apple reinvent the timepiece?

From The Irish Times, March 25th, 2013

Last week I was travelling in a different time zone, so to keep my bearings I tried to reset the time on my prized but cheap Casio watch – this should not prove an onerous task, I thought. Wrong. I was quickly plunged into an excruciating rigmarole of continuous button pressing and mode changing and inadvertent mistake-making. This should be a lot easier, I thought to myself as I wasted yet more time in search of the right time. And judging from recent rumblings in the technology world, I’m not the only person thinking that.

If the rumour mill is to be believed, the humble wristwatch is next in line for some Apple-flavoured disruption. Recent stories in the mainstream US press, presumed to be strategic leaks out of Cupertino, suggest that Apple has a team of 100 engineers working on the inevitably nicknamed iWatch. The gadget sites are in overdrive, already predicting ways in which Apple can bring that trademark design flair to our lower arms. Analysts are all over the business networks, spouting extraordinary numbers suggesting that Apple could add another $6 billion business to its iPhone, iPad and Mac lines with the introduction of an iWatch. Not to be outdone, Samsung has already announced it has its own smartwatch in the works.

You don’t even need to mention Google Glass to realise the era of wearable computing is imminent, and the wrist is going to be prime real estate. Or rather, wearable computing is imminent once again, because experiments in feature-addled watches have been going on for longer than Dick Tracy has been busting criminals with the aid of a wrist-borne radio.

The effort to add computing utility to our wristwatches is a more recent trend, but the results have tended to be inglorious failures – think of those ickle Casio calculator watches, or the ones with in-built IR remote controls.

Indeed, Samsung has long been one of the pioneers of the smartwatch space – they were the first company to ship a watchphone back in 1999, the SPH-WP10, a rather ill-conceived and bulky piece of kit that unsurprisingly failed to become a fashion statement or replace the Nokia handsets everyone was using back then.

More recently, in 2009, Samsung announced the considerably better-looking S9110, a touchscreen watchphone that was hobbled by mediocre specs and poor software. Around the same time, LG announced the GD910 watchphone, which went on sale in the summer of 2009 for an eye-watering sticker price of about €1,000, while Hyundai also introduced a watchphone that year. None took off, unsurprisingly.

And while we’re recalling dimly remembered wrist gadgets, it’s worth pointing out that Bill Gates unveiled a networked Microsoft smartwatch in 2002 – the Smart Personal Object Technology (SPOT) watch was a sales disaster that almost immediately went into the ledger as one of those foolish Redmond endeavours alongside their early tablets and Windows Vista.

More recent efforts, such as the crowdfunded Pebble watch or the Nike FuelBand, have scaled back their ambitions to act primarily as intelligent accessories for our smartphones, offering notifications of incoming messages or monitoring our exercise levels.

Given this long history of products in the smartwatch space, it’s worth asking why everybody is waiting for Apple to effectively “invent” the smartwatch all over again. Is it yet more evidence that the technology industry is overly dependent on Apple to define and crystallise the design of gadgets, and our relationship with them? And above all, can any such device prove to be as revolutionary as the iPhone and iPad?

I suspect not – I’m of the school that thinks any iWatch will not be much more than a new, diminutive iPod model with a range of sensors, Bluetooth 4.0 to communicate with your iPhone and the possibility for third-party developers to create apps to take advantage of the form factor. That could be cool, certainly, but not groundbreaking.

The hype, I reckon, is fuelled by the excitement surrounding wearable computing and, to a lesser degree, the quantified self, our lives recorded and measured by an array of devices. Google Glass is the pre-eminent example, though I suspect also the most overblown.

But it seems obvious to me that we are already in that era, for all intents and purposes – in a practical sense, those smartphones in our pockets are being “worn” just as much as bracelets or badges or necklaces or glasses are worn. The coming array of smartwatches might add marginal convenience for users, but it will be a while before they can completely usurp the smartphone for a whole host of reasons.

Imagining how things might unfold, I’d wager that wearable computing will take the form of a constellation of devices, from sensor-filled watches to camera-equipped badges to, possibly, networked spectacles, all of which will interoperate, communicating with one another over Bluetooth, sending data to the cloud and inspiring a host of innovative apps we can barely begin to imagine.

It will be cool, ultimately it will probably be revolutionary, but let’s not ignore the obvious perils of such complexity – there’s no reason to think these devices will eradicate the frustrations I experienced trying to change the time on my Casio.

Searching for Satoshi


So it seems as if the famously pseudonymous Satoshi Nakamoto has finally been discovered – and his name really is Satoshi Nakamoto. At least that’s what Newsweek is claiming. But Newsweek isn’t the first publication to go searching for Satoshi – back in 2011, the New Yorker published a piece claiming that Satoshi was actually Irish computer science PhD candidate Michael Clear. Here’s my interview with Clear, published a few days after the New Yorker story broke.

From The Irish Times, October 8th, 2011

When we hear politicians and business leaders talking about the need to create a knowledge economy, they probably don’t mean our brightest and best should go out and literally create a new economy using their knowledge.

But that is exactly what the New Yorker magazine accused one of the country’s brightest and best students of doing this week in a lengthy article on the virtual currency Bitcoin.

High-profile US technology writer Joshua Davis went sleuthing for “Satoshi Nakamoto”, the pseudonymous figure who revolutionised virtual currency and electronic finance when he launched Bitcoin in 2009. Davis combed through the world’s best programmers and cryptographers and eliminated them until he had what he thought was a plausible suspect, unveiling his likely Satoshi in the New Yorker as… 23-year-old Trinity College computer science postgrad Michael Clear.

Clear, understandably, was mightily surprised – when Davis contacted him at a cryptography conference in California in August, Clear thought he made it plain that he wasn’t Satoshi. “I thought I might feature in one paragraph as a possible candidate who was quickly eliminated,” he explains over a coffee, equal parts bemused and amused by the whole thing – he had no idea that Davis would paint him as a prime suspect. He has spent the week since the article was published denying he is Satoshi, writing on his website that “Although I am flattered that Josh had reason to think I could be Satoshi, I am certainly the wrong person…It seems that even limited searches yield candidates who fit the profile far better than I think I do.”

Clear pointed Davis towards a Finnish virtual currency researcher as a more likely Satoshi, more as a light-hearted illustration that even a cursory search could yield a huge number of credible Satoshi candidates who might be considered a closer fit.

“I think he didn’t get my sense of humour there, maybe it was a little dry and Irish for him,” Clear says. Above all, though, Clear points out that “I’m not even very interested in economics.”

And economics is clearly central to Bitcoin – it is controlled by peer-to-peer software, with a finite supply of bitcoins to be created on a fixed schedule, earned or “mined” by people using the software to crunch numbers and help with the decentralised accountability such a system requires, so that coins can’t be duplicated or stolen. Bitcoins can be bought and sold on online exchanges, or even used to buy things – some real-world cafes and motels accept them. By all accounts, it’s a masterpiece of programming. As the Nobel prize-winning economist Paul Krugman described it: “Bitcoin has created its own private gold standard world, in which the money supply is fixed rather than subject to increase via the printing press.”
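To make the “crunching numbers” part a little more concrete, here is a minimal sketch of the proof-of-work idea behind that decentralised accountability: miners repeatedly hash candidate block data with different nonces until they find a hash that falls under a difficulty target. This is a simplified Python illustration, not Bitcoin’s actual code – real mining double-hashes a binary block header with SHA-256 against a network-adjusted target, and the function names and example data below are invented for the demonstration.

```python
# Simplified proof-of-work sketch: find a nonce so that the SHA-256 hash of
# (block_data + nonce) starts with `difficulty` zero hex digits. Real Bitcoin
# uses double SHA-256 over a binary block header and a network-set target;
# this is an illustration only.
import hashlib

def mine(block_data, difficulty):
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest  # proof found: anyone can verify it with a single hash
        nonce += 1

if __name__ == "__main__":
    nonce, digest = mine("example transactions", difficulty=4)
    print(f"nonce={nonce} hash={digest}")
```

Finding a qualifying nonce takes a lot of guesswork, but checking one takes a single hash – that asymmetry is what lets a peer-to-peer network agree on a shared ledger without a central authority.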

As interest in Bitcoin increased and its value rose – at its peak it was worth nearly $33 before tumbling back down to about $5 – the mystique about its creator became a core part of its appeal. But there was no evidence that Satoshi ever existed – he was the Keyser Söze of digital currency creation. So when Davis revealed that he finally had a suspect, it created a huge flurry of interest in the online tech and economics press, with many of the subsequent articles and blog posts removing the ambiguity of Davis’s piece altogether, despite all of Clear’s denials.

In true New Yorker style, Davis’s article is a very compelling narrative, reminiscent of those pieces where some Catcher in the Rye fan would go hunting for JD Salinger – Clear himself says he enjoyed it, despite what he felt were some misrepresentations. But plenty of Bitcoin observers have also cast doubt on Davis’s detective work, questioning the rigour of his search among other things – he relied on Satoshi’s excellent use of the Queen’s English to limit his search to British and Irish researchers, and settled on Clear because he expected to find Satoshi at that Californian conference.

There is no doubting Clear’s capabilities, that’s for sure – he was elected a Trinity scholar in 2008 (not top undergraduate, as the New Yorker said); graduated with a gold medal in 2010; and won the computer science category of the Undergraduate Awards, for which he’ll be getting another gold medal from President McAleese at the end of the month. “I like talking about projects that I was actually involved in, or things that I have designed and developed,” he says, happily discussing his security software start-up and his PhD research on something called “fully homomorphic encryption”, but he has no interest in taking credit for other people’s work.

Despite the hassle of having to fend off allegations of being the man who invented Bitcoin, Clear does now have a reputation as a world-class programmer and cryptographer, which might be a consolation when the dust settles. And it also says a lot about the high standards being set by Trinity’s computer science department that one of their own could be so easily mistaken as the mastermind behind a vastly ambitious virtual currency. Maybe that long-promised knowledge economy isn’t so far-fetched after all.

Growing old in the age of social apps

In the wake of Facebook’s gargantuan acquisition of messaging app WhatsApp, time to revisit a column I wrote for The Irish Times at the end of last year about the inevitable plurality of social apps. From The Irish Times, December 2nd, 2013: At a certain point, everybody who writes about technology has to attempt…

Exploring Obama’s Victory Lab

Barack Obama’s re-election has prompted a lot of shock among American conservatives, convinced they were about to get their man into the White House. In some of the rancorous fallout, Mitt Romney’s ground game software, Project Orca, has come in for some serious criticism – this Ars Technica piece details how poorly it…

Who invented the iPhone?

So Apple won a sizeable victory against Samsung in the big patent trial of the year/decade/century. Fair play for originality, I think, but still no vindication for the barmy patent system. But as I put it in this recent Irish Times column, “if Samsung could have invented the iPhone, Samsung would have invented the iPhone”….

Why Microsoft is losing the battle and the war

Microsoft has just announced the first quarterly loss in its corporate history, losing $492 million, albeit with a $6.2 billion write-down after its ill-fated (read: stupid) purchase of online advertising business aQuantive. That write-down is enough to get an asterisk beside this loss, although Microsoft’s ludicrously misjudged Online Services strategy meant something like this was only a…

RIM and Nokia’s grand unravelling

In this month’s Innovation Talk, I look at the startling decline of Nokia and RIM. Once the twin pillars of the mobile industry, they’re flailing now, with no sign of hope. In this week’s Monday Note, former Apple executive and all-round sharp guy Jean-Louis Gassée was even more blunt – “Nokia, once the emperor of mobile…

Facebook floats away…

The reaction to Facebook’s IPO is pretty extraordinary, with coverage akin to a major sporting event. This is what we’ll have to live with now: FB ticker-watching, an unholy extra element to consider every time there’s another cack-handed redesign or privacy-smashing misstep. I think we’d all do well to ignore the minutiae of…

Can magazines go iPad?

This is an illuminating, and rather dispiriting, account of creating iPad versions of print magazines from Jason Pontin, the editor in chief and publisher of Technology Review, which you’d think would be a pioneer of iPad publishing. The reality is a lot more problematic: We fought amongst ourselves, and people left the company. There was untold expense…