Agreed personal data use and its disagreeable trend

Personal data use is driving the online attention economy, but its recent trajectory is a severe departure from that hazy freedom of the 90s dial-up era.

I am of the generation old enough to remember the internet’s introduction, but too young to remember anything before it. Blair in Downing Street, Clinton in the White House and an atrocious racket every time a computer took a stab at getting online – things seemed prosperous and promising – and, despite the fact that stuff tends to cost money, websites had somehow figured out a way to be free.

I didn’t know or care how, I just cared that they had, and it was good to be part of a group too young to earn money but born late enough not to need it. It seemed right and proper somehow, a continuation of 90s paradigm-busting progress. And once the trend of free use survived the dot-com crash, it charged straight into the 00s like Bambi being shoved onto a stage in front of a crowd of drunken hecklers to embark on a stand-up routine – and somehow it was a marked success.

It was in those beautiful early moments when, to a youngster, things weren’t just “simple”; the formulation of the world was simple: if something was so, and good, it need not be questioned. Google was there, then Facebook popped up. Napster went down, only for Kazaa and then Limewire to appear in its place. And as it all ramped up, everything seemed to be on track: Google’s IPO sounded like a big in-joke, seeking $2,718,281,828 ($1 billion multiplied by the mathematical constant e) through shares that offered buyers limited control, all in order to fund the phrase “Don’t be evil”. And Facebook couldn’t even be bothered to think about profits for years; people just threw money at Mark Zuckerberg and that was an OK thing.

Contrast that history to an announcement a few days ago by India’s Reliance Industries chairman Mukesh Ambani: “Data is the new oxygen, new oil for 1.3 billion Indians. India does not need to import it, has to ensure every Indian has access to it…Our aim is to not only provide high-speed data at affordable prices but also affordable smartphones to help people connect better with the internet.”

It’s a confusing statement for a millennial to parse. It contains all the egalitarian sentiment, implied freedom and unbridled ambition of the late 90s; but, in 2017, aligning those characteristics with the internet seems incongruous at best, delusional at worst, and certainly nowhere near visionary.

Yes, ‘data is the new oil’ – if we must talk in imprecise metaphors – and yes, the internet is supposed to be for everyone. But the phrase ‘data is the new oil’ doesn’t refer to everyone; it refers to the ability of huge companies to drive vast economies and generate colossal sums of money. That’s what oil did and that’s what personal data use is doing now.

Youthful naivety aside, this is not just a nostalgic trek through the halcyon days. How did we go from the internet being an egalitarian tool created by and for the people, to it using the people to fuel an ever wider wealth and power gradient? And where is this trajectory heading?

Advertising and influence with personal data use

Facebook and Google are in many ways the original pillars of the internet and of what it has become – pioneers in more ways than were virtuous. As Google charged into profit through advertising, Zuckerberg still couldn’t bring himself to follow suit, so he hired Sheryl Sandberg to do it instead. Thus the forces currently driving world-leading economies were set in motion.

To assert some precision: personal data became the new crude oil, segmentation the new refining process and targeting the new petrol pump. All that remains to be decided is which of Silicon Valley’s finest is the new Rockefeller (I’m betting it’ll be Thiel).

And, as commentary on the Equifax leak veers away from precision – the breach described as a new kind of environmental disaster – we still miss the point: intangible data is the most powerful kind. The views and opinions that can be deduced from your online behaviour are what is most useful in trying to get you to buy things. Intangible data is what Russian political ads on Facebook can be targeted with, while remaining unobservable to other, less opportune – or perhaps more scrutable – parties, like journalists.

If you had to choose, would you prefer your friends to find out your credit score or your search history? Or, to put it another way, if you were in a mischievous-but-relatively-harmless mood, would you prefer to know a friend’s social security number or their purchase history?

In fairness, this point hasn’t been completely missed: the EU’s GDPR, coming into effect next year, will specifically grant users the right to know exactly what personal data is being collected and for what purpose, and give them “the right to be forgotten”. But how it jars to hear a word like “forgotten” spoken in such laudable and revered tones – tones only heightened by the sight of the World Advertising Research Centre concerning itself with fending off inhibitory regulation.

This much has incrementally woven itself into the way we go about our lives so far, and could charitably be described as ‘just the beginning’. The Internet of Things (IoT) – to introduce that which needs no introduction – is in its early stages and would not make sense in the slightest degree without data. The promise – an environment that knows how we will interact with it, predicts what we need and does what we cannot be bothered to do – can only be delivered with data. It holds impressive and concerning potential in equal measure. For if it can do such things then, in the same way ‘data-driven’ website functionality improves advertising revenue, what else could the IoT do with the means by which it achieves a slick new user experience?

So, in the tech-spirit of ignoring the status quo, people look for alternatives to such an unfavourable prognosis. Can we earn a living from selling our data? Or maybe we can get micro-payments every time our data is used? I fear the answer may not be so simple; nevertheless, there are observations which might be instructive in trying to answer the question “what’s next?”

Fear and loathing with the future of personal data use

The data we pump into tech companies isn’t necessarily enhancing user experience as we might be led to believe. A study from the National Bureau of Economic Research in the US looked at changes in the success of search engine results over a time period when some search engines made significant changes to their data retention policies. They found that when companies introduced time thresholds beyond which data would be deleted or anonymised, the performance of their search engines did not change.
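As a loose illustration of the kind of comparison such a study makes – with every number invented for the sake of the sketch – one could track a proxy for search quality, such as the rate at which users click the first result, before and after a retention change, against a control engine that changed nothing:

```python
# Hypothetical difference-in-differences sketch: did shortening data
# retention change a search-quality proxy (first-result click rate)?
# All figures below are invented for illustration only.

def diff_in_diff(treated_before, treated_after, control_before, control_after):
    """Change in the treated engine's metric minus the change in the control's."""
    return (treated_after - treated_before) - (control_after - control_before)

# First-result click rates (fraction of queries), hypothetical:
effect = diff_in_diff(
    treated_before=0.412,  # engine that later shortened retention
    treated_after=0.409,
    control_before=0.398,  # engine with an unchanged retention policy
    control_after=0.396,
)

# An effect near zero is consistent with the study's finding that
# deleting or anonymising old data made no difference to results.
print(round(effect, 3))  # → -0.001
```

The point of the control engine is to strip out industry-wide drift: only the gap between the two changes is attributed to the retention policy.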

The study used results from Bing and Yahoo; Google has never changed its data retention policy. I should point out (since I can almost hear the voices of university professors past in my ear) that one study might be interesting, and justification for further research, but it categorically does not prove anything. Nonetheless, this is still one of those “all is not as it seems” moments.

Parallel to this, it’s also been pointed out that the big-tech recipients of our data are able to use the spoils of user interactions to build other products. Your Facebook photos and Google searches can be used to train neural networks, providing companies with new products and new lines of revenue. Yet it was only the original functionality that you gave up your data to use.

We are, in essence, paying for the use of Google’s and Facebook’s core functionality with our data, only for that data to then provide the company with as-yet-untold further avenues of business, for which we receive no remuneration. Except it wasn’t only the core functionality we cashed our data in for: we agreed to the terms and conditions, which amount to a blank check for the company to spend our data on our behalf.

The EU’s GDPR goes some way to improving this situation, but personally I can’t see a fair relationship between consumers and tech companies forming without a law which endows individuals with copyright over their personal data as a human right. Much more will have to be done beyond that, but more cannot be done without it.

On the other hand, even if companies are not using our data to improve their core products, won’t we at least get other products we like, and isn’t that cool? That is almost certainly true, but companies tend not to build new products that don’t provide a new line of revenue. So, given the chasm we are now aware of between original intentions and current capabilities, we could reasonably expect new products to open up a similar chasm.

This article in Forbes, for example, attempts to instruct people – albeit in a crude way – on how to use data and behavioural psychology to convert leads into sales. Even simple tips about how to create an effective call-to-action button are based on providing stimuli which are proven, by data, to change behaviour. This is all a meta-understanding of behaviour (for now), but it illuminates an up-to-date manifestation of the way data is used to drive more effective advertising and thus more revenue.
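The “proven, by data” part typically comes down to an A/B test: show two button variants to comparable audiences and check whether the difference in click-through rate is larger than chance would explain. A minimal sketch, with wholly invented numbers, using the standard two-proportion z-test:

```python
# Hypothetical A/B test of two call-to-action buttons - the kind of
# data-driven tweak the Forbes-style advice describes. Numbers invented.
from math import sqrt

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """z-statistic for the difference between two click-through rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p = (clicks_a + clicks_b) / (n_a + n_b)       # pooled click rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # pooled standard error
    return (p_b - p_a) / se

# Button A: 120 clicks from 2,000 views; button B: 160 from 2,000.
z = two_proportion_z(120, 2000, 160, 2000)
print(round(z, 2))  # |z| > 1.96 would be significant at the 5% level
```

Run at scale and automated, this is how a stimulus gets “proven, by data, to change behaviour” – the statistics are elementary; the volume of behavioural data is what makes them potent.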

It is helpful to think of advertising, in its most fundamental form, as the attempt of a company to change our behaviour (i.e. to get us to buy something we might not have bought without its intervention). This hasn’t been a problem until now because the only tools at advertisers’ disposal have been relatively blunt ones, like TV ads and billboards. But now they can feed off our data and make predictions en masse about how we behave, and I know of nothing stopping them – and plenty that might incentivise them – from working out how to do this at the individual level. A more sinister prospect than the latest hilarious commercial.

So sinister, in fact, that the winning essay for The Nine Dots Prize, a competition which poses questions about contemporary societal issues, was about the attention economy and its effect. The book which describes the thesis in full will be out next year, but the website cites its conclusions as:

  • How the ‘distractions’ produced by digital technologies are much more profound than minor ‘annoyances’
  • How so-called ‘persuasive’ design is undermining the human will and ‘militating against the possibility of all forms of self-determination’
  • How beginning to ‘assert and defend our freedom of attention’ is an urgent moral and political task.

“Freedom of attention” is a phrase that’s going to fester. This is not from some crackpot, it’s from James Williams, an ex-Google employee currently researching ‘the philosophy and ethics of attention and persuasion as they relate to technology design’ at Oxford for his doctoral thesis.

In 2014, the creator of the pop-up ad, Ethan Zuckerman, came forward and said he was sorry – in order, hopefully, to make a more penetrating point. The negative effect of his contribution was not, in his eyes, the violation of our beloved user experience. The real negative effect of his contribution was what he calls “the internet’s original sin” or, in more colloquial terms, advertising.

“20 years in to the ad-supported web,” Zuckerman’s essay in The Atlantic concludes, “we can see that our current model is bad, broken, and corrosive. It’s time to start paying for… services we love, and to abandon those that… sell us—the users and our attention—as the product.”

His sortie into public discourse was doused in irony. If you google “pop up ad creator” you will see that most news outlets led with the apology rather than his underlying point. He apologized for something trivial in order to draw attention to more germane consequences – consequences beyond the reach of an apology, for which he feels personally responsible. But it was the apology which made the headlines, because we’re still annoyed about the user experience, so it’s the apology which will drive traffic and, thus, advertising revenue.

The original idea, thought to be the answer to new woes, turned out to be a self-perpetuating engine of the very thing it specifically opposed. So, the pigs have turned into humans. What’s next?

Ben Allen is a traveler, a writer and a Brit. He worked in the London startup world for a while but really prefers commenting on it than working in it. He has huge faith in the tech industry and enjoys talking and writing about the social issues inherent in its development.
