Social media is giving us trypophobia

From TechCrunch - January 27, 2018

Something is rotten in the state of technology.

But amid all the hand-wringing over fake news, the cries of election-deforming Kremlin disinformation plots, and the calls from political podia for tech giants to locate a social conscience, a knottier realization is taking shape.

Fake news and disinformation are just a few of the symptoms of what's wrong and what's rotten. The problem with platform giants is something far more fundamental.

The problem is that these vastly powerful algorithmic engines are blackboxes. And, at the business end of the operation, each individual user only sees what each individual user sees.

The great lie of social media has been to claim it shows us the world. And the follow-on deception: that its technology products bring us closer together.

In truth, social media is not a telescopic lens (as the telephone actually was) but an opinion-fracturing prism that shatters social cohesion by replacing a shared public sphere and its dynamically overlapping discourse with a wall of increasingly concentrated filter bubbles.

Social media is not connective tissue but engineered segmentation that treats each pair of human eyeballs as a discrete unit to be plucked out and separated off from its fellows.

Think about it: it's a trypophobic's nightmare.

Or the panopticon in reverse: each user bricked into an individual cell that's surveilled from the platform controller's tinted glass tower.

Little wonder lies spread and inflate so quickly via products that are not only hyper-accelerating the rate at which information can travel but deliberately pickling people inside a stew of their own prejudices.

First it panders, then it polarizes, then it pushes us apart.

We aren't so much seeing through a lens darkly when we log onto Facebook or peer at personalized search results on Google; we're being individually strapped into a custom-moulded headset that's continuously screening a bespoke movie: in the dark, in a single-seater theatre, without any windows or doors.

Are you feeling claustrophobic yet?

It's a movie that the algorithmic engine believes you'll like. Because it's figured out your favorite actors. It knows what genre you skew to. The nightmares that keep you up at night. The first thing you think about in the morning.

It knows your politics, who your friends are, where you go. It watches you ceaselessly and packages this intelligence into a bespoke, ever-iterating, emotion-tugging product just for you.

Its secret recipe is an infinite blend of your personal likes and dislikes, scraped off the Internet where you unwittingly scatter them. (Your offline habits aren't safe from its harvest either; it pays data brokers to snitch on those too.)

No one else will ever get to see this movie. Or even know it exists. There are no adverts announcing its screening. Why bother putting up billboards for a movie made just for you? Anyway, the personalized content is all but guaranteed to strap you in your seat.

If social media platforms were sausage factories, we could at least intercept the delivery lorry on its way out of the gate to probe the chemistry of the flesh-colored substance inside each packet, and find out if it's really as palatable as they claim.

Of course we'd still have to do that thousands of times to get meaningful data on what was being piped inside each custom sachet. But it could be done.

Alas, platforms involve no such physical product, and leave no such physical trace for us to investigate.

Smoke and mirrors

Understanding platforms' information-shaping processes would require access to their algorithmic blackboxes. But those are locked up inside corporate HQs, behind big signs marked: Proprietary! No visitors! Commercially sensitive IP!

Only engineers and owners get to peer in. And even they don't necessarily always understand the decisions their machines are making.

But how sustainable is this asymmetry? If we, the wider society (on whom platforms depend for data, eyeballs, content and revenue; we are their business model), can't see how we are being divided by what they individually drip-feed us, how can we judge what the technology is doing to us, one and all? And figure out how it's systemizing and reshaping society?

How can we hope to measure its impact? Except when and where we feel its harms.

Without access to meaningful data, how can we tell whether time spent here or there, or on any of these prejudice-pandering advertiser platforms, can ever be said to be time well spent?

What does it tell us about the attention-sucking power that tech giants hold over us when (to take just one example) a train station has to put up signs warning parents to stop looking at their smartphones and point their eyes at their children instead?

Is there a new idiot wind blowing through society all of a sudden? Or have we been unfairly robbed of our attention?

What should we think when tech CEOs confess they don't want kids in their family anywhere near the products they're pushing on everyone else? It sure sounds like even they think this stuff might be the new nicotine.

External researchers have been trying their best to map and analyze flows of online opinion and influence in an attempt to quantify platform giants' societal impacts.

Yet Twitter, for one, actively degrades these efforts by playing pick-and-choose from its gatekeeper position, rubbishing any studies with results it doesn't like by claiming the picture is flawed because it's incomplete.

Why? Because external researchers don't have access to all its information flows. Why? Because they can't see how data is shaped by Twitter's algorithms, or how each individual Twitter user might (or might not) have flipped a content-suppression switch which can also (says Twitter) mould the sausage and determine who consumes it.

Why not? Because Twitter doesn't give outsiders that kind of access. Sorry, didn't you see the sign?

And when politicians press the company to provide the full picture, based on the data that only Twitter can see, they just get fed more self-selected scraps shaped by Twitter's corporate self-interest.

(This particular game of whack-an-awkward-question / hide-the-unsightly-mole could run and run and run. Yet it also doesn't seem, long term, to be a very politically sustainable one, however much quiz games might be suddenly back in fashion.)

And how can we trust Facebook to create robust and rigorous disclosure systems around political advertising when the company has been shown failing to uphold its existing ad standards?

Mark Zuckerberg wants us to believe we can trust him to do the right thing. Yet he is also the powerful tech CEO who studiously ignored concerns that malicious disinformation was running rampant on his platform. Who even ignored specific warnings that fake news could impact democracy, from some pretty knowledgeable political insiders, and mentors too.

Biased blackboxes

Before fake news became an existential crisis for Facebook's business, Zuckerberg's standard line of defense to any raised content concern was deflection: that infamous claim, "we're not a media company; we're a tech company."

Turns out maybe he was right to say that. Because maybe big tech platforms really do require a new type of bespoke regulation. One that reflects the uniquely hyper-targeted nature of the individualized product their factories are churning out at (trypophobics, look away now!) 4BN+ eyeball scale.

In recent years there have been calls for regulators to have access to algorithmic blackboxes to lift the lids on engines that act on us yet which we (the product) are prevented from seeing (and thus overseeing).

Rising use of AI certainly makes that case stronger, with the risk of prejudices scaling as fast and far as tech platforms if they get blind-baked into commercially privileged blackboxes.

