Radiologists assessing the pain experienced by osteoarthritis patients typically use a scale called the Kellgren-Lawrence Grade (KLG). The KLG grades severity based on the presence of certain radiographic features, such as missing cartilage or other damage, and is used to infer a patient’s level of pain. But data from the National Institutes of Health revealed a disparity between the level of pain predicted by the KLG and Black patients’ self-reported experience of pain.

The MIT Technology Review explains: “Black patients who show the same amount of missing cartilage as white patients self-report higher levels of pain.”

But why?

The article continues:


In February last year, the world baulked at media reports that a South Korean broadcaster had used virtual reality technology to “reunite” a grieving mother with the 7-year-old child she lost in 2016.

As part of a documentary entitled I Met You, Jang Ji-sung was confronted by an animated, lifelike vision of her daughter Na-yeon playing in a neighborhood park in her favorite dress. It was an emotionally charged scene, with the avatar asking the tearful woman, “Mom, where have you been? Have you been thinking of me?”

“Always”, the mother replied.

Remarkably, documentary makers…


It is a natural human inclination to want to look good. Our desire to impress keeps the fashion industry alive, motivates many of us to work or study hard, and generates billions of dollars from our desperation to look fit and healthy. So it should come as no surprise that, as algorithms hold more and more sway over decision-making and the conferral of status (e.g. via credit or hiring decisions), many of us are keen to put our best foot forward and play into their discernible preferences.

This is certainly true of those in…


On November 3, two opposing forces went head to head and the results were…divisive. With commentators and pundits still reeling from the poor performance of US election pollsters, it seems fitting to ask: can AI (ultimately) solve a problem like election prediction?

At least this time around, the answer seems to be no, not really. But not necessarily for the reasons you might think.

Here’s how it went wrong, according to VentureBeat:


The field of AI ethics has received much (very worthy) attention of late. Once an obscure topic relegated to the sidelines of both tech and…


In 2000, a group of researchers at Georgia Tech launched a project they called “The Aware Home.” The collective of computer scientists and engineers built a three-story experimental home with the intent of producing an environment that was “capable of knowing information about itself and the whereabouts and activities of its inhabitants.” The team installed a vast network of “context aware sensors” throughout the house and in wearable computers carried by the home’s occupants. …


“GPT-3 is not a mind, but it is also not entirely a machine. It’s something else: a statistically abstracted representation of the contents of millions of minds, as expressed in their writing.” — REGINA RINI, PHILOSOPHER

Alan Turing statue, Manchester, UK

In recent years, the AI circus really has come to town and we’ve been treated to a veritable parade of technical aberrations seeking to dazzle us with their human-like intelligence. Many of these sideshows have been “embodied” AI, where the physical form usually functions as a cunning disguise for a clunky, pre-programmed bot. …


More businesses are looking to keep tabs on their employees amid the shift to home-based working. This risks undermining the new atmosphere of trust and flexibility.

Writing for Aeon last week, Martin Parker, a professor of organization studies at the University of Bristol in the UK, relayed the origins of the word “management”, explaining:


In her 2019 book The Age of Surveillance Capitalism, Shoshana Zuboff recalls the response to the launch of Google Glass in 2012. Zuboff describes public horror, as well as loud protestations from privacy advocates who were deeply concerned that the product’s undetectable recording of people and places threatened to eliminate “a person’s reasonable expectation of privacy and/or anonymity.”

Zuboff describes the product:


With COVID-19 lockdown restrictions issued across the globe, millions of us have been forced to hunker down “in place”, or severely limit our movements outside of the home. On learning this, most of us will have reached reflexively for the nearest device (if we didn’t learn it from that device to begin with). Yet we mostly remain caught in a love-hate relationship with the presiding artefacts of our time, and we often resent tech’s power over us.

Nevertheless, new circumstances can breed new attitudes. After years spent debating whether or not technology will destroy us, March 2020…

Fiona J McEvoy

Tech ethics researcher. Founder of YouTheData.com. Tech issues for non-tech audiences. @YouTheData @FionaJMcEvoy
