
Systems are very bad people

In April this year, ASIC commenced action against Macquarie Bank for failing to “monitor, detect and prevent unauthorised transactions”.  The proceedings relate to the conduct of convicted fraudster Ross Hopkins, but crucially, ASIC specified its action was “not focused on Mr Hopkins’ conduct”.

Where compliance is achieved through a combination of system functionality and user action - and the focus is not the conduct of the user - where does the moral accountability for financial fraud lie?

*

The practical reality is that users often rely on system functionality to "monitor, detect and prevent" actions which are “not permitted”.  Greyed-out menu items or modal warnings indicate, by convention, which operations are "permitted".

But there are two types of authority at play here - one of the system, the other of the user.

The first is authority in the sense of "your access level in the system means you could click this button"; the second is authority in the sense of "you should execute the action this click represents".
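
To make the distinction concrete, here is a minimal sketch, in TypeScript, of how UI code typically encodes the first kind of authority - the names and access check below are hypothetical, not drawn from any real system - while the second kind never appears in the code at all.

interface User {
  accessLevel: number;
}

// Hypothetical access check: answers only "could this user trigger the action?"
function canAuthoriseWithdrawal(user: User): boolean {
  return user.accessLevel >= 3;
}

function renderWithdrawButton(user: User, button: HTMLButtonElement): void {
  // The system encodes "you could click this button".
  // Nothing here asks "should this withdrawal actually happen?"
  button.disabled = !canAuthoriseWithdrawal(user);
}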

Unfortunately, the dissolutive effect of our modern systems on personal accountability means the two are often conflated.  With the lightening of a dull shade of grey from “disabled” to “enabled”, a new deontological quandary is enlightened.

If I can click, it must be ok to click - right?

*

To test this distinction, imagine if the button said "clicking this button will do a very bad thing".  Now, you "could" click it, but you "should" not.

However, history has shown that, if the button is there to click, people will eschew their personal moral controls in favour of the controls of the system.


https://www.youtube.com/watch?v=IU0_tNvOV90

The Milgram experiment is a particularly horrifying example of immoral button-clicking, and I offer advance warning to anyone who hasn't seen it before.


https://www.youtube.com/watch?v=Kzd6Ew3TraA

*

This raises an important question for organisations everywhere that seek to encode compliance into their systems.  From a practical perspective, the behavioural science tells us that once you implement a system, its operators will suspend their own good sense and replace it with cues from the system.

In this context, the “monitoring, detection and prevention” of these transactions becomes a question of system implementation.  Indeed, we can see in commentary such as “the system didn’t flag unauthorised transactions” that the system itself has taken on agency and identity beyond the operation of its users.

It is not the users who failed to operate the system effectively to flag unauthorised transactions.  It is the system itself which failed.
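
As a sketch of what that failure amounts to in code - the interface, threshold and rule below are invented purely for illustration, not drawn from the bank's actual system - whether a transaction is ever flagged depends entirely on the conditions a developer chose to write:

interface Transaction {
  amountAud: number;
  authorisedByClient: boolean;
}

// Hypothetical review threshold, for illustration only.
const REVIEW_THRESHOLD_AUD = 10_000;

function shouldFlagForReview(tx: Transaction): boolean {
  // Whatever the developer writes here *is* the monitoring and detection.
  // A condition left out is a transaction that will never be flagged.
  return !tx.authorisedByClient || tx.amountAud > REVIEW_THRESHOLD_AUD;
}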

*

Moral culpability has a kind of transitive property: if I construct a land mine and plant it in your front yard, then although I could be miles away (or dead) by the time you set it off, most would still say I am morally culpable for your injury.

Somehow, though, this property goes missing in digital systems.  If I am a software developer and I write a system which permits “$2.9 million in unauthorised withdrawals”, it is the operator (he who trod on the mine), not the developer (who built it), who is held accountable.

There is, I think, a case to be made here.  If not in a court, at least in the heart of every developer.  If the system you built enables financial fraud, then you enabled financial fraud. 

It is an uncomfortable thought.
