Why media standards have fallen and what it says about us

Source: beconnected.esafety.gov.au

Sometimes I feel that I’ve lost the plot as I increasingly find myself at odds with where society is going. For instance, I rarely watch the programs that are served up on commercial television. Much of what is on “the box” is mind-numbing and/or unnecessarily sensational and I don’t find it entertaining.

Nightly current affairs programs used to be a no-nonsense world with broadcast journalists and reporters fearlessly tackling the serious issues of the day. Nowadays, these programs and their “news” presenters offer trivial stories about weight loss, toddler tantrums and back cures. No wonder Gerald Stone observed in his book, Who Killed Channel 9?, that commercial TV is pitching to the lowest common denominator.

Commenting on the “dumbing down” of the Channel 9 program, A Current Affair, Stone wrote:

Here was a program that once prided itself on a nightly menu filled with hard-hitting interviews, sensational crime investigations and the inside dope on the latest titillating celebrity scandal. More and more it had begun to dwell on diet fads and shopping tips, topped up with melodramatic ambushes of small-time con men, or the inevitable tear-jerkers about battling families who can’t pay the rent.

In fairness, I must acknowledge the media’s claim that they simply produce what viewers and readers want. As a society, we would rather hear about the sordid private lives of celebrities than have a serious debate about the long-term benefits of public policy. So, just as we get the politicians we deserve, we also get the media we deserve.

As citizens, we are complicit in falling standards and they have certainly plummeted. It still staggers me that the reality TV show, Big Brother, was a ratings winner, even though it demeaned contestants, promoted bullying and encouraged sexual behaviour and nudity. Big Brother was vulgar and the antics of its participants eroded the distinction between public and private.

Another reality TV show, The Apprentice, paved the way for Donald Trump to become the 45th president of the US. The show made a hero of Trump in the eyes of the show’s followers and this die-hard fan base supported him in his bid for the presidency. Even so, millions of gullible viewers were unaware that the show’s producers heavily edited the program to portray Trump as a successful, credible and coherent businessman.

Rather than aspiring to educate viewers, the reality television genre emphasises personal conflict and dramatic tension. The media’s appetite for never-ending drama and outrageous arguments finds a natural home in reality television. Media executives like these programs as they are cheap to make (few paid actors) and rate well with viewers. Nonetheless, many find them objectionable, dishonest and trashy.

According to Australian academic, Dr Soseh Yekanians, Aussies have wholeheartedly embraced reality television. In an article that Dr Yekanians penned for The Conversation, she wrote that Australians have an unhealthy appetite for watching people on reality shows psychologically tear one another apart. She cited the following three examples to anchor her assertion.

  • On Channel Ten’s The Bachelor, two contestants’ merciless name-calling and bullying behaviour became so vicious that they were dubbed the “mean girls”.
  • On Channel Seven’s My Kitchen Rules, the slurs by two competitors, which included likening one contestant to a “blowfish gasping for air”, eventually led to Seven asking them to leave the show.
  • On Channel Nine’s The Block, two contestants walked off the show after being heavily criticised by the judges. One of the contestants claimed that the feedback “just became pure insults”.

Clearly, reality television gains ratings by deliberately pitting contestants against one another. As noted by Dr Yekanians, “there is little real about this form of TV, which is heavily scripted and showcases stereotyped characters”.

Regrettably, standards of taste and decency remain in decline as the quality of television programs continues to deteriorate. We seem to have become conditioned to a diet of explicit sex, coarse language and graphic violence with such content now considered the norm. Tabloid television has modelled itself on its close kin, the tabloid press.

Tabloid journalists – the tawdry cousins of broadcast journalists – are known for sensationalism in reporting. Sex, scandals and beat-ups are the order of the day. Journalists must fill column space for their editors by “finding” stories. Many embrace the mantra: “Never let the truth get in the way of a good story” in order to whip readers into a frenzy, and this was the case regarding Donald Trump’s playbook of deceits.

We should look harshly on the media ecosystem that amplified Trump’s lies. The former president rode to power thanks, in part, to support from Rupert Murdoch’s Fox News. While in office, Trump was aided and abetted by Fox and other right-wing US media in spreading false claims. Following his electoral defeat, the rioters who stormed the Capitol building were “egged on by these US publishers” according to a Sydney Morning Herald editorial.

But as pointed out in an article published in The Atlantic in November 2018, it was not just right-wing media that promulgated Trump’s lies. Mainstream journalists were also accused of becoming “complicit in spreading the president’s falsehoods and conspiracy theories”. The article was published under the headline “Trump’s Lies Are a Virus, and News Organizations Are the Host” and went on to say that:

The traditional news media are thoroughly infected by the Trump virus. It is not only spreading the disease of the president’s lies, but also suffering from a demise in public trust – at least among one half of the electorate.

[Please allow me to insert a parenthetical note here. Shortly after the outbreak of COVID-19, the WHO accused the media of spreading its own virus. The WHO warned that humanity was not just fighting a viral pandemic but also a highly contagious “infodemic” transmitted by the media. As I opined in a previous post, the media’s penchant for sensationalism throughout the pandemic has resulted in inaccurate news dissemination including the reporting of unscientific cures and unverified medicines.]

There are, of course, many fine and ethical journalists who work outside of the irreverent tabloid world. These individuals fulfil a vital role in society. A true democracy requires the active participation of an informed public, which is only possible if citizens have unfettered access to information. Ironically, the phone hacking scandal in Britain only came to public attention due to the free press.

In response to the scandal, The Telegraph in London published the following editorial.

This newspaper cares passionately about maintaining the highest standards of journalism. We believe that journalism, when practised properly, protects the public from abuses of power by exposing those who are guilty of dishonesty, corruption or injustice. Journalism that harms the innocent – by telling lies or spreading falsehoods about them, or by unjustifiably invading their privacy – does the exact opposite of what good journalism aims to achieve.

Hear, hear! Unfortunately, not all journalists and/or media outlets subscribe to this level of professionalism. And that’s not just my opinion – many mainstream journalists also lament falling standards of truthfulness, accuracy, objectivity, impartiality and fairness. One senior Australian journalist put it this way:

I’ve spent my working life as a journalist …. But now, reading the newspapers and watching the news, I can’t help but wonder if this is a craft that is not only losing its centre of corporate gravity and support, but also some fundamental sense of its mission and responsibility … the major market tabloids … are the dominant organs of news in all our capital cities. They cry wolf, they cry terror, they fan the flames of disquiet and distrust. Because fear sells.

In his 2011 book, Sideshow: Dumbing Down Democracy, former Australian federal government minister, Lindsay Tanner, was withering in his critique of the media. He cited a number of examples where the media created unnecessary panic, including the Global Financial Crisis, the Y2K computer bug and the swine flu epidemic. The media reporting of these events produced a public response out of proportion to the threat.

The power of the media comes from its ability to influence and shape the perception of the public. We look to the media to tell us what is happening in the world as we don’t have the time or skills to sift through vast amounts of information ourselves. The media sets the news agenda and political tone and this informs our decision-making as citizens.

The free press plays a vital role in society and can serve citizens by exposing wrongdoings and informing debates. Still, it is disappointing to note that some sections of the media do not operate to the highest ethical standards. No wonder that in Australia – and other parts of the world – journalists are among the least trusted professionals.

Strange how the media can scrutinise the behaviour of others but is incapable of serious self-examination.


Paul J. Thomas
Chief Executive Officer
Ductus Consulting

Why we don’t recognise our own incompetence

Source: The Rock & Roll Shrink Radio Show

Imagine that you are hosting a dinner party for a group of friends. Throughout the meal, one guest is spouting off on a topic that he claims to know well. As those around the table listen to his opinions, it’s blindingly clear to everyone that he is grossly ill-informed. Yet, he arrogantly prattles on in the belief that he is the fount of all knowledge.

All humans have blind spots, which is why many of us are oblivious of our own ignorance. We can believe things about our ability that are just not true because – to be blunt – some of us are so dimwitted we don’t realise how dense we really are. A good example is Donald Trump whose confidence and bluster as president never wavered despite his woeful grasp of policy matters.

That we are lousy at accurately evaluating ourselves is no surprise to social psychologists David Dunning and Justin Kruger. Their research shows that people who are capable at a particular task or topic typically underestimate their ability, while people who are incapable frequently overestimate theirs.

This disconnect is called the Dunning-Kruger effect and it reveals that while the competent are often plagued with doubt, the incompetent are habitually cocksure of their excellence. Put simply, the Dunning-Kruger effect is the tendency for people to misjudge their abilities, with the skilled putting themselves down and the inept hyping themselves up.

We have long known that fools are blind to their own foolishness. As renowned British naturalist, Charles Darwin, wrote in 1871 in The Descent of Man: “Ignorance more frequently begets confidence than does knowledge: it is those who know little, not those who know much, who so positively assert that this or that problem will never be solved by science.”

Another wise man (allegedly Aristotle) said that “the more you know, the more you know you don’t know”. Smart people are clever enough to know that they don’t know everything, so they read and study to fill the gaps in their knowledge. In contrast, asinine people don’t read or undertake continuous education because they are clueless to the fact that they have knowledge gaps.

The Dunning-Kruger effect stems from our ignorance of our own ignorance. It is a cognitive bias which causes unskilled individuals to suffer from illusory superiority. One way to avoid falling victim to this phenomenon is to inject a healthy dose of humility into your sense of self-regard. For many people, that is easier said than done.

As British philosopher, Bertrand Russell, observed: “The whole problem with the world is that fools and fanatics are always so certain of themselves, and wiser people so full of doubts”. We have seen both sides of this cognitive pitfall in action during the COVID-19 outbreak.

Highly qualified epidemiologists – and other scientists who have devoted their careers to studying infectious diseases – readily admitted the limits of their knowledge regarding the behaviour of the novel virus. As true experts, they know where their expertise ends. Fortunately, when the pandemic hit, they knew enough to urge the introduction of social distancing practices and lockdowns.

Still, many people, including political leaders like Trump (USA), Bolsonaro (Brazil) and Modi (India), were dismissive of – and even hostile towards – medical experts, and flouted health warnings. Unsurprisingly, coronavirus outbreaks in these countries spiralled out of control due to the incompetence of self-absorbed “Covidiots”, as evidenced by their Dunning-Kruger performances.

Around the world, many populist politicians masqueraded as health professionals yet refused to take even basic precautions to keep the public safe. In doing so, they displayed their absolute ignorance of science to the detriment of their citizens. Because these leaders rejected COVID-19 countermeasures and downplayed the threat, millions of innocent people died unnecessarily.

Defiant perspectives on COVID have come not just from ignorant people but also from lawyers, engineers, accountants and other professionals. Otherwise astute members of society, including Elon Musk, rejected the assessment of medical experts. Musk – whom many consider to be a genius – fell foul of the Dunning-Kruger effect. As noted in online magazine InsideHook:

… Elon Musk is not a medical genius. In this instance, he is no more than yet another unqualified mouthpiece in a growing list of blowhards regarded as armchair epidemiologists.

Similar to the coronavirus pandemic, many citizens and politicians show disdain for the science of climate change in the conceited belief that they know better than the experts. (No wonder disaster movies typically begin with the government ignoring a scientist – a case of art imitating life!) The world is full of climate change deniers who are blissfully ignorant of their ignorance.

The science is settled to the highest degree of confidence: climate change is primarily due to human activity. Consequently, air and ocean temperatures are rising, Arctic ice is melting, ecosystems are shifting and sea levels are rising. The signs are all around us – the Earth is patently warming, which makes the endless debates questioning the truth of climate science gobsmacking.

While many governments agree with the science, politicians make cosmetic changes and largely adopt a business-as-usual philosophy. Meanwhile, climate activists continue to express their frustration and disbelief while climate deniers remain dogmatic in their opposition to climate action. Humanity is fiddling while Rome burns.

As one commentator observed, the Dunning-Kruger effect is:

… more noticeable in the denier set because most of them lack scientific or climate science credentials and training and yet they are challenging the collective views of thousands of trained scientists who do have the required training, credentials, knowledge and skills to discuss climate science.

Science-based arguments are rejected by citizens around the world. These same people voted for climate-denying governments in places like America (under Trump) and Brazil. Deniers spend a lot of time on social media eagerly absorbing anything that supports their unscholarly position, even when it’s outrageously absurd and completely uncorroborated by evidence.

The political landscape is replete with evidence of the Dunning-Kruger effect. Take Trump’s rise to the presidency, which can be largely attributed to ignorance – his popularity was highest among voters without a university degree. As described in a 2016 Politico Magazine article:

Their expertise about current affairs is too fractured and full of holes to spot that only 9 percent of Trump’s statements are “true” or “mostly true”, according to PolitiFact, whereas 57 percent are “false” or “mostly false” – the remainder being “pants on fire” untruths. Trump himself has memorably declared: “I love the poorly educated.”

Over the past decade or so, citizens who elected populist governments have been let down badly. Voters were lied to by politicians like Trump, but were not smart enough to know it. In democracies such as Turkey, Hungary, Poland and the Philippines, citizens unwittingly elected governments which normalised authoritarianism and diminished their democratic rights.

The “right” leaders were not elected as voters lacked the skills to assess the abilities and competencies of others. Votes were cast based on personal feelings or false information, which is why two eminent political scientists believe that the problem with democracy is voters. While many of us rate ourselves highly in political knowledgeability, the harsh reality is that most of us are ignorant as voters.

The Dunning-Kruger effect is real and permeates all aspects of life. It is evident in people’s viewpoints on education, vaccination, work, sports and even investing. In all walks of life, you will find people who think that they are much better and/or knowledgeable than they really are.

Overconfidence is the mother of all psychological biases and has been blamed for the sinking of the Titanic, the explosion of the space shuttle Challenger, the 2008 meltdown of the subprime mortgage market and the 2010 oil spill in the Gulf of Mexico. Overconfidence accounts for a wide range of poor outcomes – including war.

That a little knowledge can be a dangerous thing has been known by philosophers since Socrates. This perceptive ancient Greek thinker said that “the only true wisdom is knowing you know nothing”. As wise as this is, I’ll end with the words of Benjamin Franklin which resonate with me:

“Being ignorant is not so much a shame, as being unwilling to learn.”


Paul J. Thomas
Chief Executive Officer
Ductus Consulting

Big technology companies thrive during coronavirus

Illustration: Trent Joaquin; Sources: Amazon, Apple, Microsoft, Google, Facebook

COVID-19 has wrought economic disruption on a monumental scale. Around the world, businesses of all shapes and sizes have fought a life-and-death battle for survival. Business owners have grappled with forced closures, plummeting revenues and surging losses. Despite drastic measures – including slashing jobs – many businesses have been unable to stay afloat.

The decision by policymakers to induce massive economic suffering to save lives was a brutal trade-off – but not for the technology industry. While other industries were decimated by store, restaurant and office closures, the technology sector powered ahead. Government stay-at-home orders were a godsend for technology companies – demand for their just-a-click-away services skyrocketed.

Specifically, the COVID crisis turbo-charged the profits and share prices of the technology industry’s Titans – Alphabet (Google’s parent company), Amazon, Apple, Facebook and Microsoft. The quintet benefited enormously from a greater reliance on their services during the pandemic as the world moved almost entirely online for work, school and entertainment.

Each of these leading digital powers was able to capitalise on being viewed as an essential service for a public in lockdown with their shares enjoying a jaw-dropping bull run. As noted in a report in The Wall Street Journal, their combined revenue during 2020 surged by a fifth to a mammoth $1.1 trillion while their aggregate market capitalisation soared by half to a staggering $8 trillion.

Thanks to the pandemic, the technology conglomerates now make up five of the six largest companies in the world. During 2020, their stocks soared to dizzying heights and four of the five are now valued at over $1 trillion. According to an analysis conducted in March this year by MacKeeper, these companies are worth more than most countries. MacKeeper noted that:

  • Apple’s gargantuan $2.2 trillion valuation makes it richer than 96 per cent of the world. Only seven countries have annual GDP figures that outrank Apple’s market capitalisation.
  • Microsoft’s colossal $1.8 trillion market cap puts its value on par with the GDP of Canada and makes it richer than many developed economies including Australia. Only nine countries are worth more than the developer of Windows.
  • Amazon’s $1.6 trillion valuation would make it the 14th richest “country”. Its revenue per employee is $351,531 annually, which exceeds the highest GDP per capita in the world.
  • Alphabet, Google’s parent company, is valued at $1.4 trillion, putting it ahead of all but 12 nations.
  • Facebook, while falling short of the trillion-dollar mark, is valued at a respectable $763 billion. (NB: Facebook’s cap passed $1 trillion on 28 June 2021.)

The economic effects of COVID have catapulted the tech Titans to heights that few would have imagined possible prior to the pandemic. They now account for nearly a quarter of the total value of companies in the S&P 500 index – the barometer of corporate America – and that is almost double the percentage of just five years ago. Never before has this level of market influence been seen from one sector.

These behemoths have phenomenal corporate power and have been dubbed the “Frightful Five”, which is why some believe that their wings need to be clipped. Together, they control much of the critical digital infrastructure which underpins global commerce. Over recent years, their digital services have played a greater role in our daily lives.

The imposition of social distancing and travel restrictions during the pandemic dramatically increased our dependence on digital platforms to service our basic needs, including staying connected with family, friends and colleagues. This reliance was tangibly demonstrated in the exponential rise in spending on computers, online retailing, cloud-computing services and digital advertising.

The pandemic has made a clutch of tech firms an even more integral part of work and personal life. Indeed, the coronavirus has created huge tailwinds for these juggernauts by driving behavioural shifts that will long outlive the health crisis. “Digital adoption curves aren’t slowing down – they’re accelerating”, said Microsoft Chief Executive, Satya Nadella.

Big tech is on a roll and their deep pockets will enable them to withstand almost any challenge to their market dominance. Investors have been astonished at their earnings growth and resilience in marching unscathed through the health chaos. In the words of one analyst, “the digital revolution is here to stay, and these businesses are embedded in our lives”.

One journalist who tried to live without the tech heavyweights claimed that it was impossible. Another journalist found that she could reduce but not eliminate the Frightful Five from her life. Despite her goal of wanting to “excise these companies from my life as completely as possible,” she discovered that “these tech giants dominate the Internet in so many invisible ways” that it’s not possible to avoid them.

According to an article in The New York Times, the Big Five’s platforms “are inescapable; you may opt out of one or two of them, but together, they form a gilded mesh blanketing the entire economy”. Many of these digital platforms generate what economists call “network effects” – they keep getting more indispensable as more people use them.
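The network effects described above have a classic formalisation that the article doesn’t spell out: Metcalfe’s law, which holds that a network’s value grows roughly with the square of its user count. A minimal sketch of that heuristic (the constant and the user counts below are illustrative only):

```python
def metcalfe_value(users: int, k: float = 1.0) -> float:
    """Metcalfe's law: a network's value scales with the square of its users."""
    return k * users ** 2

# Doubling the user base quadruples the modelled value, which is why each
# new user makes a dominant platform even harder to avoid.
ratio = metcalfe_value(2_000_000) / metcalfe_value(1_000_000)
print(ratio)  # 4.0
```

Whether the exponent is exactly 2 is debated among economists, but any super-linear growth captures the “gilded mesh” dynamic: a platform’s indispensability compounds as its membership grows.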

Millions of people find it hard to imagine going through a single day without using an Apple iPhone, conducting a Google search, reading a Facebook News Feed, receiving a package from Amazon or launching Microsoft Office. Nonetheless, these same people find it unsettling that a handful of unelected tech executives wield so much power.

Governments around the world also worry about the misuse of monopoly powers and the gobbling up of competitors. Government probes have been conducted into their business practices and have uncovered privacy concerns and data security breaches. Governments from Australia to the US are starting to crack down on big tech companies to rein in their power.

Over recent times, US tech companies have faced increased scrutiny in Washington over their size and power. Republican and Democratic lawmakers have banded together to introduce a package of bills that would change the antitrust laws regulating the conduct of business corporations. The legislation is intended to level the playing field by ensuring that tech companies are held to the same rules as everyone else. As noted in one report:

The new laws would make it easier for the government to break up dominant companies. It could also prevent these companies from snuffing out competition through pre-emptive acquisitions. And it could curtail the tech giants from entering different businesses where they’d be able to use their market power to crush smaller competitors.

In any market, regulators like to see competition as this gives consumers choice. Having one dominant player lessens competition which typically leads to higher prices. But in the case of Google and Facebook, their services are free thereby making concerns over pricing irrelevant. The companies themselves say they are successful because of the quality of their offerings, so why punish success? To quote Bloomberg Technology:

Consumers appear to agree it’s hard to beat Google’s suite of free products or Amazon’s convenience. Their dominance may not be about predatory practices so much as the nature of competition in the digital marketplace, where tech platforms benefit from network effects: As more people use them, the more useful – and dominant – the platforms become.

So, the focus must be on whether there are other harmful political, economic or social effects. Some believe that the tech giants have become more like governments than companies given the staggering amount of money at their disposal and the enormous influence they have over democracy in society. Case in point, Facebook has become a global political force as the largest and most influential entity in the news business.

The digital economy knows no national borders and this is a threat to the jurisdiction of governments around the world. Given this, we will likely see increasing friction between the Big Five companies that rule the tech industry and the governments that rule the lands these companies are invading. The nation-state is fighting not to lose its grip.

Love them or hate them, there’s no escaping the tech superstars. They have become part of the fabric of our lives and will continue to cast a long shadow over the political, economic and social landscapes. They have created an Internet oligopoly which has changed the face of modern capitalism and made them indomitable.

The Big Five have only one question: What pandemic?


Paul J. Thomas
Chief Executive Officer
Ductus Consulting

Why the future may not need us

Credit: BBC Future

Attempting to predict the future is always a roll of the dice. No one can see around corners, but that has not stopped think-tanks and other forecasters from trying to gaze a few years down the road. Every other week, another report or book is released telling us how tomorrow is going to unfold.

Yet the future has too many variables for anyone to say with certainty what will happen. Long-promised gizmos like flying cars, robot maids and personal jetpacks have failed to materialise for the masses. Similarly, bringing cryogenically frozen corpses back to life remains firmly in the realm of science fiction.

History shows that some innovations turn the world upside down while others flop spectacularly. Determining which discoveries will disrupt the status quo is fraught with danger. This is particularly the case with digital technologies which are taking us into uncharted waters.

I’m not a seer when it comes to the future, nor is Yuval Noah Harari. Professor Harari does not claim to know for certain what’s in store for humanity. Nonetheless, his book – Homo Deus: A Brief History of Tomorrow – provides a provocative and fascinating insight into what might lie ahead.

Harari is a professor of history at the Hebrew University of Jerusalem. His preceding book, the global bestseller Sapiens: A Brief History of Humankind, laid out the last 70,000 years of human history. It examined how humanity managed to rein in famine, plague and war.

While Sapiens showed us where we have been, Homo Deus points to where we are going. Harari openly admits that predicting the future isn’t as easy as deconstructing the past. Even so, his future-orientated sequel provides a glimpse of the forces that will shape the 21st century.

Harari’s central claim is that Homo sapiens (Latin for wise man) will become Homo deus (Latin for god man). We are on the cusp of an evolutionary transition in history that may witness the creation of a new species of superhumans – the man-gods of Harari’s title.

Harari is an atheist, so when he uses the word “god”, he is not referring to a supreme deity who is all-powerful (omnipotent), all-knowing (omniscient) and all-present (omnipresent). Rather, he means humans with life spans greatly extended by science and intelligence vastly enhanced by technology.

Harari believes that humans will increasingly focus on god-like pursuits such as chasing enduring happiness (wellbeing) and everlasting life (wellness). “In seeking bliss and immortality,” he writes, “humans are in fact trying to upgrade themselves into gods”.

Achieving this upgrade will take “divine powers” and will happen through new “techno-humanism” technologies such as genetic modification, artificial intelligence and cyborg engineering. We are approaching a crossroads in evolution where machines will become more human-like and humans will allegedly become more machine-like.

Biology and computing are coming together and could theoretically result in a human brain being directly connected to a computer. That “distant possibility”, says Harari, will be reserved for a tiny number of elites – a superclass of humans – given the significant cost of biotechnology.

There will also be a massive “useless class” who will be pushed to one side by intelligent machines which will do jobs better than people can. The list of occupations where people will be “unemployable” includes bus drivers, bartenders, construction labourers, veterinary assistants and telemarketers.

One of the divides between the superclass and useless class will be biological, with the former having superior physical and cognitive capacity and living much longer. A second dividing line between the classes will be artificial intelligence which will give unprecedented power to the few who control the algorithms which run our lives.

To illustrate, at some stage in the future, all vehicles are predicted to be self-driving and one corporation may well control the algorithm that runs the entire transport market. In that scenario, all the economic power previously shared by thousands will be in the hands of a single corporation.

A second example can be found in the operation of the military. Armies – which once consisted of millions of men – are increasingly being dominated by small groups of super-warriors who control technologies like drones and fight cyber wars. The best armies today require a small number of highly professional soldiers using very high-tech equipment.

Across the board, human authority is shifting to algorithms and external data processing systems which, according to Harari, may “know us better than we know ourselves”. Harari envisages that “Dataism” – a universal faith in the power of algorithms – will become sacrosanct.

Our lives will be dominated by non-conscious but highly intelligent algorithms as we are sucked deeper into the online world and turned into faceless data. In this brave new digital world, Dataism will purportedly become our 21st century religion, replacing a homo-centric world with a data-centric world.

Billions of people around the world are regular users of social media and share intimate details of their lives on platforms such as Facebook, YouTube, WhatsApp and Twitter. But all that free browsing and connecting comes at a price – your entire life (which is why I’m not a user, though I do use Google).

Harari warns that when you get something for free, you are the product. He points out that “in the twenty-first century our personal data is probably the most valuable resource most humans still have to offer, and we are giving it to the tech giants in exchange for email services and funny cat videos”.

The dehumanisation of decision-making is being facilitated by everyday citizens. The algorithms that nowadays make automated decisions – which were formerly the exclusive remit of humans – are being fed a diet of data supplied by us. As noted in an online article by two mathematicians:

An algorithm is a digital recipe: a list of rules for achieving an outcome, using a set of ingredients. Usually, for tech companies, that outcome is to make money by convincing us to buy something or keeping us scrolling in order to show us more advertisements. The ingredients used are the data we provide through our actions online – knowingly or otherwise.
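The “digital recipe” idea can be made concrete with a toy sketch. The example below is purely illustrative – the function and data names are invented, and real recommendation systems are vastly more complex – but it shows the basic pattern the mathematicians describe: rules (rank by engagement) applied to ingredients (data from our online actions).

```python
# Toy illustration of an algorithm as a "digital recipe".
# Rules: rank content by past engagement, to keep the user scrolling.
# Ingredients: data the user supplied through their actions online.
def rank_articles(articles, seconds_spent_by_topic):
    """Rank articles by how long the user has lingered on each topic.

    articles: list of (title, topic) pairs
    seconds_spent_by_topic: dict mapping topic -> total seconds viewed
    """
    def score(article):
        _, topic = article
        return seconds_spent_by_topic.get(topic, 0)

    # Most-engaging topics first - the "outcome" the recipe optimises for.
    return sorted(articles, key=score, reverse=True)

feed = rank_articles(
    [("Celebrity diet", "gossip"), ("Policy debate", "politics")],
    {"gossip": 540, "politics": 60},
)
# The gossip piece is served first, because that's where the user lingered.
```

Note that the user never chose this ordering; it was inferred from data they provided, knowingly or otherwise.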

Harari believes that “humankind is poised to replace natural selection with intelligent design, and to extend life from the organic realm into the inorganic”. Still, he is not asserting that the future has to unfold this way. Harari is a great writer and thinker and his book maps the different possibilities that humankind is facing. Nonetheless, he states that our species:

… is likely to upgrade itself step by step, merging with robots and computers in the process, until our descendants will look back and realise that they are no longer the kind of animal that wrote the Bible, built the Great Wall of China and laughed at Charlie Chaplin’s antics.

Frankly, I find Harari’s predictions chilling and hope that they do not come to pass. I don’t want my great-grandchildren to be bio-engineered. I don’t want humans to be turned into flesh and silicon cyborgs. And I don’t want the techno super-rich to live forever by implanting their brains into robots.

My attitude to immortality was outlined in a recent post, The quest for healthy aging and longevity. Assuming I’m right and immortality is never achieved, the unequal availability of life-extending procedures will nonetheless take a toll on society. For Harari this means that:

… we might see the emergence of the most unequal societies that ever existed … economic inequality will be translated into biological inequality.

I hope that the world is not spinning too fast to avoid this grim future.

Before you go…

Last year, I read a book by Roger McNamee called Zucked: Waking Up to the Facebook Catastrophe. McNamee was an early mentor to Mark Zuckerberg and a Facebook investor. Nowadays, he spends his time warning people of the dangers of social media platforms. In an article he wrote, McNamee warns that it’s time to wake up to the dark side of Internet technology if we are to avoid a dystopian nightmare. Well worth a read.

One last thing…

I have long believed that the discourse in the mainstream media about artificial intelligence (AI) is grossly exaggerated. This misrepresentation is reinforced by Hollywood which continues to feed the paranoia about AI with a diet of movies portraying robots as evil machines. Filmmakers, along with the media, know that people are predisposed to fear what they do not understand. I do not believe that robots will become human facsimiles and this article does a good job in explaining why our fears of AI are overblown. Also, well worth a read.


Paul J. Thomas
Chief Executive Officer
Ductus Consulting

Understanding the left-right political spectrum

Source: Slideshare.net

We all understand the difference between up and down. We also know the distinction between north and south. But when it comes to left and right in a political sense, many of us are less clear. What does it really mean to be left-wing? How does this vary from those who lean to the right?

The terms “left-wing” and “right-wing” define opposite ends of the political spectrum, yet there is no firm consensus about their meaning. Over time, these labels have become blurred. Tony Blair once argued that the contrast between the two had melted away into meaninglessness.

The genesis of the political categories “left” and “right” dates back to eighteenth-century France and the French Revolution. Members of the National Assembly were seated according to their political orientation. Supporters of the king sat to the right of the Assembly president with supporters of the revolution to his left.

In line with this historic division, contemporary left-wingers are said to be anti-royalists who favour interventionist and regulated market economic policies. Right-wingers, on the other hand, are said to be monarchists who favour laissez-faire, free market economic policies.

While those on the left support higher taxes on the rich and welfare for the poor, the right favours lower taxes on businesses to help them grow. The left believes in an equal society and big government whereas the right argues that social inequality is unavoidable and that governments should play a limited role in people’s lives.

The Australian Labor Party has traditionally been seen as left-wing (socialist) with historic ties to the union movement. The Liberal Party of Australia has customarily been seen as right-wing (capitalist) with a long-standing pro-business posture. Many see this partisan profiling as outdated in describing Australia’s modern political landscape.

A case in point is the issue of Australia becoming a republic. Based on traditional ideology, you would expect this cause to be championed by the “anti-royalist” Labor Party. Yet the push for a republic has been spearheaded by a member of the “monarchist” Liberal Party.

Past prime minister, Malcolm Turnbull, is a Liberal blue-blood. (NB: Left-wing parties are typically associated with red, the colour of revolution, while right-wing parties are often associated with conservative blue.) He is a former investment banker who – uncharacteristically for a conservative politician – is also a staunch supporter of the Australian Republican Movement. Turnbull co-founded the movement.

In trying to discard the monarchy (via a referendum in 1999), Turnbull was seen to have taken a left-wing stance which caused some right-wing hardliners to label him a turncoat. Still, he is not the only Australian politician to be off course in a strict ideological sense. Former Labor treasurer, Paul Keating, lurched to the right economically.

Keating’s laudable economic reforms included deregulating the financial system, floating the Australian dollar, reducing import tariffs and introducing compulsory superannuation – sound initiatives that were not expected of a Labor treasurer. It’s said tongue-in-cheek that Keating was Australia’s best “Liberal” treasurer and the architect of neo-liberalism in Australia.

Many of Keating’s reforms were based on the 1981 Campbell Inquiry Report into Australia’s financial system. John Howard instigated the inquiry when he was Liberal treasurer under prime minister, Malcolm Fraser. But Howard disappointed his traditional business supporters by implementing only one of Campbell’s 260 recommendations.

Ironically, it was Keating who introduced many of Campbell’s recommendations. He implemented a globalisation agenda which made Australia internationally competitive and opened our economy to the rest of the world. Unsurprisingly, big business embraced Keating – even though the Labor Party and corporate Australia are supposed to be adversaries.

So, how left-wing was Keating? In reality, he moved the Labor Party to the right of centre. The message is clear: while some may argue that ideological creeds are reflected in the policies of each party, this is often not the case.

If the truth be told, political viewpoints along the left-right scale do not fit neatly into one ideological camp. Within each camp, there are factional groups that prioritise some issues over others. So, an individual may identify with left-wing ideals on one issue but consider themselves right-wing on everything else.

Those whose political outlook sits somewhere in the middle of the left-right divide are classified as taking a “centrist” stance. And to complicate things further, those who hold extreme political views belong to either the far-left or the far-right.

People on these outermost poles of the political spectrum often see themselves as aggrieved individuals. They are radicals who are deeply estranged from mainstream political mores. Their degree of alienation from contemporary society can be seen in their extreme ideologies.

Both the far-left and the far-right have a victim-like mentality and employ militant strategies. Their political engagement relies upon force, violation of civil liberties and disdain for democratic ideals and practices. They normalise violence in their attacks on governments, globalisation and social elites.

The far-left includes Islamic terrorists while the far-right boasts white supremacists and neo-Nazis among its ranks. These extremist hate groups engage in violent acts and display many parallels. As they have overlapping tactics and stances, some academics contend that it is misleading to classify the far-left and far-right as opposite poles.

It is suggested that a more realistic classification is provided by the Horseshoe theory. This theory asserts that the political spectrum is not a straight line with opposing ends. Rather, it is a horseshoe with its farthest outliers bending in toward each other and sharing a number of similar beliefs.

To illustrate, supporters of the extreme right and extreme left are more likely to believe in conspiracy theories even when they are contradicted by mainstream science or factual evidence. These theories include the belief that coronavirus vaccines are harmful, climate change is a hoax and the US government planned the 9/11 terrorist attacks.

One area where the far-left and far-right markedly differ is in their interpretation of the past. As noted in the US online newsletter, The Perspective, these interpretations dictate their political stances and calls to action.

The far-right expresses nostalgia for the past and actively works to preserve their history, regardless of what that might mean in today’s context. … Conversely, the far-left … associates the past with its ills – slavery, sexism, and other injustices. History and its institutions are not to be preserved and cherished, but rather, an embarking point from which to begin reform.

History is in the eye of the beholder and so too is populism. There is no agreed definition of populism – it means different things to different people. In political science, populism is seen as an approach that frames politics as a battle between two opposing groups. In his book, Populism: A Very Short Introduction, Cas Mudde labels these antagonistic groups as the “pure people” (ordinary masses) and the “corrupt elite”.

Populism is not sustained by a single political ideology. Rather, it describes a style and approach to politics. Populism can be deployed in the service of almost any ideology – left or right, moderate or extreme. Populists can come from all parts of the political spectrum and they have popped up all over the world.

Think Marine Le Pen in France, Viktor Orbán in Hungary, Rodrigo Duterte in the Philippines, Indian prime minister Narendra Modi and, of course, Donald Trump in the US. All of these populist leaders climbed to prominence by dividing people into good or bad. Populism defines our current political age. In the words of one US journalist:

Once in power, populist leaders represent “a threat to liberal democracy” … (such as) Trump calling the press the “enemy of the people,” criticizing judges, resisting congressional oversight, claiming that elections are “rigged,” flouting laws, and claiming that a “deep state” of bureaucratic actors is out to get him to deny the will of the people he represents. It happens with other populist leaders all over the world.

■      ■      ■

It’s axiomatic that thinking in terms of a left-right spectrum is outdated. While these binary labels may be convenient shorthand descriptors, they are too generic. People hold a range of opinions on social and economic issues and these do not fit neatly into the traditional left-right continuum.

Also, citizens care about the matters that affect them and not the political ideology that supposedly underpins a given issue. Further, humans can hold seemingly contradictory beliefs. All of this makes the political spectrum largely meaningless, but we continue to use it due to laziness. As noted in the online magazine Quillette:

Putting people into one of two ideological boxes is far easier than understanding their unique point of view. Reducing politics to a simple contest between right and left is far easier than reasoning through hundreds of issues. Humans generally prefer simplicity to truth and would rather sign up for a “side” than do the hard work of thinking.

Whether you swing left, lean right or aim dead centre, it’s incumbent on all of us to stay politically informed.


Paul J. Thomas
Chief Executive Officer
Ductus Consulting

The quest for healthy aging and longevity

Credit: World Science Festival

The human species has a natural shelf life due to aging – a debilitating condition that no one can escape. Our biological functions decay over time and the Grim Reaper eventually catches up with all of us. Despite the amazing advances in medicine and the resultant rise in life expectancy, you can’t keep Father Time at bay forever. Still, there are those who believe that humans can significantly extend their expiry date.

It is generally accepted that the apparent limit to human lifespan is about 120 years. The life extending treatments necessary to ward off the ailments that accompany old age and stop people living well beyond 120 years do not currently exist. So, at this stage, we are all doomed to age and die – assuming that some other fatal event does not take us out first.

That said, the good news for those who want to stay young and live longer is that scientists are working to recalibrate the human body clock. It’s already evident that people over 50 aren’t aging as fast or poorly as their parents, and anti-aging research is set to improve this trend. Over time, medical treatments to head off the slow march towards death will become increasingly common.

We know that the duration of human life is influenced by genetics, the environment and lifestyle. Yet the causes of aging are extremely complex and unclear. With the rise in longevity clinical trials, more answers – and questions – are emerging. Scientists are now asking whether our natural genetic makeup is limited to a maximum span of 120 years or whether this boundary can be breached.

The study of longevity is a developing science. Our genes harbor many secrets to a long and healthy life and researchers are trying to find the key within our genome to edit out bad stuff. Genes are akin to little packets of information found in each cell in our body. These packets contain critical instructions which tell our body how to grow and develop.

An online article by Nature Publishing Group explains that:

The action of a single gene can have huge effects on how long a creature lives. This may seem hard to believe because so many things go into determining lifespan, including a host of lifestyle factors and a long list of diseases. Nonetheless, remarkable effects on lifespan are seen when particular genes are deleted from an animal’s genetic sequence. Furthermore, research – particularly that involving microscopic roundworms – continues to provide scientists with tantalizing clues about the molecular pathways involved in aging.

Researchers at the University of Rochester have discovered that one of the keys to longevity resides in a gene called Sirtuin 6 or SIRT6. The Sirtuin family of genes and their proteins play a role in controlling aging by repairing damaged DNA, thereby preserving health and youthfulness. SIRT6 has also been identified as a critical regulator of telomere integrity.

Telomeres are an essential part of human cells that affect how our cells age. Telomeres are caps at the end of each strand of DNA which protect our chromosomes, just like the plastic coating (tips) on the ends of shoelaces. Without tips, shoelaces would become frayed and no longer able to do their job. In the same way, without telomeres, DNA strands become damaged resulting in the inability of cells to fully replicate.

The cells in your body are continually dividing and renewing. With each round of cell division, telomeres become shorter. Eventually, our telomeres become so short that the genes they protect could be damaged, so the cells stop dividing and self-destruct. This programmed cell death (called apoptosis) contributes to aging.
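The countdown from cell division to apoptosis can be sketched as a toy model. The figures below are illustrative placeholders, not measured values – actual telomere lengths and attrition rates vary widely by cell type – but the logic mirrors the mechanism described above: each division trims the telomere, and once it falls below a critical length the cell stops dividing.

```python
# Toy model of telomere shortening (illustrative numbers only).
# Each cell division trims the telomere cap; once further division
# would push it below a critical length, the cell stops dividing
# and programmed cell death (apoptosis) follows.
def divisions_until_apoptosis(telomere_length, loss_per_division, critical_length):
    divisions = 0
    while telomere_length - loss_per_division >= critical_length:
        telomere_length -= loss_per_division
        divisions += 1
    return divisions

# With made-up figures - a 10,000 base-pair telomere losing 100 bp per
# division, and a 5,000 bp critical length - the cell manages 50 divisions.
print(divisions_until_apoptosis(10_000, 100, 5_000))
```

Under this simple model, either lengthening telomeres or slowing the loss per division extends the number of divisions – which is precisely the lever the SIRT6 research hopes to pull.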

Apoptosis is “cell suicide” and scientists believe that reducing the rate of telomere shortening could slow the body’s cellular clock. Research shows that longer telomeres are associated with a longer lifespan while shorter telomeres are connected with the ailments of aging: heart disease, cancer and osteoporosis. If scientists can preserve or elongate telomeres, humanity will be one step closer to a genetic Fountain of Youth.

While most scientists are purely trying to extend life, some researchers are brazenly focussed on helping people dodge death altogether, by turning science fiction into science fact. To many, the quest for immortality seems nonsensical. Even so, Ray Kurzweil, Google’s Director of Engineering, preaches that “immortality is within our grasp”.

Google co-founder, Larry Page, has invested $1.5 billion of Google’s money into an R&D project called Calico, which aims to “cure death”. Calico is short for the California Life Company and the mission of this Google-backed biotech firm is to “harness advanced technologies to increase our understanding of the biology that controls lifespan”. For the masters of the universe at Google, death is the ultimate engineering problem to be solved. The company believes that it will eventually be successful at hacking the code of life.

Aging and death are existential certainties, which is why many consider Google’s anti-death project to be unbridled hubris. While the search for immortality is an epic goal, few believe that Google will solve “the problem” of death. A more likely outcome is that Calico will find ways to dramatically extend human life. That, in itself, raises a series of socio-economic questions regarding overpopulation, class divisions and the affordability of anti-aging technologies.

What seems forgotten by Calico’s backers in their conquest of death is that mortality has value, which is why immortality is not an easy sell. Research reveals that most people don’t like the idea of living forever. As outlined in a 2017 Smithsonian Magazine article:

… a large percentage of today’s population also subscribes to religious beliefs in which the afterlife is something to be welcomed. When the Pew Research Center asked Americans in 2013 whether they would use technologies that allowed them to live to 120 or beyond, 56 percent said no. Two-thirds of respondents believed that radically longer lifespans would strain natural resources, and that these treatments would only ever be available to the wealthy.

Google has resourced its science start-up with some serious intellectual firepower and these scientists and researchers are working behind the scenes to challenge the inevitability of death. The proponents of a “transhumanist” movement called the Church of Perpetual Life support Google’s initiative. They believe that technology could one day see our consciousness digitised into computers, turning us from biological humans into robots – the so-called singularity. (Some fear that this will see artificial intelligence morph into a modern-day Frankenstein’s monster!)

For Silicon Valley’s death-cheating initiative to be successful, it must find a way to defy the laws of thermodynamics so that an individual can live forever. The key obstacle to immortality is the second law of thermodynamics, which states that disorder (entropy) in a closed system always increases. The cells in our bodies follow that rule and eventually deteriorate – an inescapable and irreversible slide into disorder and, finally, death.

Turnover is a basic characteristic of life. Ergo, the search for the elixir of everlasting life runs against the natural order of things as all living organisms ultimately die. Humans are not biologically immortal and no amount of R&D money can alter that fact. We will all eventually kick the bucket as immortality – the Holy Grail of biological sciences – is a fairy tale. Even if it were possible, I think that a never-ending life would be a fate worse than death.

The death of death has been grossly exaggerated.


Paul J. Thomas
Chief Executive Officer
Ductus Consulting

Why nationalism is a threat to globalisation

Credit: David Parkins

The postponed 2020 Olympic Games – which are now scheduled to kick off in Tokyo in July – will provide an international stage for countries to showcase their elite athletes. Spectators the world over will cheer for their nation’s sportsmen and women as they vie for Olympic gold. The fierce but friendly competition will fuel national pride and expose a positive side of nationalism – the celebration of the sporting success of those representing one’s homeland.

Beyond the Olympic arena, however, chest-thumping nationalistic patriotism has a dark side. It’s understandable that every country tries to instil a national consciousness among its own citizens. But when that patriotism morphs into a sense of superiority over other countries, it leads to a combative us-against-them mindset which is a poisonous ideology. In many parts of the world, patriotism has turned toxic.

Populist politicians have been selling nationalism as patriotism by promoting blind loyalty to one’s country to the detriment of global connectivity. Claiming to speak for “the people”, populists like Donald Trump have appealed to the anger and discontent of voters, tapping into their fears about jobs, race and immigration. In the West, many people feel left behind by technological change, growing inequality and the global economy.

Events over recent years show that there has been a nationalist backlash to globalisation. The UK’s decision to leave the European Union, Donald Trump’s win in the 2016 US presidential election and the growing momentum of right-wing parties in France, Austria and Germany all attest to this. In an increasing number of countries, the radical right – a group of extremist parties united by their hatred of immigrants – has surged in popularity.

These populist parties and their followers have been variously described as racist, xenophobic, anti-Islamic and anti-refugee. Parties of the far-right focus on tradition – real or imagined – and play on a nostalgia which yearns for simpler times. They want to turn back the clock to when national cultures were not influenced by immigration (and globalisation) and jobs were the preserve of native-born citizens.

This delusional hankering for the “good old days” was epitomised in Donald Trump’s right-wing rallying cry to “Make America Great Again”. Trump mistakenly believed that this greatness would be achieved by closing borders, curtailing trade and building a wall to keep out Mexicans. We should not forget that it was this kind of old-fashioned nationalism which helped fuel two world wars.

Following their defeat in World War I, the Germans felt humiliated and this enabled Hitler to exploit people’s feelings of resentment towards the ruling elite. Hitler also promised to make Germany great again. The parallels between how Trump and Hitler came to power are instructive. The rhetoric of both men was dangerously populist in nature. Not surprisingly, historians have been comparing Trumpism to fascism. One writer recently opined that:

It hardly takes a genius to see the similarities. Hitler promised to return Germany to her former glory by weeding out the traitorous politicians who had cost her the war. Trump promised to “Make America Great Again” by “draining the swamp”. Hitler blamed Germany’s problems on the Jews. Trump blamed Mexican “rapists and criminals”. Hitler’s supporters chanted slogans like “Im Felde Unbesiegt” (Undefeated on the Battlefield), Trump’s supporters had theirs too: “Build the wall”, “Lock Her Up”, and of course, his latest: “Stop the steal”.

As part of rebuilding the world after World War II, the Liberal International Order was created. Liberalism is an international (as distinct from national) worldview that opposes isolation and protectionism. The liberal vision looks for collective solutions to global problems by working co-operatively with the help of international institutions and alliances to make the world a better place. Nationalists, in contrast, want a more homogenous society and tighter controls by governments over territories and borders.

The mantra of nationalist politicians – “country first” – fuels calls to build fences and erect trade barriers. Yet since 1950, the burgeoning growth in international trade has helped make the world a more peaceful place. Free trade raises the cost of war by making nations more economically interdependent. The more people rely on trade with others, the greater the cost to all parties of a conflict.

One of the hallmarks of liberal internationalism is rule-based relations which are enshrined in institutions such as the United Nations. Under nationalism, however, we would see a more contested and fragmented system of economic blocs and regional rivalries. The desire to increase sovereign control invariably results in isolationist policies, particularly with regard to immigration.

In his final address to the UN General Assembly on 20 September, 2016 Barack Obama delivered a stinging rebuke to those who would build walls saying: “A nation ringed by walls would only imprison itself”. In the same speech, he defended liberal globalisation arguing that open markets, capitalism and democracy should remain the guiding forces of the international order.

… I believe that at this moment we all face a choice. We can choose to press forward with a better model of cooperation and integration. Or we can retreat into a world sharply divided, and ultimately in conflict, along age-old lines of nation and tribe and race and religion. I want to suggest to you today that we must go forward, and not backward. I believe that as imperfect as they are, the principles of open markets and accountable governance, of democracy and human rights and international law that we have forged remain the firmest foundation for human progress in this century.

It is paradoxical that the growing calls for a less open world would actually hurt the poor most of all. Since the end of World War II, free trade has lifted millions out of extreme poverty. It is irrefutable that globalisation has been good for the global poor. This point was also made by President Obama.

The integration of our global economy has made life better for billions of men, women and children. Over the last 25 years, the number of people living in extreme poverty has been cut from nearly 40 per cent of humanity to under 10 per cent. That’s unprecedented. And it’s not an abstraction. It means children have enough to eat; mothers don’t die in childbirth.

President Obama went on to say that “our international order has been so successful that we take it as a given that great powers no longer fight world wars; that the end of the Cold War lifted the shadow of nuclear Armageddon; that the battlefields of Europe have been replaced by peaceful union”.

Populist politicians are undermining liberal internationalism and this poses a threat to peace and prosperity. Less international co-operation will lead to increased distrust between nation-states and may even give rise to conflict. Nativism, with its beggar-thy-neighbour policies, is a backward and dangerous step for the world. In the words of the old adage, it really is a case of “united we stand, divided we fall”.

The rise of the new radical right reflects a deep social and economic malaise affecting an increasing number of nations. The past decades have ushered in an unprecedented level of socio-economic change and voters are expressing their dissatisfaction at the ballot box. Only time will tell how long this anger and resentment lasts. What is clear is that the rhetoric of the far-right has struck a chord with a critical mass of voters.

The world’s political landscape has been transformed by a nationalist movement which has gone global.


Paul J. Thomas
Chief Executive Officer
Ductus Consulting

Don’t be fooled by self-appointed COVID authorities

Source: UNSW Centre for Integrated Systems for Epidemic Response

Throughout history, fake experts have suddenly appeared during times of crisis. They emerge from obscurity, stand on their various soapboxes and proliferate misinformation. Such falsehoods create a climate of fear, which is fuelled by those eager to put in their two cents worth.

Periods of great uncertainty always provide a fertile breeding ground for the spread of mistruths. The current COVID outbreak is no exception and has thrust previously obscure individuals into the pandemic limelight. People claiming to be health experts have popped up everywhere as talking heads in the media.

In response, the World Health Organisation (WHO) issued a statement in February warning that humanity is not just fighting a viral pandemic but also an “infodemic”. Like the virus, the infodemic has proven to be highly contagious and has been transmitted by mainstream and social media.

Public nervousness and the desperate search for cures has made it impossible to completely immunise a gullible public against fabricated stories. In the words of the WHO boss, we are “battling the trolls and conspiracy theorists that push misinformation and undermine the outbreak response”.

We should not heed the barrage of half-baked COVID health advice from Twitter, Facebook or deranged politicians like former President Trump. Yet millions have listened to their quack remedies and pseudo-scientific explanations. While some of these cures seem legitimate, most are patently wrong.

The WHO’s mythbusters site pours cold water on a raft of dodgy health tips that allegedly prevent or cure COVID-19. These include eating garlic, drinking bleach, snorting cocaine, rinsing the nose with saline, gargling with salt water and spraying alcohol or chlorine all over your body.

The Australian Government also has a mythbusting site and it debunks a number of COVID-19 myths including that hot temperatures kill the virus, 5G networks spread the virus, drinking water every 15 minutes prevents infection and hydroxychloroquine is an effective treatment.

Around the world, mass media coverage of the pandemic has contributed significantly to the COVID-19 infodemic. The mainstream media’s penchant for sensationalism has resulted in inaccurate news dissemination including the reporting of unscientific cures and unverified medicines. As noted in The Harvard Gazette:

At many major news outlets, reporters and editors with no medical or public health training were reassigned to cover the unfolding pandemic and are scrambling to get up to speed with complex scientific terminology, methodologies, and research, and then identify, as well as vet, a roster of credible sources.

The media’s failure to correctly identify qualified and trustworthy sources of information about COVID is a case of history repeating itself. From major incidents like terrorist attacks to routine events such as interest rate hikes, the media’s modus operandi is to call upon supposed “authorities” to act as instant experts and explain what has happened and why.

But these so-called pundits are often no more than self-proclaimed gurus. Indeed, they typically know little more than the rest of us. Even so, put them in front of a camera, and these publicity seekers can’t resist asserting their opinions on subjects in which they have little or no formal training or expertise.

The Y2K computer bug is a classic example. While technology legend Bill Gates saw the millennium bug as a “minor inconvenience”, less qualified IT commentators promulgated doomsday scenarios, aided in their deception by a media that spun compelling but inaccurate stories.

A naïve public bought into the outrageous predictions about planes falling from the sky and missiles self-launching. In the end, the bug did not bite and the New Year passed with nothing more than the expected hangover. Those who foretold a global computer apocalypse caused unnecessary panic but were never brought to account.

Nothing had changed by the time of the Fukushima power plant disaster in 2011. Yet again, the media wheeled out instant experts who hyperventilated over the very modest amounts of radioactive fallout. While fears about radiation contamination were clearly overblown, they made for dramatic headlines which trumpeted the dangers of nuclear energy.

A report released five years after the disaster by the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) found that not one person had died because of the meltdown. Referencing the UNSCEAR Report, a Forbes magazine article stated:

No one will die from Fukushima radiation, there will be no increased cancer rates, the food supply is not contaminated, the ocean nearby is not contaminated, most of the people can move back into their homes, and most of the other nuclear plants in Japan can start up just fine.

Almost three years to the day after Fukushima, the world was gripped by the mysterious disappearance of a Malaysia Airlines Boeing 777. The aircraft vanished without a trace, bringing a gaggle of know-it-alls out of the woodwork. They went into overdrive speculating about what may have happened to the plane.

Many of their theories were not supported by a shred of solid evidence. Nonetheless, their views were given air time by media outlets. This helped networks maintain rolling coverage of the tragedy and filled the huge gap in reliable information about the plane’s fate.

Suggestions from armchair sleuths, aviation experts and conspiracy theorists were broadcast. Fringe theories flourished and ranged from the sinister (electronic warfare), to the far-fetched (remote island landing) to the insane (abducted by aliens).

Clearly, listening to near-experts is a fool’s errand, which is why the media must do a better job of identifying opportunists who simply want 15 minutes of fame. Around the world, television, radio and print interviews have contributed to new-found notability for charlatans who were not properly vetted prior to being unleashed on an unsuspecting public.

The coronavirus has shown, once again, how easy it is for someone to claim to be a subject matter expert. And if the “expert” is deemed to be camera-ready, there is always the temptation by the media to forgo a credentials check. Even so, background checking should never be optional, even when working to a tight deadline.

Fact-checking the experience of an “expert” may seem like a tedious extra step to a journalist, a reporter or a broadcast producer – but it’s essential. The media is quick to criticise politicians and CEOs – who also work to tight deadlines – when they get facts wrong, so the same standards should apply to news outlets.

Please allow me to end with an observation. The media does a great job in holding others to account for their failings and shortcomings and is quick to throw stones. Despite that, the media reacts negatively to feedback about its own performance and is poor at self-examination and reflection.

“Journalists and media professionals automatically take up defensive positions when confronted with criticism,” notes Julie Reid, Associate Professor in the Department of Communication Science at the University of South Africa.

In an article published in The Conversation, A/Professor Reid acknowledges that, in many countries, political and government interference in the editorial independence of news outlets is still prevalent. This causes journalists and media professionals to feel that they are under attack. This, in turn, gives rise to a siege mentality which is reflected in the news media’s reluctance to embrace genuine critique or evidence-based scrutiny of its performance. She writes:

The rantings of a crooked politician who dismisses the news media’s reportage as fake news and calls for draconian media regulations to conceal his own corruption is one thing. The critique and criticisms of media analysts, but more especially of ordinary citizens, whose only request is that the news media works better for them, is an entirely different matter. And ought to be respected.

I’m an ordinary citizen who merely seeks better accuracy in news reporting. Like all citizens, I have the right to hold the media’s feet to the fire over its reporting of the pandemic. On all continents, mainstream media outlets have aided and abetted charlatans in spreading bogus COVID information which has circled the planet in seconds.

Media professionals seeking advice on best practice in responsible journalism during a health crisis would benefit from reading an article by Catriona Bonfiglioli. Ms Bonfiglioli is a senior lecturer in media studies at the University of Technology, Sydney. In a 2020 piece she wrote for The Journalism Education and Research Association of Australia she stated that when journalists report on the coronavirus, it is important that their words:

… help people understand best prevention tips, minimise stigmatisation of people with COVID-19, reject fake health news, and resist the allure of “sexy” controversies and contrarians hitching a ride on the news wave by contradicting public health advice or calling for extreme measures.

Responsible journalism IS possible during times of crisis.


Paul J. Thomas
Chief Executive Officer
Ductus Consulting

How the quiet many are drowned out by the outspoken few

Credit: Silent majority illustration by Greg Groesch/The Washington Times

It’s true – I’m a member of the “Silent Majority Party”. My fellow members and I never demonstrate, wave placards, stage sit-ins or stir up trouble. We are a quiet bunch with no media spokesperson. Although many of us are discontented with what is happening in society, we keep our opinions out of the public arena and quietly get on with our lives.

In my particular case, I express my views on contemporary issues via this blog, but that’s where I draw the line. Like my fellow Silent Majority Party brethren, I leave the shouting, heckling and disruption to the vocal minority. Their manipulative antics invariably capture the attention of the media, which provides prime-time coverage of their opinions – no matter how radical.

Non-peaceful protests, rallies and marches tend to attract more media attention than peaceful demonstrations. “A little violence goes a long way,” proclaims US political journalism company, Politico, because “the press loves the sound of breaking glass, police-car sirens and tear-gas grenades”. Such activities certainly trump inaudible forms of protest like letter writing.

Political activists take to the streets over a diverse range of issues, grievances and concerns. In addition to protest activities, organised interest groups lobby politicians, mobilise grass-roots action and orchestrate media campaigns to get their message across. Swaying public opinion and influencing policy outcomes is the name of the game.

A US study claims that small groups which reach a critical mass of 25 per cent can overturn established norms. Decades of work in sociology, physics, and other disciplines have supported this idea. As noted in a newspaper article:

Small groups of people can indeed flip firmly established social conventions, as long as they reach a certain critical mass. When that happens, what was once acceptable can quickly become unacceptable, and vice versa. Two decades ago, most Americans opposed gay marriage, bans on public smoking and the legalization of marijuana; now, these issues all enjoy majority support.
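
The tipping-point dynamic can be illustrated with a toy threshold model – a simple construction of my own, not the study’s actual design. Flexible agents adopt a new norm once enough of the population already holds it, while a committed minority never wavers. The thresholds below are hypothetical and chosen so the tipping point sits at 25 per cent:

```python
def adoption_cascade(committed, thresholds, rounds=100):
    """Iterate a simple Granovetter-style threshold model.

    committed: fraction of the population permanently holding the new norm.
    thresholds: adoption thresholds of the remaining, flexible agents.
    Returns the final fraction of the population holding the new norm.
    """
    f = committed
    for _ in range(rounds):
        # A flexible agent adopts once current support reaches its threshold.
        flexible_adopters = sum(1 for t in thresholds if t <= f) / len(thresholds)
        f = committed + (1 - committed) * flexible_adopters
    return f

# Flexible agents need to see 25-75 per cent support before they switch.
thresholds = [0.25 + 0.5 * i / 99 for i in range(100)]

print(round(adoption_cascade(0.20, thresholds), 2))  # 0.2 - a 20% minority stalls
print(round(adoption_cascade(0.30, thresholds), 2))  # 1.0 - a 30% minority flips everyone
```

A committed minority just below the critical mass changes nothing; just above it, the entire population tips – which mirrors the study’s finding that what was once acceptable can quickly become unacceptable.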

While I’m a passionate advocate for freedom of expression and would never stop anyone from exercising their democratic right to protest, the reality is that moderate voices are stifled by the clamour of minority interest groups and individuals. As most citizens do not shout from the rooftops or force their beliefs and politics onto their fellow citizens, they are largely invisible.

According to the NSW Bar Association, freedom of speech is not an unfettered right to do and say what we want. “It is a personal right which, in any civilised society, carries with it, the corresponding duty to consider the rights of others. Freedom of speech is therefore a qualified right, not an absolute right, in accordance with international human rights law,” wrote the association.

Research shows that being confident and loud is one way to win an argument – even if you are wrong. Shout louder than anyone else and people will assume you’re right. In the world of politics, bolshie behaviour has been seen as the way to get ahead. Donald Trump shouted louder and more outrageously than any other politician and this enabled him to dominate the news.

Nigel Farage – who led the former pro-Brexit UK Independence Party (UKIP) – is also a loud, rabble-rousing politician. His bombastic style convinced many Britons that breaking away from the European Union would be a good thing. He cleverly harnessed the power of voter discontent and exploited the populace’s deepest fears about immigration.

Farage used xenophobic language to spruik a racist message, which had his misguided followers chanting “we want our country back”. The UK is now suffering post-Brexit regret over its disastrous decision to leave the EU. In the words of former UK PM, Gordon Brown, Farage “hijacked patriotism” by manufacturing distrust and disunity.

I’ve long observed that loud and aggressive people tend to get their way – they won’t take “no” for an answer. They’re the ones who will not accept that the doctor is booked until next week, but argue their way into the surgery that same day. They’re the ones who become irked at airline cabin baggage restrictions and hog the overhead bin after airline staff relent in the interests of on-time departure.

These “entitled” individuals believe that the rules don’t apply to them and that they deserve preferential treatment. Meanwhile, the rest of us graciously accept that the doctor can’t see us today and that we need to stay within baggage allowance limits. So, should we all stomp our feet every time we are upset when things don’t go our way? I think not.

We should not go through life being hijacked by our anger. To lose your temper and yell is not a constructive way to deal with a difficult situation – it’s also damaging to relationships. Being calm and quiet, on the other hand, is not a bad thing. The world is full of quiet achievers. The best performing staff aren’t necessarily the most vocal. Nor are the most valuable customers necessarily the loudest ones.

In the same way, quieter citizens are not necessarily apathetic. Rather, they dislike the politics of confrontation and prefer to cast an informed vote at each election. Politics based on “he-who-shouts-loudest” often comes unstuck where it matters most – in the privacy of the polling booth.

Democracy gives each of us an equal say because of the principle of one-person, one-vote. Each person who casts a vote is equal to every other voter – no matter how much noise an individual may make. I care deeply about our nation and am an avid follower of the political system. This enables me to cast an informed vote for the party with the policies that I believe will serve our nation best.

In fairness, I must acknowledge that the electoral process is far from perfect. We like to believe that voters evaluate the evidence put in front of them over the course of a campaign and then make an informed decision at the ballot box. This, however, is fantasy as research shows that the average voter is surprisingly unsophisticated. Most citizens don’t make their voting decisions based on policy questions. Voters are poorly informed and make irrational decisions.

On the plus side, Australia’s compulsory voting system saved us from Trumpism. As pointed out in an article in the Australian edition of The Guardian, Donald Trump was elected with only a quarter of eligible voters supporting him, and just 37 per cent of eligible Britons voted to leave the European Union. In 2015, (then) US president Barack Obama praised Australia’s system, saying it would be “transformative” if everyone voted in the United States.

Notwithstanding my personal preference to voice my concerns in the privacy of a polling booth rather than publicly on the street, I accept wholeheartedly that political activism is part and parcel of a free and open society. Let those who wish to demonstrate without violence, do so.

But let’s not criticise those who choose a less vocal way of expressing their views.


Paul J. Thomas
Chief Executive Officer
Ductus Consulting

Why it’s important to understand economics

Credit: tradeselecter.com

Economics touches every part of our lives. We encounter it as workers, parents, citizens, savers, investors and borrowers. It’s at the centre of public debate on everything from education to immigration to the arts. Yet most of us struggle with basic economic concepts such as bond yields, trade deficits and GDP growth.

Turn on the TV or radio and you’ll be bombarded with economic data that screams for your attention. We are surrounded daily by talk of falling interest rates, rising oil prices, ballooning national debt and seesawing exchange rates. This statistical information helps explain the world in which we live.

Even so, most of us have not read an economics textbook, which is why the discipline remains shrouded in mystery. The technical language and mathematical models of economists are not well understood by the populace. Surveys confirm that most people have a poor grasp of economics.

This deficit in economic understanding hinders us in making informed judgments about the accuracy of economic statements that are made. Many people do not have the economic literacy necessary to critically analyse the economic headlines which dominate the 24/7 news cycle.

One of the primary activities of modern governments is to determine economic policies. The media play a central role in informing the public about these policies and explaining them in clear terms. Together, politicians and the media shape public beliefs and attitudes about economic matters.

Public perceptions about the economy have important political consequences. Governments are largely re-elected or rejected based on economic perceptions. Yet on many economic issues, the gap between public perceptions and economic reality is very wide.

This is not entirely the fault of voters who are constantly exposed to plausible-sounding economic misconceptions. These fallacies include many beliefs widely disseminated in the media and by politicians. As noted by American economist Thomas Sowell in the preface to his book, Economic Facts and Fallacies:

Some things are believed because they are demonstrably true. But many other things are believed simply because they have been asserted repeatedly – and repetition has been accepted as a substitute for evidence.

Fallacies abound in economics, affecting everything from domestic housing to international trade. Fallacies also have staying power – even in the face of irrefutable evidence against them. Economists could spend an inordinate amount of their time debunking the scores of economic fallacies.

In Fifty Economic Fallacies Exposed, Geoffrey Wood – Professor Emeritus of Economics – examines a range of popular economic misconceptions and explains how these mistaken beliefs misinform economic discussion. Among other things, he looks at the supposed dangers of international trade, the alleged ability of governments to control the economy and the purported benefits to consumers of regulation.

People embrace economic fallacies due to a phenomenon called the fallacy of composition. This fallacy infers that what is true for an individual is also true for a whole group. A classic non-economic example is that of a person who stands up at a concert so that he/she can see better. But if everyone stands, the view of many spectators will worsen. So, what is true for one individual in the crowd, is not true for the whole stadium.

Thinking about economic issues using the same flawed logic can also lead to incorrect conclusions. For example, our individual experience as a worker is a poor guide to the workings of an economy as a whole. Despite this, those who lose their jobs due to automation typically surmise that technology is a threat to all workers. However, this is an erroneous assumption as new technology does not result in higher overall unemployment.

The misconception that new technology destroys jobs is referred to as the Luddite fallacy. In the early 19th century, English textile workers and weavers protested against the changes ushered in by the Industrial Revolution. These “Luddites” smashed mechanised knitting machines as they believed the new labour-saving devices would steal their jobs.

Nonetheless, by the end of the 19th century, there were – according to economist and author James Bessen – four times as many factory weavers as there had been in 1830. Automation reduced labour costs for factory owners. This, in turn, enabled the price of garments to be lowered. This, in turn again, increased product demand leading to the need for more workers. 
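
The chain of reasoning here – cheaper production leading to lower prices, higher demand and ultimately more workers – can be checked with back-of-the-envelope arithmetic. The numbers below are purely illustrative (they are not Bessen’s data) and assume demand is price-elastic:

```python
# Constant-elasticity demand: Q = k * P**elasticity (all numbers hypothetical).
elasticity = -2.0           # price-elastic: a 1% price cut lifts demand ~2%
p0, q0 = 10.0, 100.0        # pre-automation price and quantity
hours_per_unit0 = 2.0       # labour needed per garment before automation
k = q0 / p0 ** elasticity   # calibrate the demand curve to the starting point

p1 = 6.0                    # automation lets producers cut the price...
hours_per_unit1 = 1.0       # ...because each unit now needs half the labour

q1 = k * p1 ** elasticity   # quantity demanded at the lower price

print(hours_per_unit0 * q0)         # 200.0 labour hours before automation
print(round(hours_per_unit1 * q1))  # 278 labour hours after - more, not fewer
```

Halving the labour per unit would halve total hours if demand stayed put, but the price cut lifts quantity demanded so much that total labour demand rises – the weavers’ story in miniature.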

Automation allows workers to deliver better, faster, and cheaper services and that’s good for growth and therefore good for the economy. What’s also undoubtedly good for the economy is immigration – but not according to the lump of labour fallacy.

This fallacy is premised on the mistaken belief that the amount of work available in an economy is fixed, so no one can get a job without taking one from someone else. Unsurprisingly, this fallacy is used to prosecute the entrenched myth that migrants steal jobs from native-born workers. It’s an understandable supposition, but it’s incorrect.

Immigrants who gain work also gain income to spend, creating new jobs. Immigration, therefore, increases the demand for labour and stimulates employment. Commenting on the positive impact that immigrant workers have on the US economy, The New York Times Magazine explained that imported workers:

… use the wages they earn to rent apartments, eat food, get haircuts, buy cell phones. That means there are more jobs building apartments, selling food, giving haircuts and dispatching the trucks that move those phones. Immigrants increase the size of the overall population, which means they increase the size of the economy. Logically, if immigrants were “stealing” jobs, so would every young person leaving school and entering the job market; countries should become poorer as they get larger. In reality, of course, the opposite happens.

The conventional wisdom which says that immigrants take jobs and lower wages is simply wrong. In reality, immigrants create jobs and make native workers more prosperous. As world renowned economist and Nobel Laureate, Paul Krugman, commented “… the (lump of labour) fallacy makes a comeback whenever the economy is sluggish”.

One final economic myth – which has been perpetuated for decades – is the household fallacy. A government is not a household and therefore does not need to manage its finances like a household. The household fallacy disguises this truth by falsely claiming that what is true for an individual household regarding debt is similarly true for a government.

The household analogy is simple: Governments need to live within their means (like households) by not spending more than they receive or risk going broke. In reality, governments do not always need to have a balanced budget. In fact, they can run prudent annual deficits indefinitely, as many countries do. Britain has maintained a national debt for more than 300 years. Going back to 1776, the US has been in continuous debt except for seven short periods.
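
The arithmetic behind running prudent deficits indefinitely can be sketched in a few lines. The interest and growth rates below are illustrative figures of my own choosing; the point is the standard debt-dynamics result that when nominal GDP growth exceeds the interest rate on debt, a permanent primary deficit leaves the debt-to-GDP ratio stable rather than exploding:

```python
# Debt-to-GDP dynamics under a permanent primary deficit (illustrative rates).
r = 0.02         # nominal interest rate paid on government debt
g = 0.04         # nominal GDP growth rate
deficit = 0.01   # primary deficit each year, as a share of GDP
b = 0.30         # starting debt-to-GDP ratio

for _ in range(1000):
    # Old debt accrues interest, GDP grows, and the deficit adds new debt.
    b = b * (1 + r) / (1 + g) + deficit

print(round(b, 3))  # 0.52 - the ratio settles at deficit*(1+g)/(g-r)
```

With these numbers the ratio converges to 52 per cent of GDP and stays there, deficit after deficit – which is why a government running modest deficits need not “go broke” the way an overspending household would.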

Balancing the national budget sounds appealing and promising to get it back in the black resonates with many voters. Still, policymakers should avoid playing populist politics by trying to imitate family budgets. Fiscal austerity is commendable at a household level but can equate to economic irresponsibility at a sovereign level. As noted in a UK media article:

The familiar logic of the household analogy has become so embedded into public life that spending proposals that would help tackle some of our most pressing challenges – climate change, the housing crisis, unsustainable household debt – can barely make it out of the door. All too often, such proposals are stopped in their tracks by rival politicians and the media asking where the money is going to come from.

[Note: A fuller explanation of the benefits of government debt can be found in my recent post – Modern Monetary Theory.]

It can be seen that economics is replete with fallacies and has a bad reputation for its lack of precision and certainty. As economics is not a natural science – like physics, chemistry or biology – its propositions are rarely absolutely true or false. Broadly speaking, economics is the study of human behaviour as it relates to money and we humans are neither rational nor predictable.

Even so, it is incumbent on all of us to better educate ourselves in the workings of the economy. What currently passes for a conversation among the electorate about economic issues is often amateurish. So, learning about economics often starts by unlearning what you thought you knew.

We are ignorant of our own ignorance when it comes to economics.


Paul J. Thomas
Chief Executive Officer
Ductus Consulting