Australian Border Force Tackles Today's VUCA Challenges

Hello everyone, it's an honour to speak at the inaugural Leaders in Crisis Management Forum today. It's a topic which has punctuated my 44-year career in public service, which will end later this year.

The 19th-century French literary maestro Anatole France said that "to accomplish great things, we must not only act, but also dream; not only plan, but also believe".

This is such an important forum for the next generation of dreamers, planners and believers, because we are at the beginning of the most profound, exciting, maybe even scary, technological transformation we may ever see.

Since I walked through the gates of the London Metropolitan Police college as a teenager, I've seen a lot and done a lot. Most importantly, I've always thought a lot about the future - and today I'm going to talk about the future of crisis leadership.

Firstly, I'd like to thank our friends at the Home Team Academy and its chief executive Anwar Abdullah for the very kind invitation to attend the Milipol Asia-Pacific/TechX Summit.

Yesterday I was on the panel that talked about Securing Borders, which included my friend Marvin Sim, the Commissioner of Singapore's Immigration and Checkpoints Authority.

I really admire the collaboration between the Home Team agencies. It's a fantastic example of a team that is truly greater than the sum of its parts.

I'll set the scene by briefly telling you about some of the major crises that I've been involved in.

My first was as a young constable, when I had to police the civil unrest during the UK miners' strikes in the mid-1980s. This would turn out to be a cataclysmic fight about industrial relations, the future of the coal industry and of the dozens of mining communities that were directly affected.

In 1992 I was an Anti-Terrorist Branch detective at New Scotland Yard and was among the first responders at the Baltic Exchange bombing in London's financial centre, the day after a General Election.

I arrived only a few minutes after the detonation and was staggered by what I saw: the huge crater, massive buildings reduced to rubble, the dust everywhere. There was an acrid smell, and it was eerily silent.

The Provisional IRA destroyed the iconic Baltic Exchange and ripped a hole in London's financial centre. This led to the creation of a security zone known as the "Ring of Steel", which forever changed the face of the City of London.

Later in the '90s I had to tackle a crisis of a different nature - endemic corruption within the Met Police itself. Some crises burn slowly or fester under the surface until their elements converge and burst out like a perfect storm, wreaking reputational havoc.

In 2000 I was sent to work with the New South Wales Police Service through an exchange program. My now wife and I really liked Australia, a lot, and in 2002 I commenced my career in Australian law enforcement.

In 2014, as an Assistant Commissioner, I led the Australian Federal Police response to the downing of Malaysia Airlines Flight 17, or MH17, in Ukraine. 298 people were murdered including 27 Australian children, women and men.

Our mission was emotionally, operationally and diplomatically tough: get to the crash site with our Dutch partners, safely recover the bodies of the victims and collect evidence.

Each day my officers had to venture into a literal battlefield - the Ukrainian army and Russian-backed separatist forces were shelling the area with mortar fire by night and the Russian army itself was only about 30 kilometres away across the border.

As the Australian Border Force Commissioner in 2020 when the pandemic struck, I was asked by my government to close Australia's border to high-risk travellers - which we did within a day.

In the following months we implemented many other policies, some of which were publicly contentious. And I can tell you, whilst closing the border was quite a challenge, reopening it was a lot more complex.

I would offer you four observations about these crises:

  • Each was unique; there was no playbook to guide our responses.
  • They were all about people - and however well we thought we performed, what really counts is what other people think.
  • The ability to tailor a response very quickly really matters.
  • The multi-dimensional and international aspects of the MH17 and the pandemic responses required new thinking about national security and could possibly be described as 'polycrisis'. They certainly challenged some of my ideas about crisis preparedness and response.

I have also gained some important insights through attending the Harvard Kennedy School's executive education programme 'Leadership in Crises'.

There I got to engage with people like Joseph Pfeifer, who was the first chief on scene at the World Trade Center terrorist attack on the morning of September 11, 2001. This changed the world in so many ways and serves as a stark example of polycrisis - who could have predicted all the global consequences of 9/11?

Learning from academic research and the experiences of others is valuable, especially in crisis leadership.

The Faculty at the Harvard Kennedy School contend that crises are defined by novelty and I agree - there might be similarities to things we have encountered in the past, but there will be crucial differences in any crisis.

Because of these similarities we might initially frame an event as a routine emergency and try to apply a response that has served us well before.

But if we miss the signs early we risk delay, confusion, a loss of trust and sub-optimal outcomes. Leaders have to recognise the signs early and adapt the response accordingly.

So I believe that recognising and understanding the dimensions of novelty is absolutely key.

And in speaking of novelty, with its complexities and uncertainties, it's clear that in the future our ability to plan for crises, to reduce the uncertainties they present and to quickly diagnose the effectiveness of our actions in novel scenarios... all of this opens up a conversation about the possibilities and challenges of artificial intelligence.

Many of us are already somewhere on this journey, and in my organisation we are extensively using data science and specialised AI systems in a number of novel ways. I'm just going to touch on a few aspects of what we're doing with AI today and how we see this evolving.

The extraordinary power of being able to analyse data in close to real time, and at scale, is helping our officers to detect and disrupt all manner of criminal activities.

AI is already giving us more capacity to detect and disrupt new threats at the border, and before they even reach it.

Our budget and employee numbers will never keep pace with increases in the volume and speed of global trade and travel. But the ABF is well advanced in developing our Targeting 2.0 capability, to incorporate all of our assessments of border-related threats, risks and vulnerabilities along with new data from industry and partners, to support our decision making.

Targeting 2.0 seeks to apply the extraordinary power of AI to complement and amplify the deep expertise of our people, to identify new patterns at speed and at scale, to detect and disrupt crime as it happens, and, in time, to get ahead of the perpetual evolution of criminal activities.
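To make the idea of finding new patterns at speed and scale a little more concrete, here is a minimal sketch of the kind of technique involved: an off-the-shelf anomaly detector scoring consignment records and referring the strangest ones to an officer. It is an illustration only, not a description of Targeting 2.0 itself, and every feature, number and threshold in it is invented.

```python
# Minimal sketch: flagging anomalous consignments with an off-the-shelf
# anomaly detector. All feature names and data are hypothetical; this
# illustrates the technique, not the ABF's actual Targeting system.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical features per consignment: declared value, weight,
# route-risk score, and days since the shipper was last seen.
normal = rng.normal(loc=[1000, 50, 0.2, 30], scale=[200, 10, 0.1, 10], size=(5000, 4))
odd = rng.normal(loc=[9000, 5, 0.9, 1], scale=[500, 2, 0.05, 1], size=(10, 4))
consignments = np.vstack([normal, odd])

# Fit an Isolation Forest and score every consignment.
model = IsolationForest(contamination=0.005, random_state=0)
model.fit(consignments)
scores = model.decision_function(consignments)  # lower = more anomalous

# Refer the most anomalous consignments to a human officer for review.
review_queue = np.argsort(scores)[:10]
print("Consignments referred for officer review:", review_queue)
```

The point of the sketch is that the machine only surfaces a review queue; the judgement about what to do with it stays with our officers, which is exactly what I mean by complementing and amplifying their expertise.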

So this is a promising start. As AI continues to evolve we're going to be able to look at an ever-bigger picture and start addressing problems at the systems level - whether in terms of threat discovery, modelling or disruption.

It seems clear that our jobs and the world in which we operate are going to be very, very different in the coming years because of AI: planning, responding and recovering from crisis included.

I mentioned earlier the risk of applying routine responses in crises. This doesn't mean that we shouldn't have a basic underpinning doctrine for our approach to crisis planning or leadership.

Many models exist, mostly based on the traditional system of Command and Control, or C2, which has been expanded to C3, C5, C4ISR etcetera. We have a plethora to choose from.

In the ABF we use our own C3 model - command, control and coordination.

  • Command being the authority to plan, direct, coordinate and control the deployment of resources;
  • Control being the direction and management of activities, agencies and resources, and
  • Coordination being the unity of actions in the pursuit of a common purpose.

This model has proven its worth and is fine for managing routine operations and incidents. It helps us maintain operational discipline and predictability the vast majority of the time, but it is inadequate as a model for leaders during a crisis.

Joe Pfeifer and the Harvard Kennedy School suggest a leader's job is to galvanise others in responding to the novelty of a crisis by quickly moving to a networked system of leadership: connecting, collaborating and coordinating across networks to form an adaptive response.

Personally, I think we have to constantly adapt crisis leadership models according to an organisation's internal and external situation. Any C2 or C3 doctrine embedded into leadership training lays a foundation to work from, and I'm working with my teams now to develop a new adaptive model, a C3++ if you like, that can meet emergent and future challenges.

The ++ is about the leader adapting whatever model they know on the fly. It's about understanding the entropy (the level of uncertainty or randomness), and thus the novelty, of the crisis.

In preparing for today, I wanted to be less vague about the issue of crisis leadership, to try and codify my thoughts somehow. I turned to a well-known Large Language Model to help me. And whilst I acknowledge that there may be more detailed approaches available, I came up with a formula to explain my thinking.

Here, the effectiveness of the leader's response (C3++) is a function of adapting to novelty (N), which in turn is a function of entropy (H), whose dimensions (X) may have high or low levels of predictability and can be subjectively weighted according to judgement and experience (α).

So here you have it: C3++ response = f(N) = f(αH(X))

You might look at the dimensions of entropy in various ways. Models and concepts exist, such as PESTEL, which examines the Political, Economic, Social, Technological, Environmental and Legal variables subjectively - and you could even include psychological or ethical variables.
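To make the formula a little more tangible, here is a minimal sketch of how the weighted entropy term αH(X) might be estimated across the PESTEL dimensions. Every probability and weight in it is hypothetical, standing in for a leader's judgement rather than any calibrated model.

```python
# Minimal sketch of the weighted-entropy term alpha * H(X) from the formula
# above. The PESTEL dimensions come from the speech; every probability and
# weight below is a hypothetical illustration, not a calibrated model.
from math import log2

def shannon_entropy(probabilities):
    """Shannon entropy (in bits) of a discrete outcome distribution."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

# For each PESTEL dimension, an assumed distribution over possible outcomes.
# A near-uniform distribution means high uncertainty (high entropy);
# a peaked distribution means the dimension is fairly predictable.
dimensions = {
    "Political":     [0.5, 0.3, 0.2],
    "Economic":      [0.4, 0.4, 0.2],
    "Social":        [0.7, 0.2, 0.1],
    "Technological": [0.34, 0.33, 0.33],  # highly uncertain
    "Environmental": [0.8, 0.15, 0.05],
    "Legal":         [0.9, 0.08, 0.02],   # fairly predictable
}

# Subjective weights (alpha) reflecting the leader's judgement about which
# dimensions matter most in this particular crisis.
alpha = {"Political": 0.3, "Economic": 0.2, "Social": 0.2,
         "Technological": 0.15, "Environmental": 0.1, "Legal": 0.05}

novelty_score = sum(alpha[d] * shannon_entropy(p) for d, p in dimensions.items())
print(f"Weighted novelty estimate f(alpha * H(X)) = {novelty_score:.2f} bits")
```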

Key for me are the consequences of the crisis itself and of your decisions, or a lack thereof.

The challenge we face is to enhance both our preparedness for and response to crises and the polycrisis world; to enhance the predictability of our plans, responses and decisions. This I believe is where AI will make profound contributions in years to come.

During a crisis, the leader of an agency like mine regularly meets with Ministers and stakeholders and communicates with the public, often through the media or social media, all whilst keeping a very close eye on the handling of the crisis itself and monitoring the well-being of the people responding.

We anticipate from the outset that it will likely be exhausting and stressful.

Creating the space for ourselves and our teams to think and decompress, and the headroom for our staff (space, time and permission to do the job) to operate at their best and to innovate, requires thoughtful leadership under pressure.

It's important to prepare our leaders in advance for the demands and the stresses of command. And that requires an ongoing investment, whether through study or immersive learning, through tabletop exercising or scenario planning. In this way we can build patterns of behaviour and a culture that is more likely to succeed in a crisis.

Harvard's Dutch Leonard and Arnold Howitt define crisis leadership as: "A good enough decision, soon enough to matter, communicated well enough to be understood, carried out well enough to work".

And I always say to my people that they should never ignore their instinct - it's a survival mechanism. You don't have to follow it, but ignore it at your peril. Human intuition is always going to be a very big part of leading a response to crises.

Tailoring a crisis response means creating a team that is greater than the sum of its parts; a team that can operate across networks and boundaries. Humans are tribal and our organisations are too, people in hierarchical command-and-control-based organisations especially so. Bias in decision-making can flow from this, and these are major risks for leaders.

Creating diversity within the response team, especially its leadership, is therefore crucially important, bringing multiple lenses and capabilities to the table to help solve complex problems, more quickly, with greater success. Groupthink and cognitive bias severely limit our ability to handle novelty and complexity.

And in that regard AI is really going to change the game - whether it's:

  • strategic planning,
  • preparedness,
  • operational planning and response,
  • augmented decision-making,
  • or being able to respond to or get ahead of threats.

So what are we in the ABF thinking about the future?

Whilst we're designing and building Targeting 2.0 today, because we're believers, we're already imagining what Targeting 3.0 will look like. And it will encompass all that we do, harnessing AI in ways that are still out of reach - for example by linking operational delivery and policy design and monitoring.

The concept of digital twins - virtual models designed to accurately mirror a physical process, object or system - has grabbed our attention.

And social systems are well within scope, opening the door for policy twins. A digital representation of a policy could include legislation as code, relevant data, modelling tools, impact monitoring and more.
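To show what "legislation as code" could look like inside such a policy twin, here is a minimal sketch. The rule, the thresholds and the traveller records are all invented for illustration; a real policy twin would encode actual legislation and draw on live data.

```python
# Minimal sketch of "legislation as code" inside a hypothetical policy twin.
# The rule, thresholds and traveller records are invented for illustration.
from dataclasses import dataclass

@dataclass
class Traveller:
    citizenship: str
    days_since_high_risk_country: int
    has_exemption: bool

def entry_permitted(t: Traveller, quarantine_days: int = 14) -> bool:
    """Hypothetical border rule: citizens always enter; others must have
    spent at least `quarantine_days` outside a high-risk country, or hold
    an exemption."""
    if t.citizenship == "AU":
        return True
    if t.has_exemption:
        return True
    return t.days_since_high_risk_country >= quarantine_days

# Because the rule is code, a policy change (say, 7 days instead of 14)
# can be simulated against a cohort before it is enacted.
cohort = [
    Traveller("AU", 0, False),
    Traveller("NZ", 10, False),
    Traveller("UK", 3, True),
]
for days in (14, 7):
    admitted = sum(entry_permitted(t, quarantine_days=days) for t in cohort)
    print(f"quarantine_days={days}: {admitted} of {len(cohort)} admitted")
```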

Given the impact of crises on social systems and the need to make policy rapidly (just look at the COVID-19 pandemic), this would be a real game-changer for crisis leadership.

AI is only going to accelerate our ability to design and implement policy twins as well as other digital twins. Add in the incredible horsepower of quantum computing, and we'll be able to have digital and policy twins of things as complex as the entire Australian border and all its related infrastructure and systems.

We should eventually be able to model the effects of a crisis across the whole border continuum, more easily, and on an enterprise scale.

There are many other technological advances contributing to the immense power of AI, including neural network architectures, edge computing, blockchain and augmented/virtual reality. I'll single out just one more in order to stimulate your thinking about the art of the possible for border management and crisis leadership in the future.

Let's backcast a little, to the early stages of the Industrial Revolution. Thomas Bayes was an English statistician and philosopher who formulated Bayes' theorem.

It describes the probability of an event based on knowledge of conditions that might be relevant to the event. His theorem was published posthumously in 1763, two years after his death.

To say he was ahead of his time is an understatement. It really wasn't until the 1950s that Bayesianism became an international academic field. Quantum computing is about to turbocharge Bayesian belief networking capability, whilst machine learning will broaden its range of applications.

A Bayesian belief network is an advanced decision-making map that considers how different variables are connected, and how certain or uncertain those connections are, in determining an outcome. So they aim to reduce uncertainty in decision-making. They are like a web that helps to determine the probability of an event based on what we know.
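To make that less abstract, here is a tiny worked example of the Bayesian updating that such a network chains together. Every probability in it is invented purely for illustration.

```python
# Minimal sketch of Bayesian updating, the building block of a belief
# network. All probabilities are hypothetical, chosen only to show how
# new evidence shifts a prior towards a posterior.

# Prior: probability that a given consignment is illicit.
p_illicit = 0.01

# Likelihoods: how often an (imagined) detection sensor alerts in each case.
p_alert_given_illicit = 0.90   # true positive rate
p_alert_given_legit = 0.05     # false positive rate

# Bayes' theorem: P(illicit | alert)
p_alert = (p_alert_given_illicit * p_illicit
           + p_alert_given_legit * (1 - p_illicit))
p_illicit_given_alert = p_alert_given_illicit * p_illicit / p_alert

print(f"Prior probability of illicit consignment: {p_illicit:.3f}")
print(f"Posterior after a sensor alert:           {p_illicit_given_alert:.3f}")

# A belief network chains many such updates across connected variables
# (route, shipper history, sensor readings) to keep a running picture of
# uncertainty as each new piece of evidence arrives.
```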

So let's imagine an array of sensors and data feeds, technology stacks with learning ability and visualisation tools; now incorporate digital twins, Bayesian belief networks and quantum computing.

We'll be able to model crises and our responses, with augmented decision-making and the ability to monitor those decisions' impact on complex social systems, during a crisis. Perhaps we could even start to understand and map the global and multi-dimensional interconnections of polycrises. Or maybe I'm dreaming.

But the future is hard to predict, and we always have to factor people into our equation. For many governments, gaining the social licence to implement AI systems like those I've described will depend on building and maintaining trust.

Building trust in government institutions will require a great deal of time and effort. People must trust that our data is secure, have trust in the information we push to them and they pull from us, trust in our people, trust that we won't misuse personal information, trust that we won't act unlawfully or unethically.

One of the best ways to build trust is to demonstrate, measurably, the benefits to people of sharing their information and data with us.

Take truly seamless and contactless travel through digital borders. To collect the data we need from travellers, we have to emphasise the benefits of providing their biometrics, for example. Travellers will reap economic and personal benefits like time-saving and convenience.

I think we have to introduce human-centric measures of success into our success criteria, budgeting and operating models, so that our AI systems aren't judged just on value for money but also on their positive effect on people.

We'll also have to monitor outputs and public impacts to ensure systems are operating as they should, and not leading to unintended bias or harm.
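As one illustration of what that monitoring might look like in practice, here is a minimal sketch that compares referral rates across traveller groups against an overall baseline. The group labels, counts and thresholds are all hypothetical.

```python
# Minimal sketch of one kind of output monitoring: comparing how often an
# automated system refers travellers from different groups for manual
# inspection. Group labels, counts and thresholds are entirely hypothetical.
from collections import Counter

referrals = Counter({"group_a": 120, "group_b": 95, "group_c": 210})
totals = {"group_a": 10_000, "group_b": 9_500, "group_c": 10_500}

rates = {g: referrals[g] / totals[g] for g in totals}
baseline = sum(referrals.values()) / sum(totals.values())

for group, rate in rates.items():
    ratio = rate / baseline
    # Flag groups whose referral rate drifts well away from the baseline
    # for human review of the underlying model and data.
    flag = "REVIEW" if ratio > 1.25 or ratio < 0.8 else "ok"
    print(f"{group}: referral rate {rate:.2%} ({ratio:.2f}x baseline) -> {flag}")
```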

Good, strong governance will therefore be absolutely crucial in implementing AI, and it was interesting to see the news last month about the United Nations' draft General Assembly resolution on making AI "safe, secure and trustworthy" and the EU's Artificial Intelligence Act. Such measures are, unfortunately, going to be required to build confidence and trust in AI in order for us crisis leaders to benefit fully.

In Australia more broadly, we're currently developing a whole-of-government AI policy and legislation and a consistent approach to AI assurance.

In the ABF we're focused on developing practical and effective AI guardrails and governance, and robust data science.

For us, it's not just about ethical and responsible design of AI systems. It's also about assurance - monitoring of outputs and impact, ensuring independent oversight of our systems, and appropriate transparency measures.

In conclusion, I hope that I've managed to give you some new insights and ideas. Given the threats we will face at our borders we have to start building now to be ready for the future, by assembling vast amounts of data ready to be fed into AI, by getting our people ready to use it, and by genuinely reinforcing trust.

We can't be so future-focused that we lose sight of the present: yes, let's dream and believe, but let's make sure we're putting the right steps in place now. Those who don't start building AI readiness into their systems today are going to have a hard time adapting when it becomes imperative to do so.

We are working to bring all of these things - and more - together in the ABF, and the possibilities are endless.

Thank you - I look forward to the Q&A session later with today's other speakers.
