Artificial Intelligence: Getting on the front foot

It’s one of the hottest topics around. The world is starting to wake up to the risks and opportunities of Artificial Intelligence (AI). And yet one place where we are seeing little discussion is the boardroom. In fact, in a recent webinar that we took part in, half of the company secretaries present said their boards hadn’t discussed it at all. Compare that with how one of the world’s most successful investors in disruptive technology sees it: “You ought to ask each one of your companies: what is your AI strategy today? Where is it going? How has it changed? If they don’t have good answers, I wouldn’t invest in them.” (John Chambers, quoted in Financial Times 18 September 2023.)

Here are some ideas to help you start the process of getting your board onto the front foot, if it isn’t there already.

Good practices to consider…

Understand what it’s all about.  A board should be discussing, right now, what it currently understands and what it still needs to understand.

Things to avoid…

Waiting for the picture to become clear before trying to understand it.  It’s moving so fast that it probably won’t become clear, at least not soon enough.  The consequences of AI for the way organisations and their partners and customers work mean that waiting until it’s all straightforward isn’t an option.

Good practices to consider…

Find out what the organisation is already doing about AI.  Where is it already being used?  What plans are there?  Where are we compared to competitors?     

Things to avoid…

Supposing that AI is either already a business asset or something that will only transform everything at some point in the future.  It’s most likely a mix of things covering that entire spectrum.

Good practices to consider…

Find out how all of this is being managed.  Who is responsible?  Is it properly resourced?  Has it got a sensible budget?  Are the right internal stakeholders plugged in?  What are management doing to understand the implications and opportunities?  Who’s thinking about the risks? 

Things to avoid…

Assuming that executives are keeping pace with the shifts – or at least have the mechanisms in place to hear about it from other levels.  As with many other aspects of technological change, a lack of imagination, foresight and strategic leadership might be making it difficult for voices to be heard.  Or – the simplest but most universal problem – the pressures of managing today’s challenges and opportunities might be crowding out the longer term.

Good practices to consider…

Start discussing what it might mean for the organisation.  That’s about looking across every aspect of what you do.  There is potential for the business model to be affected across everything: processes, communication, customers, people…   

Things to avoid…

Being reluctant to speculate.  The future AI-driven world is a classic example of the need to think through scenarios with informed imagination.  It’s not about a board stumbling around on an awayday trying to map things out on a flipchart.  It is about management doing the groundwork to help the board understand what the organisation could look like, using informed imagination to build on what we already know.

Good practices to consider…

Get in outside expertise.  There are obvious benefits from involving innovative, up-to-date thinkers to stimulate and inform – or shake up – long-established internal thinking.  And it gives some assurance that our speculation and decision-making are sensibly grounded.

Things to avoid…

Relying solely on management.  Nobody is expecting the CEO to become an AI expert, or the CIO to have all the answers.  So other experts are likely to become an essential component in getting ready to make the strategic decisions and manage the risks.  Not just for their knowledge but also to generate the excitement (and fear?) that needs to be injected into strategic thinking.    

Good practices to consider…

Turn the discussions into next-stage steps.  That doesn’t mean knowing how to respond and turning it into action plans.  That’s going to be premature before the implications, strategic goals and risk appetite are worked out.  But it does at least involve working out what discussions need to follow, and how AI needs to be incorporated into the board and committee agendas.    

Things to avoid…

Failing to shift from “that was a really interesting discussion” to the “what do we do next?” decision.  It’s easy to imagine an AI item at the strategy day leading to a lively discussion followed by…what?  The board will need to hear from management what is going to happen next, and what the board needs to do.

Good practices to consider…

Make sure that AI is in the right place on the list of the board’s priorities for the year ahead.  (And if you haven’t got an agreed list of priorities, now would be a good time to start.)    

Things to avoid…

Assuming that ad hoc incremental additions to the agenda used for the last decade will give the best result.  

Good practices to consider…

Work out an early position on where you want to be in this.  A leader?  A cautious follower?  Are you in with the herd?  Or a reluctant sceptic willing to take the risk of being dragged along behind?  It’s not a commitment to a particular route, but it is a message to the organisation around the pace at which you want to start tackling the multifaceted AI landscape.    

Things to avoid…

Waiting until it’s all understood to adopt a philosophy, and eventually a change strategy.  With change that’s happening so fast, there isn’t the luxury of waiting for clarity or to see how others get it right or wrong.  The position you take will influence how the board should be handling it – but there must always be time for regular review of whether it remains the right position to be taking.  

Good practices to consider…

Think about whether it needs a board sub-committee to focus on it.  Does it look like AI is of such fundamental importance to the organisation that it needs a separate forum to ensure it gets enough time and attention?

Things to avoid…

Creating yet another committee because everyone else has one.  It is indeed possible to have too much of a good thing…   Remember that AI isn’t a technology issue – it is a fundamental business issue.  So consider whether a working group would be a suitable, and easier, way to give focus, or maybe whether it’s something that the whole board should try to be on top of (and if it is, how to find agenda time to give it the proper attention).

Good practices to consider…

Think about whether the risk committee is equipped (with expertise, time and information) to do justice to a whole new world of unfamiliar risks.  And whether the audit committee and the control functions are equipped to manage the operational risks as AI is developed and deployed.  

Things to avoid…

Supposing that if AI is on a risk register with a yellow traffic light against it, then adequate risk management will follow.  It probably needs more than this…  For example, who’s thinking about how confidential information might be getting shared with data-hungry AI that you don’t control? 

Good practices to consider…

And what about the board itself?  Look for opportunities to use AI to improve the effectiveness and/or efficiency of the board’s work.  Not to mention winning the Co Sec’s heart by reducing the burden of the minute writing!  

Things to avoid…

Letting jokes about robots replacing NEDs be the end of the matter.  That’s not going to happen within anyone’s nine-year term – it might be an amusing conceit, but it’s no reason for not looking at how AI can be used to help old-fashioned humans work better. 

