RIDL

SHOW ME THE DATA PODCAST

SEASON 3: EPISODE 2

AI Fluency & Ethics: setting the foundation for a data-enabled government

CONVERSATION WITH NATHAN BINES - 27 minutes 23 seconds

What are the current levels of data and AI fluency across the public sector, and what does good AI alignment look like?

In this episode, we are honoured to host the head of Queensland Government's Data and Artificial Intelligence Unit, Nathan Bines. Our guest oversees the state's budget allocation for data and AI initiatives, leading policy and legislative reviews focused on the safe and ethical application of AI technologies. Nathan's team plays a key role in shaping Queensland's digital economy strategy and has been instrumental in developing generative AI tools for government employees. Join us as we discuss the integration of data and AI in government operations, as well as the importance of data literacy for all employees. This insightful conversation also covers overcoming AI skepticism through experience and exposure, the ethical challenges of AI alignment, and the impact of developers' biases on AI tools. This episode is a must-listen for anyone interested in how data and AI are reshaping government operations and the importance of informed, ethical leadership in navigating this digital evolution.

EPISODE TRANSCRIPT

Rhetta Chappell (host): Hi and welcome to Show Me the Data, a podcast where we discuss the many ways in which our lives and the decisions we make are impacted by and depend on data. I'm Rhetta, your host for today, and I'm a Data Scientist & Partnerships Lead at Griffith University.

Today in studio I have none other than the head of Queensland Government's Data and Artificial Intelligence, or AI, Unit, Mr. Nathan Bines. As a leader in his field, Nathan is dedicated to harnessing the power of AI and data technologies to improve health services delivered to all Queenslanders. I have been really looking forward to this conversation, because Nathan is responsible for leading the state government's data and AI policy agenda and is working really hard to shift public mindsets and attitudes around data and AI. So on that note, let's dive right in to my conversation with Nathan Bines. Enjoy.

Show Me the Data acknowledges the Jagera peoples, who are the traditional custodians of the land on which we are recording today, and we pay respect to Elders past, present and emerging.

Hello and welcome, Nathan. Thank you so much for joining me today in studio. To start, could you please set the scene for our listeners and explain how the Queensland State Government's Data and AI Unit came to fruition? I'm guessing it's been a bit of an evolution, as things have progressed quite a lot recently. What are your key priorities within the unit, and your deliverables? And how are you shedding light on the needs of the sector?

Nathan Bines: So we're part of what's called Queensland Customer and Digital Group; we're a distinct business unit within it that's responsible for data, digital, and customer policy, strategy, and delivery for the state. Earlier this year, Queensland's digital economy strategy was released, which is a ten-year action plan for all things digital, for the economy, for the government, and for industry. It's got about six priorities: connectivity, digital inclusion, skills, contemporary services, and so on. One of the key ones for me is digitally enabled government, which is about improving and uplifting government service delivery and the way government operates.

Under that strategy, about $200 million was allocated over the course of four or five years. A significant chunk of that is for connectivity, inclusion, and innovation, but there's some money set aside in there for all things data. So that's my remit. Twelve months ago we had some pretty strong plans and a vision around a data strategy for Queensland, and we kicked that off in March this year with a review of data sharing. That was a policy and legislative review: understanding the ecosystem in Queensland, looking at other states that have data sharing legislation and great data sharing ecosystems and how they have implemented data sharing, and then developing a new data sharing framework for Queensland Government. We also had some great endorsement from Chris, our Chief Customer and Digital Officer, and others in the department to really have a look at what Queensland Government's policy position is in response to generative AI particularly. I think the ubiquitous adoption of the technology, and GPT-4 coming out, took most people by surprise.

The Commonwealth Government released a discussion paper earlier in the year, and that was probably the genesis for our involvement around the safe and ethical use of AI. We led a response to that, and then we have, I guess, kicked on from there and set up an AI unit. We've released generative AI guidance for public servants, as well as some work on platform enablement services to help agencies come along the journey in a safe and controlled manner.

But the data work doesn't stop there. We still fully intend to look at a data strategy and an AI strategy for Queensland Government, and to move forward with plans around the data sharing platform and capabilities that came out of our framework project.

Rhetta: As a leader within the state government, how would you describe the current levels of data fluency, data maturity, or data literacy across the public sector?

Nathan: As with any competency, there are varying levels. Some of the big agencies are really well resourced and have great capabilities, and then you get down to some of the smaller agencies that are really struggling. Data is not the day job of any one particular person; it's something tacked on to the side. And that's one of the challenges we've always found with data sharing. I'm responsible for open data policy and the portal for Queensland as well, and again, it's not historically been something that's had dedicated resources applied to it. It's a real struggle to add something to everyone's very busy day jobs and say, well, now you're a data custodian and now you're responsible for publishing or sharing data. One of the challenges we identified as part of that data sharing framework review is how you operationalise an ecosystem in data and data sharing, and one of the keys is people: it needs to be invested in. So that's, I think, one of the challenges we have across government. Broadly, and I've been in and around this role for five years, I think there's been an uplift in maturity in government. There's work to do; we've acknowledged that, one of the findings of our review being that if we want to enact change, and culture change, we need to (a) communicate with people and get them on board. A big part of our job has been going out and engaging in both the data and AI spaces through our communities of practice, engaging with leaders, and being involved in some key projects.

To that end, we're actually looking to set up a data leadership committee in Queensland Government. There is a digital leadership group for all things digital, and, as is probably the complaint of many data people, data often gets swept up in digital strategies and tactical responses. So we're looking to separate that out and have a data leadership committee we can use as the avenue to push top-down change, and to really get agencies thinking about the types of things they need to be doing, the roles they need when they're engaging in programs, thinking about the data question upfront, and AI for that matter now as well, and understanding what data literacy looks like at every level: the operators, the doers, and the executives.

I use an analogy: as an executive, I might not have a finance degree or an accounting degree, but I wouldn't walk into a meeting and say, that's a financial statement, I'm not a finance guy, so don't expect me to read that. It's an implied obligation of my job that I should understand the budget I'm managing. I'd like to see the same for digital and data. We operate digital businesses; everything we do is based on data. And still, every now and then, you get an executive saying, I'm not a data person, I don't really get that. I'd like to see it as part of every job description, essentially at executive level: you need to understand digital and data as part of operating a business, because our government is essentially a digital business. That's what we do now.

Rhetta: In your opinion, are the current levels of data fluency enabling or inhibiting the capacity to truly harness the power of data, AI, and digital to sustain meaningful impact at scale?

Nathan: As I said, I think there have definitely been improvements. The message has been hammered home over the last few years around data being the new oil. I think we've heard that many times. All of us…

Rhetta: If I had a nickel… Yeah.

Nathan: Exactly. But the benefit of that is that people are thinking about it. They do know they need to share data. We did a lot of work in stakeholder workshops and executive workshops looking at the data sharing problem particularly, and it was more cultural generally, in terms of people's attitudes to data sharing and to the use of data. Again, I think it comes from the top down. There's still work to be done, but broadly I think we're getting there; there's a real understanding. And I think there's actually a real opportunity now with the introduction of large language models and AI. Back when I was in the BI and data analytics space, the first thing you'd find when you built a dashboard or an analytic product was how bad your data is. And that's great, because not knowing that the data was really bad wasn't helping you. That was always a journey we would go through with clients: just because the dashboard is showing you that you've got bad data doesn't mean it's a bad dashboard.

It's showing you that you've got a problem. I think AI, to an extent, is going to have that same impact through the adoption of these tools by people right across the business. When you start to point these language models at your data, I think it's an opportunity for us to bring that conversation to people and say, hey, this is why looking after your data is really important: managing your data governance, understanding what you have, making sure it's of the right quality. So I think that's going to be an interesting space over the next year or two. But yeah, I think broadly we're heading in the right direction with data in government. It almost feels like we've jumped straight into the AI space, but at the end of the day it's all still based on data, so some of those fundamental principles stay the same.

Rhetta: Nathan, I want to pick up on something you just said, which is this jumping headfirst into the AI space. I imagine in your role you're often tasked with explaining complex topics like data sharing, data quality, alignment, artificial intelligence, ethics, and so on. How do you actually approach explaining AI, the role of government, and the role of regulation to the different people you work with, given that the audiences you speak to must have such varied levels of awareness and understanding?

Nathan: A good way to think about government's role in understanding and regulating these models is the analogy of a car. We talk about hyperscalers, the big cloud providers, and their foundational models, and ask: what's the role of government? If you think of a car, the large language model might be the engine.

The product or service, the thing you're developing, might be the car itself. The engine runs the car, but with cars we have road rules that govern how you use the car and what's an acceptable use of it. You can't drive on the wrong side of the road, speed, skip the seatbelt, or drive intoxicated. We have design rules around how we set up roads: roads need to be a certain width and a certain quality, and you can't have hairpins every hundred metres. There's a whole bunch of standards around how you design roads. And then for really high-risk uses of roads, for example trucks with explosive material on the back, or extremely long loads,

there's a whole bunch of extra regulation, right? We don't apply that to your standard vehicle driving down the street; they can go ahead and do their business. But we have a whole bunch of regulation and checks and balances; we monitor road trains for fatigue and the amount of time they're driving. The point of the example is that in society we regulate more heavily based on the risk of the activity. And at the end of the day, government needs to be the steering wheel. We need to decide how AI is used and how it's applied, in industry as well as in government, and make sure that people are meeting the guidelines, following the road rules, and driving at the appropriate speed.

That's, to my mind, a good example of how government's role in regulation might fit in with the use of these large language models.

Rhetta: When we first met, you recommended a book called The Alignment Problem by Brian Christian. I love the book, by the way, and in it the author expertly unpacks the main challenges, and a bit of the history, of what we're doing to ensure artificial intelligence, or AI, is operating safely and in harmony with human values, ethics, and intentions. With the Australian Government discussion paper you mentioned, and with all the advances in the AI space, what does good AI alignment look like? What are we doing to make sure we stay on the right side of this line?

Nathan: Yeah, it's the big question on everyone's mind right now. I'd start by saying: what does good AI alignment look like? I don't think I can answer that right now, and I don't think many people can. It depends on who you ask; the developers of these foundation models have their own idea of what alignment looks like for them.

I use an example with my team, just to help people understand. When we talk about alignment, it's about whether the model reflects the values of, in our case, the Australian Government or Queensland Government when it's providing content or when we're using it to guide decisions. There's some really obvious stuff, like no, it shouldn't teach you how to make a bomb, which I think was one of the very early examples with OpenAI's ChatGPT.

People would get it to say all sorts of things, and obviously there are controls put in place now. One of the more innocuous examples, which probably helps explain the point, is from my team: someone got on board and said, give us a recipe for beef, and it gave the recipe for beef. Give us a recipe for lamb; it gave the recipe for lamb. Give us a recipe for whale meat, and it said no, that's not appropriate. Now, depending on where you live, or on your opinion, it might be entirely appropriate. The fact is that someone, and this was using one of the large language models, someone in Silicon Valley somewhere has decided that that's not appropriate. With most of these commercial models, they're a black box: we don't have insight into exactly the rules applied, how they're trained, or what controls are put in place. So it's not really enough for Queensland Government, or probably any government in the world, to say, well, we're going to take on the values of Company X. We have an obligation to understand the outputs of these models and ensure we're able to overlay a level of control or assurance to make sure they're acting and responding in a way that's acceptable to us.

A lot of the federal discussion paper, and the comments we provided back, which I think were pretty consistent with many others, was around regulation. It's everywhere at the moment; the executive order signed this week is one response among governments around the world trying to understand how to regulate this technology. There are various aspects to that. There's the broad, general regulation: you can't use it to do obviously illegal things, it's got to have certain guardrails on what it will and won't answer, it's got to be ethical, and the rest of it. But then there's the application of AI, and that's really where I think regulation will have a role to play, in that deep domain: how you're using the AI matters, and what you're doing with it. It might be domain specific: in health, in transport, in other applications, where industry, government, and others will come together with a bit of a consensus on how we're going to govern these things. As you would have seen overnight or earlier this week, some of the discussion has been around providing evidence of tests of how these models respond, and understanding how the models are trained. I think that will be a consideration. Some people have spoken about the need, maybe, for sovereign models: how do you embed Australia's values into a model? Well, (a) you've got to know what our values are…

Rhetta: And which government's values, like…

Nathan: Yeah, exactly. It's really going to be an interesting space. Yeah, it's complicated.

Rhetta: We briefly touched on this before when I talked about data fluency, but I'm wondering: do we now have to think about AI literacy, AI fluency, generative AI fluency? What are we thinking about that? Is it a good thing? Is it a bad thing? Or do we just stick with data literacy, because ultimately these are built on data?

Nathan: I think we do. We need to be a bit more specific about it, because the potential use cases of generative AI are mind-blowing when you think about it. My hot take is that if you don't think generative AI is going to be useful and, you know, game changing, you probably haven't engaged with it enough yet, because once you start using it, you see it. We're seeing people who have never really been engaged. I was talking to someone this morning who said, you know, I was a real AI skeptic. Of course, they've had the chance to hammer it out over the last six months, and now they've bought in, and they're using our QChat tool, the AI chatbot we've developed, every day. That's the beauty of these tools, the fact that they're so easy for people to use; that's what's won over a lot of them. They've really engaged with AI, they've seen something that's real, and they can see how AI is going to change the way they do things. But we've got to guide those people. We've got to help them use these tools, and we're definitely planning to do that. There have been discussions recently around how we provide tailored training for certain types of users. I think that's going to be important, because you're going to find AI embedded in every product we buy and in every engagement with a vendor. So the onus is on us to make sure people understand this technology, because how do people know they're using it in the right way if they don't really know how to use it, right?

Rhetta: And I think, like, how do you teach people to smell the ethical smoke, I guess, in a way? As an organisation or as a government, how do you do that, or how do you approach doing that?

Nathan: Part of the guidance we released earlier in the year was, you know, a first step towards a policy position in Queensland, but really it was about getting on the front foot and saying, hey, these are the tools, we know they're out there and you're using them. The fact is there's a bunch of information management, code of conduct, and acceptable use policies that apply to all of our technology stack, so it was a reminder of that: the principles of Queensland Government, and who you're employed by, mean there are certain things you will and won't do. LLMs and chatbots like this make it easier for you to interact and potentially put sensitive government data into a tool that's going to give you some value back. So it's a reminder about the specific risks of generative AI, but I think it's really going to be about communication, and that's part of that policy position.

We're actually working on an assurance framework as part of a national AI working group. There's been a commitment by the data and digital ministers, a group of ministers from across Australia, to have a nationally consistent approach to the assurance of AI, so that if you're implementing a system, buying a new product, or working on a project with AI, you can objectively assess the risk.

Rhetta: With a register or something, like, these products are approved?

Nathan: It's more about taking people through a series of questions to actually consider their intended use of AI in a project. What are you using the AI for? For example, to help assess applications for something, a grant, whatever it is. It guides them through a decision-making process: what sort of personal information are you considering? What are you getting the AI to do? Is it making decisions, or is it just a reference point to help someone generate some content? There's a huge scale, from AI doing autonomous decision making in customer service delivery, which we are definitely not doing right now, right through to it just helping me draft some dot points. So that assurance framework is really about helping people, guiding them on that journey, and saying: actually, you know what, based on some preset variables, yours is a pretty safe use of AI, just make sure you keep X, Y, and Z in mind. Or: look, there are some red flags here; you've ticked a couple of boxes that indicate there is personal information involved. Then there would be a formal process to guide them through, and maybe an assurance committee; we're working out exactly what that looks like. Queensland Government already has existing investment assurance processes, but essentially we'll make sure we're taking it really seriously and ensuring that any use of AI by Queensland Government is in line with community expectations and, as we've mentioned a few times, safe and ethical.
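To make the shape of that triage concrete, here is a minimal sketch in Python of how a questionnaire-driven risk assessment like the one Nathan describes might be structured. The questions, weights, and risk tiers are illustrative assumptions, not the actual Queensland framework.

from dataclasses import dataclass

@dataclass
class UseCase:
    """Answers to an intake questionnaire for a proposed use of AI."""
    uses_personal_info: bool    # does the system touch personal information?
    makes_decisions: bool       # does the AI decide, or just draft/reference?
    customer_facing: bool       # do outputs reach the public directly?
    human_reviews_output: bool  # is there a human in the loop?

def triage(case: UseCase) -> str:
    """Map questionnaire answers to an illustrative risk tier.
    The tiers and weights are hypothetical; a real framework would
    set these by policy rather than hard-coding them."""
    flags = 0
    if case.uses_personal_info:
        flags += 1
    if case.makes_decisions and not case.human_reviews_output:
        flags += 2  # autonomous decision making weighs heaviest
    if case.customer_facing:
        flags += 1
    if flags == 0:
        return "low risk: proceed, keeping standard guidance in mind"
    if flags == 1:
        return "medium risk: document controls before proceeding"
    return "high risk: refer to the formal assurance process"

# Drafting dot points with no personal data sits at the safe end of the
# scale; unsupervised assessment of grant applications does not.
print(triage(UseCase(False, False, False, True)))  # low risk
print(triage(UseCase(True, True, False, False)))   # high risk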

Rhetta: And I imagine embedding that into the culture, something we talked about a little bit before, is critical. I was recently at a conference and they had a one-liner: data culture equals people change management. And I kind of agree with this; I don't know if you agree with it as well. But sticking with this idea around the importance of culture when it comes to effectively embracing technological and social change, what do you think it means to have a rich data culture, or to be truly data driven as an organisation or team?

Nathan: I agree the cultural aspect of it is really important. It's about people's trust in the data and their understanding of it. Data is very rarely inherently wrong. You can choose to say the data is wrong, but it's a data point: it's representing what's in your system, or maybe representing a faulty process. That's often the challenge you get with data sharing; it's often linked back to data quality, because people are worried that if you get the data, you'll interpret it in a different way. The thinking goes: I know all the ways this data is slightly wrong, and I can package it up and give you a data product that accommodates all the issues, all the unknown variables. I think, again, that's a cultural thing, and I firmly believe it comes from the top down: organisation leaders expecting people to be using their data when they're making decisions and putting business cases together, being upfront about that with people, and expecting that they've come to you with evidence-backed decision making, that they've used the data available to them to inform their decisions. If you set that expectation, I think it shifts slowly, and you have to provide enablement, right? You have to give people access to the tools and the data sets. We've all been through that journey where data is not an IT thing anymore; it's a business capability. You've really got to open up the doors and get people involved, get the business units involved in the data and in making those data products. And I actually think there's a really exciting avenue we haven't explored yet: generative AI and large language models querying databases directly. That self-service BI thing never really took off, in my mind. We all thought executives might be there creating their own charts, or at least managers,

Rhetta: Death by a thousand dashboards.

Nathan: Yeah, exactly. All they did was go back and ask someone else to create them a new dashboard. The idea, and you're seeing it in some of the commercial products already, is: just ask the question of the data.

Rhetta: Yes, yeah. That's built into Power BI, like natural language querying of data.

Nathan: I know, yeah. Some of the big cloud platform providers are talking about whether you even need a reporting layer or a data warehouse anymore. Can you just ask questions of your business systems and get the information you want? I think that'd be wonderful.

Rhetta: If your data quality is good. 

Nathan: Yeah, exactly.
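As a concrete illustration of the "just ask the question of the data" pattern, here is a minimal sketch, assuming a hypothetical ask_llm function standing in for whichever model provider is in use; the schema, prompt, and guardrail are illustrative only.

import sqlite3

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to whatever LLM service you use;
    assumed to return a single SQL SELECT statement as text."""
    raise NotImplementedError("wire up your model provider here")

SCHEMA = """
CREATE TABLE service_requests (
    id INTEGER PRIMARY KEY,
    region TEXT,
    opened DATE,
    closed DATE
);
"""

def answer(question: str, db_path: str) -> list:
    # Give the model the schema so the SQL it writes is grounded in
    # tables and columns that actually exist.
    sql = ask_llm(
        f"Schema:\n{SCHEMA}\n"
        f"Write one read-only SQLite SELECT answering: {question}"
    )
    # Guardrail: refuse to execute anything but a plain SELECT.
    if not sql.lstrip().lower().startswith("select"):
        raise ValueError("model returned non-SELECT SQL; refusing to run")
    with sqlite3.connect(db_path) as conn:
        return conn.execute(sql).fetchall()

# Example: answer("How many requests were opened per region last month?",
#                 "requests.db")

As both speakers note, the answers such a system gives are only as good as the data quality underneath.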

Rhetta: And I guess with more and more people using these low-to-no-code, self-service, so to speak, gen AI and data tools, I imagine there's a fine line you have to walk within government between harnessing all of these benefits and avoiding the risks, like data breaches, unreliable data, or, I guess, spurious statistical techniques. Can you discuss how organisations like yours are striking that balance between being savvy but also being safe?

Nathan: I think the reality is that there are innovators and first movers in every business who are going to want to use these tools. They're going to want access to cloud technology, and they're going to want access to the data. If you don't give them a capability, a trusted, safe environment to do it in, they're going to go and do it themselves. And we want those people; we want them on board. So the approach we took with generative AI, as with most of our data projects, is to have an appropriate level of regulation and guardrails at a high level, saying this is what you should and shouldn't do and what appropriate use looks like, but then to provide an enablement environment. So we've got a technology stream there, and the whole point of our Queensland platform is to enable government to have access to some of these toolsets, to provide data products and direct access to data, and to share data across government. We're doing the same thing in the AI space: aggregating some of the services and then overlaying a common layer. There's a concept in Queensland Government of core and common platforms: what are the things we can do once, for the benefit of many, that other agencies can share? Data and AI are perfect examples of that. So when it comes to things like the LLMs, we're looking at providing what we're coining at the moment "safety as a service", where we overlay a series of controls around prompts, responses, alignment to an extent, and the government context. People can draw on that; they can draw on the code and apply it in their own environment, or they can use our shared environments. Because, yeah, we can't just sit back and say, well, we'll accept whatever the vendors put forward.

Having said that, there are multiple vendors, all with different products, so we're looking at where we can provide safe guardrails that can be applied across the board. And the same goes for data governance, data discovery, and data catalogs: we're looking at a closed data portal for Queensland, and we've actually done a pilot recently that was really successful. The ability to surface government data assets, that discoverability across the whole of government, is the first point of entry to understanding what's there, how it's governed, who owns it, and who the custodians are. So there's plenty happening in that space.
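As an illustration of the "safety as a service" idea, here is a minimal sketch of a guardrail layer that screens prompts and responses around an underlying model. The patterns and the call_model placeholder are illustrative assumptions, not the Queensland implementation; a real service would use proper classifiers and policy-managed rule sets rather than a couple of regular expressions.

import re

def call_model(prompt: str) -> str:
    """Placeholder for the underlying vendor LLM call."""
    raise NotImplementedError("plug in your model provider here")

# Illustrative patterns only; real controls would be far richer.
SENSITIVE = [
    re.compile(r"\b\d{9}\b"),        # a bare nine-digit identifier
    re.compile(r"password\s*[:=]"),  # credentials pasted into a prompt
]

def guarded_chat(prompt: str) -> str:
    # 1. Screen the prompt before it leaves the government context.
    for pattern in SENSITIVE:
        if pattern.search(prompt):
            return "Blocked: prompt appears to contain sensitive data."
    # 2. Call the vendor model only once the prompt passes the checks.
    response = call_model(prompt)
    # 3. Screen the response on the way back as well.
    for pattern in SENSITIVE:
        if pattern.search(response):
            return "Blocked: response withheld pending review."
    return response

Shared checks like these could sit behind a common API so agencies draw on one vetted control layer rather than each re-implementing their own, which is the "do once for the benefit of many" idea Nathan mentions.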

Rhetta: I think for the sake of our conversation, I'll bring it to the end and ask our last question, which is one we ask all of our guests. It's a thought experiment where we're parking costs and ethics and privacy and things like that aside: what's the one data set that, if you could gain access to it, you would just love to look at the insights from, and why?

Nathan: Well, we'll park the fact that I'm a government employee. So, what's my preference?

Rhetta: Your personal.

Nathan: Look, my first thought was Facebook, Google, the big social media platforms, although Google is much more than social media. At the end of the day, when we're doing data analytics, it's about understanding people: what they do, the decisions they make, the environmental impacts, and, for governments, how we serve them. So some of that rich information about what people do in their day-to-day lives, the decisions they make, how they interact, and the systems they use, particularly the phone; everyone's got a phone. And if you think about not just Google but Apple: you've bought a watch, you've bought a phone, and you've given them a really deep insight into you. I think it'd be amazing to work there and understand what they're doing with that information. And I think health is huge, right? Wearable devices. Personally, I think the messaging around a national health record wasn't handled really well in terms of the benefits of having aggregated health data in Australia for research and for better outcomes, because I think there's so much to be had there.

The other data set I wouldn't mind having is the banks'. Look at the information the banks have. I think back to the work we did throughout COVID trying to understand mobility; you guys did some work with some of the mobile phone data. The near-real-time insight the banks have: how much money you're spending, where you live, who pays you, so who your employer is, what benefits you're getting from government. Like I say, everything that comes in and out of your bank account, which is essentially everything you do in your life, every time you spend money, comes through the bank. The picture of the financial health of people, understanding the impacts of government policies: say we're spending a bunch of money in a region, giving people grants or subsidies or social services, how is that money flowing through? Again, in my personal opinion, I don't think anyone would want the government to have that level of insight into their daily lives. But at the aggregated level, it'd be really interesting to see that end-to-end flow of money in our circular economy.

Rhetta: That's a great answer. And thank you so much for your time today, Nathan. Just to finish, do you have any final thoughts, or anything else you'd like our listeners to know or understand about you? We'll definitely link to a couple of the discussion papers and other resources you mentioned, but what else would you like our listeners to know?

Nathan: Reach out if you're interested in having a chat; we're always keen to partner with academia and industry. Part of our role moving forward in the data and AI space, when we talk about setting up these leadership groups, is to include people from outside of government to inform our policy and positions.

Rhetta: And I can second that; that is a genuine call to collaborate, because we've definitely worked with you guys, and I know people from other universities and organisations have as well. So definitely reach out to Nathan and the Data and AI Unit. All right, thank you so much.

To listen to more episodes of Show Me the Data, head to your favourite podcast provider or visit our website ridl.com.au and look for the podcast tab. We hope that by sharing these conversations about data-informed decision making, we can help to inform a more inclusive, ethical and forward-thinking future. Making data matter is what we're all about, and we'd love to hear why data matters to you. To get in touch, you can tweet us @G_RIDL, send us an email or, better yet, follow, subscribe and leave us a five-star review. Thank you for listening. And that's it. Till next time.

MORE EPISODES