From Fear to Framework: Safe AI Implementation for Businesses

June 16, 2025

Transcript

[00:00:15] Melissa Howatson Welcome to The CFO Show. I'm your host, Melissa Howatson, CFO of Vena. One of the most pressing topics for business leaders today is the safe and strategic integration of artificial intelligence into business operations. If you've tuned in to our earlier episodes on the power of AI in business and finance, this conversation builds on those insights, offering practical tips to help teams get ready to move from concept to execution. Joining us to break it all down are two distinguished guests. Kaz Takemura is the Managing Director of FP&A Technology at Modelcom Inc. With over two decades of experience in the financial software space, Kaz helps organizations optimize their planning processes through modern tech solutions. As Vice President of Data Privacy and Corporate Security at Vena, Steve Browning implements Vena's global privacy and data protection strategy. With over 25 years of experience, Steve is an expert in software management, coding, networking, security design, and support. Kaz and Steve, welcome to The CFO Show.

[00:01:30] Steve Browning Thank you. Happy to be here.

[00:01:32] Kaz Takemura Super happy to be here.

[00:01:33] Melissa Howatson So, before I dive into the questions, let's start by getting you both to share with the audience a bit about your background. Let's start with you, Kaz.

[00:01:42] Kaz Takemura Sure, thank you. So I've been in the FP&A technology industry for just over two decades, currently working with Modelcom, a boutique consultancy based out of Montreal. One thing that makes us unique is that we're able to do all of our projects completely in French, so we're happy to work with French-speaking clients and also divisions of multinational organizations that have French-speaking staff. I've worked with companies of all different sizes, complexities and industries, and these days I'm doing a lot of work with Vena.

[00:02:17] Melissa Howatson So you've got many years of experience in FP&A, and you've also seen more recently what's happening in this world of AI and security that we're about to get into.

[00:02:28] Steve Browning I have been working at Vena as the VP of Data Privacy and Corporate Security for four years. And prior to that, I spent two decades designing, building and ultimately running cybersecurity and information security programs. Some people say that means I've been a professional paranoid for two decades, but I feel like I bring a pretty unique experience to the world of AI. And full disclosure, I am a huge proponent of AI technologies. As a risk professional, I think that we can implement these technologies in a way that allows us to really maximize the benefits of AI while minimizing the risks. And so I'm super excited to be here with both of you today to talk about that.

[00:03:14] Melissa Howatson Let's dive into that, because risk has definitely been something that's been increasingly top of mind for companies, at least for the last decade, and it's not going away. And now we've got AI; some will tell me AI's actually been around for a while, but it's certainly picking up speed in the last couple of years. So, what do you think are some of the principles that businesses should be thinking about as they embark on this AI journey? We'll start with you, Steve.

[00:03:46] Steve Browning When I think about AI, from my perspective it falls into the category of being a non-negotiable technology. And I'll tell you what I mean by that. Back in the mid-90s, companies were having serious discussions about whether or not they should go onto the internet and whether they should have a web presence. Some of those organizations decided at that time that they weren't going to do it. But very shortly thereafter, as we know, the world changed, and they ended up going down that path. I think AI is similar: a disruptive technology that becomes almost mandatory for organizations to really improve efficiencies. But having said that, I don't think that means organizations should approach this in a clumsy way. I feel like we need to do it very thoughtfully. And so we go back to first principles, which are that you want to make sure you understand your objectives: what are you trying to accomplish with AI? And then really work to refine those use cases in such a way that you can get a good view of the risks. When you begin to understand what you're gonna do and how you're gonna do it, then you can assess risk and ultimately land on some guardrails that will allow you to operate AI in an effective and safe way.

[00:05:00] Melissa Howatson How about you, Kaz? Any other principles that you would add?

[00:05:03] Kaz Takemura Yeah, for sure. I totally agree with what Steve said. I, you know, I really look at it as a project, you know, a technology project. And so putting all the formality around projects, I think is very appropriate. So, you know, defining your goals and objectives, laying out what your strategy is, putting together a team, probably cross-functional. You know, AI is a technology that can benefit virtually all parts of the organization. So you want to make sure that you have that all in place. And then, really, I would take a bit of an iterative approach. You know, start small, build on your wins, see what worked, what didn't work, and then sort of rinse and repeat.

[00:05:38] Melissa Howatson It can sound simple or basic: okay, we've got to get started, for instance. But even getting there, I think, is a huge win for a business, because you need to get the leadership on the same page about it. Then, internally, you can start to formulate the team and some of these guardrails, and start to figure out how you're going to be thinking about this and protecting the business. But you can have differing opinions right out of the gate, I would imagine, in many companies about just how averse to risk you should be. Some companies will say, hey, we've got to try to make this work; that is an absolute must-have part of our framework, so show me why we can't. Whereas others will have different kinds of guardrails in place. So I think just getting on that same page is a huge starting point.

[00:06:31] Steve Browning Absolutely.

[00:06:32] Melissa Howatson Speaking of barriers to implementation, then: I think there are things getting in the way of people getting started. What do you both see, from your perspectives, as some of the biggest barriers?

[00:06:45] Kaz Takemura So I think that there's still a lot of fear of the unknown out there. AI is a little bit intimidating to some, and it's just really something that we all have to overcome. I watch the late-night talk show hosted by Stephen Colbert quite often, and he recently had Reid Hoffman, the co-founder of LinkedIn, on, and he's a big proponent of AI. I very much agree with his position that we just have to get out there and try AI, start to use it. I remember when ChatGPT 3.5 first really came into the public sphere, I jumped on fairly quickly, created an account, and the first question I asked it was to write a funny haiku about consulting. Just start to use it. I eventually got into more business uses of it, but really just getting out there and trying it was important for me.

[00:07:38] Melissa Howatson And it must have been a good enough haiku that you kept on going.

[00:07:42] Kaz Takemura It was pretty… well, the second question I asked was, what makes a poem a haiku? So I had to get a refresher on that. But in terms of AI in general, Reid was saying that it's probably the greatest learning tool that's ever been created. Everyone knows that AI is developed to learn from us, but we're equally able to learn from it. And I think that's an important point.

[00:08:07] Steve Browning I love that. Yeah, I 100 percent agree. And talking about fear as a barrier to implementing AI, I think the antidote to that fear is knowledge, and knowledge comes from experimentation. So I think it's essential for people to begin using the technology, as Kaz said. One other thing I would say, when we're thinking about barriers: there's fear because it's new, and I would agree that it's an emerging technology. But additionally, there's this human dimension. A week doesn't go by where the media doesn't report another large tech organization, or just a regular organization, saying they've laid off 10 percent of their workforce because of efficiencies gleaned from AI. And I feel like that leads to fear as well. The way I look at AI, as a risk professional but also as a very humanistic manager, is that AI, like any automation tool, can help to reduce commodity-based tasks within our workday. And that really should create space to allow our colleagues to focus on high-value work. So I feel like that's another way of trying to get around this notion of fear of AI and potential displacement as a risk.

[00:09:28] Melissa Howatson Well, and you both touch on that: so many fears almost multiplying, I think, for people. The risk of, is this safe? What's it going to mean? Will it take over? We've seen movies that predicted that. And the risk to my job: the things I know, is that going to change as well? I heard it said not too long ago that AI isn't going to replace you, but an AI-enabled person will. And that sort of helped reframe it for me. You think back to folks who didn't know how to use Excel and maybe were still using calculators or other ways of doing things. Eventually, you can't be that person who is still trying to do things the old way, because you'll have obsoleted yourself. And so for me, I think that helps move it into: don't be afraid of it. Actually, the thing to be afraid of is not leaning into the knowledge, understanding what it is and how it can be done safely, and how I myself can advance my own career by being somebody who is AI-enabled and able to leverage this to be even better at the job that I've chosen to do. And I think as leaders, we have a responsibility to help our teams and our companies upskill and collectively learn how we can use these tools for the betterment of the business. But it's the pace at which it's happening that makes it feel more rapid, maybe, than some of these prior technologies. There really is no choice. We've gotta start embracing it.

[00:11:05] Steve Browning Non-negotiable. Yeah, I think we have to move forward.

[00:11:06] Kaz Takemura Yeah. And I do think that the focus needs to be on how it can augment us, not necessarily replace us. The human element is so important. I really look at AI as like a super smart whiz kid: able to do lots of theoretical things and figure all that stuff out, but lacking the knowledge and wisdom that only experience can give. And so the more AI technology I use, the more aware I am that I need to be watching over it to make sure that, while it might give a technically correct answer (and sometimes not), the answer might just need to be tempered with a little humanity.

[00:11:48] Steve Browning And if I can expand on that point, I feel like AI in some ways becomes the interface between the creativity that the human brings to the calculus and the specialized technical knowledge. Nowadays you can ask ChatGPT to develop a web page for you. Maybe you were never a developer, but you have an idea of what you want it to do. So it becomes a tool that takes specialized knowledge and puts it into a bucket, and then you can direct that tool using creativity. I feel like that is one of the ultimate expressions of leveraging these tools: I don't need to be an expert in everything. I've got this super-intelligent AI, but I can bring my perspective to the table. And I think that's essential.

[00:12:30] Melissa Howatson Yeah, that's a good point, right? It can actually help supercharge us and allow us to focus more on the things where we want to be spending our time. So now let's move into how we can safely onboard AI. What are some of the best practices or ways you're seeing companies go about this? What teams are they pulling together to figure out how to get started? Steve, we'll start with you.

[00:12:58] Steve Browning When I think about governance in AI, it's not unlike the governance you would stand up for a given technology project, but I think it's unique in a couple of interesting ways. First, going back to first principles, you really have to understand what your use cases are at a good, detailed level. Once you have that, you have to begin the process of risk-assessing those use cases. As Kaz pointed out earlier, having a cross-functional team deployed to start those discussions is important. From my point of view, you obviously have finance professionals, but you also want legal, security, compliance and governance folks if you have them, IT, and human resources together. What that gives you is a diversity of opinions with respect to the risks that AI poses, because a given risk may not be immediately apparent to every subject matter expert, and that diversity of perspectives is important. Once you get a sense of the key risks for your use case, that naturally leads into creating guardrails. Guardrails can be technical in nature (we're going to use this tool and not that tool) or more about usage (we're not gonna use this data within the context of AI, but we are gonna use that data, as an example). Those guardrails then lead into the creation of technical controls that you can implement to provide protection, but also policies, procedures and guidelines to round out the governance umbrella that you want to put over this risk. The one thing I will say, though, is that a lot of that sounds like something you do at the outset of your AI project, but I really want to reinforce that AI governance has to be sustainable over the entire lifecycle of your AI implementation. The reason is that risks tend to change over time. As our comfort level with our use cases increases, those use cases can become more complex. You're going to be leveraging additional data sources, which can increase the risk. So this ongoing risk assessment is absolutely essential to ensuring that you're not exposing your company to too much risk over a period of time.
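
To make the idea of a technical guardrail concrete, here is a minimal sketch in Python of the kind of check Steve describes: an approved-tool list plus data-classification rules consulted before a use case proceeds. Every name in it (the tools, the data classes, the function) is a hypothetical illustration, not a prescribed standard.

```python
# Hypothetical guardrail policy: approved tools and prohibited data
# classes, checked before an AI use case goes ahead.
APPROVED_TOOLS = {"corp-chatgpt", "corp-copilot"}   # governed, paid tenants
PROHIBITED_DATA = {"customer_pii", "financial_forecasts", "source_code"}

def check_ai_usage(tool: str, data_classes: set[str]) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed AI use case."""
    if tool not in APPROVED_TOOLS:
        return False, f"{tool} is not an approved AI tool"
    blocked = data_classes & PROHIBITED_DATA
    if blocked:
        return False, f"data not permitted in AI tools: {sorted(blocked)}"
    return True, "use case falls within current guardrails"

print(check_ai_usage("corp-chatgpt", {"public_marketing"}))  # allowed
print(check_ai_usage("free-chatbot", {"customer_pii"}))      # blocked
```

The point, in Steve's framing, is enablement: a check like this tells people where they can go fast, rather than simply blocking use outright.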

[00:15:15] Kaz Takemura I think Steve said it all there. The only thing I'd underscore, again on cross-functional representation within the project, is really making sure that everybody's security and risk needs are being met, and also that everyone can benefit from the learnings and wins and see how those can be applied to their areas of the organization.

[00:15:37] Melissa Howatson And it's interesting, because you talk about the team you pull together and the framework. As I reflect now, there are two teams that I initially overlooked out of the gate but now realize are critically important. One is HR. It didn't immediately come to mind as key, even though people are obviously involved, but there's a learning and development aspect that needs to come with this: an education has to happen as you decide on guardrails, on how you're gonna get people comfortable with the tools, and on what good, solid use looks like within your risk framework. The other was procurement, because they're at the front end, helping to identify these tools so that the upstream review processes you may need are actually applied; you may have had reviews before, but there are new criteria now. So there are a couple of partners where, as this grows, you start with a core team, you add a couple of others, and you come to a point where you realize some of them, like HR, are actually right there as a lead, because people are involved and it's also changing people's jobs and how they're able to do their work.

[00:16:49] Steve Browning For sure. And I would say the procurement piece is actually a really good place to start. If you want to understand what your risks are from an AI perspective, there are a couple of things to look at. Procurement gives you the opportunity to inject almost an AI triage process within the procurement workflow, so that you at least have visibility into what technologies are in the pipeline to be implemented. Then you can leverage your governance process to ensure you're minimizing risk there. Another key quick win is to validate what you already have. I can't remember the study, but there was one that basically said two out of every five employees are actively utilizing AI irrespective of their company's position on it, and it's probably higher now. So getting a bit of an inventory of the AI tools you're using right now can immediately give you recognition of the risks you might already have that you just don't know about. I think the combination of procurement and creating this inventory is really a good first couple of steps to start yourself on this AI governance journey.

[00:18:05] Kaz Takemura Yeah, I completely agree with you. I read a very similar statistic: basically ninety percent of people are using some sort of generative AI tool daily, and, I forget the exact number, but something like sixty percent of that usage is outside of the view of IT and security. And that's a little bit scary, especially given that a lot of laypeople don't necessarily know all of the risks involved in putting their information out there. As an example, whenever I'm using a public ChatGPT, I take out all names of people, companies, products and things like that, just to anonymize what I'm having it learn about what I'm doing. I also took a training course recently that spoke about being in a meeting where an AI chatbot is recording, transcribing and summarizing: if you're talking about sensitive topics or information, it's also learning from what's being shared in that meeting. That's something I wasn't necessarily super aware of before, but I'm acutely aware of now. So some of these do's and don'ts, I think, are very important to get out there.
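
As a rough illustration of the anonymization habit Kaz describes, here is a minimal Python sketch that swaps known names of people, companies and products for placeholders before a prompt goes to a public tool. The term list is entirely hypothetical, and simple substitution like this won't catch everything; a real list would come from your own context.

```python
import re

# Hypothetical sensitive terms and their placeholders.
SENSITIVE_TERMS = {
    "Acme Corp": "[COMPANY]",
    "Jane Doe": "[PERSON]",
    "Project Falcon": "[PRODUCT]",
}

def anonymize(text: str) -> str:
    """Replace each known sensitive term with its placeholder."""
    for term, placeholder in SENSITIVE_TERMS.items():
        text = re.sub(re.escape(term), placeholder, text, flags=re.IGNORECASE)
    return text

prompt = "Summarize the Q3 risks for Acme Corp's Project Falcon."
print(anonymize(prompt))
# Summarize the Q3 risks for [COMPANY]'s [PRODUCT].
```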

[00:19:22] Steve Browning Rules of the road for sure.

[00:19:24] Melissa Howatson It's a good point. We're learning as we go, which keeps informing what might have been the starting point of our frameworks, and we keep adding other things. I also think it's showing us we have to be open-minded. Even if we set out saying, okay, we're gonna pick this tool, there are humans involved. It's interesting: we're talking about AI, but there's this human aspect. If employees are going to be using other tools anyway, it means you may have to be open to saying, this is the ideal, but maybe we need to add a few others and trial them, so that at least what's happening is being done with your knowledge.

[00:20:00] Steve Browning Yeah. Having visibility is essential. And when I think about what we would call out-of-band use of AI in a company, one of the risks is: if the AI tool is free, then you are the product. That's essentially what's happening. So I think about transitioning from this free AI use within the context of your company to a paid corporate subscription, where you can have more control over things like not using your data to train the model, or not remembering every conversation and building up this corpus of information, or whatever the case may be. But in order to get that control, you need to know who's using what. Then you can figure out, okay, which tool should we select, and how can we ensure that we're getting the best terms for our business to reduce our risks.

[00:20:48] Melissa Howatson When we talk then, about an AI governance framework, there's obviously this need to balance innovation with security and balancing our risk. How have you seen companies start to develop what that framework should be to make sure they're striking the right balance there?

[00:21:06] Steve Browning It always starts through that risk lens: you have to understand what your risks are. And as a security and risk professional, you never really want to over-index on control. You want to ensure that the control framework you're designing is facilitating the use of these tools. Guardrails is an apt term, but what it really means is: I want you to be able to go as fast as you can within these guardrails. I don't want to hinder or limit your ability to be effective. So I think the balance comes from the willingness to do the right thing for your business while recognizing that experimentation is absolutely part of the process too. Creating guidelines that equip people with a good understanding of the risks is a really good place to start. And over my career implementing controls, I've found that education is like a force multiplier. The reason is that, yes, you can inform people that these are the right things to do and these are the wrong things to do, but in my mind that's secondary to really communicating the nature of the risk and what it is that you're trying to accomplish from a controls perspective. If people understand that, then they tend to do the right things, because we're treating them like adults: this is what we're trying to accomplish, and this is how you should be framing your thinking. Then, when presented with a risk, say they're in a meeting and someone starts talking about something overly sensitive with an AI bot on the call, it's, well, let's hold on a second. They can understand that. That is one of the key considerations.

[00:22:48] Kaz Takemura Yeah, and for me, when you talk about balancing innovation with risk and security, I think it has to be a partnership, and a partnership that starts right away. As you're coming up with your innovations and what you're creatively trying to achieve, you need to be doing that hand in hand with the security professionals who are going to protect you from maybe overstepping your bounds or doing something that ends up leaking information that's very important.

[00:23:18] Steve Browning Yeah, absolutely.

[00:23:20] Kaz Takemura Yeah, so I think starting out hand in hand really then just paves the way for a successful journey.

[00:23:26] Melissa Howatson And have you seen any examples, whether it be of your clients or in your own position, where what was originally adopted as the framework needed to pivot because you learned along the way that maybe you'd rotated a little too hard one way or the other in what had been originally identified?

[00:23:45] Kaz Takemura That's the way with pretty much every project, whether it's AI, an implementation of Vena, or some other technology. We all set out to do a certain number of things, but as we get into the details and really learn as we work through the project, we always have to pivot. In the end, though, we end up with the right and optimal solution for that business case.

[00:24:14] Steve Browning Yeah, and that goes back to what I was saying earlier about your governance having to be sustainable over the lifespan. There's an old adage, I think it's "no plan ever survives first contact with the enemy." It's a militaristic term, but what it essentially means is that you have to be adaptable: we came into this with guiding principles, but as we see it in action, we may need to pivot and change. I think that's part of the evolution of any project, but AI in particular, because the space is evolving a mile a minute.

[00:24:45] Melissa Howatson Now, what do you think are the top-most critical security considerations that companies need to be mindful of as they start to embark on embedding AI in their operations?

[00:24:57] Steve Browning I feel like AI is a technology, and so all of the standard technology-based controls certainly apply in the context of AI: things like access control, data protection, monitoring and those sorts of controls. But with AI in particular, due to the nature of what it is, data integrity is a very essential part of the control environment. What I mean by that is the concept of garbage in, garbage out: if your generative AI model is basing its insights on antiquated or incorrect data, then the actual output will be antiquated and incorrect. So integrity is key when it comes to AI. I can foresee attacks happening in the future that seek to poison the well, as it were: putting false information into the model to drive it to a particular result. You could potentially see that happening under certain fraud use cases. So the integrity of your data is essential, and to protect it, you really want to go back to basics and ensure that you have appropriate access control over your data. The only other thing I would point out is more of a human control. When generative AI is going to be used to support decision-making, it's essential for the human operator to really validate the output before making a decision, particularly when the decision is material in nature, like a strategy that's going to move your company down a certain path. So I think it's incumbent upon all of us, and Kaz, I think you even mentioned this earlier about validating the output: you may have a list of things you're analyzing and the output of the tool, but you have to take that and go back to your standard method of verification before moving down that path.
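
One way to picture that human control is a routing rule that refuses to act on AI output automatically when the stakes are material or the input data wasn't verified. The Python sketch below is a hypothetical illustration of that idea; the threshold, fields and function name are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class AIRecommendation:
    summary: str
    estimated_impact_usd: float
    source_data_verified: bool  # was the input data integrity-checked?

# Hypothetical materiality threshold for routing to a human reviewer.
MATERIALITY_THRESHOLD_USD = 100_000

def requires_human_review(rec: AIRecommendation) -> bool:
    """Material or unverified recommendations must be validated by a person."""
    return (rec.estimated_impact_usd >= MATERIALITY_THRESHOLD_USD
            or not rec.source_data_verified)

rec = AIRecommendation("Shift budget to APAC", 250_000.0, True)
print(requires_human_review(rec))  # True: above materiality, route to a reviewer
```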

[00:26:57] Melissa Howatson Kaz, let's talk about what role you think employee training plays in all of this, and how that training and education can be structured in a way that helps with security but also helps with driving the innovation that we're looking for.

[00:27:14] Kaz Takemura Right. So I think that education is key. AI is such a broad subject, and there's so much that we all don't know, that I think it's important, number one, for everyone to get a base level of education around what we're trying to accomplish, the technologies we're using, et cetera. And then there are so many things that, from an IT and security perspective, are pretty motherhood-and-apple-pie, but that from a layperson's perspective we're just not aware of. So all of those do's and don'ts, in terms of how to approach the technology, what technologies we're putting in place, what things you should be doing and what things you shouldn't be doing, are absolutely important, number one to educate the team, but also to ensure that we're not doing anything that's gonna compromise our data.

[00:28:10] Steve Browning Yeah, I couldn't agree more. And back to the statement around some of these controls, some of these recommendations that we expect of people in terms of doing the right things: we all know that we have to take our vitamins and eat healthy food, but it's good to be reminded from time to time, because that keeps you focused on doing the right things. So I couldn't agree more; I honestly think education is the heart and soul of any program, particularly around AI. And the one thing I would add is that we already have a lot of people on our teams who are absolute trailblazers from an AI perspective. They've adopted it, maybe in aspects of their personal lives, and they're champions for AI. Those folks can really help us mobilize this education process, because the people who are already passionate about the technology tend to be the best ambassadors. Going back to this fear of adoption, where everyone thinks the terminators are going to rise or Skynet is imminent, I feel like having those ambassadors, the folks who have already leaned in, is essential in terms of grassroots education, if that makes sense.

[00:29:28] Kaz Takemura Yeah, I think that's a great point. And I think they can bring the AI perspective from the lens of where they come to the organization from.

[00:29:37] Steve Browning Yeah, totally.

[00:29:37] Kaz Takemura And, I think that makes a really nice marriage.

[00:29:40] Steve Browning Agreed.

[00:29:41] Melissa Howatson Well, and one of the things I've found with that is that often the use case an employee has gone off and developed can be applicable, if you just slightly alter it, to your own world. For me, some of my biggest moments have been when a coworker says, hey, I just did something really cool, whether it's how I'm scheduling my day, how I'm tracking my to-do list, the meeting notes, or they've built something that has simplified the creation of documents, and they take a few minutes to show me, and you just have these wow moments.

[00:30:18] Steve Browning Because you don't even know what's in the realm of the possible. But you can see what people can do, and instantly you're hooked and ready to push forward. Yeah.

[00:30:25] Melissa Howatson A hundred percent. And then you say, show me how you did that, right? And so you're learning along for the ride. And yeah, as you said, we've got folks who are using it and we don't necessarily know, but when I see what some of them are actually doing with it, it is really cool stuff. And we'd all want a little bit more of that once we learn.

[00:30:51] Steve Browning Hundred percent. And I want to touch on that: how exciting is it that there's this whole sphere of technology where the use cases you're talking about are being created by folks who are just passionate about the technology? Every time I hear of one, I'm like, wow, that's amazing, and I want to do that too. At no other point over the course of my two decades have I gotten this excited about a technology. I know I sound a little bit like an AI fanboy, and I'm supposed to be a controls person, but I find it very exciting. Every time I hear of another use case like that, where I can lean in and make my life better, professionally or personally, using AI, it's unbelievably cool.

[00:31:35] Melissa Howatson Yeah, we just have to create the space.

[00:31:37] Steve Browning Yeah.

[00:31:38] Melissa Howatson And the education: arm people with the tools, the frameworks, et cetera, and then let them go.

[00:31:44] Steve Browning Exactly. Guardrails.

[00:31:46] Melissa Howatson Exactly. So thank you so much, both of you, for sharing all of your insights. Really enjoyed this conversation. Now, if you were to just give two pieces of advice, something you'd want the listeners to remember from today's conversation on this topic, what would it be? Start with you, Kaz.

[00:32:03] Kaz Takemura I would say just give it a try. Really get out there, try to use it, be creative and see what you can do. And then the second thing, probably, is don't forget the human element. You absolutely want to make sure that your knowledge, your experience and your wisdom are layered on top of anything that AI provides you.

[00:32:25] Steve Browning Yeah, I 100 percent agree with everything Kaz just said. And I guess what I would add to that is, you know, don't be afraid, equip yourself with knowledge, understand what these tools can do and what your use cases are. But then I think most importantly, really start with understanding the risks because I think that gives you comfort. Like, once you get to the point where you believe that your risks are mitigated, then it sort of opens the entire playing field to allow you to experiment and really do a lot of the exciting things that we've been talking about.

[00:32:55] Melissa Howatson Yeah, it's that fear of the unknown. So let's get a little more known, and then…

[00:32:59] Steve Browning Exactly. Shine the light on it and figure out what the issues are and fix them and then go forward.

[00:33:04] Melissa Howatson So before we wrap up, whenever we have guests on, we do have some rapid-fire questions that we'd like to ask. So, are you ready?

[00:33:12] Steve Browning I think so.

[00:33:13] Kaz Takemura I think so.

[00:33:14] Melissa Howatson You got this. Okay. First one for you, Kaz. What is the hallmark of a mature finance organization in terms of how they operate?

[00:33:22] Kaz Takemura Oh, gosh. Well, I'm working with a company right now, and what I really admire about them is that the CFO has assembled this really strong team. When I see them in action, in workshops and things like that as we're working through the project, it's a really collaborative environment where she's given them the ability to speak up and contribute. And it's that collaborative environment that eventually gets to the right answer.

[00:33:53] Melissa Howatson That's a good one.

[00:33:54] Steve Browning I love that, being authentic, like being yourself. Yeah, I love that.

[00:33:57] Melissa Howatson All right, Steve, what is something that you do during the workday to help you stay more productive?

[00:34:05] Steve Browning Okay. So for me, the way I approach the work I have to get done in a day actually starts the day before. At the end of every workday, I tend to take a bit of an inventory of what I'm going to be facing the next day: certain meetings, or tasks I'm trying to accomplish. I put those into my brain, so I make sure I understand what's going on and can approach the day equipped with purpose. Then at the beginning of the day, I try to prioritize what I want to get accomplished over the course of that day. Sometimes you're between meetings and you take a look at that list and you're like, oh, okay, I can bang that off right now, whether it's sending an email to somebody or doing something else. So for me, it's really about setting the intention for the day and ensuring that throughout the day I've got this punch list that I can continually go back to and close items off. By the end of the day, I feel like I've had a productive day.

[00:35:02] Melissa Howatson That's a good one. Get started the day before.

[00:35:04] Steve Browning Get started before the day starts. Yeah.

[00:35:06] Melissa Howatson Very good.

[00:35:07] Steve Browning Exactly.

[00:35:07] Melissa Howatson All right, my next one is going to be for both of you; Steve, we'll start with you. If you had to recommend just one book that has had a big impact on your life or your career, what would it be and why?

[00:35:21] Steve Browning There's a great book called Essentialism, written by Greg McKeown, I think his name is. This book essentially challenges us to prioritize the most important aspects of our lives and to focus on those aspects. As professionals, and we're all in the same boat, there's a tendency to want to do everything perfectly, so there's a lot of churn: we're trying to get everything done exactly the way we've set the standard for ourselves. What essentialism teaches us is that if you're gonna do something really well, you have to prioritize. You have to ask, where can I devote my attention that's gonna have the most impact in this moment? And then it's a continual adjustment over the course of the day: okay, I'm focusing on this, I'm gonna get this done, and then I'm gonna move on. It's about clearing your life of all the stuff that probably isn't essential. So I found it a really good book.

[00:36:25] Melissa Howatson Sounds applicable to both work and personal life. So you had a book that covered both.

[00:36:30] Steve Browning Yeah. See, I tried.

[00:36:32] Kaz Takemura I'm gonna go to the opposite end of the spectrum. One of my favorite books, which I used to read to my son at bedtime, is Shel Silverstein's The Giving Tree. It's a very short and deceptively simple book, but it has these very complex themes about unconditional love and the nature of giving and receiving, and it reads differently when you're a parent than when you're a child. I'm looking forward to seeing my son's reaction to the book now that he's in university, and as he gets older, maybe when he reads it to his children. So yeah, that's one of my favorites.

[00:37:13] Melissa Howatson Well, that's a good one. You make me want to go home and pull that book out. It's been a little while since I've read that one. My kids are similarly a little bit older, but that one is timeless for sure.

[00:37:22] Steve Browning Hundred percent.

[00:37:23] Melissa Howatson Well, thanks again. Really appreciate you both coming and having this conversation with me today. I've really enjoyed it.

[00:37:30] Steve Browning It's my pleasure. Thank you for having me.

[00:37:32] Kaz Takemura Yep. Happy to be here.

[00:37:35] Melissa Howatson If you enjoyed this episode and want to learn more on this topic, sign up for our free newsletter, where you'll get exclusive resources designed to help you grow your leadership skills and drive strategic transformation across your business. Visit TheCFOShowPodcast.com to sign up now. For The CFO Show, I'm Melissa Howatson. Until next time.

About This Episode

In this episode, host Melissa Howatson welcomes Steve Browning, VP of Data Privacy and Corporate Security at Vena, and Kaz Takemura, Managing Director of FP&A Technology at Modelcom, to explore how finance teams can adopt AI safely and strategically.

Steve and Kaz share practical, cross-functional strategies for embedding AI into business operations, from building the right teams to creating scalable governance frameworks. They also unpack the emotional and organizational barriers to AI adoption and offer tips to drive both innovation and risk management.  

Whether you're just starting your AI journey or looking to optimize existing implementations, learn how to move from fear to forward momentum, without compromising security or trust. 

Discussed in This Episode: 

  • Why AI is now a non-negotiable technology 
  • The importance of cross-functional teams in AI governance 
  • How fear and job insecurity are major blockers to adoption 
  • The risks of shadow AI usage and how to spot it 
  • Guardrails that enable safe experimentation, not restriction 
  • Why employee education is the secret weapon of successful AI programs 
