How Finance Teams Are Learning to Lead with AI
Transcript
[00:00:15] Melissa Howatson Welcome to The CFO Show. I'm your host, Melissa Howatson, CFO of Vena. Artificial intelligence, or AI, has redefined the very fabric of how we do business and manage finances. Algorithms can process millions of data points in seconds, and business decisions can be informed by AI agents that simulate countless market scenarios. You may be just starting your AI journey or already well on your way, grappling with questions about how to implement it effectively, use it responsibly, and develop the skills needed to harness its full potential. In this episode, we will unravel the complexities of AI, from its basic functions to advanced concepts like AI agents. We'll explore how these powerful models are trained, discuss best practices for working with AI, and look at how finance teams are leading the charge in AI adoption. Joining me today is Anton Medvedev, Senior Product Manager at Vena Solutions. With over seven years of finance experience, Anton leads AI-driven innovations at Vena, enhancing financial planning and analysis. His expertise in AI, machine learning, and Power BI has helped businesses streamline workflows and optimize decision-making in today's data-driven world. Anton, welcome to the show.
[00:01:43] Anton Medvedev Thanks so much for having me. Excited to be here.
[00:01:45] Melissa Howatson You've had a really interesting career. Can you tell us a little bit about your background?
[00:01:51] Anton Medvedev I've been at Vena for almost six years. Before Vena, I spent about seven years in industry, working in a variety of FP&A roles, so traditional analyst roles. My last role was as a controller, but I was working more in econometrics, trying to build machine learning forecasts to predict our orders and sales using economic indicators and our own internal indicators. So I really got to go deep into data structures and data science. I couldn't think of a better place to join than Vena to really draw on both my data knowledge and my finance knowledge. I have a CPA and a CFA, and I've had the privilege of working in AI, so it's been a fun ride, I would say.
[00:02:32] Melissa Howatson And, who says that accounting careers can't be exciting and take some interesting new and different paths? Did you know that getting into the product side of things was where you wanted to go next?
[00:02:43] Anton Medvedev Oh no, I really didn't. I think after working in industry, I really wanted to get exposure to more kinds of environments. So consulting was the next step for me. So I joined the services team here. And after being in services, it really became clear that I really wanted to build product to really impact as many customers as I possibly could. So that's where I've been for about five years now, building different products, building different things to help our customers really get the most out of their data and get the most out of their accounting systems. It was a bit of an evolution.
[00:03:13] Melissa Howatson That's great. So we're going to talk about AI today, and there's a lot of talk about AI. Can we maybe just start with some of the basics? You know, when you think about it, how do you define what is AI and its primary functions?
[00:03:28] Anton Medvedev AI is really the branch of computer science that's responsible for mimicking human behavior. So I'm sitting here, I'm talking to you. I know that you're a finance expert. I'm taping a show about AI. So I have these kinds of stimuli coming in to cue me to talk about AI, AI for finance. When I go back to my desk, I'm going to go talk to my engineers, so I'll be grounded in the engineering side of things. We have these stimuli that are always coming in, and our brains are working to reconcile them with the knowledge that we have about the world. Well, it's a very similar thing for AI models, right? They're trained on very large quantities of data, and as we prompt them, as we give them specific kinds of instructions, they really start to try to guess and anticipate: what am I to do next? They start to anticipate what kind of response this person is looking for from me.
[00:04:21] Melissa Howatson You mentioned prompts, and that's certainly something people are talking more and more about. What are prompts, and what should people who are using AI know about them?
[00:04:34] Anton Medvedev Prompts are really inputs into the AI brain. They're inputs that tell it what to do, so it can be a question, a set of instructions, any sort of guidance that will tell the model where it needs to focus its attention, right? So when I think about what makes an effective prompt, it really is grounded in the outcome that you want from it. It triggers the right part of the model's brain, or the right part of its knowledge base. So again, drawing on this example, if I were an AI model, being grounded in this conversation on AI and finance, that is the part of my brain that's lit up right now. In AI models, those weights would be activated, so to speak.
[00:05:15] Melissa Howatson When we think about prompts, it sounds like it's about finding the best way to communicate, to be understood in what it is I'm trying to ask. What are some of the tips that you would offer to do that well?
[00:05:30] Anton Medvedev Think of the AI as another person. Think about what information you would want in order to answer this question effectively. A lot of the time, that's entering background information, that's entering the persona that you want the AI to take on. It's really about giving it context as to the output format you would want, whether that's bullet points, pictures, or images. And when we talk about prompts and inputs, it's really important to recognize it's not all text-based anymore. Now we're seeing models that are capable of following us along as we work, looking at our screens, and taking that as input to figure out what's actually relevant for me at the time that I'm typing. Because if I'm working in Excel, if I'm building a forecast, chances are I would want some information grounded in that forecast. But if I'm writing an email, that's probably a different part of the AI brain that I would want activated at that moment. So as AI models evolve, the nature of prompting evolves greatly, far beyond text inputs, to really following us around and getting in-context injections.
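Anton's tips here, persona, background context, and output format, can be sketched as a simple prompt template. This is an illustrative, made-up example of structuring a prompt, not any particular tool's API:

```python
# Illustrative prompt template covering the tips above: persona, background
# context, the task itself, and the desired output format. All wording here
# is a hypothetical example, not a real tool's API.

def compose_prompt(persona, context, task, output_format):
    """Assemble a structured prompt from its parts."""
    return (
        f"You are {persona}.\n"
        f"Background: {context}\n"
        f"Task: {task}\n"
        f"Output format: {output_format}"
    )

print(compose_prompt(
    persona="a senior FP&A analyst",
    context="Q3 revenue came in 8% under plan, driven by delayed renewals.",
    task="Draft three talking points for the CFO's board update.",
    output_format="bullet points, one sentence each",
))
```

The same structure works whether the prompt is typed by hand or assembled automatically by a tool injecting context from what's on your screen.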
[00:06:42] Melissa Howatson It's a fast-moving area, it seems, although it's actually been around for quite a long time, but it feels like it's picking up speed. And yet we hear often that AI is a bit slow to be adopted. So, what do you think are some of the reasons that it's taking a little longer?
[00:07:00] Anton Medvedev I want to kind of look at it through the lens of personal use cases versus professional use cases. I think when ChatGPT first came out and had this big splash event, most of my friends, I think, most of my family went and they tried it just to see what it can do. I remember trying, I remember asking, like, can you tell me about the moon, assuming that you're based in 1850? It knew what we knew about the moon in 1850.
[00:07:24] Melissa Howatson Wow, interesting.
[00:07:26] Anton Medvedev That was really cool. It's like, wow, this is so fantastic. Let me see how far I can push this. But when it comes to our professional lives, there's an additional requirement there. It's not just general data that the model needs access to, but it needs access to our own organizational data. And that's really where the challenge lies, how do you get that organizational data into these really large AI models? How do you do it in a safe way? And there's a couple of different ways to do this. But only a few of them are really possible for companies like us because we don't have the resources to retrain a whole model. That's where it's difficult to a certain extent, because we need these models to know our organizational data, but getting that knowledge in can be quite challenging.
[00:08:11] Melissa Howatson How can businesses help to train the models and help us understand more about what these models are?
[00:08:19] Anton Medvedev Without getting too technical, really, the idea is that tools like Microsoft Azure have ways to store our organizational data and make it available to the AI model to access on demand. For example, if I'm a finance user, I need to load my hierarchy. Our hierarchies are very important for finance users because at the granular level, the transaction level, data is useful, but it's really hard to do analysis at the transaction level. We need to know a gross margin, so we need to know how the gross margin is calculated from our hierarchies. By loading our hierarchy into Microsoft Azure and asking questions about it, what we can do is effectively train the AI model on our hierarchy. So it's really about figuring out how to use all of these database tools that are available out there to augment those existing models.
[00:09:13] Melissa Howatson And when you talk about a hierarchy, what would be a simple example of that?
[00:09:17] Anton Medvedev I always think about gross margin. If you go and ask an AI model, “How do you calculate gross margin?” I'm sure it's going to know how to do it. But then if you have your own hierarchy where revenue is in account 1,000 and gross margin is in account 4,000, for it to do that calculation, it needs to know those two things. It needs to know, like, what is your North American operating entity? So for these tools to really become relevant for us in our workflow, we need to expose that data to these models.
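To make this concrete, here is a toy sketch of the mapping the model has to learn before a generic formula becomes usable. Accounts 1000 (revenue) and 4000 (gross margin) come from the example above; the cost-of-goods-sold account (2000) and all balances are hypothetical:

```python
# Toy sketch of the account knowledge a model needs. Accounts 1000 and 4000
# come from the example in the conversation; the COGS account (2000) and the
# balances are made up to complete the calculation.

hierarchy = {
    "1000": "Revenue",
    "2000": "Cost of Goods Sold",  # hypothetical account
    "4000": "Gross Margin",        # calculated: account 1000 minus account 2000
}

balances = {"1000": 500_000, "2000": 320_000}

def gross_margin(balances):
    """Gross margin = revenue (account 1000) minus COGS (account 2000)."""
    return balances["1000"] - balances["2000"]

balances["4000"] = gross_margin(balances)
print(balances["4000"])  # 180000
```

The generic formula ("revenue minus COGS") is public knowledge; the account numbers it maps onto are not, which is exactly the gap organizational data has to fill.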
[00:09:49] Melissa Howatson I think that companies get nervous about how to expose it, how to know it's going to be safe and that this doesn't just run out of control or you've exposed data that you shouldn't. How are you seeing companies get beyond that concern, and making sure that they're doing this in a responsible way?
[00:10:07] Anton Medvedev I think the AI ecosystem is evolving very rapidly. So you have companies like OpenAI, or Google with Gemini, that really specialize in training the models. And we can talk a little bit about what training the model actually means. But then you have companies like Microsoft, which has Azure. You have AWS. And of course, you have Google's cloud systems as well. Their value prop to the ecosystem is making sure that the trained models are actually accessible in a secure way. Number one, that means your data is not used to train the master model, right? You can't have any data leakage. We also need to make sure that not only is the data not used for training, but there's also a level of safety that we apply for our users. We don't want anybody asking anything that's inappropriate or anything that we don't want our model to answer. So a lot of these tools come with content safety filters, and a lot of vendors actually add their own content safety filters to anything that's built on top of those models. So really, there are three tiers that have been forming: one is the underlying model, two is the distribution platform like Azure, and three is the specialized vendors that sit on top. That's how I've been thinking about this problem.
[00:11:29] Melissa Howatson Companies should look at all three of those layers to make sure that there are controls in place at each of them, so that they know how this is ultimately working.
[00:11:41] Anton Medvedev Yeah, and I think that can be a pretty tall ask for corporate security teams, to assess all three layers. So we have to pick and choose the partners that we trust. Ultimately, for most consumers, it's really about: does my vendor guarantee that my data is not going to leave? And it's up to the vendor to make sure that Azure or AWS doesn't leak. It's kind of like a chain effect all the way through, because it can be a pretty tall task having corporate security go through all three layers.
[00:12:08] Melissa Howatson And you talk about training these models. So, and I think that's one of the things too that some people are just well on their way, and have figured out how to do that, and then they're getting the benefits, and others don't necessarily know where to start or really understand what that means. So can you explain a bit more about how that's done?
[00:12:28] Anton Medvedev I think there's two layers to it. One is the general model level. Let's use OpenAI as an example. The way that OpenAI trained its models is, really, they use the word tokens, but let's assume that a token equals a word for the time being. They feed in millions and millions of articles and different things from the World Wide Web to make associations between words. So let's say the word lion comes up very often with the word safari. At a certain point, the model is going to say, okay, I think that these two are related, because the weight between those two concepts is quite high. Let's say, for whatever reason, one article mentions the word snow. Well, lion and snow isn't necessarily something we think about, and because the frequency of occurrence in its training data is quite low, it associates lion more strongly with the word safari. So maybe it'll start to understand that a lion lives on a safari. And if the word zebra came up and somebody asked, what's the relationship between lions and zebras? Let's say for now that the model was not taught that lions eat zebras. But by knowing that lions are at the top of the food chain and that they both live on the safari, the model should be able to extrapolate; that's where the intelligence comes in. Both of them live on the safari, and lions are top of the food chain, therefore lions probably eat zebras. That's where it starts to get into how it benefits businesses. Going back to our example, how does the model know that my revenue is account 1,000 and my gross margin is account 4,000? Well, maybe there's enough context in the way that you name that account for the model to be intelligent enough to go and pick it up. And I think that's where it's really a matter of going in and seeing where the model works well for your use case and where it starts to fall over.
So I think this concept of play is really important, and it's very different for everybody, because AI models are built to make associations from natural language, right? It's really important to understand, for your business, how far that goes. And I think the really good tools and a lot of the really great vendors out there are starting to recognize that, so they're building their own mechanisms to append company-specific knowledge to the general AI model. Really, it's all about pushing the model and seeing what levers I have to pull to make sure that this model knows how to do this specific thing for me.
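The word-association idea above can be illustrated with a toy co-occurrence count. A real model learns weights over billions of tokens rather than counting pairs, but the intuition, that "lion" pairs with "safari" far more often than with "snow", is the same:

```python
# Toy illustration of word associations: count how often word pairs co-occur
# in a tiny made-up "corpus". A real model learns weighted associations over
# billions of tokens; this just shows why lion-safari ends up stronger than
# lion-snow.

from collections import Counter
from itertools import combinations

corpus = [
    "the lion rests on the safari plain",
    "a safari guide spotted a lion and a zebra",
    "zebras graze on the safari",
    "fresh snow fell on the mountain",
]

pair_counts = Counter()
for sentence in corpus:
    words = sorted(set(sentence.split()))
    for a, b in combinations(words, 2):
        pair_counts[(a, b)] += 1

print(pair_counts[("lion", "safari")])  # 2
print(pair_counts[("lion", "snow")])   # 0
```

With weights like these, a model has grounds to relate lions to safaris (and, by extension, to zebras) even if no sentence states the relationship directly.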
[00:15:05] Melissa Howatson And really, if you think about it, that's no different than how our own intelligence grows: the more we're exposed to concepts and information, it sort of feeds itself, right, that greater intelligence. Let's switch over for a second to agents, because we also hear a lot of talk about agents. How should we think about those?
[00:15:28] Anton Medvedev I think when AI first came out, we were all kind of under this impression that this one AI model can do all of these things for me, and I can just give it a simple prompt and it'll figure out the specific execution path. But we've started to realize that these models need more context. It always goes back to context, right? So let's say we wanted an AI model that's capable of producing a financial forecast. Well, it's quite a tall task for it to do the entire cycle in one shot. But with agents, it's much more likely that we would have a system of specialized agents that are responsible for executing smaller tasks as part of your request. For example, if I ask for a forecast, I might want an agent to process my request to understand what kind of forecast I want. Then an agent that retrieves the historical data that's relevant for this particular question. An agent to actually go and mold this data into a form an AI model can forecast from, and then potentially another agent to do the forecast and explain why it came up with that forecast. By being able to build these specialized, almost tool-like entities in that ecosystem, we can take a very broad, very complex ask and create very specialized entities that can execute smaller chunks of that request.
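The pipeline Anton describes, parse the request, retrieve history, build the forecast, explain it, can be sketched as plain functions chained by an orchestrator. In a real system each step would call a model or a data service; everything here is a stub with made-up names and numbers:

```python
# Hedged sketch of the specialized-agent pattern described above. Each "agent"
# is just a function here; in production each would call an LLM or a data
# service. All names, numbers, and the naive growth-rate forecast are made up.

def parse_request(request):
    """Agent 1: figure out what kind of forecast is being asked for."""
    metric = "gross margin" if "gross margin" in request else "revenue"
    return {"metric": metric, "periods": 2}

def retrieve_history(spec):
    """Agent 2: pull the relevant historical data (stubbed)."""
    return [100.0, 110.0, 121.0]

def make_forecast(history, periods):
    """Agent 3: extend the most recent period-over-period growth rate."""
    growth = history[-1] / history[-2]
    projected, last = [], history[-1]
    for _ in range(periods):
        last = round(last * growth, 1)
        projected.append(last)
    return projected

def explain(spec, projected):
    """Agent 4: turn the numbers into a plain-language answer."""
    return (f"Projected {spec['metric']} for the next "
            f"{spec['periods']} periods: {projected}")

def orchestrate(request):
    """Route the request through the specialized agents in order."""
    spec = parse_request(request)
    history = retrieve_history(spec)
    projected = make_forecast(history, spec["periods"])
    return explain(spec, projected)

print(orchestrate("create a forecast for gross margin"))
```

The user still makes one broad ask; the orchestrator, not the user, decides which specialized step runs next, which is the point Anton returns to below.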
[00:16:50] Melissa Howatson And we want to have more of them breaking this down, because it's too hard for one agent to go broad, and it's quicker to have these tasks broken out. What's the big reason why you need more agents?
[00:17:09] Anton Medvedev There's kind of two things. Number one is that it's generally very hard to execute a very complex task in one shot. We've seen the advent of reasoning models, like o1, which actually take a step back and start to think about what steps are needed. But by putting it into an agent infrastructure, those steps are more hard-coded. So, for purpose-built tools, it's actually much more likely that the order of operations is going to be what we expect it to be, rather than just relying on the AI brain to do it. And secondly, a bit counterintuitively, some of these agent frameworks are actually quicker than just one big request. That's because the models required to perform one small task can be much smaller and much more purpose-built than one huge large language model running on the entire request. So it's cheaper, and it can be quicker as well.
[00:17:59] Melissa Howatson And so when people think about agents, there's frameworks and certain parts of this that can come purpose-built, then there's the aspect of how we ourselves interact with the agents. So where do we as the humans come into play here, and what should we know about getting those agents to work more effectively for us?
[00:18:23] Anton Medvedev Good AI tools orchestrate agents on their own. So as a human, your input will still be, hey, create me a forecast for gross margin for 2025. But in the background, it's really the AI brain that's intelligently routing the request to the appropriate agents in the appropriate order. And that's what we've seen: Salesforce named their event Agentforce, so they're really doubling down on agents, and a lot of software development kits, like Microsoft Azure's, have infrastructure for standing up agents. That's really meant for software development teams to go and create these agent orchestration patterns. But as humans, we should be interacting with one entity.
[00:19:06] Melissa Howatson And so for us as the users of these tools, we want to understand some general knowledge of how that's working to figure out then how we can interact with it. How are you seeing folks that are using this, that are getting more advanced in finance using these tools? What are some of those key skill sets they're gaining and how are they going about getting that skill set?
[00:19:31] Anton Medvedev I want to use the analogy of a race car driver, because they say that the best race car drivers really know which part of the tarmac has the most traction. And the reason they know that is because they've probably gotten it wrong a thousand times and skidded all over the place. I would say it's a similar thing with AI models. It's really about getting in and pushing the boundary of it, seeing where it falls over, seeing where that traction lies, and how I as a user can augment that. What kind of context is the model missing to execute my request properly? That might seem like a tall task, but the moment you start to play with these models, especially when it comes to our day-to-day careers, we start to see where they go off track and where they need more context. So it's really about learning how to prompt these tools. And that's an iterative process of seeing, okay, I can go this far, but to get a little further, I have to add this additional piece of context. And that's where vendors come in: they try to automatically inject that context for you based on what you're doing in their tool.
[00:20:32] Melissa Howatson So how much patience do you think this takes for the user? It sounds like you can't be too afraid of failing. You have to be willing to take a chance, and then you're going to put in some of the effort to keep working with it, because it's going to get better, alongside obviously what your vendor can do to help you accelerate this. So, how are you seeing people go about this and be successful?
[00:20:59] Anton Medvedev Well, I think it's really about how much value-add is on the other side of it. And if there's enough value-add for finance teams, they're willing to put in a little bit of upfront effort to really teach the model the uniqueness of their businesses, right? I think the way that vendors are going to shape up is that we're going to have vendors for every vertical. So you have a vendor for finance, potentially a vendor for HR or supply chain, and they can really inject a lot of domain-specific expertise into the models. That takes a lot of the burden off of the end users. I don't expect there to be a lot of patience or back-and-forth required of users, because these models should be very aware of what's a common metric for the industry. You should just have to tweak the last little bit for anything that's really IP or anything that's specific to your company.
[00:21:56] Melissa Howatson I've heard that some companies that are being successful with this have identified those individuals internally that are willing to be earlier adopters like with anything. And then they help to show others, and you get this excitement that can almost get generated about look what it's doing because it's giving me enough value on the other side, like you said, when I did this thing, and then look at the output of what I got. How important do you think those early adopters are right now?
[00:22:26] Anton Medvedev AI, like any other kind of big transformation, is not just a tool on its own; it comes with a little bit of change management, right? So I think those SMEs are really important to tweak things that last little bit for their particular industry. But more so, they're willing to really champion the change effort that's required to drive adoption of AI. And I think finance teams are really well positioned to do both, because one, they're SMEs in their industry and vertical, and two, there are just really big benefits to be reaped on the other side of that equation.
[00:23:01] Melissa Howatson Now, in your position, you get to interact with a whole bunch of different finance teams and see the ways that they are leveraging AI. What are some of the most impactful ways that you're seeing finance teams leverage it?
[00:23:17] Anton Medvedev When I think of somebody in your position, or somebody in a VP position, efficiency is a big topic, the efficiency of finance. But I think by making finance teams more efficient, we're only just scratching the surface of what's possible. We really want everybody in the business to be able to use AI tools, right? This kind of democratization of data is really important. If you think about how much time the finance team spends retrieving data for themselves versus how much they spend retrieving and analyzing data for everybody else, really, the time savings come from making sure that everybody else in the business has their own FP&A analyst, so to speak. The most successful customers that we've seen so far are doing two things. Number one, they're making these tools available to everybody in their business. They're learning to play with AI, making sure that they're comfortable with the answers it generates, that it has the right context to answer the frequently asked questions. And then they're distributing it to the C-suite. And a lot of our customers are not just targeting the C-suite, but their operational base as well. We have a number of customers in the hotel industry, and for them, a key metric is cost of cleaning per room. We've seen a number of customers teach Vena Copilot how to calculate the cost of cleaning per room, which is a pretty cool metric, if you think about it, for AI to be able to calculate.
[00:24:46] Melissa Howatson That would be, for instance, in a hotel where they're constantly updating and getting real up-to-date information on what that cost is, if that's one of their key metrics.
[00:24:58] Anton Medvedev That's correct. And now their operations people can just ask Vena Copilot, “Hey, what's my cost of cleaning per room in May? And what is it in June?” to see how it evolves. If you think back to before AI, that would be somebody asking their analyst to do that analysis, the analyst coming back a day or two later, and then you having to make a decision at that point. Now the time to action is much shorter. And I think that's where the value of AI goes beyond FP&A just being able to do more data querying.
[00:25:30] Melissa Howatson If you had to give advice to people about the top two or three best practices for leveraging AI, what would that be?
[00:25:38] Anton Medvedev Yeah, I would always think about what information I would need to answer this question, and try to provide that information to the model. I think that's first and foremost. It's really important that the model has the requisite context to answer. And number two is just to play with it. See how far you can stretch the model, see where it falls over, and see how you can augment what's required to make it actually run properly.
[00:26:05] Melissa Howatson That's really interesting, Anton. When I think then about vendor selection and how important that decision is, what would you give as advice to finance individuals when thinking about which vendors to choose?
[00:26:20] Anton Medvedev There's two parts to it. Number one is: did my vendor actually somehow train or prepare this model to be used specifically for the finance use case? Because what we've seen is that these models are very intelligent, but they fall short a little bit when it comes to actually doing these calculations and performing these analyses in our professional lives. So number one is all about whether there's industry-specific knowledge that the vendor injected into the AI model. And two is: is there a mechanism for me to inject my company's specific knowledge into the tool? Everybody's a little bit different, right? We all have our unique KPIs. We all have unique ways of doing certain calculations. To really drive meaningful adoption, the AI tool that we're using needs to be able to perform those tasks. It needs to understand how to do those calculations, and therefore it needs to somehow glean that context, because it's just not going to be able to do it based on the general body of knowledge that it's trained on.
[00:27:24] Melissa Howatson As the CFO or the finance team making this assessment, how can we tell that's actually going to be the case, that we are going to be able to inject a certain amount of what we're doing? Because it's easy to hear that. But how do we validate it?
[00:27:41] Anton Medvedev Yeah, I think it's really all about trusting the partner on the other side of the table and being direct with your questions, saying, “Can your model handle our specific requirements? How do we actually get our specific requirements into the tool?” Because at the end of the day, there are only two ways that we know of right now to get those requirements in, and that is really tuning the model or performing RAG, which is Retrieval-Augmented Generation. If you ask those two questions and the vendor does one of those, then they should be able to facilitate your use case. But if they say that they're not using either of those methods, chances are they won't be able to do your own unique calculations.
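Of the two approaches Anton names, RAG is the easier one to sketch. A minimal version retrieves the company-specific documents relevant to a question and injects them into the prompt; here, simple keyword overlap stands in for the vector-embedding search a real system would use, and the knowledge-base entries are made-up examples:

```python
# Minimal sketch of the RAG pattern: retrieve company-specific documents
# relevant to a question and inject them into the prompt before it reaches
# the model. Keyword overlap stands in for vector-embedding search, and the
# knowledge-base entries are hypothetical examples.

knowledge_base = [
    "Gross margin is revenue (account 1000) minus COGS (account 2000).",
    "The North American operating entity rolls up to entity NA-01.",
    "Cost of cleaning per room is total cleaning cost divided by rooms cleaned.",
]

def retrieve(question, docs, k=1):
    """Rank docs by word overlap with the question; return the top k."""
    q_words = set(question.lower().replace("?", "").split())
    ranked = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def augment_prompt(question):
    """Prepend retrieved context so the model answers from company data."""
    context = "\n".join(retrieve(question, knowledge_base))
    return f"Context:\n{context}\n\nQuestion: {question}"

print(augment_prompt("How do I calculate gross margin?"))
```

The point of the vendor question is exactly this: without a retrieval step (or tuning), the model never sees the account mapping, so it can only answer from its general training data.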
[00:28:23] Melissa Howatson When it comes to those subject matter experts, the people who are willing to be your early adopters, one of the things that I've found myself as a CFO is, sometimes I didn't even realize what people on my team or my peers around me were already piloting. And when you ask and take the time to look at what it is they're doing, you really can open your eyes to some really new, cool use cases and that curiosity to want to learn more, and then you can start to see how applicable what one individual is doing to another part of the business, and how if you can take that knowledge, and just stop and take a bit of time to apply it in other parts of the business, how it can really give a boost. And give a boost to that employee to have some recognition for having taken the chances and learned with the tool and driven a more efficient process or gotten a better answer or outcome because of the work that they did.
[00:29:28] Anton Medvedev Yeah, that's really interesting. And I think for a lot of our leaders, it's really about allowing this kind of grassroots effort to see how AI is useful for them. But as leaders, it's all about drawing the box of what's acceptable from a security perspective.
[00:29:41] Melissa Howatson You know, it's interesting, because as leaders, we spend a lot of time thinking about how we get our teams to leverage AI. When I reflect on myself as a CFO, and what I need to do with it, I think we've also got to lean in to learn it, and not be afraid to take those risks, which can be really hard to do as the CFO, who really is that steward of risk. We also have to acknowledge what it is we don't know and learn from those folks around us who are leaning in, whether that's our peers, being curious about the things they're doing with AI and what results they're getting, or our employees, both supporting them and being vulnerable enough to learn and improve our own skill sets, because we're all in this together. This is new, and there's this learning happening across the board, and we don't always have the tools to teach, because we're in that boat learning right along with our teams. In fact, some folks who have been using this in other parts of their career, or who are fresh out of university, have a lot to offer us in terms of helping us get our own skill sets up. So we have to lean in just as much as we're encouraging our teams to do. We've got to look in the mirror and make sure that we're doing it too.
[00:31:08] Anton Medvedev Yeah, absolutely.
[00:31:10] Melissa Howatson It's been great to dive in a little deeper on AI and unpack some of these key concepts and how finance teams are leveraging and learning about it. So I really appreciate you sharing your insights with us. Now, whenever we have a guest on the show, we do have two rapid-fire questions. Are you ready for those?
[00:31:31] Anton Medvedev Yep, definitely.
[00:31:32] Melissa Howatson What is the hallmark of mature finance teams in terms of the way they operate?
[00:31:37] Anton Medvedev I think really knowing their stakeholders and knowing what their stakeholders need, right? Finance is actually set up as a services function. You have your inputs, your data, your stewardship, and you make sure you can get stakeholders not just the data but the insights and the strategic partnership they need to draw the correct conclusions and fulfill their requirements.
[00:32:00] Melissa Howatson And you've been able to sit in a couple different seats within a business to really see how that can be beneficial both as the recipient of the information as well as the FP&A individual providing it. What is your go-to strategy to stay productive during the workday?
[00:32:18] Anton Medvedev That's a great question. I used to operate in a way where I would save complex tasks until I had an hour or two to actually do them. But what I've started doing lately is timeboxing myself to half an hour and just asking, how much can I possibly do in half an hour? I've found that helps me stay focused much better than the alternative. So that's something I've been doing this year, and it's been working out really well.
[00:32:41] Melissa Howatson That's a good one because sometimes in finance when tasks seem daunting, those are the ones we keep putting on the list. So great advice there. Again, thank you so much for joining me.
[00:32:52] Anton Medvedev Thanks for having me.
[00:32:55] Melissa Howatson Want to learn more about today's discussion? Check out the resources in our show notes. And don't miss out on future episodes. Subscribe to The CFO Show on your favorite podcast platform. For The CFO Show, I'm Melissa Howatson. Until next time.
About This Episode
Are you leveraging AI to its full potential in your finance department?
In this episode, we dive into the practical application of AI in finance with Anton Medvedev, Senior Product Manager at Vena and CPA/CFA. With his unique blend of finance and tech expertise, Anton offers a roadmap for effectively adopting AI in your financial operations.
Anton shares insights on training AI models, understanding AI agents, and effective prompting techniques. He provides real-world examples that demystify this fast-moving technology. Whether you're experimenting with AI in Excel or trying to choose the right vendor, this episode offers practical strategies for getting started, scaling smartly, and ensuring your team stays ahead.
Learn how you can transform your finance team from AI novices to strategic innovators, harnessing the power of AI to drive value in your organization.
Discussed in This Episode:
- The core components of a good AI prompt
- What AI agents are and how they improve speed, security, and execution
- Why early adopters are key to driving internal change
- How to evaluate vendors on AI readiness for finance
- Common missteps teams make with AI and how to avoid them
- Why CFOs must personally engage and lead the AI learning curve
Episode Resources:
- The State of Strategic Finance 2025 Report [Vena in partnership with BPM Partners]
- Excelerate Finance Summit
- What Are AI Agents? Here’s How They’ll Transform FP&A
- Will AI Replace FP&A Jobs? The Real Impact of AI on FP&A
- Should Finance Teams Use Generative AI? How To Implement AI Securely
- The Smart CFO's Guide to AI in Finance
- How AI Can Power Finance Productivity—and Why Doing More With Less Matters
- How To Use AI for Financial Modeling and Forecasting
- 100+ AI Statistics Shaping Business in 2025
Related Episodes:
- From Fear to Framework: Safe AI Implementation for Businesses
- The State of Strategic Finance 2025: AI Adoption, Collaboration and Technology Gaps
- How AI is Reshaping Work
- AI-Powered Finance: Developing Your Implementation Roadmap
- Navigating the AI-Driven Future of Business with Microsoft
- Driving Finance Innovation With AI
- AI Revolution: Unlocking FP&A Potential