Video: While Others Simulate, We Deliver | Duration: 2036s | Summary: While Others Simulate, We Deliver | Chapters: Introduction and Overview (5.76s), Webinar Agenda Overview (94.5s), AI-Powered Search Evolution (177.29s), First-Party Data Strategies (419.435s), LLM Research Findings (630.25s), AI-Powered Real-Time Infrastructure (928.985s), Amplifying Customer Data (1033.55s), AI-Driven Performance Improvements (1198.77s), Advanced Commerce Features (1374.32s), Future Platform Enhancements (1533.485s), Q&A and Conclusion (1931.71s)
Transcript for "While Others Simulate, We Deliver": Hello, everybody. Good morning or good afternoon, wherever you're joining from. My name is Jordan Roper. I'm the general manager and head of product for discovery and search here at Bloomreach, and I'll be your host today. I'm excited to dig into this material with you, and thank you to everyone who has joined. We will be recording this session, so please feel free to forward the recording to anyone on your team who couldn't make it. Let me get my screen sharing started; give me one second. What we're covering today is a topic that has been very timely in the market: what to do with real versus synthetic data in commerce search, the pros and cons of each, and, most importantly for all of you, the approach and strategy Bloomreach is taking, how we're focusing our R&D efforts on providing value to our customers, and what that looks like in the platform today. It's been an interesting time in commerce search. Every couple of months there's a huge new development, whether it's a new tactic, a new model technique, or a new enhancement, and I hope to cover that over the course of this webinar. I have a lot of material, but I'm planning to keep it tight and clear, so we'll spend about thirty minutes, with Q&A right at the end. If you have questions during the webinar, please drop them in the chat or the Q&A section; our team will make sure they get fielded, and we'll take them at the end.
In terms of agenda, I want to start with a brief walkthrough of search and Loomi AI. Most of you are probably familiar with Bloomreach and don't need another company pitch, but there are some changes. I have three or four slides here, and I want you to understand how our strategy is evolving, particularly around AI and first-party data, and what that means for today's topic. Then we'll go into real versus synthetic data: where first-party data matters, what our approach is, what the research says about the divide between the two, and how we're navigating it. Third, I want to talk about how you can get value today, including results and ROI that our customers have seen from features, including recent ones, that are in the platform right now. And lastly, we'll wrap up with what's coming next on the roadmap during H1. With that, I'll dive right in. At Bloomreach, and with search specifically, things continue to evolve very significantly, and I want to talk a little about the strategy behind our product roadmap and the investments we're making in search and Loomi AI. When we say Loomi, and when we say search, it's clear to all of you, and it's clear to us, that search is really no longer just a simple lookup function. We've seen this in the query patterns and interactions within our data and on our customers' sites. Understanding much more about the queries and the ways people search is the moment we're in right now.
We need to understand differences in intent, adapt, and apply consistent relevance across the journey, so that we can answer long-tail questions like "waterproof jacket for New York winters under $200." As you can see in the screenshot, a lot of search platforms just don't do a good job of returning results for those types of queries. That's the thesis behind the investment we're making right now, as search becomes a much more strategic and important aspect of your site experience. Within that, the modalities are changing too. The classic keyword search, typing a word into a search box on the desktop web, is still very prominent, and every one of our customers has it. But it's adapting and evolving significantly as things become more conversational and the interfaces change. We've had a lot of success with customers who have added some of these new elements to their sites, whether that's conversational interactions or early pilots that make their context and products available within ChatGPT and other agentic commerce mediums. This change of modality is a big focus area for us: we need to provide ROI and address use cases as they come up, but also help our customers make the transition to become more conversational and more agentic throughout their sites. Within all of that, Bloomreach is very focused on AI. It has been our area of investment within search since the very beginning, and our commitment to use AI more broadly and more fully in commerce search continues to deepen year over year.
A lot of the items you see here on the right are improvements we're making throughout this year; some are investments we made last year that will make the platform more powerful and more capable, and I'll walk through several of them during our session. But it's important to understand this context, and the next slide as well. Many of you know us just for search and categories, maybe recommendations. What we've really built underneath the whole platform is an AI engine with a full breadth of customer profile information, an understanding of your products, and a wide variety of apps, which you can see right here, that make that data available, whether through marketing-oriented apps like email and messaging or web-oriented apps like categories, search, and content. On top of all of these sits a series of agents and AI upgrades that make those apps more capable and more powerful. Going forward, we're of course focused on making each of these areas the best it can possibly be, and you'll see some of that in the details we walk through today. But we're also building more connective tissue between the apps across the platform, what we would call compound value, from using more aspects of our agentic personalization suite. With that context, I'll shift into the next topic: first-party data, and what you do with real data versus synthetic data. There are a lot of new possibilities that weren't achievable even twelve months ago, or even three months ago, and they continue to evolve within e-commerce, within search, and within optimization and personalization.
What I want to share today is a little of how we're thinking about this and our approach to amplifying first-party data to drive ROI as much as possible throughout your site. When you come to the question of real versus synthetic data, there are definite pros and cons to both approaches. There has been a lot of traction in the market from other vendors in this space, from other e-commerce companies, and even from martech companies outside e-commerce, using synthetic data to create training inputs for their models and improve the effectiveness of their offerings. And there are things synthetic data does really well. The biggest is the ability to generate it instantly, which helps solve the cold-start problem when you're starting from scratch without a good bed of information; synthetic data can do a great job of filling in those gaps. But there are challenges too, which we'll cover over the next two slides along with some of the research we've been considering and work we've done ourselves. The accuracy and representativeness of synthetic data is often lower than the bar you need to be at. Real behavioral data, on the other side, has real pros and cons as well. Its accuracy is very high: it is your customers' information, and it captures all of the messy, real-world human intent. But it can take more time to build up, whether that means collecting behavioral data for longer or ingesting it from a third-party system like a data lake or data warehouse. There are pros and cons to both sides.
What we've been doing at Bloomreach, given the platform context I just shared, is making a significant investment in a real-time behavioral data backbone within our platform. Within Loomi AI, as in the slides I shared previously, there is a treasure trove of rich first-party behavioral data. It's not just clickstream data, as I'll show in a second: it's information about your customers, their behavioral characteristics, how they engage with your brand, their loyalty status, their context, and their intent. That is the bedrock of our platform. At the same time, we've looked at synthetic data across the board: how can we use it to amplify the existing signal coming into the platform, or to bridge other quality gaps across the platform, like catalog metadata enrichment? Our position, and what we've found from a lot of our POCs and investigation, is that taking real behavioral data and doing smart amplification of it gets you the breadth and reach that synthetic data promises, but grounds it in real customer behavior and real intent, messy as that intent sometimes is, from live shopping on your site. I'll share two research examples that represent our perspective and what we found as we went through the R&D process. The first, and I will send this deck out afterwards, is a study concluded by Columbia University last year called "LLM-Generated Personas: A Promise with a Catch." What they found is that when synthetic personas are created to simulate existing buyers, yes, they solve the data-scarcity problem very quickly.
But as those personas start to interact with marketing contexts, the models begin to oversimplify reality and miss shopper nuance. Think of it as a bell-curve spectrum, with a lot of data on the edges, a lot of edge cases. What Columbia found is that the synthetic personas became more homogeneous and uniform, with most responses clustered around the median. They didn't capture the messy, diverse, multidimensional aspects of what customers' real journeys look like on your site. So when you use that data to project forward, applying a personalization tactic or using it as the bedrock of your ranking models, extrapolation becomes difficult and the results aren't as effective as you might expect. The second piece of research supported this conclusion but came at it a different way: a marketing insights and analytics agency studied how synthetic users impact martech and the effectiveness of martech offerings. In the top graphic on the right, you see the same bunching: a statistical middle where the LLMs were very good at providing what would be a generic, vanilla response. We all understand this. If you talk to Gemini, GPT, or even Claude, they all have their own personalities, and in many ways they mirror the tone of voice you use with them. They gravitate toward your interaction style and give responses accordingly.
That's not to say they're not grounded in the realities of the world; they'll give good answers. But what you miss is the high-intent extremes on both sides, and some of the actual logical flow. One of the things we investigated was whether a GPT-driven bot simulating a user journey, coming into your site and flowing through it, actually clicks the kinds of things a human would likely click on. If you give it a smaller subset of data, is that even a representative sample? If you let the LLM choose between 10 different products as a separate ranking mechanism, does that represent a real buyer's journey, where they only see a few products in isolation? So, to draw this toward a conclusion: our strategy is, first, to capture the widest breadth of first-party signals from your customers. We think both depth and breadth are important to having that intelligence in your ranking and personalization initiatives. Our second bedrock strategy is to amplify that impact across your site through whatever medium possible: ranking, personalization, content, and so on. Third, synthetic data is useful, especially in small, select use cases. We have at times used synthetic data to bootstrap some of our offline model training, or to run additional offline analysis for a new ranking initiative or a new neural network we're exploring. It is useful.
But our analysis, and the time we've put into this, has led us to be more cautious about using synthetic data in broad, holistic scenarios, especially if it becomes part of the base ranking, the baseline of how your site performs. The last piece is that fast testing and fast feedback loops matter even more than all of the above. It's great to talk about data, how you pull it into your stack, and what you do with it. But above everything, it's most important to have a quick iteration period, so that you can test many individual variants and find the optimal performance or algorithm settings for your particular site and your particular vertical. With that, I want to show you a little more of how this unfolds and why we feel this way. At the core of Loomi AI, we have built a set of first-party data that spans your customers, your product catalog, your marketing campaigns, and the business context around them. That's wrapped in real-time infrastructure so that ingested data can be acted upon within milliseconds. That matters when you want to make better use of your existing customer information, both to influence the current session in real time and, in terms of the breadth we're talking about here, to ingest all of these different data points and put them to use, whether through ranking or through personalization. That real-time infrastructure is at the core of how we deliver this effectively. It is wrapped in a series of AI models; we primarily use Google's Gemini suite.
But we also use many open-source models that are purpose-built for search-specific tasks, like applying product understanding and query understanding as requests come into the system, or applying multiple ranking models to adapt to different scenarios or verticals. All of this gets pushed out through the channels you saw on the earlier slide, through a lens of continuous optimization: real-time analysis, intelligent testing, and proactive optimization. Those are a lot of buzzwords, so I want to peel this apart a little, especially around first-party context, in the next few slides. When you think about first-party data and what you can do with it on your site, you'll see there is a lot you can do with just the clickstream data that comes into the system. Frankly, that's most of what the alternatives in the market offer: they use clickstream data. It's a great place to start, and it is the bedrock of excellent search ranking, letting you understand what people click, add to cart, and convert on. But what you're missing is all the other customer context: the shopper's behavior, the context of where they're coming from and how they're shopping right now, and all of the conversation and interaction data from your chats and conversational agents across the site. Bloomreach provides the ability to track all of those additional signals.
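To make the clickstream-versus-context distinction concrete, here is a minimal sketch, not Bloomreach's actual schema, of folding non-clickstream context into a shopper profile alongside plain clickstream counts; the event names and fields are invented for illustration:

```python
from collections import Counter

def build_profile(events):
    """Fold a stream of first-party events into one shopper profile.

    Clickstream events (view / add_to_cart) become simple counters;
    context and conversation events enrich the same profile, which is
    the extra signal a clickstream-only stack never sees.
    """
    profile = {"clicks": Counter(), "carts": Counter(),
               "context": {}, "chat_topics": []}
    for ev in events:
        kind = ev["type"]
        if kind == "view":
            profile["clicks"][ev["product"]] += 1
        elif kind == "add_to_cart":
            profile["carts"][ev["product"]] += 1
        elif kind == "context":   # e.g. device, locale, loyalty tier
            profile["context"].update(ev["data"])
        elif kind == "chat":      # conversational intent, kept verbatim
            profile["chat_topics"].append(ev["utterance"])
    return profile

events = [
    {"type": "view", "product": "sku-1"},
    {"type": "view", "product": "sku-1"},
    {"type": "add_to_cart", "product": "sku-1"},
    {"type": "context", "data": {"loyalty": "gold", "device": "mobile"}},
    {"type": "chat", "utterance": "waterproof jacket for New York winters under $200"},
]
profile = build_profile(events)
print(profile["clicks"]["sku-1"], profile["context"]["loyalty"])  # 2 gold
```

A clickstream-only system keeps only the first two keys of that profile; the point of the broader first-party backbone is that the other two exist at ranking time as well.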
What we're actively working on at the moment is fusing these signals even more deeply into the stack, taking better advantage of that rich treasure trove of data to upgrade your personalization, your ranking, and how you drive optimization across your site. That helps you create not only a more personalized experience but, in our view, an ROI-positive one, and I'll show a couple of examples of how. One of the first examples, along the lines of using real data and amplifying it rather than using synthetic data, is a feature that is just finishing beta and about to GA next quarter, called performance sharing. It's a great example of using smart AI to amplify your existing customer learnings and spread them across more of your site. I'll load up the example here: the query "wall art print." Imagine you're a home goods or home decor company. Performance sharing will go through all of the queries in your system and look up similar queries, like "wall art" or "wall print," some of which have conversions and revenue attached to them. It finds and identifies those similar queries and distributes the learnings from the more popular ones down to the tail. In this case, "wall art print," which honestly has pretty low traffic, gathers ranking learnings from both "wall art" and "wall print," so the results get boosted and the ranking changes accordingly. This has been a really effective upgrade, and it's transparent, happening in the background.
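As a rough illustration of that head-to-tail idea, sharing ranking signal from similar high-traffic queries into a sparse one could look like the sketch below. This is my own toy version, not the product's implementation: the similarity measure here is naive token overlap, where a production system would use embeddings or learned query clusters, and all names are invented.

```python
def share_performance(query, stats, min_clicks=100):
    """Blend per-product conversion signal from similar head queries
    into a low-traffic tail query.

    `stats` maps query -> {"clicks": int, "conv": {product: conversions}}.
    Borrowed signal is discounted by query similarity.
    """
    tokens = set(query.split())
    shared = dict(stats.get(query, {}).get("conv", {}))
    for other, s in stats.items():
        if other == query or s["clicks"] < min_clicks:
            continue  # skip self and queries without enough evidence
        other_tokens = set(other.split())
        overlap = len(tokens & other_tokens) / len(tokens | other_tokens)
        if overlap >= 0.5:  # treat as a similar query
            for product, conv in s["conv"].items():
                shared[product] = shared.get(product, 0) + conv * overlap
    return sorted(shared, key=shared.get, reverse=True)

stats = {
    "wall art":       {"clicks": 5000, "conv": {"print-a": 40, "canvas-b": 10}},
    "wall print":     {"clicks": 2000, "conv": {"print-a": 25, "poster-c": 15}},
    "wall art print": {"clicks": 30,   "conv": {}},
}
ranking = share_performance("wall art print", stats)
print(ranking)  # ['print-a', 'poster-c', 'canvas-b']
```

The tail query has no conversions of its own, yet ends up with a ranked product list grounded in real conversions from its head-query neighbors, which is the "amplify real data" pattern rather than generating synthetic conversions.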
We've seen customers get anywhere from 1 to 1.5 percent up to 6 or 7 percent improvements in RPV just by amplifying existing ranking learnings across the site. This is where I really try to separate the hype from the reality: a lot of our R&D roadmap is focused on AI innovations that directly contribute to the bottom line. Yes, there are architectural and tactical differences in how the vendors get there, or in how you would build this in-house yourself. But our focus is squarely on separating the noise from the results: considering interesting architectural approaches to search, yes, but focusing directly on how we can build as much value as possible in the form of conversion rate and RPV improvements across your site. You'll see that in this "get value today" section. Many of the features we've released, some in the last year, like Search+, Search+ for multi-language, segmented merchandising, and learning to rank, show solid median performance stats here from customers who have tested them and seen positive uplift and value. If you haven't yet enabled these features, I strongly encourage you to do so: please reach out to your CSM to get them live on your site and implemented into your workflow. Additionally, a lot of improvement comes from the conversational side as well. I mentioned Clarity, our conversational assistant.
Here you can see two customers who went live with the conversational assistant paradigm last year: Vans, and, on the right, Harley-Davidson, with conversation starters injected into PLP and search pages. The good news is that Clarity also makes a pretty significant difference as customers interact with it. You can see stats here from customers who have seen benefit from adding Clarity to their sites, whether those are AOV increases, add-to-cart increases, or RPV increases; these are very significant improvements overall. Some of these figures come from customers who interact directly with Clarity, but many are overall RPV increases applied across the site. So when you're deciding what the highest priority is for your brand and your use of Bloomreach, or of other tools in the market, I encourage you to think through this lens: of all the offerings out there, many of which Bloomreach provides, am I taking the best advantage of all the R&D and all the goodness we're bringing to market? We're really excited about these stats, and I'm even more excited about what's coming this year in both the core search roadmap and the Clarity roadmap to keep these numbers growing and add more use cases we can go live with. Beyond those, there are a couple of other get-value-today items I want to call to your attention. One of them is personalized media in the grid.
This is a great example: in the animated GIF here you can see content items injected directly into the grid on the site. You have a full UI for this, letting you drag and drop and reserve slots for the content to be injected, and you can personalize that content with information about the specific user and their profile. This is a great way, especially for B2C fashion-oriented teams, to create some storytelling within the grid and help improve RPV on those key pages across your site. Additionally, I want to mention Loomi Connect, since I shared those new interaction modalities up front. We've released this feature and are working with some early customers who are building ChatGPT applications where, within the context of the ChatGPT conversation, your products are surfaced directly, rather than ChatGPT making a judgment call on its own about your catalog and your site. These are calls made directly into Bloomreach, and the products shown include all of your ranking learnings, merchandising controls, and personalization upgrades, right there within ChatGPT. I expect this area to evolve significantly throughout the course of this year. We're investing heavily not only in this paradigm of injecting products directly into ChatGPT, but also in opening up a variety of MCP services to support more and more agentic commerce as things unfold rapidly. I'll show some other aspects of this in the roadmap shortly. If this is of interest, we are working with first customers right now, so please reach out; we'd be happy to put you in the loop and work together with you to get this live in ChatGPT.
The last thing I want to call out here is an improvement around variant slicing, which gives you more control over how products appear within your grid and also the ability to create variant-specific ranking learnings. This was a key piece of feedback from many of our customers: they wanted to split an individual sweatshirt like this one into individual colorways, but also to gather ranking learnings and data from each of those items so the ranking algorithms learn more effectively. This has been a great upgrade, with good feedback from customers on both the merchandising controls and the variability they see in the ranking. Those are all ways you can get value today out of the platform. I want to spend the next four or five minutes talking about what's coming, and then I'll answer some questions before we wrap up. In terms of what's coming, I've highlighted a couple of other RPV features in both core search and conversational. The first is improvements to our one-to-one personalization models. Last year we released one-to-one personalization with an attribute-affinity-based model that detects a user's preferences: "Jordan's back, he likes the color red, so let's boost red products for him." That was useful in some contexts, and this month we're on the cusp of bringing a new sequence-based deep learning model for one-to-one personalization to general availability for our customers to implement across their sites. Where we're taking this next is using that same deep-learning-based model and incorporating more of the customer profile elements I showed on the earlier personalization slides.
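The attribute-affinity idea ("he likes red, so boost red products") can be sketched, purely illustratively and with invented data, as counting attribute interactions and nudging matching products up a base-relevance-ordered list:

```python
from collections import Counter

def affinity_boost(ranked, interactions, weight=0.3):
    """Re-rank products by blending base-rank position with the user's
    attribute affinities learned from past interactions.

    `ranked` is a base-relevance-ordered list of {"id", "color"} dicts;
    `interactions` is a list of colors the user has engaged with.
    """
    counts = Counter(interactions)
    total = sum(counts.values()) or 1

    def score(pos_product):
        pos, product = pos_product
        base = 1.0 - 0.1 * pos                       # decaying base-rank score
        affinity = counts[product["color"]] / total  # share of interactions
        return base + weight * affinity

    reordered = sorted(enumerate(ranked), key=score, reverse=True)
    return [product["id"] for _, product in reordered]

ranked = [
    {"id": "hat-blue", "color": "blue"},
    {"id": "hat-red", "color": "red"},
    {"id": "hat-green", "color": "green"},
]
# This shopper has mostly clicked red items.
result = affinity_boost(ranked, ["red", "red", "red", "blue"])
print(result)  # ['hat-red', 'hat-blue', 'hat-green']
```

The `weight` parameter is the knob the transcript alludes to when it contrasts boosting on a single detected preference with the richer sequence-based models: a fixed per-attribute boost is easy to reason about, but it is blind to the order and recency of the interactions.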
Taking the behavioral context, the nuance, and the conversational information from your customers, and bringing that into the one-to-one personalization that occurs within the search and categories context, is the next chapter for us as we bring this to life in a fully productized fashion across the platform. We're working with our first customer during Q2 to alpha this, and we anticipate a generally available release in the back half of this year. The next improvement is around personalized recommendations. Here we're adding more advanced models on top of that same rich behavioral data, amplifying whatever clickstream and behavioral data comes into the system and using it to power personalized product recommendations. Rather than simulating traffic to identify what a customer would be most likely to be interested in, we're using more sophisticated models to take a more nuanced view of what they're interacting with, and then providing more personalized and tailored recommendations that ultimately drive RPV more effectively. We're also upgrading our ranking models, and we'll probably do three or four iterations on this throughout the year. The first task, already underway, is to improve and solidify the base ranking model in the platform, particularly by adding new ratio-based signals, ratios like views to add-to-carts and clicks to conversions, so that we can capture products with really solid fundamentals that aren't performing as strongly, or don't have as much traffic, as your main top sellers.
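A toy version of such a ratio signal, with the smoothing any real implementation would need (the pseudo-count prior and field names here are my own, not the platform's), shows how a low-traffic product with strong fundamentals can outscore a high-traffic one:

```python
def ratio_score(views, carts, conversions, prior=20):
    """Smoothed engagement ratios.

    Raw carts/views rewards tiny samples, so a pseudo-count prior
    pulls low-traffic items toward zero until they have earned
    enough evidence to be trusted.
    """
    cart_rate = carts / (views + prior)
    conv_rate = conversions / (carts + prior) if carts else 0.0
    return cart_rate + conv_rate

# A top seller with huge traffic but mediocre ratios...
big = ratio_score(views=10000, carts=400, conversions=40)
# ...versus a low-traffic product with very strong fundamentals.
small = ratio_score(views=200, carts=60, conversions=30)
print(round(big, 3), round(small, 3))  # 0.135 0.648
```

With a pure popularity signal the top seller always wins; the ratio signal lets the smaller product surface on its conversion efficiency, which is exactly the "solid fundamentals, low traffic" case the transcript describes.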
We're also upgrading the query understanding service to use large language models, so that we capture much more context, handle intelligent multi-attribute queries, and understand specs, product codes, and the like, doing a better job at matching and delivering higher relevance within the platform. Next, we're improving Search+ with a feature called dynamic vector cutoff, likely to GA during Q2. This makes it possible to have variable temperature settings across your site rather than picking a single temperature that applies everywhere: Search+ determines when it should supply additional recall and when the recall coming in from the rest of the stack is sufficient. A lot of this comes together in what we're calling self-optimizing search. Behind the scenes, we're building a set of agents to analyze your traffic patterns, identify trends, surface challenges that occur within your integration, and then suggest areas of optimization back to you. Right now our team is dogfooding this in the back end, figuring out which insights are the most information-rich and important to surface. The ultimate vision is, first, surfacing these recommendations directly in the dashboard so you have control and an interface over how you want to optimize your search; or, for the many customers who are interested, letting us take the wheel, drive the optimization journey, and put these things on autopilot.
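The dynamic-cutoff idea, as described, is to let vector recall fill in only when other recall is thin; a naive sketch (the thresholds, data shapes, and the back-fill strategy are invented here, not the feature's actual logic) might look like:

```python
def blend_recall(lexical, vector, min_results=8, sim_cutoff=0.75):
    """Back-fill lexical results with vector matches only when the
    lexical set is too thin, and only above a similarity cutoff.

    A per-query decision like this replaces a single site-wide
    'temperature' for how aggressively semantic matches are mixed in.
    """
    if len(lexical) >= min_results:
        return lexical                      # lexical recall is sufficient
    seen = set(lexical)
    filled = list(lexical)
    for product, sim in sorted(vector, key=lambda x: -x[1]):
        if len(filled) >= min_results:
            break
        if sim >= sim_cutoff and product not in seen:
            filled.append(product)
    return filled

lexical = ["sku-1", "sku-2"]                # sparse lexical recall
vector = [("sku-3", 0.92), ("sku-2", 0.90), ("sku-4", 0.70), ("sku-5", 0.81)]
print(blend_recall(lexical, vector, min_results=4))
# ['sku-1', 'sku-2', 'sku-3', 'sku-5']
```

Note that `sku-4` is excluded despite being available: below the similarity cutoff, adding it would trade precision for recall the query didn't need, which is the trade-off a per-query dynamic cutoff is meant to manage.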
So if the agent identifies an area for optimization, it should propose a test, prove it out with a small set of traffic, report whether it was effective, and do that at scale much more quickly and effectively across your site.

Switching gears a little to conversational discovery: building on the screenshots shown earlier, for customers who've interacted with us in assistant mode or in the PLP/PDP button phase, we're upgrading that experience to what we call AI mode. We're working through this with some early customers right now to build an integrated, side-by-side chat and PLP experience, both for web and for mobile.

We're also opening up an MCP service on top of our conversational capabilities so that you can build custom agentic apps, whether that's directly within ChatGPT, or, as several customers are doing, building your own custom agents on top of our system to transform the buying journey and the buying process. And we're adding additional channels for our conversational offerings; the ones we're working on that will be coming out in early H2 are WhatsApp and mobile messaging channels like SMS and RCS.

We're also wrapping up an insights dashboard so that you can not only provide these experiences to your customers, but also see what they're chatting about and what the trends are in how they're talking with you. This is another way to really amplify the impact of real-world first-party data. We've had a lot of customers come to us with really interesting voice-of-the-customer learnings, things they're hearing from their customers that change the way they want their site experience to work, or the configurations they want within the platform.
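The "prove it out with a small set of traffic" step is essentially deterministic traffic bucketing. A minimal sketch of the idea, using my own hashing scheme rather than anything Bloomreach has described:

```python
import hashlib

def in_test_bucket(visitor_id: str, test_name: str, traffic_pct: float = 5.0) -> bool:
    """Deterministically route a small percentage of visitors into a
    proposed optimization test, so its results can be compared against
    the control experience at scale."""
    digest = hashlib.sha256(f"{test_name}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 10_000   # uniform bucket in 0..9999
    return bucket < traffic_pct * 100       # e.g. 5% -> buckets 0..499
```

Because the bucket comes from a hash of visitor and test name, the same visitor always gets the same variant for a given test, which keeps their experience consistent while the agent measures the outcome.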
This is coming live very shortly, and it's an upgrade for existing customers who are working with us on conversational commerce. The other piece of this is transcript analysis, so that we can go through all of the chats that come into your website at scale, analyze them, identify trends and patterns, and pull out the specific phrases customers use.

At this point, I'll pause. Hopefully this was helpful for understanding the context we're coming at this from, and how we're trying to take first-party data and amplify it as much as possible across the site. I'm assuming there are probably individual questions for each of you attending, so I'm happy to take some of those right now during Q&A. I also want to share that if you'd like to discuss how this may look for your site or brand in particular, please just reach out. I'm happy to have conversations with you and your team, give you a little more insight into how we're understanding and looking at this change, and also pick through some of your particular issues and challenges, to see if we can get you some personalized, individual suggestions on how to address and solve those, and ultimately drive more business performance going forward.

So I'll go ahead and stop sharing here. I'm going to switch back over to the GoldCast view and open up Q&A in case anybody has any particular questions. Alright, it doesn't look like any came through. So if there's anything else you'd like to discuss, please just reach out to me. My name is Jordan Roper, and you can reach me at jordan.roper@bloomreach.com.
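The transcript-analysis idea, surfacing the phrases customers actually say across chats, can be illustrated with a naive bigram count. This is only a toy; a production system would presumably use an LLM or a fuller NLP pipeline, and the stopword list here is my own:

```python
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "i", "is", "are", "these", "do", "you",
             "for", "in", "to", "of", "have"}

def trending_phrases(chats, n=2, top=5):
    """Count frequent word n-grams across chat messages to surface
    phrases customers repeatedly use (illustrative sketch only)."""
    counts = Counter()
    for msg in chats:
        words = [w for w in re.findall(r"[a-z']+", msg.lower())
                 if w not in STOPWORDS]
        counts.update(" ".join(words[i:i + n]) for i in range(len(words) - n + 1))
    return [phrase for phrase, _ in counts.most_common(top)]

chats = [
    "Do you have waterproof boots in size 10?",
    "Looking for waterproof boots for hiking",
    "Are these boots waterproof?",
]
assert trending_phrases(chats, top=1) == ["waterproof boots"]
```

Even this toy version shows the voice-of-the-customer value described above: the exact wording shoppers use ("waterproof boots") is what you'd feed back into synonyms, merchandising, and site configuration.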
So feel free to reach out to either me or your CSMs, and we're happy to have further conversations about this. Thanks again for attending, and I'm looking forward to talking with you all soon. All the best.