Video: What’s New in Starburst: Highlights from Product Management | Duration: 2708s | Summary: What’s New in Starburst: Highlights from Product Management | Chapters: Welcome and Introduction (8.64s), Starburst's AI Strategy (307.71s), Starburst Enterprise Updates (526.105s), Enhancing Engine Performance (754.425s), Enterprise Context Layer (1013.885s), Trusted Agentic Interface (1323.695s), Exploring Data Products (1637.765s), Data Catalog Insights (1770.99s), Data Products Overview (1888.235s), Data Product Enrichment (2019.03s), Showcasing AI Integration (2181.63s), Performance and Comparisons (2414.12s), Conclusion and Gratitude (2524.43s), Multi-Tenant Cluster Considerations (2560.41s), Concluding Resource Management (2638.05s)
Transcript for "What’s New in Starburst: Highlights from Product Management": Everyone, welcome to our webinar. My name is Lester Martin, your friendly developer advocate here at Starburst. Starburst, Trino, all that kind of good stuff. And I've got some great folks here from our product management team, Dan and Zach, who'll jump in here in just a little bit. They were kind enough to let me kick us off today, and also to let me do a demo at the end of the session, so hopefully that'll go equally well. Let me turn the page and let you start looking at this. In this "what's new" series, we often talk about, hey, here's this exact feature and here's how it works, and get people really excited. But as Dan, Zach, and I talked about this, we really started thinking: it's not that we're at some exact epoch or pivot point. We're in the middle of a change, for us as data folks, as data engineers, data analysts, data leaders, where generative AI is really making a big impact on a lot of us. Will it take another year or two or three? I don't know. But I can say it is making a change for all of us. So what we wanted to do is take a little time and talk about where we are as a company and how we're going to help you be super successful in this AI space. What you should see on screen is this notion of data roadblocks and aspirations.
Alright, I can see it now, so I'll read it right in front of me. I want to take a moment and just say: your boss, your friends, your peers, someone is probably in the upper right-hand corner of this slide. They're saying, you know what, I can type all this stuff at the house and get these great answers. Why can't I do that here at the enterprise? And in many ways, they can. But there are some things in the way, the roadblocks we've listed here. What we would like to say, and I don't think we're saying it all by ourselves, is that to get the best answers out of these large language models, we need to bring the magic word of 2026: we need to bring some context, and we need to bring some structured data that has rigor to it, that has documentation and structure. So we see a lot of that in the roadblocks. Do you have the data? Is it in good shape? Is it well documented, well curated? If you have those things, then those tools really do, kind of out of the box, start to make a difference. So we want to make the point that we've been focusing on that for as long as we've been here as a company, Trino, Starburst, building that kind of foundation of data access. Now, we love to talk about data lakes, and that's where we really expend our effort, but we reach into datasets all over the place with our connector architecture. We've enhanced and built upon that for more performance and scale. Sometimes that's cool caching frameworks, things like that. But it's always a focus to make that backbone environment scream and fly and scale. Then you get to that third tier, enabling curated and consumable insights, which is what we've been calling data products, an overloaded term, of course.
But to us, a data product, and I'm going to demo one at the end, and you'll hear Dan and Zach talk about it throughout their presentation bits, is a space, historically, for humans to really say: hey, don't show me a thousand datasets. Show me something that makes sense. It's in my business domain, it's in my subject area, well documented, well explained, with usage examples, discussions, all this good stuff. And we realized, as this AI trend really hit us, that we already knew that people or processes could use that. We built a lot of data applications working against these data products, but guess what? The coolest data applications are these GenAI tools. So coming up on the fourteenth of next month, you'll see our CEO, Justin, come out. We'll give you some links at the tail end. It's a great webinar showcasing our own products in this space. What does our agent look like? What can you do with your own agents, and that kind of stuff? So we're going to be tapping on that all day today. But the main point I want to make is that we've worked really hard historically to build this awesome foundation, and it's the lack of that foundation that's preventing some folks from making great strides in this space. We've seen some really big customers of ours do things with us that amazed them. And I will go ahead and say, as a person who knows what we do, it wasn't all that amazing. It was just the output of a lot of hard work, a lot of years of practicing, and getting ready for something that kind of just hit us here. Alright, so that's that slide. There's a lot going on in it, and I'm sure our product manager folks here could express it a little better. With that said, I'll hit this next slide button and zip my lip, and let my colleagues introduce themselves and jump on in. Perfect. Thank you, Lester, first and foremost.
I don't think I could say what you said any more eloquently than you already did, so I appreciate the love and the shout out, but you did a fantastic job. As we move forward into the presentation: myself, Zach Hanson, and Dan Cataloupe are going to talk a little bit about some of the things coming up in our road map. But to highlight what Lester said, there is so much that underlies supporting AI at scale in the enterprise. If, as a company, Starburst tried to hit everything at once, we'd be boiling the ocean. So myself, Dan, the product team, and our engineering team have looked holistically at everything we need to deliver to enable you as business leaders in your own domains. What we've done is break down what we want our key investment areas to be, the areas where we know we are going to put dedicated engineering resources to make sure we can deliver on everything Lester just talked about. I'll take you briefly through each layer, and then Dan and I will dive in deeper and touch on some of the things coming up. And of course, if you have questions, or you want more information on any of these things across either Starburst Enterprise or Galaxy, ask in the chat or follow up with us afterwards. For us, the four layers are, starting off, the flexible data foundation. This is everything about how you interact with Starburst. That could be Starburst Enterprise, Starburst Galaxy, or other future deployment options that might be coming out. This includes stability, resiliency, reliability, everything that goes into the mature platform you would expect out of Starburst. Then we go up to the performant analytics engine area.
This is everything around Trino: queries per second, making sure we can handle complex workloads, and making sure we can meet you at scale so that you can enable AI in the future, if not already today. Going up further, we get to the enterprise context layer. Lester mentioned it, but context is king in the world of AI. Between data products, materialized views, and our Starburst data catalog, these are areas where we are putting a lot of time and effort. Again, Lester is going to do a demo at the end recapping where we're at with data products today, but this is something we are highly focused on, and we believe it is the future of how we support AI at scale for businesses. And then lastly, the trusted agentic interface. We're going to touch a little bit today on what's coming out on April 14, which Lester also mentioned, through a webinar with our CEO, Justin Borgman, around what our natural language interface is going to look like for interacting with your data through AI within Starburst. So with that, I'm going to hit the next slide and hand it over to Dan, and we're going to start diving into that first layer, the flexible data foundation. All right. Thanks, Zach. I just want to take a minute to thank everyone for joining. I can see people from all over the world, from India to the UK, spread throughout Europe, and all over the US and North America. So, very excited to see everyone join us for this presentation. A lot of the market's focus today is on AI, and for good reason. But the foundation that Lester was talking about really starts with how you deploy and how you sustainably offer the service to your users. It's all about making it as easy as possible to implement and run Starburst in a way that's best for you. And not only deployment, but also ensuring we can offer the scalability and reliability necessary to run AI workloads on Starburst.
So one of the things that we're really excited to be rolling out in Starburst Enterprise over the next few months is a native routing capability. It's essentially an admin-only control plane to manage query routing across Starburst Enterprise clusters. This offers an option for achieving higher scalability and higher reliability for self-managed Starburst deployments. Whether you're looking to load balance your workloads or isolate them, you're able to achieve that with this native routing capability. A little further in the future, we're looking to roll out the initial steps toward high availability for Starburst Enterprise. All the while, we're keeping in mind that this serves our AI journey, so we're also thinking along the way about how this can be managed with agentic services, with agents. We're shifting the way we work. Not only do we need to account for a human end user, we also need to think of agents as end users. That's just embedded in how we develop at Starburst. And Zach here will cover some Galaxy foundations. Thanks for the handoff, Dan. Everything Dan said still stands for Galaxy. But for some of the things coming up in Galaxy, in our SaaS world, I want to highlight that we have been hyper focused on listening to customers. So if you are an existing Starburst customer or a potential one, hopefully this will resonate with you. One of the things that we've heard over and over again is that sometimes when you start up your own cluster, it can take a while. That's something that has been known, and something that has been asked about.
So we've made strong investments, and we'll have warm pooling available, where we can pull those cluster startup times down into the sub-one-minute range so you have a better, more consistent ability to run your queries when and how you want to. On the flip side, some people have complex workloads, or maybe you have simple workloads today and you're looking to scale or become more complex in the future. So we're also introducing something called smart load balancing, where we can route queries in a meaningful way so you avoid the queuing or stalls that might happen in some of your instances today. Again, it's all a reflection of us listening to customers and trying to deliver the features that are most valuable to you, our end users. So with that, I'm going to hit the next slide, and Dan, I think it's back to you to introduce that second layer. Yeah. So now you have a strong foundation for where you deploy Starburst and how you manage that solution from an operational perspective. Next we're looking at how we get the most out of the Starburst engine, to generate the greatest performance output we possibly can. That's what this layer is all about: powering your data operations at speed and scale, beginning with access to all enterprise data. The federated access does not go away. We're still very invested in giving you access to all the data within your network without having to move it. But we're also investing in the Iceberg space, ensuring that when you do want to move it, you land it in Iceberg for the most performant outcome possible. And then there's just making improvements to the engine itself. So let's jump into it. I'll click the next slide here.
The Starburst engine, based on Trino, has already been improved to be up to two times more performant than open source Trino, and we intend to continue that upward trend of investment throughout the next few months. We're focusing on general query engine enhancements for greater efficiency across the query analysis, query planning, and query execution stages, which come straight out of the box when using the latest versions. That adds up to higher throughput and lower latency queries, effectively improving your overall experience working with Starburst. Also, we're excited to already be offering the common table reuse feature in public preview today, as of our April version of Starburst Enterprise, and it's already in Galaxy, if I'm not mistaken. It significantly accelerates query performance, and we're looking to take it GA fairly soon. So a lot of work is being invested in the Starburst engine to bring the most of what we can offer to market. Perfect. Dan, do you want to hit the next slide for me? I get excited about everything Dan just talked about. Performance is king. That's how we win in the market, and that's how you win with your workloads and scale nearly infinitely. It's exciting to hear those things. But Dan also touched on our investment around Iceberg. You'll hear our term Icehouse, our Iceberg lakehouse. This is our end-to-end view of how we expect things to flow through the Starburst ecosystem: from your Kafka streams, your files, and your sources, through what we have in Galaxy today, and coming in the future for Starburst Enterprise, around our ingestion engine, then through Iceberg table maintenance, all the way to data products, and ultimately feeding the analytics and AI that you're going to be using within your company.
Along this way, there have been improvements that we've made too, and this is going to be continual; it's something you're going to see quarter over quarter from Dan and me across these layers. On the ingestion front, we're now able to support Avro schemas as well, and we're going to be bringing forward very soon the ability to support CSV ingest. On the table maintenance front, right now it's all jobs based, which is great and helps out a lot, but there have been a lot of requests from the field, from people leveraging Starburst, to automate this. It's very exciting, and it's going to be available very soon: we have serverless maintenance coming for Iceberg, which is going to alleviate a lot of pain felt by our customers. And then, ultimately, in our data products, we have full materialized view refreshes today, but we're also going to be introducing incremental materialized view refreshes. All of this combines to give you a better experience in this Iceberg lakehouse ecosystem that we support within Starburst. Both Galaxy and Starburst Enterprise are leading the charge here, and we're looking for feedback from you to make sure we support you in the best way possible on your journey with Iceberg. Alright. Now we're getting into the meat and potatoes. You've seen articles in the news about how 90% of AI projects fail. This is the keystone of why those AI projects fail: organizations are not giving AI the proper context. It's hallucinating. It's grabbing whatever it can, whatever straws it has access to, and making assumptions about what you need. Without the proper context, AI will not serve you in the way you need it to. And that's what Starburst is investing in. Let me move the slide here. Okay, there we go. The enterprise context layer at Starburst is made up of three critical areas.
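For a sense of what that jobs-based maintenance automates, these are the kinds of table maintenance commands Trino's Iceberg connector exposes today; the table and view names below are made up for illustration, not taken from the demo.

```sql
-- Compact small data files into larger ones (illustrative table name)
ALTER TABLE iceberg.aviation.flights EXECUTE optimize;

-- Expire snapshots older than seven days to reclaim metadata and storage
ALTER TABLE iceberg.aviation.flights
    EXECUTE expire_snapshots(retention_threshold => '7d');

-- Delete data files no longer referenced by any snapshot
ALTER TABLE iceberg.aviation.flights
    EXECUTE remove_orphan_files(retention_threshold => '7d');

-- Today's full materialized view refresh; an incremental refresh would
-- recompute only the changed portions rather than the whole view
REFRESH MATERIALIZED VIEW iceberg.aviation.daily_flight_stats;
```

Serverless maintenance would run the first three of these on a schedule without you wiring up the jobs yourself.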
First is the keystone of the context layer, the Starburst data catalog. The Starburst data catalog houses the technical and business metadata that AI will leverage to traverse the network of information and find the most accurate metadata it can to answer the question in the most succinct manner. Second are the data products. Lester touched on data products earlier, and we'll give a demo of them later in this webinar. Effectively, this is where a lot of the business metadata lives, and this is where you can really refine how a human and an agent access the information, with the context necessary to describe the data captured within. So it's critical to invest the time necessary to build the proper business metadata context at this layer. And third, we understand that you have data products living elsewhere. There are solutions like Collibra or Unity Catalog or others where you've built similar things, maybe not quite one-to-one mappings of data products, that contain business context you don't want to replicate. So we want to build third-party syncs that source in that context for our AIDA agent, our AI data assistant, so it can understand the context of what you're asking it and navigate that network of information, providing the most accurate information at any one time. Okay. One example of a feature we're investing in this quarter and this half: I mentioned data products, but we're also extending business context to the query endpoints, to the actual catalogs, schemas, tables, views, and columns, without having to turn them into a data product. In an instance where your AI agent may not be able to find the appropriate data product for the question at hand, it can default back to the query endpoint.
And by adding additional business context to those query endpoints, you are providing the agent an enormous amount of context, which gives it the appropriate insight to serve you the proper information for your question. That's where, holistically, we're looking end to end, throughout every stage and every layer of our product, at how we can offer context, so that in your experience working with AI, and even just as a user, you know what you're accessing and you have trust in your data. And Zach will cover some improvements to data products. Perfect. Thanks, Dan. Again, that's the thing we keep hitting on: context is king, ultimately. And Dan, as you mentioned, I don't think data products are a new concept to most people listening here. People are doing this all over the place. But what is probably not as well known is that we at Starburst have our own native data products feature, and that's one thing Lester is going to touch on at the end in a demo. What you're seeing on the left-hand side is our existing UI for creating data products from your catalogs within Starburst. We have a lot of capability already baked in to elevate your catalogs to a data product. You can augment and enrich it with metadata via AI and start to leverage those data products in every way Dan just talked about. And that's awesome. But we've also gotten a lot of feedback from customers that they'd like to do this in a more programmatic way, which is why, in the next few months, we're going to be introducing data products as code, where you essentially have your own YAML file in which you can add, edit, delete, and do everything you need to do programmatically, so you can integrate it within your own CI/CD pipeline and make it more functional for you if you don't want to operate only through our UI.
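As a rough sketch of what that data-products-as-code workflow could look like, here's a hypothetical YAML file. Every field name below is illustrative, since the actual Starburst spec hasn't been published yet; the point is that the same metadata the UI captures would live in a file your CI/CD pipeline can diff and deploy.

```yaml
# Hypothetical data-product-as-code definition; the schema shown here is
# illustrative, not the actual Starburst format.
kind: DataProduct
name: air_and_space
domain: aviation
summary: Curated FAA and NASA aviation datasets
owner: lester@example.com          # illustrative contact
tags: [faa, nasa, curated]
datasets:
  - name: planes
    description: One row per registered aircraft, keyed by tail number
  - name: astronauts
    description: One row per astronaut flight, not one row per astronaut
usage_examples:
  - title: Common airplanes on long flights
    description: Counts flights over an adjustable mileage cutoff by model
```

Checked into version control, a file like this would let you review data product changes in pull requests the same way you review code.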
But, again, these are some great things coming around this whole layer of shared enterprise context, which is a huge focus area for us here at Starburst. Okay. Now to the fourth layer, the crown jewel of it all: the trusted agentic interface. This is where we are focusing on delivering an intuitive agentic interface built on the foundation of performance and context, allowing any user to quickly access and use the data they need, with the trust that it's correct and governed. That's the important piece. Okay, so let's jump into it. We are excited to announce, and you may have seen some news articles or LinkedIn postings from the Starburst account about this, our AIDA agent. It stands for AI data assistant, and it reflects what we're seeing in the market: the way we interact with our data is changing, and AIDA is a way to accommodate that change and actually accelerate it. We can now converse with our data in natural language through AIDA. We eliminate bottlenecks and processes, so you get answers to your questions in seconds and minutes rather than weeks. You're not as reliant on others for the information you need to make business decisions. Not to mention, this solution is customizable, so you can brand it to your own organization. Or, if you don't want to use our AI data assistant, we also offer an MCP server, a way to integrate any third-party agent with our ecosystem to gain access to our resources and tools for an agentic experience, whether that's through Claude or ChatGPT or Gemini, for example, or a home-built one. And, lastly and most importantly, we provide governance from top to bottom. From governing your LLM to governing your data access, we ensure that security is top of mind. When it comes to governing LLMs, we give you the option to govern and manage the cost, the model access, and the model usage.
Maybe you want to make sure a particular model can't access certain information, or however that suits your needs. So we're ensuring that we're providing that holistic, end-to-end experience, trusted by design. That's what's very important: when a business user or a technical user is asking questions of their data, they shouldn't have to wonder, is this correct? Is there a security risk in me asking this question? That's handled in the background, and you don't have to worry about it. You just have to worry about the answer to your question. Perfect. I had a great teacher one time who told me that when you do a presentation, you tell people upfront what they're going to learn, then you tell them everything, and then you say it again. And that's what we're doing here. This is where Lester started this conversation. Everybody wants to get to the right side of the slide. They want to have automated decisioning around AI. And what Dan just touched on is that we're there at Starburst. We're going to be releasing AIDA very soon. There's a lot of stuff coming out around it, and you're going to see a lot of webinars. But we want to reiterate that the reason we're able to get there so quickly is everything to the left. Since Starburst became a company, we've been establishing a foundation of data. Building these materialized views and these data products, all in product, is how we're able to support AIDA and do everything Dan just talked about. And the interesting thing is it all ties back into our investment areas.
So, again, quarter over quarter, you're going to be seeing Dan's face and my face on a much more frequent basis as we talk about what we're releasing to support you on that AI journey, and everything is going to tie back to these layers: our flexible data foundation, our performant analytics engine, our investment in the enterprise context layer, and ultimately our trusted agentic interface. Now, with all of that, Dan and I would really love to thank everybody for listening, and to thank Lester. I'm going to pass it back to him, because you've heard us talk about data products, data products, data products, supporting AIDA. Lester is going to give us a bit more of a deep dive on data products as they exist in Starburst today. Yeah, Lester is going to find his mic mute button first. Let me go ahead and stop sharing and switch over to my desktop; it'll be a little easier to show what I want to show there. And there's a great question in the chat, gents, just to let you know. I want to take a moment and say, first and foremost, we are holding our tongues so hard to not show you some really awesome demos of our AI agent software, this AIDA thing you keep hearing about, mainly because we want to give our CEO the opportunity to showcase that for us. There's nothing cooler than watching your CEO do demos, and that kind of good stuff. So I'm going to go back to those foundational things. I want to show you why data products make sense for a human interacting with something like this, or with any kind of query editing tool. And hopefully, at the end of it, you'll see that the additional information in a data product is going to help those AI agents as well. So here's an example: maybe I'm in this system and someone gave me access. Hey, go look at stuff. And I might say to myself, wow, I have a whole bunch of different datasets on my left here.
I've got this cluster, a free cluster that I let shut down after five minutes of being quiet. And I might go, hey, I'm looking for certain information. So it might look like a lot, and in fact, this is very little: there are maybe ten or fifteen catalogs here, going to various things, a little Snowflake and Postgres and that kind of stuff. Your environment might have tens or hundreds of catalogs. You might have access to all of those, and often that's a lot. The good news is we have search, so we can type all kinds of stuff. We're looking for FAA datasets and such, but that only gets you so far; it helps you find what you're looking for. So I'll say, okay, as a human, maybe I found what I'm looking for. Either through hunting around or through someone telling me, I realize: hey, under this My Cloud catalog there's this schema called aviation, Lester. Someone stood this up already. And it has tables like astronauts and missions, some kind of NASA information, and it has some airports. This is kind of US-centric FAA-type information, if you're familiar with our Federal Aviation Administration. And as I drill around, I might find that, yeah, cool, there are tables in here like you see here. Hey, show me, against our planes table, some details like: what are the various manufacturers, and how many planes in this database belong to each manufacturer? That's neat, that's cool, that's pretty normal stuff. But there's still more to the mix. Looking at this table astronauts, I might start to think this table ought to be a list of astronauts. Well, the truth is it's not. How might I know that? Well, in our system, it might be through the data catalog, and this is just a part of what Dan was talking about.
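The manufacturer rollup Lester describes might look something like this in the query editor; the catalog, schema, table, and column names here are assumptions based on the demo, not the actual SQL shown.

```sql
-- How many planes does each manufacturer have in this dataset?
SELECT manufacturer,
       count(*) AS plane_count
FROM mycloud.aviation.planes
GROUP BY manufacturer
ORDER BY plane_count DESC;
```

The same Trino SQL works whether you type it by hand, wire it up from a BI tool, or let an agent generate it.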
If I look at this catalog interface and drill down into that connection, My Cloud, find that schema aviation, and find that table astronauts, I might figure out pretty quickly, when I click on something like this, that this thing is not really a list of astronauts. It's more like an entry for each flight an astronaut was on. So if Lester went up into space five times, there would be five entries in there. I would have maybe named it astronaut flights. Why am I making that point? Because in my experience, in my thirty-something years in this space, people name stuff all the time, and the name itself doesn't always mean what the thing really is. When an AI agent relies only on the schema, guess what? It's going to assume this table is a list of astronauts. So it's already thinking something that's fundamentally wrong. While I'm here, you might notice I put in things like tags; I put in contacts and links. As I drill into some of these, maybe I'll go back and look at aviation one more time, and look at this view here, for example. You might realize there's all kinds of stuff: I can describe these views, and I can describe these columns. This one is, just to have a little fun, a little system called Robotech, if anyone remembers that. What's a faction? Well, there are the details of a faction. These descriptions are good for explanations, but they're really good when you have a thing like a whistle indicator. Why wouldn't you say: the whistle indicator is the indicator of such-and-such, and values like x mean this, y means that, and z means that? That's the context that helps you as a human, and definitely helps that AI tool. So that was just showing the catalog. The reality is, if I go back to my query editor, and I don't want to let my cluster die, I'm going to hit run on the query one more time.
And I say, you know what, this is still too much information. It was great that I found that, but what else could I do? Well, back at our top level, you can go and find our list of data products. Now, I very purposely, in the lester.galaxy site, have data products. I don't want to get overwhelming here, but in your environment you may well have tens or hundreds or more. You can have domains, and then within those domains, these various data products. A data product, like this Pokemon GO one, should be some kind of subject area within a larger domain. And we have the opportunity to describe what's going on. You have this one; I'll show you my other one, which maybe has a little more detail. A data product really is a list of datasets, a list of tables and views, that fit into that world. And, hopefully, as you drill in there... this one doesn't have a lot of details, so let's go find one that has good details. But there's at least top-level information about what's going on here, and a cool feature we call usage examples. Not only do we tell you about those datasets, we give you this. So, in many ways, a data product is just a heavily marked-up set of datasets. What are these datasets? Well, in fairness, because we don't want to invent something new, we build these data products around a schema. So this aviation schema got promoted; there's a button, and you just promote it. As a data producer, you want to curate. You get a schema that you like: this is the one I want the world to see. In a variety of ways, as code or in the UI, you say elevate that to a data product, and voila, what you end up with is a data product like this one, Air and Space. And again, you can go edit and enhance this stuff. Often, that top-level page is just, you know, the what, when, where, and why.
If you have your own internal wiki, of course, add a link up to it to point people there, along with maybe some support information: who, when, where, why. Again, those datasets are right there. I went ahead and put a few in, and that astronauts table was a good one to focus on. Again, here's the table and what's going on with it, and there's the tag space. But you might notice I've only filled in a handful of these. I did this as a human. I won't demo it for time purposes, but I could go in under this "enrich with AI" option and let AI take a stab, either at the column level, the table level, or the schema level, and let it fill in answers. And if you do that, which I do recommend, take a moment as the subject matter expert to review those outputs, because it may not always get things right. I was practicing one to show you: for mission number, it said, oh, this is a sequential number of missions. It made a logical assumption, but it wasn't exactly right. So, again, as a human expert, we want to come back and correct those. Better examples are the mecha table, where all those details are populated, or maybe the planes dataset. I think I've got all those fields articulated, the what, when, where, why, such as this field called issue date. Well, what the heck is the issue date? It's the date the FAA issued the tail number to that aircraft. I don't think our AI tool, or any AI tool, would just figure that out. It probably would have said, oh, that's the date the plane was manufactured. Well, no, there's a separate field called year built for that kind of thing. All right. So with a combination of human and AI, you really articulate all of that. You may even go a little deeper and, like we said, supply some usage examples. Let me hit this query so the cluster doesn't suspend on me. You might say, yeah.
This is what I want to know. Hey, what does this one say? I get a pop-up to help out. We'll do this one: common airplanes on long flights. Right? I wanted to know which aircraft are most common on long flights, and as my notes said, you can adjust that threshold, 1,500 or something, to whatever helps you. So what you could do is simply say, show me this. Because this is just markup; it isn't the data. And when you really, really need to query the data, the good news is that we use the same framework, the same technology that already works there. When you say, hey, I want to wire up a BI tool, it goes straight to Starburst and Trino and runs something. And there we go. What does it tell me? It tells me that the Airbus A320 in this dataset has the most flights over 1,500, and then several 737s and 757s look like they take up the rest of the top five or so. Ultimately, all those things I was trying to showcase are what we call data products, in a quick, fast drive-through. And as I was trying to say, this is great for humans, and it's great for processes, either procedural processes or our AI tools. And all this information, including these usage examples, is being bundled up as context behind the scenes in our conversational AI, which can say, hey, I have all of this context about this domain. So when you come watch Justin showcase this, you'll see that our initial plan, and Dan hinted at this, is that we want folks to pick the domain, the data product, that they want to ask questions of and interrogate. Why? Because we want to pull in as much of this context as we can. Absolutely, we also want to support "just look at everything there ever was." But when you go further and further and wider and wider, who knows how good an answer you'll necessarily get.
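For the curious, the query behind that answer would look something like the following sketch. The table and column names here (flights, planes, distance, tail_number) are assumptions for illustration, not the demo's actual schema:

```sql
-- Sketch only: which aircraft models appear most often on long flights?
-- The 1500 threshold matches the demo; adjust to taste.
SELECT p.model,
       count(*) AS long_flight_count
FROM mycloud.aviation.flights f
JOIN mycloud.aviation.planes p
  ON f.tail_number = p.tail_number
WHERE f.distance > 1500
GROUP BY p.model
ORDER BY long_flight_count DESC
LIMIT 5;
```

Whether a human, a BI tool, or the conversational AI issues it, the query runs through the same Starburst/Trino engine against the same connectors.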
And what I really love about that UI is that when you start asking questions, it might realize, hey, that's a pretty good question. Do you want me to turn around, backfill, and add more usage examples to the ones that are out there? That, again, feeds the AI the next time around, and it fuels humans who come along and go, yeah, that's a great question, I need to ask that all the time. So with that, because we could always talk about it a hundred times over, I'll stop my share, and I'll put this page back on the screen. Perfect. Let me simply reiterate, because we've hinted at it a few times: on the left here, please scan that QR code or come to starburst.io/events. There are a whole bunch of events that we're doing, but the one I'm really pushing and recommending right now is to show up on April 14 and see it firsthand, a demo-heavy session with the CEO of the company driving it, showing you why we feel this is going to shine through. And I'm going to tell you a secret. We didn't invent this and then never use it. This is what we've done over the last year-plus with some of our biggest customers, who said this is incredible, and we did this stuff really, really fast. So we productized what we did in those professional services engagements, and we feel very confident in what it can do for you. On the right, if you want more than the feedback you can find on the web and all that good stuff, and you're a current customer or prospect, reach out to your account team. If you don't have an account team, there are obviously other ways to get there. You can find me on LinkedIn; I always try to be a conduit to this company. But there's also a nice contact page. Just give us a little bit of information, let somebody reach back out to you, and I'm positive we can get the right person and the right group of people talking to you. All right.
With that, I'm going to start looking at the chat, and I'll see if Dan or Zach have any thoughts. They may have been glancing to see if there are any open questions before we wrap up for today.

Yeah. And as folks start to submit their remaining questions, I do want to plug that in a couple of weeks, our CEO, Justin Borgman, and our VP of AI products, Matt Fuller, will give a pretty comprehensive demo of our AIDA solution. If you joined this session and understand how a data product is created and managed, it'll explain how that goes into usage by AIDA, so that hopefully you see the end-to-end experience, how every layer of the stack builds up to offer that business-user experience through AIDA. The second plug I want to make is DataNova, our Starburst conference, which is held, I believe, May 27th and 28th, if I'm not mistaken. I added a link there, and there's a link in the docs tab. Feel free to take a look, and if you're interested, sign up; we'd be more than happy to meet you in person at DataNova.

Perfect. Looking at the chat, I love the answer to Minh's question, which was, let's do a comprehensive compare-and-contrast with the two giant beasts out there, and that might be a better way. We have some resources on our website, too, that we could direct you to for some comparison. But yeah, I think you're right, Dan. A good webinar honing in on a compare-and-contrast between those players and us would be awesome. I also love the Q&A about Delta versus Iceberg. The answer really took that question down: is there a difference? I was just going to make the point that data products and our whole framework aren't Iceberg-only or Delta-only and, in fact, aren't only data lakes. Obviously, we know the framework was designed inherently, ten-plus years ago, around data lakes. But with the connector architecture, we can support other sources.
And my Pokemon GO data product was a good example of that: some of the data was in data lakes, some of it was in Snowflake. It was an assemblage of information from multiple sources offered as one single data product. To someone looking at it, it looks like one schema of information, and they don't necessarily need to know when and where it came from. So with that said... yeah, I see one more question. Please, jump on in.

If I'm not mistaken, I believe our performance product manager is planning a deep dive into the performance optimizations made to the engine, where he can answer more questions about that. I don't know the date of that webinar yet, but stay posted on our webinar page to keep an eye out for when it drops. I anticipate it should be fairly soon.

All right, I think the questions are drying up. Zach, Dan, thank y'all for showing up today. You made my life easy; I got to be quiet and let y'all do all the heavy lifting and get people focused on what's out there. And again, for everyone out there, there are plenty of events and plenty of other things. If you ever need some help, find me on LinkedIn. There's only a handful of Lester Martins, I promise you; I'm the guy with this beautiful face right here, and I am always glad to answer any question. There are some dev advocates out there who go, ah, everyone asks me questions. That's my job. Please ask me some questions about this company and how I can help you, and we'll get you there.

What about multi-tenant clusters? Sure. Are you saying multi-tenant as in, I have a cluster up and I want two large, different telecoms on it together? Or are you saying, I'm a telecom and I want to be in two different environments, like AWS and GCP? Multi-tenant, to me, sounds like the first one, where I have multiple customers together.
So maybe give us a bit more detail; I'll hang around for a second in case you have some more feedback on that. But, arguably, that environment I just showed you, Starburst Galaxy, yeah, Alex, that control plane is a heavy, heavy multi-tenant environment. Everybody in the world comes to that same environment. But a cluster itself is usually going to serve one purpose or group, and that's good. We don't have to go back: I spent years at Hortonworks and Cloudera, and I consider myself a pretty strong Hadoop expert. We started off years ago with this notion that maybe a multi-tenant cluster makes sense, that we wanted the biggest cluster we could get and ways to slice and dice it. The world shifted along that journey back toward separating storage and compute, getting rid of noisy neighbors, and having isolation from each other. So at the cluster level, we have to be careful there.

So, Sheriff, you said you're in Starburst Enterprise. Great. Within your company, you have two big applications hosted on the same cluster. There are resource management tools in Starburst Enterprise, and I would say there's generally nothing wrong with what you just described. Two big apps, two different departments: can they be on the same cluster? They sure can. And deployment-wise, today, that's probably the easiest for you. In the future, it might be easier for them to have their own isolated clusters. But if it becomes noisy and you start stepping on each other, you can enable resource management controls that say, just like Hadoop used to, you get this slice and this amount, and I get that amount. I'll be glad to point you to specific examples if your account team can't help. So I think that's the answer you're looking for, or I hope it is. Okay. All right. I think that was the last one. That's it: no alibis, no last-minute questions. Okay.
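For reference, the "slice per department" controls mentioned above are resource groups in the underlying Trino engine, enabled via the file-based configuration manager (set resource-groups.configuration-manager=file in etc/resource-groups.properties and point resource-groups.config-file at a JSON file). A minimal sketch, where the group names, percentages, and user patterns are made up for illustration:

```json
{
  "rootGroups": [
    {
      "name": "global",
      "softMemoryLimit": "100%",
      "hardConcurrencyLimit": 100,
      "maxQueued": 1000,
      "subGroups": [
        { "name": "app_a", "softMemoryLimit": "60%", "hardConcurrencyLimit": 20, "maxQueued": 200 },
        { "name": "app_b", "softMemoryLimit": "40%", "hardConcurrencyLimit": 10, "maxQueued": 200 }
      ]
    }
  ],
  "selectors": [
    { "user": "app_a_.*", "group": "global.app_a" },
    { "user": "app_b_.*", "group": "global.app_b" }
  ]
}
```

Selectors route incoming queries to a group by regex on attributes like user or source, and each group caps memory and concurrency so two applications sharing one cluster don't starve each other.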
Thanks, everybody, again. We're signing off, and we'll talk to you all next time. Thanks for joining. Thank you. Bye-bye.