Talk Description
Real-time web data is one of the hardest data streams to automate with trust: websites don't want to be scraped, are constantly changing with no notice, and employ sophisticated bot-blocking mechanisms to stop automated data collection. At Sequentum we cut our teeth on web data and have come out with a general-purpose cloud platform for any type of data ingestion and data enrichment that our clients can transparently audit and ultimately trust to get their mission-critical data delivered on time and with quality to fuel their business decision making.
Additional Shift Left Data Conference Talks
Shifting Left with Data DevOps (recording link)
- Chad Sanderson - Co-Founder & CEO - Gable.ai
Shifting From Reactive to Proactive at Glassdoor (recording link)
- Zakariah Siyaji - Engineering Manager - Glassdoor
Data Contracts in the Real World, the Adevinta Spain Implementation (recording link)
- Sergio Couto Catoira - Senior Data Engineer - Adevinta Spain
Panel: State of the Data And AI Market (recording link)
- Apoorva Pandhi - Managing Director - Zetta Venture Partners
- Matt Turck - Managing Director - FirstMark
- Chris Riccomini - General Partner - Materialized View Capital
- Chad Sanderson (Moderator)
Wayfair’s Multi-year Data Mesh Journey (recording link)
- Nachiket Mehta - Former Head of Data and Analytics Eng - Wayfair
- Piyush Tiwari - Senior Manager of Engineering - Wayfair
Automating Data Quality via Shift Left for Real-Time Web Data Feeds at Industrial Scale (recording link)
- Sarah McKenna - CEO - Sequentum
Panel: Shift Left Across the Data Lifecycle—Data Contracts, Transformations, Observability, and Catalogs (recording link)
- Barr Moses - Co-Founder & CEO - Monte Carlo
- Tristan Handy - CEO & Founder - dbt Labs
- Prukalpa Sankar - Co-Founder & CEO - Atlan
- Chad Sanderson (Moderator)
Shift Left with Apache Iceberg Data Products to Power AI (recording link)
- Andrew Madson - Founder - Insights x Design
The Rise of the Data-Conscious Software Engineer: Bridging the Data-Software Gap (recording link)
- Mark Freeman - Tech Lead - Gable.ai
Building a Scalable Data Foundation in Health Tech (recording link)
- Anna Swigart - Director, Data Engineering - Helix
Shifting Left in Banking: Enhancing Machine Learning Models through Proactive Data Quality (recording link)
- Abhi Ghosh - Head of Data Observability - Capital One
Panel: How AI Is Shifting Data Infrastructure Left (recording link)
- Joe Reis - Founder - Nerd Herd Education (Co-author of Fundamentals of Data Engineering)
- Vin Vashishta - CEO - V Squared AI (Author of From Data to Profit)
- Carly Taylor - Field CTO, Gaming - Databricks
- Chad Sanderson (Moderator)
Transcript
*Note: Video transcribed via AI voice-to-text; there may be inconsistencies.
" Now we'll just transition our way over, huh? Mark. How you doing, man? I'm doing great. It's a great talk. Always love hearing from 'em. So, um, I'm super excited to have our next person come up.
Um, some quick backstory. Um, I was at Data Day Texas presenting on data quality and had kind of a group discussion. Sarah was one of those people, and she was describing how she transformed this company and started going further and further left, actually fixing the people and processes of data quality.
And immediately afterwards I was like, I have to have you at this conference. So I'm super happy for her to be here and kind of share what her work is. Thank you so much for that intro. Um, I did prepare some slides for you guys. I'm just going to try and figure out how to share those. Uh, just a moment.
Okay. Share. There we go. Good. Okay. So, all right, so shifting left with Sequentum, right? So a little bit about me. So as Mark said, um, I really have a lot to say on this topic, and I love that you guys have organized this first Shift Left conference. Shifting left is a term that, uh, we talked a lot about back in the nineties, when we were switching from the waterfall methodology of testing and certifying software over to, um, XP and then Agile methodologies.
Um, and as you can see from my little stack of books that I'm sharing here, all my old besties, um, I've spent a lot of time thinking about this stuff over the years. I'm gonna share some of my stories with you today, but I'm also gonna walk you through at a high level, um, how I think about applying, uh, all this experience to the world of data.
Um, and then if we have time, we can dive into a little bit about, uh, you know, how exactly we did it at Sequentum. Um, I'm not sure if we're gonna get to all of that. Uh, so a little bit about me. Um, you know, so I'm running this company Sequentum. Um, I basically came along and found they had what was for me the most enviable tech, right?
I had been working in very, very large scale automated operations for decades, you know, with browsers and all of that. Um, but what they had was a low-code interface, right, which is incredible for driving efficiency. Um, and they also had a custom browser, um, which helped in the scraping world, um, to get around all the blocking.
Um, so those two things were really critical. And I joined this company in 2017. Um, they had been at it since 2008. Um, so really, really mature tech, um, and a little bit more about us. So we go to, to market in, in basically four different ways. We take that, you know, the core of, you know, web scraping or, or web data pipelines as we sometimes call them.
Um, we make it available through on-prem software. So like healthcare companies that are working with regulated data, or government agencies that are working with classified data sets, right? They will set up all those pipelines, probably not web data, but just regular data. Um, they'll use our on-prem software for that.
Um, we also have a cloud, uh, version of our software. Nothing to install, not even an extension. Um, pay as you go, jump on, everything's there, servers, proxies, everything's integrated, your custom browser, um, low-code interface, et cetera. And then we have data as a service. Um, and this is really for us, our headlights into, uh, what are all the difficult blocking scenarios?
What are all the interesting new multimodal data types that we need to support? Um, and, uh, you know, it's sort of our headlights into where we need to go with the product. Um, and the last thing is our intelligent agents. So, uh, ChatGPT is probably one of the best things that happened to Sequentum in the last decade, right?
Opened up whole new avenues for us. We're not just doing data now, we're doing AI. Um, so we have been running, you know, profitable autonomous intelligent agents for clients since basically Q2 of 2023. So a lot of excitement there. Um, and then we also, because, you know, we're incredible nerds in the area of compliance, and, you know, risk mitigation, it's not just quality, it's also compliance.
Um, you know, we loaned our Sequentum operating guidelines and worked with the finance industry, with the SIIA Alt Data Council, to publish the Web Data Collection Considerations, which aligned, you know, basically the finance industry and the alt data community on doing data-driven investment decision making.
Um, and now we're working with the Alliance for Responsible Data Collection, bringing that to the data industry at large. We're defining, uh, an extension of the Croissant dataset documentation standard. Uh, we're working with OpenAI, Common Crawl, and others, um, to basically put out there a methodology and an audit capability,
um, so that datasets can be certified as kosher, right? So the AI that's consuming them can be certified as kosher. We're hoping that that is an effective strategy for dealing with all the lawsuits and a lot of the bad actors that are in our space. So what are we doing? So, web data, you know, it's notoriously difficult.
This is the hardest area to automate with reliability and compliance and trust, right? How do you actually do that? Well, you have to take all of the best practices from 20 years of testing software, you know, security, functional, and performance testing, on every single kind of technology,
um, and apply it to, um, you know, how you actually ingest the data, and then how you make sure that what you're ingesting in real time is meeting your expectations and your rules. So it's shifting left in the sense that the entire system is built from the ground up to make sure that nothing bad ever, ever enters the fray.
Um, right. And essentially it's change management at scale. Right? I'm gonna, I'm gonna try and stay high level and then in the end I'm happy to go into details. But, um, right, the three levers that you have, people, process, and tools, right? Um, and the four, you know, areas that you basically wanna think about is first and foremost, your methodology.
Um, have to make sure everyone's aligned on that, knows what it is. Um, then you wanna, of course make the problem manageable in every way possible. Um, make the entire operation transparent, um, and empower people to jump in and do something. At the moment it's needed, um, right. Methodology wise. Um, right. Uh, I don't know how many people, uh, here are in startups, but I suspect you are.
'cause our industry's pretty new. Um, but Right. You wanna clarify the problem. Um, and you know, for example, uh, you know, I worked at one company, um, we were putting ads in video games, and so we had, um, 120 AAA video games in our lab at any time. And if any of you are gamers, you know, that, um. Every single one of those games is completely different.
Uh, the media formats are different. The way they're built is different. The gaming engine is different. Um, you don't have cheats, right? It's incredibly challenging. Um, and, and, but the same approach applies, right? When you're collecting requirements, you templatize it as much as possible. You make sure that you're defining that acceptance criteria upfront.
And in our case, we communicated it to the publishers and developers at the contract level, right? Um, you wanna make sure there's a review and approval process so that, uh, you know, if there are things that are not gonna be fixed by the time that game has to go out the door, um, that there is an approval process, um, and people are signing off on that, right?
Um, 'cause it, it can affect, uh, commercials. Um, but also there may be stakeholders along the way that say, you know what? Um, that is just not okay. Right? Um, and then you treat every problem as an opportunity, right? Every time you have in a release, every time you have a big issue, you do a postmortem, right?
There's no, uh, shame in having problems, right? This is what we're doing. We are building value, and we are gonna come across problems, and every single one of them is an opportunity. And so you're always gonna wanna have a CI/CD roadmap, right? Continuous improvement, continuous deployment. You wanna have that roadmap for how it's gonna be better next time, right?
So you identify all those issues, right? Don't be shy. Some of the conversations are difficult, right? Um, you have to trust your team. You make sure that they can speak up. You have a list of all those issues and you basically stack rank them: you define the severity, you define the likelihood that it's gonna happen again.
And then you define what that mitigation is, right? Is it people? Do you need people with other skills? Do you need better training? Right? Do you need a, a process to make sure that that doesn't happen again? Um, or do you need, uh, tooling, um, you know, is there something that you can build to actually bulletproof it, right?
And then you just stack rank it based on the severity, the likelihood that it's gonna happen again, and the difficulty of building that mitigation. Always have your CI/CD roadmap. Don't pretend that you're ever gonna get to a place without that roadmap. You're always gonna have your list of, ugh, if I just hit this, it would get so much better.
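As a rough sketch, the stack-ranking she describes, severity times the likelihood of recurrence weighed against the difficulty of the mitigation, could look something like this. The issue fields, example items, and the exact scoring formula are illustrative assumptions, not Sequentum's actual process:

```python
from dataclasses import dataclass

@dataclass
class Issue:
    """One postmortem finding and its proposed mitigation (illustrative)."""
    title: str
    severity: int      # 1 (minor) .. 5 (critical)
    recurrence: float  # estimated chance it happens again, 0..1
    effort: int        # difficulty of building the mitigation, 1 (easy) .. 5 (hard)
    mitigation: str    # a people, process, or tooling change

    def priority(self) -> float:
        # Expected pain per unit of effort: severe, likely-to-recur,
        # cheap-to-fix items float to the top of the CI/CD roadmap.
        return self.severity * self.recurrence / self.effort

backlog = [
    Issue("Proxy pool exhausted during peak crawl", 4, 0.8, 2,
          "tooling: auto-scale the proxy pool"),
    Issue("Schema change missed in review", 5, 0.4, 3,
          "process: add a schema diff step to the review checklist"),
    Issue("On-call unsure when to escalate", 3, 0.6, 1,
          "people: escalation training and a runbook"),
]

# Highest priority first: this ordered list is the roadmap.
roadmap = sorted(backlog, key=Issue.priority, reverse=True)
for issue in roadmap:
    print(f"{issue.priority():.2f}  {issue.title} -> {issue.mitigation}")
```

Keeping the backlog in a form like this also makes the "hundreds of lines of bulletproofing" she mentions auditable: every item carries its own justification for where it sits.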
And in fact, in the case of that, uh, ads and video game startups, when Microsoft came to acquire us for hundreds of millions of dollars, that document was core to that discussion, right? Because they looked at it and they saw hundreds and hundreds of lines of bulletproofing, bulletproofing, bulletproofing.
And that's normally how they negotiate down the price of a startup. So, you know, when they raised that hood, they didn't take a dollar off that valuation. Um. So it really adds up. Um, okay, so manageability, right? You wanna make sure it's manageable, right? You know, in our world of web data, right? It's like, oh my God.
Like if you just write one Python script and try and pull the data down, that's great. It's gonna run once. But try running it every hour of every day, 24/7, 365 days a year, and have some quant, you know, algorithm trading on that data downstream, right? That little script that you wrote is not gonna work, especially when you have thousands of these things running, right?
Make it manageable. So you want reusable atomic building blocks, right? So in our world, we have like a URL command, right? And there's all of the settings and things that you might want to configure on that URL command. It has the basic URL, but it also has things like, you know, are you rotating the proxy when you're loading it?
How long do you wait? How many times do you retry? There's, you know, 60 settings on that URL command that you can configure. And for us, that makes it bulletproof, because all the things that typically go wrong with URLs are baked into the command. They're always at play.
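A stripped-down sketch of that idea, a fetch command whose retry, wait, and proxy-rotation behavior is baked in rather than rewritten per script, might look like this. The settings shown are a small invented subset; this is not Sequentum's actual URL command or its 60 settings:

```python
import itertools
import time
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class UrlCommand:
    """A reusable fetch step: failure handling lives in the command,
    not in each individual scraping script (illustrative subset of settings)."""
    url: str
    retries: int = 3            # how many attempts before giving up
    wait_seconds: float = 1.0   # back-off between attempts
    rotate_proxy: bool = True   # switch proxy on every attempt
    proxies: list = field(default_factory=lambda: ["proxy-a:8080", "proxy-b:8080"])

    def run(self, fetch: Callable[[str, Optional[str]], str]) -> str:
        proxy_cycle = itertools.cycle(self.proxies)
        last_error: Optional[Exception] = None
        for _ in range(self.retries):
            proxy = next(proxy_cycle) if self.rotate_proxy else None
            try:
                return fetch(self.url, proxy)   # success: hand back the page
            except Exception as err:            # blocked, timed out, etc.
                last_error = err
                time.sleep(self.wait_seconds)   # wait before the next attempt
        raise RuntimeError(
            f"{self.url} failed after {self.retries} attempts") from last_error
```

A real `fetch` would wrap a browser or HTTP client; injecting it as a parameter keeps the command testable without touching the network.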
Um, and then you wanna prioritize, of course, composability. You wanna have low-code point and click as much as possible, because it's just gonna be faster and more intuitive for teams to build and deploy quickly. Um, but then you don't want to create barriers, right? So when some new technology comes out that your point and click hasn't built support for yet,
you can always go to the scripting engine, right? At Sequentum, we support, you know, JavaScript, C#, Python, regular expressions. These are normal programming languages. No proprietary languages, right? Nothing new to learn, normal languages that people know, that they're comfortable with. You know, let them upload files and third-party libraries, reach out to these brilliant AI services that we have now, um, to augment the intelligence of the workflow.
Um, you know, just allow it, right? And, and for the things that you use over and over create templates, right? We have templates. Um, we have a big philosophy to create templates with everything that we do. Um, and you bulletproof the template, right? You don't wanna maintain thousands of agents. You wanna, you wanna maintain smart templates.
Um, and I'm sure that this has been touched on here, I'm not gonna go into it, but, you know, infrastructure as code. Use automation everywhere. You can apply the same principles to how the sausage is made, how the software and the platform get out there, and how the jobs get scheduled.
Um, so transparency. Another really, really big factor, right? You wanna break down the silos between teams in order to drive trust, right? The data engineer who gets a very specific technical problem, right, is hopefully alerted in real time to exactly what the problem is and where the problem is, right?
Without having to hunt for it. Um, and, you know, when that portfolio manager's signal, instead of saying, yay, these sneakers are amazing, invest in Nike, is all of a sudden negative, right? And he's like, that's not true, the stock is going up. Why, what happened on this date?
Right? And when he drills down into the system, he can actually drill down in, 'cause it's all low-code reusable building blocks. It's very easy to document; these agents are automatically documented. He can go and see exactly what happened and when. He can go back and look at the versions, he can see who changed what, he can see the ticket that has the detail of the change, right?
This is all important, right? He can see, oh, we were trying to save money and we changed the AI service we're using to assess sentiment. We went from the expensive one that understands when someone's like, whoa, these sneakers are mad cool, right, and knows that's positive, not negative, even though mad sounds negative, right?
Like, we changed to the cheap sentiment service, and it thinks that that's negative now. Oh no, that's terrible, right? So then, is that the data engineer's fault? No. Right? The data engineer is doing the right thing. Um, that transparency really drives trust, and it really helps with all of those relationships that can get really tough when things don't go well.
Um, right? You need all the bells and whistles: SOC 2 Type II, audit accredited, all those badges. You know, the formality and discipline like that, those things are always worth the investment. Um, and they make everybody more sane. Um, but you wanna empower your team, right? So when something goes wrong, you really want it to be assigned to the person, not the senior person who then has to call the middle person who then has to call the junior person.
You don't want any of that. You want it to go straight to the person who can fix the problem. We are in the business of real, real-time data, right? It's driving business decision making, right? There's no downtime, right? You can't have downtime, but it's web data. Things go wrong. So the second something happens, right?
Everything is secure, everything's encrypted in all the right ways, you're meeting all your SOC 2 standards, but the person who's responsible, who can actually effect a change, is immediately empowered to do so. Download the latest version, point, click, fix, deploy, kick off that run, right?
Don't allow gaps in that data, right? Because time series, you don't want gaps, right? Um, make sure all the proper audit trails are captured. Um, make sure you have role-based authentication so that if that person is not there, someone on their team is also equally empowered. Um, but every change has an owner, right?
Every change gets checked in, everything's tracked, everything's audited, right? Anyone can see the audit trail at any time, right? You're separating dev, QA, and prod, right? That's, that's a whole nother big area, uh, that's worth investing in. Make sure that you have all that segmentation done properly. Um, and in our case, there's a lot of detailed requirements with the automation platform that have to do with driving that shift left, uh, sort of methodology and, you know, transparency and, you know, all these good, good best practices.
And then there's also a lot of specific data quality, um, uh, sort of measures that we have that sort of come standard out of the box. Um, you know, and of course we can customize them to every data set. Um, but I, you know, I don't know if we have time to go into all of these details, but, um, but, uh, happy to take it offline with, with any of you.
If, if you wanna know more about those. Um, does anyone have questions? Always. So many questions. Excellent. So, uh, I'm, I'm not sure if Mark's gonna jump up on the screen too, but I am going, I'm searching through the questionnaire, so gimme a sec. And I imagine people are gonna pop in some questions into the chat right now too, as we realize that it is ending.
Sometimes you know that something is, you know, the talk is ending, and then other times it just ends abruptly. And you kind of caught me off guard, I'm not gonna lie. I didn't realize we were ending right away. Oh, sorry. Yeah, yeah. It's not your fault, it's my fault, I'm here. Yeah, I mean, I can also share some more stories as well.
Like, I'm sure that you guys have, you know, um, these terrible stories. But I like to think of shifting left as really a way to sort of augment the joy in the team, right? Mm-hmm. You're basically, you know, all the pain. We all know the pain that that can happen on these teams, right? When things go wrong, um, and when things go really, really wrong, like, you know, people don't behave, always behave their best, right?
It's very stressful. It's very painful, right? And so the more of this kind of rigor and discipline, um, risk mitigation, you know, that you can put in, you know, shifting left, you know, um, you know, if you, if you have a system, uh, like putting in a contract, you know, that's great. Um, like that's, that's just going to help spread joy, right?
Because ultimately, uh, the devil's in the details and those downstream data consumers, um, you know, they don't necessarily have any idea what happened, um, you know, when the data came originally from that source. Um, but, uh, you know, if, if all those best practices are in play, then you know, every little thing that goes wrong, the impact seems to be, you know, tends to be smaller, um, easier to fix, easier to describe what went wrong, right?
And it just, and faster to fix, right? So the more rigor and the more discipline, you know, the stronger the team, the better the outcome, the more joy, right? And at Sequentum, a lot of us have actually worked together for decades. We just choose to keep working together and find ways to work together, um, and to pull in all our besties.
'cause we all really believe that, um, you know, doing things the right way, um, you know, it's just a joyous, joyous way to be, and it creates enormous value. So I actually have a question for you, uh, Sarah. Mm-hmm. You know, you started off saying that shift left isn't really new, it's been around for 20 years, and that actually aligns with Chad's recent article on it, where he, mm-hmm,
gives all these different examples across the industry of that, and, you know, shift-left data is just another iteration of that pattern. Mm-hmm. I guess, given that you have this historical context, what were the early mistakes of early shift left, and what can we learn moving forward, applying it to data?
Well, I think, um, you know, agile, fragile, right? Like, people were like, oh, let's just move really quick and break things, you know, and, but they didn't actually have rigor and they didn't have automated testing and they didn't have acceptance criteria well-defined. And they, you know, they were just, you know, sort of throwing stuff out there.
And it wasn't disciplined or formal. And it wasn't actually agile at all, because then they ended up fighting more fires again. So that was probably the biggest one. But even back with XP, um, Extreme Programming, we wasted an enormous amount of resources, you know, with people sitting next to each other and trying to pair program and trying to sort of nip those bugs, uh, you know, in the bud.
Um, you know, it, you don't need to have two people sitting next to each other. Like that was an idea that that really caught on for a while and it really slowed down everybody, right? It was no good. Um, 'cause you know, there's 20 ways to fix a problem and it doesn't matter if you do it, you know, method One or Method 20.
The point is whether it passes the acceptance criteria or not. And then, you know, so there were a lot of things like that in the beginning. Um, in the nineties, you know, when we had this waterfall methodology and they would throw this software over the, uh... Yeah, I do not miss Extreme Programming, that's for sure.
Um, I recognize the name Pete Cooney. Uh, I wonder if, yeah, anyway, we'll, I'll have to touch base with you, um, after this, but yeah, no, in, in the days of Waterfall, it was terrible. I mean, they used to literally throw the software over the fence and we would try to find all of the issues, and inevitably we wouldn't.
Of course we wouldn't. And then we would get blamed for not finding them. Why didn't you find them? You tested the software for a full week, right? And then, you know, it was just so painful. There was no joy at all. Um, and of course they, you know, if they had any delays, it would come outta the testing schedule.
'cause you had to deliver the software on a very specific date. Um, you know, one terrible story that happened one time: it was a Walker Digital / Disney startup. And this was early in my career, this was only my second job at a dot-com. And there were a lot of lessons that I hadn't learned, and one of them was: if you really know that there's a best practice that no one is following, don't just put your hand down because no one's listening to you. Literally jump up and down and bang on the table and stand on the table and scream from the rafters, right?
Mm-hmm. So this company, they weren't doing performance testing, and the business goal was 50,000 concurrent game plays a minute. Um, and it was games of skill; somebody was gonna win a million dollars every month. So it was such a fun, fun startup to be at, but nobody was doing performance testing.
And I, I was very concerned about this. I'd built a whole model and I'd gone around the whole company and gotten everyone's input and everyone's review, and everyone had signed off on it. I had all this detailed breakdown of the resource model and the capacity model, and, you know, I built all of these performance testing scripts and automated all these games, you know, winning some, losing some, doing everything very, very carefully.
Um, but, you know, I kept saying, we have to do this testing months before release, because we're gonna find bugs, and these are hard bugs to fix. Like, you can't just rush them through at the last second. And they were like, oh, you know, uh, young, inexperienced person. Very nice. Yeah.
Haha. And nobody listened to me. And I just, you know, was like, all right, well, I guess we're gonna be late releasing. That was my assumption: we're not gonna hit our dates. And, uh, so it came into test, and I ran my scripts, and it failed, you know, stack traces everywhere, at 25 concurrent users.
And I was like, oh, okay. I'd seen this a million times. Um, and I was like, well, there's a bug when you, you know, this is basically what's happening. And they were like, oh, the testing doesn't work. And I was like, well, that's a possibility. Uh, it's only 25 users. Let's just, um, let's just do it manually, as humans.
And right, there are 99 people in the company. Uh, we went slowly up on a Friday afternoon, got up to 20, you know, the exact same errors, stack traces, you know. Whatever, it was the first time we did integration testing, right? There were errors, it didn't scale. Um, but they shut the company down because they were outta money.
So Monday we came in and we were all laid off. I was like, wow, next time I'm gonna speak up. Next time I'm gonna speak up. You work late on a Friday, then get laid off Monday. And now you're a CEO, so it all worked out. Yeah, I know. It's, uh, it's really all about, um, you know, making sure people are empowered, making sure that you're listening to that voice that's saying, you're doing it all wrong.
Right. Putting rigor and discipline into how you approach it. Um, shifting left, trying to nip it in the bud every time there's some issue you don't have a solution to. People, process, tools: get your mitigations in place early. Shift left, right? Um, yeah. And that acceptance criteria, which is, I'm calling it acceptance criteria.
That's an agile term, but you guys have coined a whole new thing in the data industry with these data contracts, like, yes, right, that's a great term, right? You make sure you have those rules defined and agreed to, and everyone's aligned, right? So when it doesn't match, you can do something about it immediately.
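A data contract in miniature could be as simple as a shared set of expectations that every incoming record is checked against, so a mismatch surfaces immediately instead of downstream. The field names, types, and bounds here are invented for illustration, not any particular contract tooling:

```python
# The agreed-upon rules: both producer and consumer sign off on these.
# (Field names, types, and bounds are made up for illustration.)
contract = {
    "ticker":    {"type": str,   "required": True},
    "sentiment": {"type": float, "required": True, "min": -1.0, "max": 1.0},
    "source":    {"type": str,   "required": False},
}

def violations(record: dict, contract: dict) -> list:
    """Return a human-readable list of contract breaches for one record."""
    problems = []
    for name, rules in contract.items():
        if name not in record:
            if rules["required"]:
                problems.append(f"missing required field: {name}")
            continue
        value = record[name]
        if not isinstance(value, rules["type"]):
            problems.append(f"{name}: expected {rules['type'].__name__}")
            continue
        if "min" in rules and value < rules["min"]:
            problems.append(f"{name}: {value} below {rules['min']}")
        if "max" in rules and value > rules["max"]:
            problems.append(f"{name}: {value} above {rules['max']}")
    return problems

print(violations({"ticker": "NKE", "sentiment": 0.8}, contract))  # []
```

The point of the exercise is the agreement, not the code: because both sides signed off on the rules, a non-empty result is unambiguously actionable the moment it appears.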
Right. And one last question on that: you know, you talked about being early in your career and no one really listening to you, and you're in a CEO role now. Yeah. And there are a lot of other leaders in the audience. How do you create a culture where, you know, people who do see something but may not have the seniority can really bring that up, and people listen?
Yeah, I mean, I, I think, I don't always get listened to as the CEO, right? I mean, sometimes you, you, you need to create an environment where people are willing to speak up and willing to, uh, you know, just not take what you say as dogma, right? But then you also, you, you need to be able to defend the things that really matter and you need to pick your battles, right?
I mean, it doesn't matter what your title is, it doesn't matter where you sit in the organization, right? What matters is you need to find a way to break through on the problems that really, really matter. Excellent. Amazing. Well, there was one question that came through in the chat that I wanted to ask you before we jump, and, uh, to piggyback on it: one person was asking, hey, can we see that second-to-last slide of yours again so we can take a screenshot? Because it went by too fast.
Yeah, sure. Of course. Let me do that all around the data quality requirements. Yeah. So, but I would say there's, there's two things, right? Mm-hmm. So, um, this is the, the automation requirements, right? Like, you need to have all of these, you need to make sure that you can vary, you know, you have that easy to maintain system that you can vary the input data, vary the output data.
You wanna make sure you can vary your delivery endpoints. 'cause those are always changing too. That should not throw anyone for a loop. Um, you want that real time quality monitoring, right? Um, you want some process automation to detect and handle errors when they come up. Um, you want that transparency, all that good stuff.
So couple this with all those checks at the data level. So for us, this is like the generic out-of-the-box checks. And then we also have, you know, ways to customize. So we can set up custom buckets that do all kinds of custom measurements, and we can say, like, you know, a column can be null 5% of the time. Yeah.
You know, uh, and we can set thresholds, right? And of course, um, we have, you know, very strict rules around schema and data type. Um, and then we define key fields, um, so that, you know, it's very easy to configure on a dataset-by-dataset basis. What are the dedup rules?
What are the change tracking rules? Um, you know, so it's just, it just makes it easier. And then of course there's the web portal and you can always see the KPIs over time. And you can see, um, you know, everything's very transparent. So when something goes through a hiccup period or a period of change, when things aren't that great, everyone can see that.
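A couple of the batch-level checks she lists, a per-column null threshold and a key-field duplicate check, can be sketched like this. The column names, the 5% limit, and the message formats are illustrative, not the platform's actual output:

```python
def null_rate(rows: list, column: str) -> float:
    """Fraction of rows where the column is missing or null."""
    return sum(1 for r in rows if r.get(column) is None) / len(rows)

def check_batch(rows: list, null_limits: dict, key_fields: list) -> list:
    """Return quality failures for one delivered batch."""
    failures = []
    # Per-column null thresholds, e.g. "price may be null at most 5% of the time".
    for column, limit in null_limits.items():
        rate = null_rate(rows, column)
        if rate > limit:
            failures.append(f"{column}: {rate:.0%} null exceeds {limit:.0%}")
    # Duplicate detection on the configured key fields.
    keys = [tuple(r.get(f) for f in key_fields) for r in rows]
    dupes = len(keys) - len(set(keys))
    if dupes:
        failures.append(f"{dupes} duplicate(s) on key fields {key_fields}")
    return failures
```

Run against every delivered batch, a non-empty result can block delivery or page the owner, instead of letting a silent gap or duplicate reach the downstream time series.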
Um, incredible. Yeah. Well, Sarah, thank you so much for coming on here. We're gonna keep it moving. We've got another panel coming up, and as you all know, I am keeping the time. I'm the timekeeper of the day, so.
We're gonna bring on our panelists for this next session."