Brainforge x CTA: Weekly!

Date: March 6, 2026 Source: Granola Meeting ID: ea19f1bf-0e33-49c4-b98f-dddfef47d074 URL: https://notes.granola.ai/t/ea19f1bf-0e33-49c4-b98f-dddfef47d074

Participants:


CTA Data Pipeline & QA Process

  • New three-environment structure in Snowflake
    • Dev: local development testing
    • QA: PR validation and data quality checks
    • Prod: production deployment (triggers on merge + daily refresh)
  • Database organization by layers
    • Staging, intermediate, and marts layers
    • Schema divisions (CRM, reports, CS tables)
  • GitHub Actions workflow
    • Auto-triggers QA checks on PR creation
    • Requires reviewer approval + passing QA before merge
    • Production deployment on main branch merge

Semantic Layer Models & Views

  • Created views instead of additional tables for report queries
    • Standardizes calculations across use cases
    • Prevents logic duplication and inconsistent filtering
  • Base model: CES attendee overview (per registrant level)
    • Contains company, program codes, attendance flags
  • Semantic views handle specific aggregations
    • Registration by industry, attendance by program code
    • Industry attendees by department segmentation
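
The views-over-base-model pattern above can be illustrated with a toy example: one table at the registrant grain, plus a view that owns the aggregation logic so every consumer reuses the same filters. A sketch using SQLite; all table, column, and view names are invented, not the actual CTA schema:

```python
import sqlite3

# Toy base model at the per-registrant level, plus a semantic view that
# centralizes the "registrations by industry" calculation.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ces_attendee_overview (
    registrant_id INTEGER, company TEXT, industry TEXT,
    program_code TEXT, attended INTEGER  -- attendance flag lives in the base model
);
INSERT INTO ces_attendee_overview VALUES
    (1, 'Acme',   'Automotive', 'AI', 1),
    (2, 'Acme',   'Automotive', 'XR', 0),
    (3, 'Globex', 'Health',     'AI', 1);
-- The view owns the aggregation; callers reference it instead of re-deriving it.
CREATE VIEW registrations_by_industry AS
SELECT industry, COUNT(DISTINCT registrant_id) AS registrants
FROM ces_attendee_overview GROUP BY industry;
""")
rows = dict(conn.execute("SELECT * FROM registrations_by_industry"))
```

Any report that needs registrations by industry now selects from the view, so a missed filter in one ad hoc query can no longer produce a divergent number.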

Data Quality Assessment Results

  • Prepared QA document comparing Snowflake outputs to audit report
  • Most numbers closely match historical audit report
  • Key discrepancies identified:
    • AI product category: 46,677 (Snowflake) vs 46,672 (report) - 5 record difference
    • Fortune 500 logic needs company-level tagging vs individual attendee matching
    • 2026 preshow data missing (no on-site flag available yet)
  • Verified attendance logic implemented from Kyle’s 2025 calculations
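
The comparison exercise above amounts to diffing two dictionaries of metric counts. A minimal sketch; metric names and figures are illustrative placeholders, not the actual CTA numbers:

```python
# Sketch of the QA comparison described above: line up Snowflake outputs
# against the audit-report numbers and surface only the metrics that differ.
# Names and figures here are placeholders for illustration.

snowflake_counts = {"registrations": 105, "attendance": 88}
report_counts    = {"registrations": 100, "attendance": 88}

def discrepancies(ours: dict, theirs: dict) -> dict:
    """Map each mismatched metric to (snowflake, report, delta)."""
    return {
        metric: (ours[metric], theirs[metric], ours[metric] - theirs[metric])
        for metric in ours
        if metric in theirs and ours[metric] != theirs[metric]
    }
```
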

Outstanding Data Requirements

  • Fortune 500 company list
    • Need seed file per year with Boolean for CES presence
    • Historical scraping required (may need paid access)
  • Product category mapping
    • Choose between “field value” and “corresponding TMA name” columns; leaning toward TMA name, renamed (e.g. to a standardized code)
    • TMA (Targeted Marketing Alert) name covers the standardized/marketing-focused categories
  • Industry/media by department query needed
  • Interbrand flag can be removed (no longer needed)
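
The per-year seed file proposed above is simple to sketch: one row per company per year, with a Boolean for CES presence. Column names and contents here are assumptions for illustration, not the actual seed:

```python
import csv
import io

# Hypothetical shape of the per-year Fortune 500 seed file discussed above.
seed_csv = """year,company,at_ces
2024,Acme Corp,true
2024,Globex,false
2025,Acme Corp,true
"""

def companies_at_ces(raw: str, year: int) -> list[str]:
    """Companies flagged as present at CES for a given year."""
    return [
        row["company"]
        for row in csv.DictReader(io.StringIO(raw))
        if int(row["year"]) == year and row["at_ces"] == "true"
    ]
```

Keeping all 500 companies per year (not just attendees) preserves the "who wasn't there" question for prospecting, per the discussion in the transcript.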

CTA Team Feedback & Use Cases

  • Jackie (VP Conferences) impressed with current work
  • New analysis request: track whether paid conference attendees actually attend sessions
  • Potential sponsor prospectus use case
    • Similar to membership/sales dossiers for Gary
    • PDF/Word document artifacts from data pipeline
  • Fortune 500 attendance valuable for prospecting lists

Next Steps

  • Wednesday 1-3pm: Deep QA working session scheduled
  • Katherine and Kyle to do first-pass QA review
    • Compare Snowflake logic with Word doc queries
    • Identify specific discrepancies and required tweaks
  • Awaish to create Asana tickets for outstanding data requirements
  • Uttam testing models in Cortex using QA environment tables

Transcript

Me: Hello. Them: Hey Tom, how you doing? Me: Hey. Good morning. How are you? Them: Not too bad. Me: How was the week? Them: From my perspective, not too terrible. A lot more of, like, ad hoc stuff, and actually doing some more data science stuff, so it’s actually kind of nice. It’s letting you guys kind of handle more of the engineering pipeline building right now, and then just kind of helping with ad hoc analysis where we can. Me: Yeah, definitely. Hello. Them: How’s everybody doing? Me: Morning. Good. Them: Katherine, small anecdote. I just got off a call with Jackie, who is the VP under our conferences team. And Kyle, she was very blown away by what we put together, so thank you again for that. And she had a really interesting question that was one of those things where I’m like, again, fascinating that this has not been possible historically. She would like to do some analysis on people who pay for the conference tracks: whether or not they go to the sessions they paid for. Interesting. Okay. Yeah. And I was like, actually, I like that as a question. And she’s got a whole bunch of places she wants to go with it, but I was like, fascinating that that’s not something she’s actually been able to get at before. Me: Interesting. Them: She’s going to send a whole thing that she’s got, like a bunch of different pieces of the question, but I don’t know. Anyway, we’re doing the right work. I was just telling Uttam that’s mainly what we’ve been doing, which is the actual in-depth analysis, or getting started on some of that, showcasing what we can do, and then letting them do the back end and create the structure for it. She did also have a use case that is similar to those dossiers that the membership and sales teams put together for Gary when he goes to meet with somebody. I think it’s a similar sort of thing, but more of a prospectus around track engagement for potential track sponsors.
And so I think, as we start to build out a pipeline that’ll take the data and create some of those more PDF-y, Word-doc-y kind of artifacts around it, that’s another good one to put in the queue. But anyway, I digress. What shall we talk about? Me: Nice. Yeah. I mean, I think today we could probably spend time just in Snowflake and talk through the models we pushed and then the path toward QA. So I think maybe, Awaish, do you want to share Snowflake? And then we can talk through what the structure looks like: where we’re going to be doing QA, how things are going to move to production. And then let’s talk a little bit about the models that we need some feedback on. Them: Okay? Yeah. Me: And I can take some notes if there are questions per table. Them: Okay? Yeah. Actually, I’m realizing Zoom AI Companion is kind of a great fit for doing a QA meeting, right? Me: Yeah. It’s like, okay, does this table still need to exist? Them: Yeah. Part of me is thinking, to be honest, maybe I should do that even if I’m, like, solo QA-ing. I could totally start a Zoom and record myself looking at stuff. Yeah. Talking to myself. Me: I think I was talking to someone about this this week. I don’t talk to myself; I’m, like, in my head. People who talk out loud to get through problems are probably more suited to it. Them: Because you had recommended Whisper, and it is really good AI voice-to-text, but same thing: I’m just not in the habit of talking out loud. When I start talking, I say different things. It’s very strange. Me: Yeah. It’s kind of weird. Yeah. Them: My brain is odd. Okay? We can look at the catalog to basically see all these databases and tables. So the last changes we made: we now have three environments.
One is dev, one is QA, and then there’s one called prod. And for each environment, we have multiple layers in dbt, where we split the databases: a staging layer, an intermediate layer, and a marts layer. Dev is for when we are doing development on our local machine and want to push some changes and test them out. Then, when we create a PR, it goes to QA. So when we have all our changes in a PR, it’s going to generate models based on those changes. And we are using that for two purposes: to validate that the PRs execute correctly, and also for doing the data QA at that level, that the data looks good. Once it passes both those checks, we can merge it. Once merged, it will trigger the jobs in the cloud, so we will basically have the same tables in production. So in prod we trigger at two levels: one, when anything new is merged into the main branch, we trigger a recreate of these models; and then it also runs once a day, basically to refresh the data. Okay, okay. And for each of these databases, the second subfolder basically becomes a schema. So, for example, if we look at marts, we are dividing it into multiple schemas, like CRM, reports, and CS. All the tables related to CS are going to be there, and inside CS we have these tables. So the latest changes we have made are basically in QA, so we can maybe look at that. Me: And do you want to show the PR too? That way, I think that’ll help show when the PR gets created. Them: For example, in QA, we have this. Me: How it ends up landing here. Them: Yeah. Like, here’s my PR, which has, like, 95s19 files there, and it says. So, yeah, I think I can show, in the workflows, where we basically define what happens on a PR.
So when a pull request is opened against main, we basically start a run. This is how we have defined our workflows in GitHub Actions. This one is for QA, which triggers on each commit on a PR: whenever we add a new commit or create a PR, that’s when it executes. It reads the secrets from environment variables and then runs the dbt command. And similarly, the second one is for prod. Prod runs when anything is pushed to the main branch, and also on a cron schedule. Then, going back to the pull request, this is where we have added some checks before it gets merged. One, it needs to be approved by a reviewer. And it also needs to pass this QA check that I was talking about. So only once everything executes successfully are we able to merge the PR. There are some changes in here; it’s basically all those models which I have created in the semantic layer. I have created them as views for now; we can discuss whether we want to keep them as views. Because this is all coming from, if we go back here, the semantic layer and the views. This all originates from a base model, which is basically the registration attendance overview table that has the data at the per-registrant level. So it has, for this attendee, what is the company name, what are the program codes, and all these things. But the tables, the data we require in the format of our audit report, that just depends on queries on top of it. So it is not a single model; we don’t need extra models for that. We have the base model, but we actually need some queries, and for each use case there is a different query.
So the only thing which made sense, according to me, was to create some views where we define the logic that standardizes the calculation, like registration by industry, for example. And we have, for example, this one, if we look at it, called industry attendees by department. So it’s all the attendees in the industry section, segmented by the attendees’ departments. If there is any logic required, any filtering required, that happens inside of that. So anyone who then needs this same table, anywhere in the CES report or anywhere else, is just going to reference this view instead of writing the query again from the base table. That way we standardize all our calculations. Like the few things you mentioned on the last call: okay, I need to calculate something, and, Kyle, when we write the queries from the base, we might miss some filters or miss something and the numbers become different. So this is where we centralize all our logic. If anyone needs attendance by program code, they’re just going to come in and use this view instead of writing a query themselves again on the base table. Nice, okay? Cool. So this is kind of where we’re, you know, sort of parking the right answer, so to speak. Okay. Okay. Going back to the environments and the dbt layer database cleanup: I do really appreciate it. I went in there to look at the stuff and I was just like, oh my God, so many things, where do I go? So this is, honestly, much, much easier. I see a future where maybe that level of complexity makes sense, but for the moment, I really appreciate the narrow consolidation. So, yeah, my tiny little brain will appreciate being able to move more fluidly through this, but that’s cool. Okay, so those QA marts will ultimately wind up existing in prod marts as well.
Okay, okay, okay. Okay, cool, cool. Me: Yeah. So what it basically allows us to do is, because nobody is depending on anything in prod marts yet, it’s sort of safe. Like, we could have pushed it there. But as things get hooked into other systems or are available through Cortex, Them: Yeah. Me: we just don’t want to push without a QA, and then QA on our side. So I will be reviewing that: looking at the logic, going to look at the tables and making sure there’s no duplication, Them: Yeah. Me: that the data exists, and then, but really, I think the question of whether this is the right number aggregation is what we need QA from the CTA team on. And then, yeah, I think this view layer is actually one step further. Like, hey, the tables for the report are all these joins; instead of creating tables for each of those, we can just create views. Them: Yeah. Me: And then eventually we’ll just have this flexible model anyway. Them: Yeah. Me: So we’re not fixing the whole model just for this outcome. As we talked about, we’re creating the foundation models, and then we’ll just create the joins if needed. Them: Yeah. Me: And then I think that’s really where we need QA: did we do the proper joins, and is the output matching? Them: Yeah. I think that’s perfect. And then to your point, too, we are still so early that it’s part of why it’s good to be figuring out exactly what we want the CES data to be like as the foundation, because once we start bringing more people in and there are more things built, to your point, that means more refactoring and stuff like that. But I think this is absolutely the foundation that we need to be building. This is cool. This is really cool. Okay. Yeah. Based on these models, I actually just prepared this document that will speed up the QA process, if we want to do so.
Like, this is the snapshot from the audit report, and we’re running my queries on top of the view I’ve created, and then we have these actual results from that table. And if we compare, for example, industry 2026: we have 86,607, and here from the report we have 86,679, so there’s a mismatch of a few attendees, but it’s much closer to the number. I’ve been QA-ing this, and I have QA’d all the tables we have in that audit report. For most of them, the number does match what is coming from that audit report, like registrations, attendance. But there are a few open questions, like task number three, where we are seeing the preshow issue. This logic depends on the on-site flag, and for 2026 we don’t have that data yet. Since we don’t know on-site, our flag just uses the default values, so it marks all of them as such. Yeah. So these are the actual numbers versus the report numbers. We can actually QA the tasks in detail here on this call, if you want. I admit I’m very tempted to say yes. I’m also thinking it could be a deep rabbit hole. It probably makes more sense to do it asynchronously, but maybe we could work through one together. Me: Yeah, let’s do maybe one. Them: For the verified piece, I was curious, did we decide whether to use the logic for verified to make it a flag on the data, similar to how it was in the past? Or are we keeping the definition of verified and then using it in a view, like in a calculated field? The attendance field is basically a flag in the model. I received that logic from Kyle regarding how he’s calculating 2025, the data from P36, and the data for the years before that. So that exists in the model because that is dependent.
It’s individual; our model is at the per-registrant level. So at that level, we can actually say, for this event and this year, whether this person attended or not. That kind of logic can just go in the model. And all the aggregations which you see here, these happen in the view. Okay, gotcha. Cool. So, yeah, we are now not dependent on the verified flag. That logic came from Kyle; it’s in the model, and then this view is just using that model. This is basically just a query on top of the view, and the view has the logic defined. If we go here into the QA semantic layer: so it starts with the intermediate layer that you guys created, with the CES attendee base layer, and then it goes down to, I believe, the overview or the wide or something, and then eventually goes to the overview and then to these semantic layers. Me: Yes. Them: It was this table. This one, again, is basically the logic for this view. So as we can see in the query here, directly, what I’m saying is: based on the report, I want to try this for years ’24, ’25, ’26 and these registration types, and then this attended flag. Here we actually don’t have logic for the flag; it’s just true/false, whether the person attended or not. And the logic for that is in the model itself for each year. Yeah, yeah, that makes sense. That makes sense. Yeah. You can verify here whether that query for the view looks fine, and if we need to modify it, we can do that in dbt. And this, yeah, this is just reading from the view, so it’s a very simple thing. Yeah. I mean, if you want to pick one, we can go through the QA on it. I don’t know if there’s any one that’s better than others to pick, but I’ll let you decide. The one I quickly saw was slightly off, I think, was Fortune 500. Just looking at that, and I think it might be because we’re,
I’m not looking at it now, but the number is really high. Like, 2024, there were, like, 800 companies. So, I mean, unless it’s looking at the Fortune Global 500 or Global 1000, it might be right, but it can’t be 800. Yeah, I think with the Fortune 500 stuff, that’s why we’re just going to have to take it away from being a flag on the individual registrant and instead tag it at the company level. Right. And then do the identity stitching to find attendees that were part of that Fortune 500 company, because, yeah, it’s just going to be too messy otherwise. Me: Yeah. Do it ourselves. Them: And maybe we look at the overview. The Fortune 500 flag is there from the wide. Me: Which table is the Fortune 500? Them: Yeah. Right now, the logic is that we have a Fortune 500 table in Snowflake. So from the attendees, we just get the company name and then compare it with the companies in that Fortune 500 table. And if the name matches, then we flag it as true, that the attendee is from a Fortune 500 company. Yeah. See, the trick is that, I mean, obviously we know the company data is messy at the attendee level. We also have some situations where, for Fortune 500 companies, we decided that they were there because somebody attended from a subsidiary company. And so the name will never match, but we’ve said that that company was in attendance at CES. That’s why we really are going to have to have the Fortune 500 list, with the companies canonically parked somewhere, and then the join out to the attendees can be a little shakier. Because we’ll have the correct answer to how many Fortune 500 companies were at CES each year. And then, if we’re using it to find those people, it’s okay if we miss a few or get a false positive here and there, but the number that gets published will be correct. Because it’s protected. Okay?
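
The company-level approach described in this exchange can be sketched as follows: keep the canonical Fortune 500 presence list authoritative, and let the join from attendee company names be fuzzier (normalize names, tolerate suffixes). A minimal illustration; all names and normalization rules are assumptions, not the actual CTA matching logic:

```python
import re

# Hypothetical normalize-then-match sketch for tagging attendees from
# Fortune 500 companies at the company level rather than exact-name matching.

def normalize(name: str) -> str:
    """Lowercase, strip punctuation, and drop common corporate suffixes."""
    name = name.lower()
    name = re.sub(r"[^a-z0-9 ]", "", name)                  # strip punctuation
    name = re.sub(r"\b(inc|corp|llc|co|ltd)\b", "", name)   # drop suffixes
    return " ".join(name.split())

def tag_attendees(attendee_companies: list[str], fortune500: list[str]) -> list[str]:
    """Attendee companies that approximately match a Fortune 500 entry."""
    canon = {normalize(c) for c in fortune500}
    return [a for a in attendee_companies if normalize(a) in canon]
```

As noted in the discussion, the published count comes from the canonical list, so an occasional miss or false positive in this attendee join doesn’t corrupt the headline number.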
Yeah, we can take this one, maybe. This is more like the product category one. It requires a little bit of logic in there, comparing with the product code mappings. And we can go in here and actually run this. Okay. And basically it says product category: artificial intelligence. And this total number is for 2026 only, because that’s what I saw in the document. And these are the percentages per year. So the number for artificial intelligence is 46,677, and if we look at this report, okay, in this report it says 46,672. So there is a little bit of variation in the numbers. And I don’t know if it’s because the report is from the past, or if it’s something we need to look at. Yeah. I mean, they should match exactly. And we can basically also look at the, trying to remember. Well, okay. Actually, let me think about that, because they should match exactly. However, we know that there was a little bit of truncation happening on the product codes. So it could be that there were a few people who are coming through now that that field’s been restored. Although I’m assuming those product codes are listed in alpha order, so you wouldn’t expect artificial intelligence to be the one getting truncated. Okay? Yeah. The thing is that if the string is long, right, if there are multiple codes attached, it truncates the rest. Right. Right. But I’m like, if they’re always in alphabetical order, artificial intelligence should never be the one at the very end getting trimmed, because it begins with an A. But I’m actually not sure if they come through that way or if they come through in, like, a totally random order somehow. And this is the logic for calculating that; we can see it here. So this confirms both industry and media, and then we are just using product category, getting the product categories from here. And then it has been mapped to the CES product interest codes on the ID column.
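
The truncation hypothesis raised here can be made concrete with a toy example: if product codes are stored alphabetically sorted and joined into a fixed-length field, codes late in the alphabet are the ones that get cut, so a code beginning with "A" survives. Field length, delimiter, and codes below are all made up for illustration:

```python
# Toy model of the product-code truncation discussed above: alphabetize,
# join, cut at a fixed field length, and keep only codes that survived intact.

def stored_codes(codes: list[str], max_len: int = 30) -> list[str]:
    joined = ";".join(sorted(codes))[:max_len]            # truncate like the source field
    return [c for c in joined.split(";") if c in codes]   # drop partially cut codes

codes = ["Artificial Intelligence", "XR", "Robotics"]
```

With a 30-character field, "Robotics" and "XR" are lost while "Artificial Intelligence" survives, which is why truncation alone wouldn’t explain a delta in the AI count if the codes really are alpha-ordered.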
And so it will now have been mapped to the names. And then, yeah, we are using this name, corresponding TMA, or there’s the other one, field value. Yes. Yeah. So there is a name, this one, corresponding TMA name, and there’s one called field value. Like, we should probably use field value. Let me take a look at it real quick so I make sure I say the right thing while I pull it up. The TMA name thing: TMA stands for Targeted Marketing Alert. We have a small number of these product categories that we send specific marketing pieces around. And so the corresponding TMA name is populated for the ones that are most important and have that additional marketing happening around them. I think field value is, yeah, kind of like the raw ones year by year, so you could probably use corresponding TMA name; we’ve probably just renamed it. Maybe we just call it something like, you know, standardized code, I guess, or something like that. Because there is variation across the years, but we don’t need to report it exactly as it was; we just need to make sure we can consistently tie it together historically. So we can use the TMA name one; we probably just want to rename that field. Yeah, that field. In the end, in our view, it’s called product category. We are just then querying it on the product interest code, and then we use count on the emails. We could also count on registrant ID; maybe I used email here. Yeah. Registrant ID and email should both aggregate the same 99.9% of the time. When I’ve randomly decided to check and see if that’s still true, it has been true. There were a few brief days where there were somehow duplicate emails with different registrant IDs, but those have gotten cleaned up. So I think for the most part it’s totally equivalent to count distinct email versus count distinct registrant ID; it should be safe to use either. Okay? Yeah. This is just the logic for calculating percentages and the sum for each year. And that’s how. Okay, so if we wanted to pin down the delta of the, like, five records for artificial intelligence for 2026, between the Word doc report and this, we would go to the attendee level, I’m guessing. So, like, making sure that we’re not missing anybody. Which number was higher? Sorry, I already forget. Snowflake had the higher number, right? The report was five lower. Yeah, okay. Me: Yeah. Them: So, yeah. So then I guess what we would want to do is make sure that whoever was considered industry or media and interested in AI in the query I used for the Word doc is coming through here. And so we could look at this: instead of the aggregated view, we could pull the raw data row by row and compare them. That’s how we could QA this, I’m guessing. Am I right? If you have the query for the Word doc, you can just send us that and I can verify what’s the difference. Me: So I feel like this QA piece is really the primary Them: Okay? Yeah. Me: thing we need some iteration on. So what do we think is the best cadence? More working sessions? What do you think, Katherine, to kind of get through it? Them: Yeah. I mean, I’m totally guilty as charged; I love a good working session. Woof. My calendar next week looks terrible. I want to unsee all of this. I mean, there are some spots in here, mostly in the afternoons, it looks like. But yeah, I think probably what we can do is maybe me and Kyle can divide and conquer and do a first pass at: okay, what do we have in Snowflake? What’s the logic? What did we have in the Word doc? I mean, also, just to be clear, it’s possible the Word doc is wrong, in my case, with the 2026 numbers. I did a very good job, I think, but that might ultimately wind up being what we find, too.
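
The near-equivalence of the two count keys discussed above can be checked with a toy example: COUNT(DISTINCT email) and COUNT(DISTINCT registrant_id) agree unless duplicate emails carry different registrant IDs, which is exactly the brief anomaly described. Schema and data below are invented for illustration:

```python
import sqlite3

# Toy check of count-distinct-email vs count-distinct-registrant-id,
# including one deliberate duplicate email to show the divergence case.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE registrants (registrant_id INTEGER, email TEXT);
INSERT INTO registrants VALUES (1, 'a@x.com'), (2, 'b@x.com'), (3, 'b@x.com');
""")
by_email, by_id = conn.execute(
    "SELECT COUNT(DISTINCT email), COUNT(DISTINCT registrant_id) FROM registrants"
).fetchone()
```
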
Yeah, so if we take a first pass to go through these, and then, yeah, book some time to dig in on anything that either we can’t figure out or things where we do identify, ah, this is the tweak that needs to happen. But if we want to do that as a big group, that’s okay too. I’m just not sure everybody wants to sit on a Zoom call and pore over numbers; I might be the only nerd who thinks that sounds like a fun time. I think that sounds good to me. Okay? In that case. Me: Yeah. So maybe we can find time next week. Them: Yeah. Me: And that way, at least it’s on the calendar so we can keep driving. Them: That’s true. We’ll have something to talk about regardless. Me: Yeah. And I’m now using some of these models to test Cortex. These came out in QA yesterday, so I’m starting to use them. Them: Okay, well, how about Monday? We could use some of the time in planning, if we get through planning fast. But apart from that, I could do Me: Okay? Them: 3:00 on Tuesday; I could do an hour or two there. And then Wednesday I could do 1:00. Actually, Wednesday we could do anytime in the afternoon, because we could move the meeting with membership and data if we needed to. Me: Yeah, Wednesday could be good. Them: I could book, like, one to three local time? Yeah, okay. Me: Yeah. Let’s do that. Them: Okay, cool. Okay? Yeah. I actually also wanted to bump the Asana request. I’ve added these five things here, which I need to create Asana tickets for. Like, we need to work on these things. And like the Interbrand flag. Price retail brand. That one is its own sort of list; we probably can at least have the historical list, and then we probably need to pull the one for 2025. But Interbrand, I’ve been told, nobody cares about, so we can ditch that. Then we have Fortune 500. We already have a table, but I think you mentioned that we need a revised version of that
one. And then, yeah, this one that just came up: we want to have industry/media by department, like the query, so that I can just go and give it asynchronously. For the Fortune 500 thing, I mean, I think it probably just becomes a seed file, honestly. We’ll just have: this was the answer for these years. It doesn’t get refreshed, apart from generating the new year’s file. I don’t know if I have a strong opinion one way or the other. Like, does it make sense for the file to contain all 500 companies or just the ones that were at CES? Probably, ultimately, Kyle, it comes down to: do we ever get asked who wasn’t there, where it would be convenient to have the rest of the Fortune 500 list already in there? I mean, I think we definitely keep it in Snowflake, for sure. Yeah. It definitely will be used as, I mean, should probably be used as a prospecting list, potentially. So everybody that’s not in there, I think it makes sense to keep them. Okay. Yeah. So maybe we do one seed per year with the Fortune 500 list and just a Boolean for present at CES, true/false? Yeah, I think so. The one annoying thing with the Fortune 500: well, I can’t remember if I had to pay to get the previous data or just the current year data. I can’t remember. But I think you had to pay. I think I did, like, a trial, so I just scraped literally the past ten years of data and took it all, and then, whatever, unsubscribed. But whether you have to do that for every year or not is one thing I don’t know. Yeah, we can take a look at that. That’s funny. I definitely asked, like, Claude to bring me the Fortune 500 list for this year, and it came back with one that matched the website. So, yeah, now I’m like, I wonder how it got it. Did it bill my credit card? No, but yeah, yeah, yeah, I think that makes sense. So I understand the pre-audit report a little bit more: you guys are going from,
because, looking at the SQL, it looks like you’re going from the raw versions of the registration into this audit wide version, this intermediate, and then going down into the various semantic reports, basically. Is that correct? Yeah, from raw to intermediate, then it goes to the marts as well, the overview table, which is the cleaner version of intermediate. Then we have the semantic layer. So, yeah, all these joins, for example joins with Fortune 500 and product interest code, these happen right now in intermediate. But they can also happen, if we have a clean table, in marts as well. Yeah, I think that would be my biggest question. I know Katherine and I talked a little bit about trying to clean it up a little. And I don’t know if this is best practices from a developer standpoint, but we need to have, like, one clean table of registration history, I guess. And that’s where I would start, basically. And then all the semantic views would go off of that, is what I was thinking. I don’t know if that’s correct, what you’re thinking, Katherine. Like, is it really necessary to create a whole new model for just this pre-audit report? I guess is my question. Because with the semantic model, most of this audit report will be generic questions that we get asked anyway, so it’s almost just part of the registration data itself. Like, right now, we have two registration pipelines: the one that you created and the one I created. And so if there’s a way we can make that just one, I think that’s what, I guess, I’m getting at. Yeah, I think, yeah, we can, basically. Me: Hey, guys. Sorry. Them: All right, so we’re showing the dashboard that we’ve done so far. So, yeah, these are the numbers that we have. A couple of things. So, yes, first things.
Let me show the Equals side by side; it's somewhere here. Me: So just to set the stage again, we're mainly trying to recreate what's in Equals within Omni. I did a review of the dashboard yesterday, and it seems like we have most of the core metrics. But I would love to hear, Laura, if there are discrepancies now. Them: Yeah, I think the ARR. If you look there, you're at like 3.5 in April. First of all, why are we showing April? It's now March. Yeah, it's just the labeling; it's actually March. Oh, we're saying 3.5 in March? Me: So, yeah, I just want to set up two things for this call. One, I want to talk about the structure first, so that we can nail that: we have the right graphs, we have the right cuts of data. And then I want to get into QA. It can sometimes be overwhelming to see everything, especially because I know you're deep in the weeds, and be like, okay, this is off. The first thing I want to do is just make sure this has all of the metrics that you're looking at, and then we can go into QA-ing from the top. Them: Yeah, metrics, I think. One second, let me just check something. Okay, I think the top makes sense: our net ARR change, so that means total ARR in February versus March. Okay, customers, new customers. One thing that I would add at the top is churned customers. So you have net new customers, and then also churned customers. Me: Okay. Them: If we add that, that will be helpful. And I think other than that, that makes sense. Actually, one thing: where do we have total ARR in the table? Me: We can add totals at the bottom that add up each piece. Them: Yeah, in the ARR breakdown. So a total at the bottom, and then add a line that says month-on-month ARR growth and shows the percentage, please. Me: Okay. Them: And same with the customer count.
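The requested month-on-month ARR growth line is a window function over the monthly totals; a minimal sketch, assuming a hypothetical `arr_by_month` table with one row per month:

```sql
-- Month-on-month ARR growth as a percentage of the prior month.
-- NULLIF guards the first month, where there is no prior value.
select
    month,
    total_arr,
    round(
        100.0 * (total_arr - lag(total_arr) over (order by month))
              / nullif(lag(total_arr) over (order by month), 0),
        1
    ) as mom_arr_growth_pct
from arr_by_month
order by month;
```

The same pattern applies to the customer-count growth line mentioned in the same breath.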
And then total ARR by month, that makes sense. ARR by component, I think that's fine. Equals shows it a bit differently. I'm wondering if we should just show the ARR-by-component chart the way Equals does, because the way you're showing it now, the churn is so small in comparison that it's kind of hard to see. So if you remove the end-of-month ARR and just show everything else, I think that's fine. Me: Yeah. Okay. Them: Another thing you could do... Me: Suggest the changes. Them: Yeah, changes. Another thing to note is that if you click on any segment, you can actually see the value. Me: Yeah, but I still want to remove it. Them: Yeah, I will remove it; I'm just saying. Okay, but that's helpful. So I think overall that's fine. And wait, one thing that's not here, that maybe you'll do in a separate dashboard: if you go down, is there ACV somewhere? Oh, there is. Okay. Yep. So cost, yeah, new customers, all customers. New customers, yeah, I think. Me: And what do you think about the fact that we're truncating to a couple of significant digits for the most part? Do you have any concerns about that? Them: No, that's fine. Me: Okay. Them: Okay. Yeah. Also, I noticed you had top customer changes over the last 30 days. Do you still want to keep the last 30 days, or do you want to reduce or increase the time frame? The last 30 days is fine. Okay. And also, in terms of customer changes, is there any other customer change you would want to keep track of? You might want to see churn. Actually, that would be great if we add that, so I can see. If you go up, where can I see churn by month? Okay, it's churned customers, I see that, yes. You know what we should add? Churn reason by month. In Salesforce, there is a field where Customer Success marks churn: they mark regrettable or non-regrettable, and then they mark the reason for churn.
I would like to see the top three or top five reasons for churn. There are a few ways to do this: we could do it on a bar chart and show it by month, or we can just show the top few reasons for churn over the last six months, but I want to see churn reasons, basically. Me: Yeah. I would suggest two charts, Demi: one that is a bar chart, by month, by reason, where you want to see customer count and dollars. And then similarly, we want a table for the last six months: churn reason, customer count, dollars. Them: The last churn, customer count by reason. Do we want that to be a bar chart or a table? Me: Both. You can have the reason per month. Them: Okay, that's fair. And then we want it to be what, 60 days or 30 days? Me: No, we want to do the last six months. Them: Okay. Yeah. And if we can change the time frame with a drop-down, that could be helpful also. Yeah, we'll definitely add that there. So we've been able to build this out, and we'll add filters. Right now this is static; it's just refreshing as the data refreshes every day. But the idea is, once we finalize the format, we'll add filters to ensure that we can start to break it down by customer, if you want to look and see the ARR by customer, for instance, or by time frame as well. And also, I know you mentioned mid-market as well, so enterprise and mid-market; that would be the next thing. So that will become more relevant in the future. We don't have that many enterprise customers yet, but hopefully in the future. Yeah. Okay, that's good. Other than that, I think in terms of the structure and what we cover in this one, it's pretty much set. Me: Okay. Them: Okay. All right, sounds good.
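The two churn-reason cuts agreed on above are each a single aggregation, assuming the Salesforce fields land in a hypothetical `churned_customers` table (all names here are placeholders):

```sql
-- Chart feed: churn reason by month, with customer count and dollars.
select
    date_trunc('month', churn_date) as churn_month,
    churn_reason,
    is_regrettable,
    count(*) as churned_customers,
    sum(arr) as churned_arr
from churned_customers
group by 1, 2, 3;

-- Table feed: top five reasons over the trailing six months.
select
    churn_reason,
    count(*) as churned_customers,
    sum(arr) as churned_arr
from churned_customers
where churn_date >= dateadd('month', -6, current_date)
group by churn_reason
order by churned_arr desc
limit 5;
```

The requested drop-down time frame would simply parameterize the `-6` in the `dateadd`.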
And is the goal for this, Laura, to show to the sales team, or is it executive? I think it's more executive. One, it's for all-hands; two, it's for board meetings. And eventually, when we launch Phoenix, I want to do weekly metrics meetings for all revenue teams, and I'll probably be pulling data from here, or from the model. Okay. The other thing I was thinking about yesterday when I was looking at this: when we have ARR by component, did you see the Browserbase investor update, Laura? No. What do you mean? You know Browserbase, that company? Yeah, I haven't seen it. Okay. They put out an investor update; I only saw it because Nico shared it with me, I'm not an investor. But they had a really cool chart, which was the breakdown of all of the revenue, ARR by component, but by segment: enterprise, mid-market, PLG, and so on. We should have that for sure. It's just that we can't have it yet, right? And I feel like we could probably set it up now by just adding a field in Salesforce for what type of customer they are. Actually, let me check with Ryan, because we should already have mid-market and enterprise. Or Demi, do you know if we have that, in terms of classifications? I haven't seen it yet in Salesforce, but I haven't checked every single table. If you can let me know where it's been added, I'll know which tables to look in as well. I'll do that. Also, if there's any other classification you want to add that is currently in Salesforce, do let me know as well. I think it's fine for now. All right, sounds good. So, yes, that's it in terms of structure. Utam, do you have anything you wanted to add? Me: Yeah. I think we'll go through and do a deeper QA on the ARR numbers.
I also know some things are listed as zero, so we'll basically match line by line to what's in Equals. Them: On that. Me: But the bigger piece here is that we've driven toward creating this dashboard, but, Laura, you also have the ability to create dashboards yourself pretty easily. So we'll add dashboard-level filters, but you could easily duplicate this and edit it. Most of what we're doing is making sure that when you see ARR anywhere, and anyone else uses it, it's the confirmed number. That's the one thing I want to make sure of with you: that it's all lined up, so that when other teams start to use these figures, we're comfortable. The other piece, which I mentioned to the team yesterday: everything from Salesforce is operational, versus everything from QBO, which is going to be closed-book, accounting-certified. That's the biggest thing I want to align on. A lot of the company is going to report on revenue or bookings, but of course there's also what you guys do at the close of month to close the books. I want to make that crystal clear, and I think even internally we want to make it crystal clear that there is a difference. Them: Yeah, we know. And honestly, Equals does have some stuff built on QuickBooks, but I never even checked it out. We always go off of Salesforce numbers for investors and all that. Now, in terms of our books, obviously that's done by our accounting firm, and that then goes into QuickBooks. Me: Okay. Cool. Them: So that's a separate thing, but yeah, I understand. Okay. Also, speaking of QuickBooks, we've also been able to build out some of the numbers in terms of burn rate and COGS. Would you like to see that as well? Yes. Okay. All right.
So we've been able to... give me one second. Yeah, I believe it's this. Oh, yeah. So this is the burn rate. Okay? Yeah, burn rate. It can't be burn rate, because it's showing we're burning 7 million. That's not right. Okay, so for this, in terms of QuickBooks: the bills and bill payment transactions weren't explicitly exposed in the API, for some reason, so it was hard to reconcile that. I had to use the account type, meaning where the money was going. You could see the account the money was going out of, and the tag or type of that account was available. So we have accounts typed as COGS, accounts typed as expenses, and I used that to say: hey, this counts as burn rate, this counts as COGS. Do you have an explicit classification of expenses in QuickBooks that we can then categorize into this chart? I'm not sure. We don't really use QuickBooks; our accounting firm uses QuickBooks. I look at QuickBooks all the time for expenses or whatnot, but I don't actually do anything in QuickBooks other than downloading statements. Actually, I download statements from our accounting firm's portal. So I don't know how to answer this. I'm not sure where you're pulling the data from in QuickBooks, but the most accurate way of doing it would be to pull it from the P&L, which is under Reports in QuickBooks. Me: Yeah. I'm just going to pull the items out. Them: Exactly, and just chart them, right? Because I can do this quickly in a Google Sheet, but it would be nice to have it here. So just use that data; that data is already reviewed by accounting. The only thing is that it would be up until January as of today, and then February in a few weeks, because they take a few weeks to close the books.
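The account-type workaround described above could be sketched as a simple bucketing query; the table name, account-type strings, and buckets are assumptions about this particular QuickBooks setup, not confirmed values:

```sql
-- Classify each transaction line by the type of account it hit,
-- since the bill/payment objects weren't usable directly from the API.
select
    date_trunc('month', txn_date) as month,
    case
        when account_type = 'Cost of Goods Sold'           then 'cogs'
        when account_type in ('Expense', 'Other Expense')  then 'operating_expense'
        else 'other'
    end as bucket,
    sum(amount) as total
from qbo_transaction_lines
group by 1, 2;
```

As agreed in the call, the accounting-reviewed P&L report is the more reliable source once it can be pulled, so this mapping would only bridge the gap until then.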
Me: Is there any other view, Laura, beyond just a view of the P&L? We could provide both: basically all the line items, as well as some of these charts over time. Is there anything else that's valuable? Them: I would show... so we have... Me: I'm assuming you want to see percentage of revenue per category, and then... Them: Grouped. Me: ...just growth or decreases in categories on the face of the P&L. Them: Yeah, I think for some of these, like booked revenue, can we add a line on top that shows the percentage? Me: Okay. Them: And for burn rate, please check it, because it's wrong. It should be below a million, and we're at like 700, not over. COGS, yeah, COGS is fine. And then runway. Me: Okay, cool. Them: So: line chart, booked revenue, runway. I'm trying to think what else. Me: Are you guys keeping benchmarks? Them: No, I think... Me: It could be revenue per employee, or any of the common SaaS benchmarks: burn multiple, or Rule of 40, or any of that. Them: We could do... let's do burn multiple. Actually, that would be useful, because we do share that with the board. Me: Okay. Them: Burn multiple. Rule of 40 we haven't really calculated; we haven't been asked to yet. We could add it, I'm sure. Me: Yeah. The biggest thing is whatever is helpful, whatever gets you to produce the investor update faster. And if there are other things we've never calculated, we can take a crack at some of them. Them: Yeah, sure, do Rule of 40; that could be helpful. And I think that's it. Ideally, eventually, we could add things like CAC and all this, but we need to clean our books; we can't right now.
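For reference, the two benchmarks picked here are simple ratios; a minimal sketch over a hypothetical `monthly_finance_summary` table (column names assumed):

```sql
-- Burn multiple: net burn / net new ARR (lower is better).
-- Rule of 40: ARR growth rate % + free-cash-flow margin %, target >= 40.
select
    month,
    net_burn / nullif(net_new_arr, 0) as burn_multiple,
    arr_growth_pct + fcf_margin_pct   as rule_of_40
from monthly_finance_summary
order by month;
```

Since burn comes from QuickBooks and net new ARR from Salesforce, the one design decision is which source each column of that summary table draws from, which ties back to the operational-versus-closed-book distinction raised earlier.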
You can't do it very simply from QuickBooks, so that's work on our side. But I think just add Rule of 40. Me: Okay. So yeah, we'll show margins and then show expenses. I guess you can tell us how deep to go if you want a breakdown of expense types. Them: Okay. All right. Me: But yeah, we'll add a couple of the most common SaaS finance metrics; that could be helpful. Over time, I think we'll be able to add things like retention cohorting. Them: Yes. Me: CAC and LTV-type ratios. Them: So for cohorting, did you manage to do the customer success one? What was it, NRR and GRR? Not yet, because initially the ARR was a bit off, so that was on hold. Until we get ARR to the point where things are looking good, doing NRR would just be wasted effort. But now that we're in the ballpark (yes, there are some slight disparities), we can easily do NRR, and the logic will trickle down as we clean things up to where we want them to be. Okay. Yeah, we need to add the cohorted NRR, so we will do that. And I would also add: I remember you mentioned the benchmarks we want to get to, ideally. I can surface those as well in the notes, as text, so it's easy for whoever's using the dashboard to be aware of which metrics we're doing well in and which ones we can try to ramp up. Okay. So, these are the numbers we have so far. We'll take the feedback, improve the structure, and work on some of the numbers as well, to ensure the numbers match what they should match. Me: Yeah. Laura, one thing: if we have some figures here that we want further QA on, I think we might just have to do a working session to drill in. For example, if we're a few hundred thousand off, it's just going to require us to start comparing apples to apples.
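Once ARR is trusted, the NRR/GRR calculation mentioned above follows from comparing each customer's ARR twelve months apart; a sketch with a hypothetical `arr_by_customer_month` table:

```sql
-- NRR: ARR today from customers who existed 12 months ago / their ARR then.
-- GRR: same, but each customer's current ARR is capped at the starting ARR,
-- so expansion doesn't offset contraction and churn.
with base as (
    select customer_id, arr as arr_start
    from arr_by_customer_month
    where month = dateadd('month', -12, date_trunc('month', current_date))
),
latest as (
    select customer_id, arr as arr_now
    from arr_by_customer_month
    where month = date_trunc('month', current_date)
)
select
    sum(coalesce(l.arr_now, 0)) / sum(b.arr_start)                     as nrr,
    sum(least(coalesce(l.arr_now, 0), b.arr_start)) / sum(b.arr_start) as grr
from base b
left join latest l using (customer_id);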
So what we can do is: typically, we'll take a month, we'll look at all the customers, and then we'll basically compare the ARR contribution, by category, across all the customers. Then we'll find which ones are off and where the error is coming from, whether it's from Hyperline or elsewhere. That'll get us to the remaining issues. So that's how we'll go through it. Them: Okay. Quick question. I was wondering, who runs, who operates Equals? Because there are some views in Equals that produce some of this data that I would like to have access to, and I can't seem to see them. I can give you access. Who runs Equals is Equals, basically; we have an FD that we constantly talk to, but I can give you access to the view that you want if you just let me know. Okay, I will let you know. Because usually, the way things are built in Equals right now, they keep referring to higher-level stuff. I've been able to see some things, but I've hit a wall in terms of access. Me: Can you show an example of what that is? Them: All right. So in Equals, if you come in here to the main page, you can see views. I can see, oh, this is ARR deal by customer; I can see the logic behind it, which is cool. But then it refers to another view, in this case the ARR builds input, which, again, fine. But ARR builds input refers to another view called Opportunities Clean, and I don't have access to Opportunities Clean to see what's going on in there and how they're cleaning the opportunities to get the data that we're using. So I would like to look at that and get an idea of the logic going on behind the scenes. But potentially, once I see what's going on in Opportunities Clean, it could also refer to yet another view.
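Stepping back to the apples-to-apples working session described at the top of this exchange: the per-customer comparison boils down to a diff between the two ARR sources for one month. The source table names below are illustrative stand-ins for the Omni-side and Equals-side exports:

```sql
-- Full outer join so customers missing from either side surface too.
select
    coalesce(o.customer_id, e.customer_id)  as customer_id,
    o.arr                                   as omni_arr,
    e.arr                                   as equals_arr,
    coalesce(o.arr, 0) - coalesce(e.arr, 0) as delta
from omni_arr_by_customer o
full outer join equals_arr_by_customer e
    on o.customer_id = e.customer_id
where abs(coalesce(o.arr, 0) - coalesce(e.arr, 0)) > 0
order by abs(delta) desc;
```

Sorting by absolute delta surfaces the few customers responsible for a "few hundred thousand off" discrepancy first.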
So I might need more than one view; I might need to budget more than one ask. That's the heads-up I'm giving you. Oh, and I don't have access to the opportunities input view; I don't see it here. Okay. So yeah, that's why I asked, because I saw it was done by someone called Sam. Yeah, he works at Equals. Okay, would it be possible for you to put me through to him, so I could just work on that piece? Yeah, I can do that. All right, I appreciate that. All right, I've got to jump to my next call, but I'll see you guys later. Thanks for this. Me: Thank you. Them: All right. Me: So I think, for next steps: if we can get down to the root in Equals, then we could do that QA, actually, and then basically come to you with, these are off, and here's the reasoning. So, yeah, we'll look to connect with Sam. But even if that takes a while, I think we can still drive. Our first goal here is just to get that ARR dashboard usable. Them: Yeah. Me: And then, hearing how you want the QuickBooks stuff modeled, we can quickly move to that as well. Them: Yep. Me: And you let us know how you want to operate with us as you get more comfortable editing the dashboard. Our goal, of course, is that we'll go end to end if you say, hey, I need this chart. But also, if you're in a bind, or you want to quickly see something, we'd love to show you how to quickly make some of these. Them: Yeah. Okay. Me: It's not hard. Them: Okay. Me: It's probably a little bit easier than Equals, actually. Them: Yeah. Can you send me access to Omni? I don't even know if I have it. Okay, that. And then a question for you: you know how in our doc we had a list of all these dashboards? What is the plan? Do we do them, or are you going to continue to do them?
Me: Yeah, we're going to continue to do it. Them: Okay, great. Me: Yeah, so we're going to do it. We're working closely with Nandica at Default, but again, we're working on dashboards not only for finance but for sales and further, so she's helping in some places, but we're going to drive toward the first versions of as many of them as possible, and then tweaks and things. We'll keep taking on as much as we can, but I think it'll be easy for you to say, okay, I want this column. Most of our job is making sure that all the columns you pick and join are accurate. But we're going to drive toward those dashboards. Them: Great. I'm sharing the ARR dashboard with you. You'll have access to it, and as I'm updating it, it will be up to date. Once we get to a spot where we feel everything is good, I can then give access to more people on the Default team, so they can also start to use it. For now, it will just be you QA-ing and letting me know, hey, these numbers seem a bit off today, or, we should definitely look at this. And I'll be able to add these charts to it. Okay, great. All right, sounds good. Me: Okay, perfect. So I think we'll get back to you on Monday, probably end of day, with some changes, and then with a few more after that, and then we can see if we need to start QA-ing on a call or if we can just go back and forth. Them: Yep, that sounds good. Me: Okay, awesome. Thank you so much. Them: Thank you. Have a good day. Bye. Me: Okay, appreciate it. Bye.