Magic Spoon // Brainforge sync

Date: February 13, 2026
Source: Granola
Meeting ID: a6925422-db0e-4af5-bc5e-b35d00aefbec
URL: https://notes.granola.ai/t/a6925422-db0e-4af5-bc5e-b35d00aefbec

Participants:

  • Uttam Kumaran (Brainforge)
  • Mary (magicspoon.com)
  • Ashwini Sharma (Brainforge)
  • Demilade Agboola (Brainforge)
  • Josh (magicspoon.com)
  • Michael Thorson (magicspoon.com)

Transcript

Them: Okay, so here's just the end-of-week review for this week. Again, just the general board we've been aiming for: the SPINS pipeline, the MMM data mart, as well as the data audit. So the updates this week are around the SPINS API, the MMM mart, as well as our next steps and potential blockers. So, high level, the SPINS API update: we've been able to refactor the pipeline, so it now goes from API to S3 to Redshift, and that has allowed us to have less data drop. So now the disparity is really close to what we expect to see out of the SPINS API. And also, in the warehouse, the dollar-volume QA looks pretty good. Like I mentioned, it's within a range of about $200 over 52 weeks in the test slices so far. However, the TDP as well as ACV don't match what we get through the platform, so we've reached out to SPINS about that and about how to compute it and use it in our current models. We're currently waiting for feedback from them, and once we get that feedback, we'll be able to proceed with the SPINS data. Any questions on this update? No questions? All right. Aligned.
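The API → S3 → Redshift landing step described in this update could be sketched roughly as below. This is only an illustration of the pattern, not the actual pipeline code; the bucket, schema, and IAM role names are hypothetical placeholders.

```python
# Sketch of the API -> S3 -> Redshift landing step.
# Bucket, table, and IAM role names here are hypothetical placeholders.

def s3_key_for_slice(brand: str, week_ending: str) -> str:
    """Partition raw API extracts by brand and week so reloads are idempotent."""
    return f"raw/spins/{brand}/week_ending={week_ending}/data.json.gz"

def redshift_copy_sql(table: str, bucket: str, key: str, iam_role: str) -> str:
    """Build a COPY statement that bulk-loads one S3 extract into Redshift.

    Staging extracts in S3 and bulk-loading with COPY (instead of inserting
    rows straight from the API) is what reduces the data drop mentioned above.
    """
    return (
        f"COPY {table}\n"
        f"FROM 's3://{bucket}/{key}'\n"
        f"IAM_ROLE '{iam_role}'\n"
        f"FORMAT AS JSON 'auto' GZIP;"
    )
```

Because each week lands under its own S3 prefix, a failed load can be retried by re-running the COPY for just that slice.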

Me: I'm pumped that we got this to S3. This is a much better solution overall, so I'm really happy that we got to that point. And I will say, even interacting with SPINS for our other client, it's just slow. So I think we've done our best to just continue to push things. It seems like we're working our way up the chain.

Them: Yeah. And if we can support at all, just, I don't know, adding any sense of urgency to their team, let us know.

Me: Okay.

Them: Yeah. Happy to poke more. And I think next, like we were all talking about, based on the pipes looking okay, we could probably actually start to pull the full Magic Spoon history if we wanted to. And then the ACV and TDP calculations we can kind of parallel-path, that conversation with SPINS, while we're making sure all the Magic Spoon data came in.

Me: Good.

Them: Okay, sounds good. We can make a note of that and just ensure that we have the backfill for Magic Spoon data.

Me: Great.

Them: Okay. On the MMM mart: basically, all spend is done. We've gotten QA from JT, and he's also giving us feedback based off feedback he's received from clients. He also let us know that he wanted a daily mart, so that was done as well. And we've been able to fix the QA disparities between the daily mart and the weekly mart, so that has been fixed. Now when I query it, everything matches to the exact decimal. So that's pretty good. The modeling of the SPINS data has also been done; that's basically the stuff around dollars and units. Again, there's a slight fix I'll have to push today, and once that's in we'll be done with having all the SPINS data. So that will be the base table, and then all that will be left is ACV and TDP, based off the feedback from the SPINS team. So basically the overall status is that we're almost done. Then, once we push the new model, if Michael looks at it and feels like the numbers and the filters are what he wants to see, that would be good, and we'll be done with that and basically just be waiting for SPINS. So I think the ask would just be: are there any other numbers, based off what we have, generally speaking, across SPINS and the different CSVs, that we might want to see? So that we know that once I send the PR, we're basically done.
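The "matches to the exact decimal" QA between the daily and weekly marts could be sketched as a check like the one below. The table and column names are hypothetical; the point is using `Decimal` so the comparison is exact rather than float-approximate.

```python
# Sketch of the daily-vs-weekly mart QA check discussed above.
# Column names ("week", "channel", "spend") are hypothetical placeholders.
from decimal import Decimal
from collections import defaultdict

def rollup_daily(daily_rows):
    """Sum daily spend into (week, channel) buckets using Decimal arithmetic."""
    totals = defaultdict(Decimal)
    for row in daily_rows:
        totals[(row["week"], row["channel"])] += Decimal(row["spend"])
    return dict(totals)

def find_disparities(daily_rows, weekly_rows):
    """Return (week, channel) keys where the weekly mart disagrees with
    the daily mart rolled up to the week; empty list means exact match."""
    daily = rollup_daily(daily_rows)
    weekly = {(r["week"], r["channel"]): Decimal(r["spend"]) for r in weekly_rows}
    keys = set(daily) | set(weekly)
    return sorted(
        k for k in keys
        if daily.get(k, Decimal(0)) != weekly.get(k, Decimal(0))
    )
```

In practice this kind of check would run against query results from the warehouse rather than in-memory rows, but the comparison logic is the same.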

Me: I'd love to carve out some time next week to see what your base model is. And then I think the next steps for that model: there are a lot of use cases that the raw data will be split into. So we have an internal meeting next week to discuss how we actually want to break this thing up, because there are so many levels of aggregation. We should have some more direction in terms of where we want to take the model next, probably next Wednesday or Thursday. So keep that in mind: keep going with what you're doing, and then we'll follow up with some additional direction.

Them: All right, sounds good. So I think I'll need to put some time on your calendar for Wednesday or Thursday morning, so we can sync on that and have an idea of the next phase of this. But basically, I think with this phase, we're about to tie a bow on it and be done with it. Okay? Any questions beyond that?

Me: I think we're all good on this. I am curious, though. We kind of wanted to slow down and walk before we run and focus on the Magic Spoon backfill. But obviously the goal of this is actually to bring in a lot more brands and a higher volume of data. Do you have any concerns in terms of rate limits as we start that backfill process? I just wanted to make sure we could get Magic Spoon in cleanly before we expand the search, because I think that's where a lot of our issues came from. But, yeah, can you speak to that? Maybe Ashwini or Uttam?

Them: Yeah, I think even for Magic Spoon we might run into rate limits. So if you can do it in smaller chunks of data, I think that would be better. Maybe we could pull one product universe at a time instead of pulling all three of them.
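The chunking approach suggested here, one product universe at a time, in small date windows, could be sketched like this. The universe names and chunk size are hypothetical placeholders; the real values would come from the SPINS API limit docs.

```python
# Sketch of chunking the SPINS backfill to stay under API rate limits.
# Universe names and the weeks-per-call chunk size are hypothetical.
from datetime import date, timedelta

UNIVERSES = ["grocery", "natural", "convenience"]  # hypothetical placeholder names

def backfill_chunks(start: date, end: date, weeks_per_call: int = 4):
    """Yield (universe, chunk_start, chunk_end) requests, one product
    universe at a time, so no single pull asks for the full history."""
    for universe in UNIVERSES:
        cursor = start
        while cursor <= end:
            chunk_end = min(
                cursor + timedelta(weeks=weeks_per_call) - timedelta(days=1),
                end,
            )
            yield universe, cursor, chunk_end
            cursor = chunk_end + timedelta(days=1)
```

Pacing (e.g. sleeping between calls, or scheduling chunks across hours) would sit on top of this, once the actual limits are known.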

Me: Yeah, I think, Ashwini, that's probably on our side. We can work with that.

Them: Yeah, that's what I'm thinking. Yeah.

Me: Maybe you can send me the API limit docs. I think I reviewed them a little bit ago. And then, basically, I think we'll just have to schedule our exports to adhere to that. Once we start to land stuff for just Magic Spoon, it'll give us a lot more clarity on the volume and how much time it takes. Then we can do some envelope math to scale from there conservatively. And then, Michael, I'll basically try to give you an SLA on...

Them: Yeah.

Me: Like, okay, this pipeline is taking X amount of time to run, give or take, so we can reliably feel like we get refreshed data within Y timeframe. That's sort of the message I'm hoping we can get to you.

Them: Yeah, yeah, that. That would be awesome. Like, next week, so we have something we can communicate to leadership and say, like, hey.
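The "envelope math to scale from there conservatively" mentioned a couple of turns up could look something like the function below. Every number here is a hypothetical placeholder until the Magic Spoon backfill produces real observed run times.

```python
# Back-of-envelope SLA estimate sketched from the discussion above.
# All inputs are hypothetical placeholders, not measured figures.

def estimate_refresh_hours(observed_hours: float,
                           observed_brands: int,
                           target_brands: int,
                           safety_factor: float = 1.5) -> float:
    """Scale observed pipeline run time linearly with brand count, then pad
    with a safety factor so the SLA communicated upward is conservative."""
    per_brand = observed_hours / observed_brands
    return per_brand * target_brands * safety_factor
```

So if the Magic Spoon-only refresh were observed to take, say, 2 hours, a 10-brand estimate with the default 1.5x padding would come out to 30 hours, which is the kind of "X amount of time, give or take" figure the SLA message would carry.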

Me: Do they have goals or minimum SLAs that you're trying to meet, or is it sort of based on however fast we can do it?

Them: There's no hard deadline. It's a monthly process that we dive into, so really the time savings is for our team. Heather is manually extracting all this information, which takes about four days. So it's a continuous improvement rather than a hard requirement.

Me: Okay, okay. So it's four days around month close? Like right after month close, or leading up to it?

Them: Four days right after. Four days right after the data delivery.

Me: Okay.

Them: Exactly. So the last delivery was, what, 1/28? Is that right? So the February release date is kind of our next opportunity to align with. Okay.

Me: I see. Okay. Gotcha. Great.

Them: All right. Yeah. So we'll just get some context about that, do some documentation, and kind of understand it. We'll loop you in on what can be done and how we can realistically work within the rate limits so we can get all the data that's needed. Cool. Okay? Yeah. Excited to talk about that next week and get building some interesting marts for our team. That's awesome. Yeah, sounds great.

Them: Okay. Yeah. So, next steps and blockers. In terms of blockers, like we mentioned, the TDP/ACV thing is currently a blocker; we're just waiting for the SPINS team to respond to us. Basically, we'll keep following up through the email thread and see how they respond, and how we can use that information to push this over the line. In terms of next steps: we're going to have meetings with the outgoing data vendor next week, basically to get business context on what we need to do to take over, as well as to maintain things and keep them flowing. In line with that, we'll be documenting dbt questions as well as models we might need to look at and dissect, so we have context for the takeover. And while we still have them around, let's get the knowledge transfer going and use it to stabilize the ship, so that when they go, things don't start rocking. That will be part of the next steps, as well as the final item: the SOW renewal. At this point we've drafted that internally; it's currently being reviewed, but it should be shared today. The idea is we'll send it over, have the Magic Spoon team look it over, sign it, and send it back to us, and we'll be ready to go, basically.

Me: Yeah, Michael, maybe just some context. We spoke with Mary yesterday, I think, about the partner transition. So for the next two weeks, we want our focus to be hitting them with whatever questions we have about anything related to Prefect and any of the core Shopify data models they've been maintaining. We also got to snoop around in dbt and see a lot of the failures, and we heard a little bit about how alerting and failure triaging has been handled. Overall, in terms of platform stability, that's the first thing we typically go in and hit: making sure all jobs are running, and that we have a clear path for when jobs fail and a clear triage process. Typically, during business hours, we'll come back within two hours, at which point we can basically say, hey, this is going to take a long time to fix, or, I'm just going to push a fix. That's how we typically manage dbt for all of our clients. Ideally, when things fail, there aren't fake alerts; you don't want a cry-wolf situation. So we'll go through and make sure all the test and job failures that are happening are actually worth investigating, and then we have a path to bring those into Slack so we can see and triage them. That'll be the first thing. And then, from their team, we just want to get as much information as possible on things that were commonly failing, gotchas, parts of the repo that maybe they had a hard time maintaining, any context on that, so we can start to build out a roadmap of those fixes. I know there's also a lot of net-new modeling coming down the pipeline, so that'll just be, I think, a week-to-week discussion between us: hey, there are these platform improvements or tech debt versus this new ask, how should we allocate?
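The failure-triage flow described here, filtering dbt results down to failures actually worth investigating before they hit Slack, could be sketched as below. dbt writes a `run_results.json` artifact after each run; the mute list and project names here are hypothetical placeholders.

```python
# Sketch of filtering dbt's run_results.json down to actionable failures
# before forwarding them to Slack. Test names in MUTED are hypothetical
# examples of known-noisy tests that would otherwise cause alert fatigue.

MUTED = {"test.analytics.flaky_example_test"}

def actionable_failures(run_results: dict) -> list[str]:
    """Return unique_ids of failed/errored nodes that are not muted,
    so the alert channel does not become a cry-wolf situation."""
    return [
        r["unique_id"]
        for r in run_results.get("results", [])
        if r.get("status") in {"fail", "error"} and r["unique_id"] not in MUTED
    ]
```

In a real setup this would be fed the parsed contents of `target/run_results.json` after each dbt invocation, and anything returned would be posted to a Slack triage channel.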

Them: Yeah, that makes sense. I think a week-to-week discussion will be key to keeping the priorities in balance, I guess.

Me: And then even on Prefect, for example: I think, of everything in your stack, that's probably the one thing where we would say there are some better options.

Them: Yeah.

Me: But for us, I think we'll spend a little more time looking and seeing: okay, can we reliably maintain the system? Are there other options, basically same cost or less, for us to move orchestration to something we commonly use? Commonly we use Dagster; folks have used Airflow before. Really, what that allows us to do is ship pipelines, and observe them as they run, faster. So that's something, probably within the year, we'll try to do a spike on and share, so we can see: okay, if we want to recommend that, how long would it take, and what's the benefit? Apart from that, on the stack side, I feel like most of it is just going to be a lot of dbt cleanup work. We haven't heard anything about models taking too long to run or things not refreshing, so although we are going to look at all the job run times, it's probably less about that. It hasn't come up that we'd need to go into Redshift and create query groups and things like that. It seems like most of it is actually just pieces of logic that have been built that we maybe need to clean up or make more modular, and have really great documentation around.

Them: Yeah. Cool. Yeah, that makes sense. Kind of in line with what we've been thinking.

Me: Okay.

Them: At least in terms of the modeling piece. Yeah, I feel like if Prefect's been working, and that was what was recommended, I have a bias towards just leaning into the existing infrastructure. So if y'all can prove the worth.

Me: Yeah.

Them: We can have that discussion.

Me: No, there's definitely a difference. I don't think we care much if it's just annoying.

Them: But yeah.

Me: But if there is real worth, I think we'll share it. You know, that could be: hey, it saves us X amount of time, or, these things are brittle, we expect them to fail, and the time to mitigate will be high. So we'll put that together before we do anything there.

Them: Yeah, definitely. In terms of priority, that's lowest on my mind compared to the modeling and just the ways of working and getting through this transition. Okay? Yeah. So we've basically put down things around that in the SOW, also contact points, because, like we said, we'll be able to reach out basically every week. We'll do an end-of-week update as well as touch points within the week about things around modeling and priorities. So if we need to re-strategize, or priorities need to shift midweek, we can always touch base and realign the expectations for the week based off that midweek touch point. So yeah, sounds great. All right, then. Okay. Yeah. So I think that's it from us for the week in terms of updates. Do you have any questions or feedback on the interaction so far, or anything you'd like to mention so we can keep it top of mind for next week or going forward in general? No, I think you had a lot there with the SOW, and then, I know you mentioned having a list of questions you want to send to Orchard. If we could have that early on Tuesday too, so our team can take a look and we can provide it to them, just so we're all prepared going into the call. Sounds good. Yeah, that's currently in progress; we'll definitely get that over to you early next week. Okay, great. And then that call with them is Tuesday at 1pm Eastern, if that time works. Yes, it should.

Me: Works for me.

Them: 1:00pm works for me. Okay? Great. Yeah, I just need to see the invite too, so at least I have it on my calendar to reject other calls. Yeah. All right, then. Sounds good. So I shall see you next week on Tuesday. I know it's a long weekend, so have a great weekend and see you next week. Thank you. Thanks, guys.

Me: Thanks, everyone.

Them: Thanks so much, everyone.

Me: Talk to you soon.

Them: See ya.