Meeting Title: Javy Data Engineering Weekly Date: 2024-12-12 Meeting participants: Luke Daque, Nicolas Sucari, Aman Nagpal, Payas Parab, Robert Tseng


WEBVTT

1 00:04:55.730 00:04:57.019 Aman Nagpal: Hey, guys, how’s it going.

2 00:05:02.250 00:05:03.789 Nicolas Sucari: Hey, Aman! How are you?

3 00:05:04.390 00:05:05.509 Aman Nagpal: Good! How are you?

4 00:05:06.790 00:05:07.630 Nicolas Sucari: Doing, good.

5 00:05:09.540 00:05:13.299 Aman Nagpal: Is this a bad time for Utam? This call.

6 00:05:14.610 00:05:15.370 Nicolas Sucari: Over time.

7 00:05:15.580 00:05:16.190 Aman Nagpal: Yeah.

8 00:05:17.524 00:05:20.040 Nicolas Sucari: I’m not sure. Let me see his calendar.

9 00:05:21.100 00:05:24.199 Aman Nagpal: Cause it says he declined. I don't know if I

10 00:05:24.250 00:05:27.040 Aman Nagpal: I don’t think he was on the last one or 2 weeks either. Right.

11 00:05:27.920 00:05:28.630 Nicolas Sucari: Yeah.

12 00:05:29.380 00:05:30.539 Nicolas Sucari: Let me see.

13 00:05:30.540 00:05:36.080 Aman Nagpal: Schedule it. We can. I just need to know when works for you guys.

14 00:05:38.220 00:05:40.209 Aman Nagpal: But yeah, we can figure that out later.

15 00:05:59.730 00:06:00.120 Luke Daque: Cheese.

16 00:06:00.120 00:06:02.230 Nicolas Sucari: So Payas and Robert are joining.

17 00:06:02.990 00:06:03.510 Aman Nagpal: Cool.

18 00:06:59.350 00:07:00.509 Payas Parab: Hey? How are you guys.

19 00:07:01.780 00:07:02.720 Aman Nagpal: Hey? How’s it going.

20 00:07:02.920 00:07:04.090 Payas Parab: Good! How are you?

21 00:07:04.780 00:07:05.490 Aman Nagpal: Good.

22 00:07:06.150 00:07:08.740 Aman Nagpal: Finally got all the stuff from John. Huh?

23 00:07:09.080 00:07:11.690 Payas Parab: Yeah, yeah, I just I got it. And I,

24 00:07:11.710 00:07:33.569 Payas Parab: we're moving pretty quickly on getting that ticketed. Actually, I don't know if you saw my reply there, but we just clarified a couple of things, and I'm like, let's just ship that data model. And Nico and Ryan, I've already ticketed it for them. So I think we're gonna move as quickly as we can on that, to make sure Jared gets those outputs right. Like, there were some things where it was really nice we did that, because

25 00:07:33.850 00:07:54.340 Payas Parab: it seemed like there was some misunderstanding between different members of the team on like what goes into the cost. So like how those were built. So it’s it’s helpful that we have that now. So we’re gonna ship that as soon as we can to get that pipe through. We have a pretty clear idea of what needs to happen. We’ve done that legwork before. So it’s just like

26 00:07:54.450 00:08:12.619 Payas Parab: now it’s just doing it. And I sent like a diagram of like what we’re trying to do if that’s helpful. So you know, cause. Robert also mentioned to me, chatted with you a little bit, and we want to just ensure there’s good handoff documentation right on, like how things are built where data sits and stuff like that. So we wanted to emphasize that as well. With this one.

27 00:08:13.070 00:08:16.660 Aman Nagpal: Awesome. Appreciate it. Yeah, no, I think. I’m sure there’s definitely some

28 00:08:16.970 00:08:26.559 Aman Nagpal: gap between Jared and John’s understanding of it. So I think it’s good that everyone’s in the same channel. We can all, just, you know, finalize there.

29 00:08:28.490 00:08:35.080 Payas Parab: Awesome. Yeah. And I know Robert is joining. He might just be in another meeting.

30 00:08:40.049 00:08:41.030 Payas Parab: yeah.

31 00:08:41.760 00:09:07.560 Payas Parab: yeah, I know we have quite a lot of work streams kind of ongoing, but I think we have some good progress to report across the board between Recharge and Gorgias. I know Nico's working his ass off on saving those costs; we see him every day in our internal chat, like he's working on all the work streams of getting rid of this Fivetran nonsense and seeing how we can migrate. So I know we have a lot to update on. And then, I know as well, Robert mentioned to me this whole, like,

32 00:09:07.740 00:09:13.100 Payas Parab: Snowflake native stuff. Like, we did a big investigation into, like, you know, what is

33 00:09:13.570 00:09:21.789 Payas Parab: the issues, what are the issues, how we can do it, and kind of present some options. Yeah, like, the Snowflake

34 00:09:21.790 00:09:47.538 Payas Parab: native is more limited than we had hoped. It's kind of like, once we started pipelining the data, it's like, oh, well, you can't do this, you can't do that. And it doesn't mean that we can't do it, right? So what I want to emphasize is, it's not that we can't do it. It's that we already have a lot of that data in Amplitude. And now that we're just fixing up the Recharge, I can add that into there and just do it right. It's just a matter of the pros and cons, which I can lay out.

35 00:09:47.840 00:09:49.810 Payas Parab: yeah, I.

36 00:09:49.810 00:09:51.070 Aman Nagpal: Yeah, no, that makes sense.

37 00:09:52.063 00:10:01.179 Aman Nagpal: We can jump into those 2 today. We'll, I guess, we can give Robert a few more minutes. Maybe we can talk about the Fivetran stuff, Nico. So

38 00:10:01.640 00:10:06.260 Aman Nagpal: yeah, I wanted to discuss before we go ahead and like, spend the hours to do it right is.

39 00:10:06.270 00:10:13.909 Aman Nagpal: you know, what the pros and cons are? I know it seems like everyone's using Fivetran, and we did all this work to set Fivetran up.

40 00:10:14.791 00:10:18.010 Aman Nagpal: So now, you know, if we switch to Portable: (a)

41 00:10:18.070 00:10:28.089 Aman Nagpal: is it, you know, a significant amount of time to pivot to that? Now, (b) do we know what the row estimates are if we stick with Fivetran?

42 00:10:28.688 00:10:34.650 Aman Nagpal: And (c) what are, yeah, what's like the downsides of using Portable instead? I know it's not as

43 00:10:35.293 00:10:45.260 Aman Nagpal: I don't know if well-known is the word, but it's not used as much as Fivetran. But are we gonna have any trouble with Recharge, or this or that? Or is it gonna be as easy?

44 00:10:45.830 00:11:08.970 Nicolas Sucari: I mean, we're now testing Gorgias data in Portable. We're trying to load all of the data into Snowflake. The connection was pretty easy, and I think it's almost the same. It should be the same data that we are bringing in with Fivetran that we will be bringing in with Portable. We are building it in the same Snowflake environment, so we will be able to compare like both

45 00:11:08.970 00:11:15.766 Nicolas Sucari: of of the data sets. And however, we’re bringing the data. That’s the same. I don’t think there should be like

46 00:11:16.340 00:11:39.519 Nicolas Sucari: like an issue with that. I mean, we can then change the modeling to start consuming the data from the Portable import rather than the Fivetran one. I don't know how long that will take; maybe Luke here can help us. But it shouldn't be so difficult, because if the data coming in is the same as what we already have with Fivetran, it should be just changing like the

47 00:11:39.520 00:11:46.856 Nicolas Sucari: the source where we are getting it. We wouldn't be needing to do the modeling all over again. Right?
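The swap Nicolas describes, changing only where the raw data lands while the downstream modeling stays the same, can be sketched roughly like this (the schema and table names here are hypothetical, just to illustrate the idea):

```python
# Hypothetical sketch of the source swap: the staging query is identical for
# both ingestion tools; only the landing schema for the raw Gorgias data changes.
SOURCE_SCHEMAS = {
    "fivetran": "RAW.FIVETRAN_GORGIAS",  # assumed schema names, for illustration
    "portable": "RAW.PORTABLE_GORGIAS",
}

def staging_ticket_sql(source: str) -> str:
    """Render the same staging query against the chosen tool's landing schema."""
    return (
        "SELECT id, customer_id, created_datetime\n"
        f"FROM {SOURCE_SCHEMAS[source]}.TICKET"
    )
```

If both tools pull from the same Gorgias API endpoints, the two renderings should return identical rows, which is exactly the comparison the team plans to run in the shared Snowflake environment.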

48 00:11:47.500 00:11:50.320 Nicolas Sucari: so that’s that’s the 1st one. Then all of.

49 00:11:50.320 00:12:04.900 Aman Nagpal: Yeah, no, I just want to confirm. So is that the case, that we're just changing the source, and that the data we're pulling from Fivetran versus Portable is exactly one to one, and it's not going to be like a slight syntax difference that's gonna require a lot of work? Are we sure about that?

50 00:12:05.220 00:12:09.580 Nicolas Sucari: We? We are not 100% sure now. But yeah, go ahead, Luke. If you want to talk.

51 00:12:09.840 00:12:17.570 Luke Daque: Yeah, we are not sure yet. But if they're just coming from the same API endpoints, yeah, they should be the same.

52 00:12:17.570 00:12:18.149 Nicolas Sucari: Should be as.

53 00:12:18.445 00:12:22.290 Luke Daque: But yeah, that’s what we’re working on right now. We actually have a

54 00:12:22.680 00:12:28.929 Luke Daque: conversation going with portable as well so they can like, hasten up the process as well.

55 00:12:29.320 00:12:36.100 Aman Nagpal: Sounds good. And do you guys, is this, I'm assuming, the first time you've worked with Portable? Do you typically just do Fivetran for everyone else?

56 00:12:36.590 00:12:37.025 Luke Daque: Yeah.

57 00:12:38.470 00:12:39.069 Aman Nagpal: Got it.

58 00:12:39.820 00:12:41.029 Luke Daque: And I, yeah, but I think.

59 00:12:42.500 00:12:46.890 Luke Daque: yeah, maybe, Nico, you can share, like, the pricing model of Portable, too.

60 00:12:47.030 00:12:47.870 Luke Daque: I might as well.

61 00:12:47.870 00:12:57.719 Nicolas Sucari: I think we already shared the pricing model. I mean, it's cheaper because it's per connector, and it's not like per amount of data or rows,

62 00:12:57.720 00:13:22.070 Nicolas Sucari: as we talked. But yeah, I mean, we didn't use Portable yet. I don't know if Utam has already worked with Portable in the past; I'm gonna ask him. But we shouldn't find like a lot of issues. We're gonna try to confirm that once we have the Gorgias data in Snowflake and we can compare. But if, as Luke said, if we are getting the data from the same API or the same endpoint,

63 00:13:22.070 00:13:30.150 Nicolas Sucari: we shouldn’t be getting different results. So so the modeling is going to be like the same, we just need to change the data source.

64 00:13:30.479 00:13:46.970 Nicolas Sucari: What we can do also is, we already started with Gorgias; we're gonna start with Shopify, too. Yeah, I asked you for the API key and the password, Aman. So if you can share that, we can also try, like, during this week, the Shopify data, too.

65 00:13:47.090 00:13:53.589 Nicolas Sucari: And we can keep trying the other sources: Recharge, Okendo,

66 00:13:53.960 00:14:02.149 Nicolas Sucari: Amazon, and see if we have everything ready. And once we have everything in Portable, we can then turn down

67 00:14:02.450 00:14:09.249 Nicolas Sucari: Fivetran. Like, we are not gonna get rid of the data that we have from Fivetran until we know that everything is really in Portable. Okay?

68 00:14:09.760 00:14:14.179 Aman Nagpal: Right? No. So that that’s what I’m saying right is like, I don’t wanna go ahead. And

69 00:14:14.190 00:14:15.270 Aman Nagpal: you know, I know

70 00:14:15.350 00:14:23.789 Aman Nagpal: hours that you guys could be spending on something else. I wanna have all the information before we even spend 1 h to even test it. Right? So

71 00:14:24.102 00:14:36.480 Aman Nagpal: do we know, you know, how many rows we were able to cut out, what we can expect for Fivetran, before we make that decision? Because if you were to tell me, hey, look, we were able to knock down this many rows and this many tables, and we think

72 00:14:36.550 00:14:46.549 Aman Nagpal: Fivetran could be around, I don't know, 1,500 or 2 grand instead of 3 grand, then that might tell me maybe we stick with it, right? Or if we, what do you think?

73 00:14:46.550 00:15:12.699 Nicolas Sucari: Yeah, I don't think that will be the case. I think Fivetran, for now, it's gonna be a little bit higher, and as I said, I think each connector that we add will add like 500 bucks per month, for example; something around that is what we could see. It will depend, obviously, on the amount of data that we have on the tables, and what is the data that we wanna model then, so that you can get like more insights.

74 00:15:12.700 00:15:39.869 Nicolas Sucari: For example, now, Gorgias has a lot of rows in the message tables, and we are using that in order to get the macros and that kind of stuff. So deselecting those tables will not allow us to do some of those analyses, but those are like the largest tables that we are getting right now. So it's kind of that trade-off. We will need to understand which is like the

75 00:15:39.880 00:16:01.629 Nicolas Sucari: the amount of data that will be coming in. And if we add more data sources, that's gonna be higher. But yeah, I mean, we are actively looking at deselecting those tables that we are not using; when we are not using one, we just deselect that and get rid of it. However, I'm seeing that the Gorgias data is like huge, and it doesn't seem it's gonna be like a lot less, right?
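The pricing trade-off under discussion, roughly $500/month per connector on Fivetran's usage-based model versus a flat ~$1,500/month Portable plan covering 10 connectors, can be sketched as a back-of-the-envelope calculation. All numbers are the rough figures from this call, not actual quotes:

```python
import math

def fivetran_estimate(n_connectors: int, per_connector: float = 500) -> float:
    """Usage-driven rough estimate from the call: ~$500/month per connector."""
    return n_connectors * per_connector

def portable_estimate(n_connectors: int, plan_price: float = 1500,
                      connectors_per_plan: int = 10) -> float:
    """Flat-plan estimate: ~$1,500/month covers a block of 10 connectors."""
    return math.ceil(n_connectors / connectors_per_plan) * plan_price
```

On these assumptions, the flat plan wins once more than three connectors are active, which matches the direction the team is leaning.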

76 00:16:02.270 00:16:22.560 Aman Nagpal: No, that's pretty clear. So it sounds like, based on the amount of data we're pulling, and the fact that we keep adding more and more, maybe the unlimited approach is what we need to go with, and that's fine. What about the cons of Portable, right? Is it the same amount of uptime, same speed, same amount of, you know, frequency that it syncs? What about all that good stuff?

77 00:16:22.870 00:16:45.210 Nicolas Sucari: Yeah, the plan that we are looking into has like a daily sync, too. So that shouldn't be a problem. And yeah, we are still looking into all of the connectors to see if we can get all of the data. We will be testing: once we get the Gorgias data into Snowflake, we can test if we are bringing in the same data that we have from Fivetran. And then we can just start looking connector by connector and see which ones we can turn off from Fivetran.

78 00:16:45.750 00:16:48.729 Aman Nagpal: Right now, are we on a daily sync with Fivetran?

79 00:16:49.490 00:17:11.659 Nicolas Sucari: We stopped that daily sync in order to reduce the cost and understand where we are going. So it is not syncing anymore on a daily basis. We're gonna do manual syncs if we want on any of that data, if we need something updated. Until now; we paused that 2 days ago. But yeah, I mean, that's right, before that we were doing daily syncs with Fivetran. Yeah.

80 00:17:12.099 00:17:18.099 Aman Nagpal: Got you and you said it’s probably, I think, 1,500 a month for daily sync with 10 connectors

81 00:17:18.589 00:17:19.749 Aman Nagpal: for Portable. Yep.

82 00:17:19.779 00:17:22.649 Aman Nagpal: And what if we were to do more frequent syncs.

83 00:17:23.859 00:17:32.569 Nicolas Sucari: I think the plan lets us do like more frequent syncing. I'm gonna check that. But let me check back on that right now.

84 00:17:32.679 00:17:35.109 Nicolas Sucari: But yeah, I think that’s what we can do

85 00:17:36.131 00:17:43.859 Nicolas Sucari: daily syncs, for sure. And if we want more frequent syncing, maybe we'll need to scale up to a higher plan. But yeah, everything.

86 00:17:44.890 00:17:46.860 Aman Nagpal: Yeah, I think, that

87 00:17:46.940 00:18:04.580 Aman Nagpal: does work. I think you're right. I think we need to go in this direction. Otherwise we're gonna end up with a 5 to 10K bill just for Fivetran in a few months. And if we want to add more than 10, I mean, we could probably just add another 10 for 1,500 or less extra per month, if it ever came to that, right?

88 00:18:05.790 00:18:06.420 Nicolas Sucari: Yep.

89 00:18:06.860 00:18:19.459 Nicolas Sucari: so yeah, we can do more frequent syncing, sorry, like every 15 min. I think that's like the most frequent one that we can do, but I don't know if we need that amount of sync frequency.

90 00:18:20.340 00:18:23.229 Aman Nagpal: Yeah. How much would that run for ten connectors? Any idea?

91 00:18:25.500 00:18:32.100 Nicolas Sucari: No, no, it's in the same plan that we are talking about, the 1,500, yeah, per month.

92 00:18:32.100 00:18:37.090 Aman Nagpal: Is there any downside to syncing every 15 min instead of daily?

93 00:18:37.930 00:18:45.810 Nicolas Sucari: I mean, we're gonna be hitting like a lot of those endpoints, and if something fails, maybe it could get kind of blocked. But it depends on what is the

94 00:18:45.940 00:19:02.760 Nicolas Sucari: the frequency that you need the data to be updated in. Like, if you're gonna be checking the dashboards every hour, you need to do a sync every hour, and okay, let's do that. If not, I think we can do something like every 24 h, or 2 times a day, or something like that.

95 00:19:03.180 00:19:08.770 Aman Nagpal: As real time as possible is always better, so we know what we're looking at, right? But

96 00:19:09.356 00:19:27.470 Aman Nagpal: of course, you know, if there's a significant downside to doing it every 15 min, then we can make that decision then. So once we have that info, if you can let me know. But yeah, ideally, if we can do 15, then we do 15. And if we can just confirm that, you know, there are no other issues with using Portable as opposed to

97 00:19:27.470 00:19:42.430 Aman Nagpal: Fivetran. I know Fivetran probably has a ton more connectors. For anything that Portable doesn't have a quote-unquote ready-made connector for, that doesn't mean we can't get that data. That just means it'll be slightly more work to get that data, right?

98 00:19:43.480 00:20:04.140 Nicolas Sucari: Yeah, I don't know if Fivetran has more connectors than Portable; we need to see the entire list. Maybe yes, but they have like bigger kinds of tools that they are supporting, and Portable is kind of more looking into some niche and some other tools. But I think the ones that we are using right now, they are all in Portable, too.

99 00:20:05.090 00:20:12.819 Aman Nagpal: So, I mean, at the end of the day, even if they don't list the connector that we need, as long as we have an API key and API, we can get that data.

100 00:20:13.700 00:20:23.109 Nicolas Sucari: Yes, exactly. Yeah, I mean, and if we don't have one of those connectors, we can build the connection custom and, yeah, bring the data some other way. And that would be fine, too.

101 00:20:23.940 00:20:29.009 Aman Nagpal: That sounds good. And Portable is the best one that you guys found? There wasn't anything else that you think could work?

102 00:20:29.960 00:20:36.460 Nicolas Sucari: We've been looking at Portable and one other. But yeah, we were leaning more into Portable. Right now we want to test that and see what happens. Yeah,

103 00:20:36.500 00:20:39.220 Nicolas Sucari: the idea is to try to

104 00:20:39.620 00:20:51.260 Nicolas Sucari: start testing with these 2 connectors, Shopify and Gorgias. And if we see that everything is running okay, yeah, we can then start turning off the connectors on Fivetran. And yeah, we'll keep doing that migration.

105 00:20:51.810 00:21:09.340 Aman Nagpal: That sounds good to me. Thank you. Yeah, we can start testing that; I know you guys are already testing. Try to see if there's any downside to doing it every 15 min, and also just any potential disadvantages of using Portable instead of Fivetran, so we can make that final decision. But I think the plan makes sense.

106 00:21:10.660 00:21:21.549 Nicolas Sucari: Yeah, once we have more of an idea on the data, on how it's coming in, and we can do some testing on that, we'll let you know if we find any downside. And yeah, we can make the decision.

107 00:21:22.170 00:21:23.670 Aman Nagpal: Sounds good. Thank you.

108 00:21:24.620 00:21:28.389 Aman Nagpal: Anything else from you, Nico, or should we jump into the Amplitude conversation?

109 00:21:28.390 00:21:53.969 Nicolas Sucari: No, just, yeah, I'm following up, like, literally every day on those 2 support tickets that we sent to Fivetran. One, they are answering that the finance team has that request and they are working on it. And the other one, I'm pushing to get any response. So yeah, I will be keeping that in mind every day, trying to ask for those refunds. And yeah, if I don't get any answer, I will call again, or

110 00:21:54.170 00:21:57.909 Nicolas Sucari: ping their rep, in order to help us find that solution. Okay?

111 00:21:58.210 00:21:59.449 Aman Nagpal: Appreciate it. Thank you.

112 00:22:00.860 00:22:06.560 Nicolas Sucari: Cool. Nope, nothing else from my side, I think.

113 00:22:07.530 00:22:27.629 Payas Parab: So we want to talk a little bit about the Snowflake–Amplitude integration, right? Kind of understanding what the pros and cons are. Robert and I had some time to review and look at it in more detail. I think, Robert, you even attended, like, demos with the Amplitude team, right, on where they were at with the Snowflake native?

114 00:22:27.955 00:22:31.669 Payas Parab: So we we do have like a perspective on this, which is that like.

115 00:22:32.260 00:22:40.430 Payas Parab: like, we can move things into Snowflake and create those Amplitude visualizations. But the way it's gonna work right now is, like,

116 00:22:40.460 00:23:00.180 Payas Parab: it's effectively, like, if you really like the UI of Amplitude, that's essentially what we're really building into. There are a few capabilities that are really, really limited. And I can send you an article. It's a bit long, but it's basically what they don't allow in Snowflake native, and that list is way bigger than we thought. It was like one of those help articles.

117 00:23:00.513 00:23:15.840 Payas Parab: The reason it's like very limited is that we basically tell them what identifier to use, right? What identifier to use to identify a customer. And one of the things we've noticed in this whole data warehousing project, right, is that, like,

118 00:23:15.990 00:23:23.940 Payas Parab: with Shopify and, like, Amazon, I don't know how Amazon tracks it, I gotta double-check, but the Shopify

119 00:23:24.050 00:23:46.129 Payas Parab: tracking of repeat customers is just way better than anything we already had in Amplitude, and things like that. Like, when we shifted the logic, and this is a while ago, with Brian, the figuring out of who's a repeat customer, Shopify does a way better job of that than anything we had in Amplitude. So when we ran, like, analysis, we were like, oh, we can see that

120 00:23:46.820 00:24:12.559 Payas Parab: Shopify is capturing who's a repeat customer better than anything in Amplitude, and it's by a substantial margin. Where you'll see, like, the percentages of new versus returning, we were able to tie out to Jared's numbers from the Shopify admin in Snowflake; that tie-out is nowhere close in Amplitude. And I'd sent a detailed analysis on this to Robert. So when we do the customer tracking, like, tracking a customer,

121 00:24:12.660 00:24:22.560 Payas Parab: we're gonna want to connect Gorgias and Recharge and all of that. Now, can I load everything into Amplitude and have it use the Snowflake one that we know to be better?

122 00:24:22.590 00:24:31.240 Payas Parab: Yes, I can tell it to do that right. But at the end of the day, then, like all the visualizations, are just limited to whatever they allow in there. And, for example, like

123 00:24:31.500 00:24:41.089 Payas Parab: like, there's basically, Amplitude has, like, a completely new version of Amplitude, essentially, that they've created to be able to connect to and interact with data in Snowflake.

124 00:24:41.490 00:25:03.930 Payas Parab: And we know the Snowflake data to be more accurate, especially on returning versus new. So we want to use that; we can tell it to use that. But then, once we tell it to use that, we're using this Snowflake-native BI, like, visualization tool. And then that thing is just limited, and, frankly, for very arbitrary reasons, like we can't even understand why.

125 00:25:03.930 00:25:16.180 Payas Parab: But it just is like, and I can send you the article where there’s like, we just don’t have functionality for this yet. I I don’t know exactly why. I don’t know, Robert. If the partner calls gave you any sense, but they’re working on it over time. I don’t know.

126 00:25:20.470 00:25:28.366 Robert Tseng: Yeah, I mean, I've had multiple support calls with their CS people now, and our AE, and whatever. I think

127 00:25:30.340 00:25:32.690 Robert Tseng: I mean, I think it’s hard to like.

128 00:25:33.890 00:25:37.330 Robert Tseng: Say, like what’s on their roadmap or whatnot. They just

129 00:25:38.620 00:25:57.480 Robert Tseng: they just thought, it's... the warehouse native is not designed for real-time analysis. Their thought process with releasing it as is, right now, is just to provide quicker access to newer data sets that they don't have native connectors to, or that their existing connectors aren't great with. So

130 00:25:58.028 00:26:06.979 Robert Tseng: it's not meant to be a replacement for their original Amplitude product. Which is why Zack is like, yeah, you should have 2 projects.

131 00:26:08.730 00:26:12.710 Robert Tseng: Yeah. I mean, I think the fact that they we we’ve got

132 00:26:13.030 00:26:26.750 Robert Tseng: Payas, last time I was with him, we went through, like, a funnel report build-out. We can do it within one model, but you can't do cross-model funnel reporting, which, like, kind of defeats the purpose of having it.

133 00:26:27.200 00:26:43.699 Robert Tseng: Yeah. So, whatever you saw in their demo, it's just limited to: you need to have everything in one single table, and then Amplitude will read off of that. But that doesn't make sense. We would never model that data that way, by throwing everything into one table.

134 00:26:46.490 00:26:53.070 Aman Nagpal: yeah, and I emailed back, too, and showed him that, hey, look, this is the doc you and I were looking at.

135 00:26:53.220 00:27:04.299 Aman Nagpal: But your page on WNA has 2 videos using funnel charts. So he told me, yes, funnel charts are possible. And I mentioned the real-time thing. And then he got back to me again, and he said,

136 00:27:05.124 00:27:18.040 Aman Nagpal: you know, you can access funnels in both versions, but the exact same charts with all features might not be available. And then you said you can't view events in exact order. So I guess what you're saying is, you know, different models, you can't do.

137 00:27:18.630 00:27:19.030 Aman Nagpal: yeah.

138 00:27:19.030 00:27:23.460 Robert Tseng: If it's in the same model, you can do it. I think that's what we went back and tested. Yeah.

139 00:27:24.140 00:27:27.979 Aman Nagpal: So I guess yeah, I mean, that’s disappointing. Cause that was a

140 00:27:28.060 00:27:31.969 Aman Nagpal: I guess what I had envisioned. But I guess

141 00:27:32.170 00:27:42.559 Aman Nagpal: what's the solution, or, you know, what can we do instead? Right? So we know what we need to be able to do. You know, as a base example,

142 00:27:42.580 00:27:44.679 Aman Nagpal: we need to be able to

143 00:27:44.690 00:27:56.359 Aman Nagpal: do a funnel chart. You know, I'm gonna give you one report where we can see a user place an order. Or even, we'll go back further: a user went to a landing page. What specific landing page?

144 00:27:56.380 00:28:00.339 Aman Nagpal: then what percent of them place an order, what percent of them

145 00:28:00.490 00:28:05.630 Aman Nagpal: add to cart. And that's all, you know, performance marketing data, then Shopify data, then,

146 00:28:06.790 00:28:10.739 Aman Nagpal: they made a ticket in Gorgias, so that's Gorgias data,

147 00:28:10.770 00:28:17.183 Aman Nagpal: and they requested to cancel. And then somewhere in the properties of one of those, you know, is

148 00:28:17.560 00:28:21.540 Aman Nagpal: 3-bottle people. We want to look at just 3-bottle buyers, right? So that's a property.

149 00:28:21.630 00:28:29.809 Aman Nagpal: What tool, or what visualization tool, or, you know, how do we go about doing that? What, I guess, is the best path forward?

150 00:28:30.140 00:28:42.659 Payas Parab: So, everything you described right now, because we have it in Snowflake, we can build all of that in Metabase, where I'm currently building the COGS dashboard. So I'm moving that Amplitude view, anyway, to Metabase. We're just fixing some

151 00:28:42.680 00:28:50.500 Payas Parab: logic stuff on there, the one that we showed you last week. We just finished, actually, Ryan, all the tickets on the Recharge and Gorgias stuff, right?

152 00:28:50.510 00:28:51.950 Payas Parab: I think we’re almost done.

153 00:28:52.170 00:28:52.525 Luke Daque: Yeah.

154 00:28:53.090 00:28:54.970 Payas Parab: Okay, so

155 00:28:55.190 00:29:03.900 Payas Parab: we can move that into Metabase. Every aspect of the funnel you described, stitching that together with customer ID in Metabase, using

156 00:29:04.200 00:29:15.309 Payas Parab: all this data we have in Snowflake. Everything is there, except the one thing is the performance marketing data, right? So for that aspect, like, bridging the gap between those, we'll need to get a little creative.

157 00:29:15.430 00:29:18.369 Payas Parab: That’s the only component that I could see. That’s like

158 00:29:18.680 00:29:24.849 Payas Parab: missing. And so the solution is like one is like, can we get the performance marketing data

159 00:29:25.330 00:29:32.959 Payas Parab: into Snowflake, right? That's one approach. Or, two, take all that Snowflake data, figure out where it's connecting from, and then

160 00:29:33.210 00:29:35.909 Payas Parab: connect. See? If, like.

161 00:29:36.680 00:29:45.360 Aman Nagpal: I think that's the best solution. Robert, do you have any ideas on that one, like, if we're trying to connect that performance marketing data? I'm just less familiar with that than you are.

162 00:29:45.867 00:30:01.220 Aman Nagpal: So what you're saying is, it's easy to make that chart. It's just not going to be drag-and-drop in Amplitude. Someone's just gonna have to write a little bit of SQL, a couple of lines, and we can get that into Metabase, and it's easy, but just not drag-and-drop. Is that where we're at?

163 00:30:01.220 00:30:03.670 Payas Parab: That that’s a pretty good assessment. Correct? Yeah.
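The cross-source funnel Aman outlined, site events, then Shopify orders, then Gorgias tickets, stitched on customer ID and filtered to a property like 3-bottle buyers, amounts to a sequential distinct-customer count. A minimal sketch of that logic (step names, sample rows, and the property key are all made up for illustration; in practice this would be a SQL question in Metabase over the Snowflake tables):

```python
# Ordered funnel steps; each step's events may come from a different source
# table (site events, Shopify, Gorgias), already unioned into one list.
FUNNEL_STEPS = ["viewed_landing_page", "order_created", "ticket_created", "cancel_requested"]

def funnel_counts(events, keep=None):
    """Count distinct customers surviving each step, in order.

    `events` is a list of (customer_id, step, properties) tuples. A customer
    counts toward a step only if they also hit every earlier step; `keep` is
    an optional per-event property filter (e.g. 3-bottle buyers only).
    """
    reached = None
    counts = []
    for step in FUNNEL_STEPS:
        hit = {cid for cid, s, props in events
               if s == step and (keep is None or keep(props))}
        reached = hit if reached is None else reached & hit
        counts.append(len(reached))
    return counts
```

Applying the filter per event mirrors filtering on an event property the way Amplitude would; a real Metabase version would express each step as a CTE joined on customer_id.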

164 00:30:03.670 00:30:06.519 Aman Nagpal: Okay, I got that and

165 00:30:06.640 00:30:20.979 Aman Nagpal: the performance marketing data. Yeah, Robert, let me know what you think. Right now, we're firing a bunch of events in our JavaScript that go to Amplitude. But it sounds like, as long as we can figure out a way to get that data

166 00:30:21.330 00:30:24.460 Aman Nagpal: to Snowflake, then that part is possible also.

167 00:30:26.240 00:30:36.720 Robert Tseng: Yeah, I mean, with the performance marketing data, yeah. Because right now, well, the only thing that we've seen is the Northbeam stuff that's in Amplitude.

168 00:30:36.830 00:30:39.739 Robert Tseng: I mean, the idea is to break out like

169 00:30:39.780 00:30:44.930 Robert Tseng: cause what Northbeam feeds into Amplitude already is, like, their own

170 00:30:45.560 00:30:56.289 Robert Tseng: they've already done all the attribution calculations. They've layered their own model on top of it. So you're really just pulling in a couple of fields. The idea is that we could get those,

171 00:30:56.640 00:31:09.760 Robert Tseng: we could get the channel-level data into Snowflake. We can model out just a performance marketing table. I can give you a snapshot of what that looks like. I think it's pretty standard across all industries,

172 00:31:10.450 00:31:35.910 Robert Tseng: and once we have that, we can assign our own attribution. You can still take that clean table and feed it into Northbeam. I don't know how exactly you're connecting it to Northbeam; maybe they have integrations with all the different UA stuff that you already do. But yeah, once again, it's just trying to clean the data and have a clean source of data at the source before we push it in.

173 00:31:36.245 00:31:46.579 Robert Tseng: we’re not trying to replace Northeas. I think their modeling. Whatever is they can, they can continue with that. We don’t. We don’t need to. We don’t need to replicate that in in something.

174 00:31:47.100 00:31:51.120 Aman Nagpal: Yeah, I think because we’ve been working with so many different services.

175 00:31:51.440 00:32:06.859 Aman Nagpal: I should probably make clear what we use Northbeam for, right? The media buying team looks at the Northbeam dashboard for a lot of things. They see how the ads are doing. They see what sales Northbeam is attributing to each specific

176 00:32:07.190 00:32:15.450 Aman Nagpal: ad, let's say, or ad set, compared to what Facebook or TikTok is attributing to those ads, right? And then they make decisions based on that.

177 00:32:16.410 00:32:19.700 Aman Nagpal: The only thing we're bringing in from

178 00:32:19.760 00:32:43.580 Aman Nagpal: Northbeam to Amplitude is really just the spend, and then we make our CAC charts. So that is a financial thing. We already figured out that financial data is going to live in Metabase or whatever other visualization tool it is, right, not necessarily Amplitude. And in Amplitude, what we'll do is make a funnel chart, and I think it'll be good to just give you the example of what I'm thinking of. So our funnel charts are typically: viewed listicle,

179 00:32:44.140 00:32:51.080 Aman Nagpal: viewed landing page, add to cart, initiate checkout,

180 00:32:52.050 00:32:58.109 Aman Nagpal: and then order created. So I think this will help clear things up, so we're on the same page.

181 00:32:58.551 00:33:14.750 Aman Nagpal: This viewed listicle, what I'll do is I'll select the offer name. Sorry, listicle name. So I know that this is a property that we're sending, where, you know, this is our concentrates listicle. So I'll look at the data just for that property. Landing page will be LP 43.

182 00:33:18.330 00:33:20.089 Aman Nagpal: I’ll do contains

183 00:33:20.110 00:33:29.429 Aman Nagpal: Add to cart, that's fine. Initiate checkout, that's fine. And for order created, I'll do the same thing: I'll do order type, subscription and one-time only, so we're not looking at any renewals

184 00:33:30.091 00:33:35.720 Aman Nagpal: accidentally being displayed in there. And then I'll do the offer name again as LP 43.

185 00:33:38.520 00:33:40.060 Aman Nagpal: Let’s see this.

186 00:33:40.860 00:33:44.622 Aman Nagpal: So this doesn't include the example I gave with

187 00:33:45.650 00:33:49.620 Aman Nagpal: Gorgias and all that, but you know, we can get a little bit into that also. I

188 00:33:49.830 00:33:59.269 Aman Nagpal: want to take this opportunity to take out Jumbleberry as an affiliate program. They really skew our stats. So typically we take out, you know, Jumbleberry, Snapchat,

189 00:34:00.286 00:34:06.419 Aman Nagpal: AppLovin, and just make sure we don't include those, right? So on a daily basis,

190 00:34:06.530 00:34:10.550 Aman Nagpal: we're looking at charts like this. Now, where this comes from is:

191 00:34:11.040 00:34:21.159 Aman Nagpal: this, this, this, and this. All the first 4 are literally just us firing a line of JavaScript code from our landing page, right? Which I think,

192 00:34:21.170 00:34:28.590 Aman Nagpal: whether or not we need another tool to do that for Snowflake, or we can fire it somehow directly into Snowflake, this, I don't think,

193 00:34:28.600 00:34:51.350 Aman Nagpal: you know, correct me if I'm wrong, I don't think this is the hard part, right? And the order created is coming from Shopify, that Shopify flow that we had. 5 or 15 minutes after the order is created, we do an HTTP request to the Amplitude API and send that event here. But now what we're doing with Snowflake is just a continuous sync through Fivetran.

194 00:34:51.350 00:35:01.590 Aman Nagpal: But yeah, I mean, these 4 events are literally just, quote unquote, events that were fired from JavaScript. Couldn't we just send that into Snowflake and use all of that data?

195 00:35:03.480 00:35:07.950 Robert Tseng: Yeah. So I guess, question here on this funnel: at what point are you identifying users?

196 00:35:08.400 00:35:19.010 Aman Nagpal: So that's the missing piece here, right? Amplitude is identifying them as specific users and tying them together via their email, right, or their device ID.

197 00:35:19.550 00:35:36.340 Aman Nagpal: All the other events in our, quote unquote, project, organization, it's tying together by that user. So the first part is easy. It looks like this is the part where, can we use something else, or is there another way to do what Amplitude is doing here

198 00:35:36.580 00:35:39.080 Aman Nagpal: for Snowflake? I guess that's the hard part, right?

199 00:35:39.670 00:35:51.108 Robert Tseng: Yeah, I don't wanna reproduce the identity resolution thing. Whatever we do, at best it'll just replicate what Amplitude does, and it's just so much work to do it.

200 00:35:52.060 00:36:16.150 Robert Tseng: Yeah, I guess, I mean, this is a great view of how the performance marketing reporting kind of looks for me. So for each of these events: yes, most of these are just JavaScript-fired events. You can instrument them many different ways. I don't know if you're using Google Tag Manager or anything. I guess you're not, because you're using code. But yeah, you could send those events into,

201 00:36:16.310 00:36:19.659 Robert Tseng: into Snowflake. We would end up just

202 00:36:19.680 00:36:23.230 Robert Tseng: kind of capturing an event stream of just this data.

203 00:36:23.877 00:36:31.939 Robert Tseng: That's like the pre-checkout flow, or pre- whatever the actual conversion is,

204 00:36:32.880 00:36:40.549 Robert Tseng: before the point where users actually start to purchase. All that stuff gets fired. Because the order created,

205 00:36:40.680 00:36:47.749 Robert Tseng: I think, what, initiated checkout or added to cart is Shopify, then initiated checkout? So like the first 2.

206 00:36:47.750 00:36:50.779 Aman Nagpal: JavaScript. These are coming straight from our code.

207 00:36:51.950 00:36:55.970 Robert Tseng: Okay, order created is where the Shopify stuff starts. Great.

208 00:36:55.970 00:36:56.480 Aman Nagpal: Yep.

209 00:36:56.740 00:37:00.946 Robert Tseng: Yeah, okay. So yeah, I mean, those first 4 events,

210 00:37:01.360 00:37:05.539 Robert Tseng: yeah, we could just have that event stream in Snowflake.

211 00:37:06.180 00:37:14.000 Robert Tseng: I mean, I guess, on the identity resolution side,

212 00:37:14.220 00:37:32.629 Robert Tseng: I would prefer to just use what Amplitude has already been doing here. I think the only change that needs to be made is you could send those events first into Snowflake. And if we stored everything in a single, I mean, I don't know if we want a single event stream there. I guess I think we just need to

213 00:37:32.710 00:37:46.599 Robert Tseng: see. I don't think I have an answer on this call on what part of this flow should be moving to Snowflake. I think this is a good example that maybe you should just send us, and we should discuss internally.

214 00:37:46.800 00:37:56.240 Robert Tseng: Like, let's architect a data flow for this specific instance: what needs to be moved from the existing Amplitude setup

215 00:37:56.524 00:38:03.120 Robert Tseng: into Snowflake. And yeah, I think this is a good one for us to discuss.

216 00:38:04.230 00:38:04.890 Aman Nagpal: Yeah,

217 00:38:05.770 00:38:09.260 Aman Nagpal: So the reason I said that is, ideally, everything

218 00:38:09.280 00:38:19.629 Aman Nagpal: would happen there, right? And speaking to the identity stuff, you know, I'm sure there's stuff we would be missing. But I wonder, the reason it's able to

219 00:38:20.420 00:38:22.719 Aman Nagpal: tie up each user.

220 00:38:23.360 00:38:31.489 Aman Nagpal: So every time we fire an event, we're sending an email if we have it, or, if we don't have an email, an arbitrary randomized device ID

221 00:38:31.500 00:38:37.319 Aman Nagpal: or UUID that we create and send. When we send it to Amplitude, we send either or both.
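The stitching key Aman describes, email when available, otherwise a client-generated randomized ID, can be sketched in a few lines. This is only a guess at the logic; as the speakers note, Amplitude's actual identity resolution may do more than this:

```python
import uuid

def identity_key(email=None, device_id=None):
    """Return the key used to tie a user's events together.

    Mirrors the scheme described on the call: prefer the email when we
    have it, otherwise fall back to a randomized device ID / UUID that
    the client generated once and keeps sending with every event.
    """
    if email:
        # Normalize so "A@B.com" and "a@b.com" stitch to the same user.
        return ("email", email.strip().lower())
    if device_id is None:
        # In practice the client would persist this, not regenerate it.
        device_id = str(uuid.uuid4())
    return ("device_id", device_id)
```

Grouping warehouse events by this key would approximate the per-user view Amplitude shows, within the limits acknowledged in the conversation.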

222 00:38:38.340 00:38:43.239 Aman Nagpal: And I would hope that's all Amplitude is using to tie the user together. But

223 00:38:45.280 00:38:52.239 Aman Nagpal: you know, it might be more than that. But I think that's at least a good starting point. We can see if that's enough or not. In terms of

224 00:38:52.400 00:39:06.160 Aman Nagpal: what goes into Snowflake, right: what I initially thought was gonna happen is everything would be in the same project, right? So let's say I have this example. I get rid of,

225 00:39:06.530 00:39:07.510 Aman Nagpal: I don’t know.

226 00:39:08.780 00:39:15.900 Aman Nagpal: I keep viewed landing page, I do order created, and then I filter by

227 00:39:15.940 00:39:20.159 Aman Nagpal: people who only buy 3 bottles. So the product quantity is 3.

228 00:39:22.230 00:39:40.330 Robert Tseng: Can you share the link and not just the screenshot, so I can click in here? I want to click into the users that are identified at the first step. I want to know how many of them are actually identified. I want to filter out the ones that are already existing customers, to see what Amplitude has identified before they even made a purchase.

229 00:39:40.630 00:39:53.189 Aman Nagpal: Yeah, gotcha. Yeah, I'll send you the link as well. So this other example I spoke about earlier is: how many people view a landing page, they create an order, let's say only 3-bottle buyers, and then the next event would be

230 00:39:53.580 00:39:56.740 Aman Nagpal: Gorgias ticket created.

231 00:39:57.880 00:40:01.960 Aman Nagpal: And then the last event would be from Recharge. It would be

232 00:40:02.546 00:40:12.360 Aman Nagpal: subscription canceled on Recharge, and then we have an entire funnel. So to do all that, it sounds like we would need all of that data in Snowflake, right?
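Stitching this cross-source journey (landing page, Shopify order, Gorgias ticket, Recharge cancellation) is what a SQL join in Snowflake would do. A rough Python sketch of the same logic, with all source names, event names, and row shapes hypothetical:

```python
def stitch_cross_source_funnel(streams, steps):
    """Stitch events from multiple sources into one per-user journey.

    `streams` maps a source name (e.g. "js", "shopify", "gorgias",
    "recharge") to rows of (email, event_name, timestamp). Returns the
    set of emails that completed every step in order across sources.
    """
    # Merge all sources into one per-user, time-ordered event list.
    per_user = {}
    for rows in streams.values():
        for email, name, ts in rows:
            per_user.setdefault(email, []).append((ts, name))

    completed = set()
    for email, evts in per_user.items():
        evts.sort()
        names = [n for _, n in evts]
        # Walk the funnel steps in order through this user's events.
        i = 0
        for n in names:
            if i < len(steps) and n == steps[i]:
                i += 1
        if i == len(steps):
            completed.add(email)
    return completed
```

In the warehouse this would be joins across the Shopify, Gorgias, and Recharge models on the user key, which is the point Robert makes later: easy in SQL, not supported as a drag-and-drop funnel in warehouse-native Amplitude.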

233 00:40:13.730 00:40:26.550 Robert Tseng: Yeah, I mean, all these, like the Shopify and Recharge events, we have already. But it's this JavaScript stuff, the landing page and pre-checkout events, that we don't have in Snowflake yet.

234 00:40:27.970 00:40:34.900 Aman Nagpal: Yeah. So my point being, as long as we get the performance stuff somehow into Snowflake, we can figure out the identity piece.

235 00:40:34.990 00:40:40.526 Aman Nagpal: If we send it the same way we're sending it to Amplitude, with an email and/or randomized ID, into Snowflake,

236 00:40:40.840 00:40:45.079 Aman Nagpal: all of that is workable. Then we should be able to get

237 00:40:46.030 00:40:48.910 Aman Nagpal: what we're looking for. Or are there still other issues?

238 00:40:50.960 00:40:51.610 Payas Parab: Then you'd get

239 00:40:52.050 00:40:59.660 Payas Parab: what you're looking for. The only question I do have is more from your business perspective: what are you guys hoping to achieve with

240 00:40:59.850 00:41:10.339 Payas Parab: understanding the performance marketing all the way down to the refund? Is there some belief in the business that there's a relationship there, that, like,

241 00:41:10.350 00:41:27.270 Payas Parab: you know, what marketing materials we present yield a certain, higher refund rate? I'm just curious, because, yes, that whole journey, we can stitch it, right? But I'm more just curious what the business case, or business question, is that you guys are trying to answer with that.

242 00:41:27.710 00:41:54.050 Aman Nagpal: So maybe the example I gave, this last one, isn't the best. But another way to look at it is: we need the first chart I gave you, so all the performance marketing tied to the correct Shopify order data. And right now the most correct Shopify order data is in Snowflake. That's why you guys did all this work, to get us the correct data versus what we're firing off from our flow right now, right? And then, separately, we would want a chart with

243 00:41:54.970 00:41:58.480 Aman Nagpal: order data, what the user bought, to

244 00:41:58.640 00:42:02.680 Aman Nagpal: Gorgias ticket to Recharge, and that might be a separate chart,

245 00:42:02.780 00:42:05.780 Aman Nagpal: but it’s all using the same, you know.

246 00:42:05.780 00:42:06.210 Payas Parab: Yeah.

247 00:42:06.210 00:42:08.679 Aman Nagpal: pieces of data. So to have it,

248 00:42:08.800 00:42:15.099 Aman Nagpal: if we were to use the first one in the first Amplitude project, we're not then using the correct order data; we're using our old order data.

249 00:42:15.710 00:42:16.859 Payas Parab: I see. Okay.

250 00:42:18.160 00:42:30.359 Robert Tseng: Yeah, I guess what surprises me about this setup, which is why I was kind of fumbling and not sure at first, is the fact that everything pre-checkout is custom, that you send it to Amplitude yourselves,

251 00:42:30.360 00:42:41.820 Robert Tseng: because normally I would expect Amplitude to basically function like Google Analytics, where they have their own auto-tracked events. So if you have the one JS snippet of Amplitude

252 00:42:41.820 00:43:03.840 Robert Tseng: in your code already, when a user lands on your page, it'll start to track them from there. But it seems like you've overridden that with your own custom JS stuff. I don't know if you're capturing more or less than what Amplitude would auto-track. But yeah, that ends up confusing me on where the identification actually happens. So that's why I was hung up there.

253 00:43:04.070 00:43:24.009 Aman Nagpal: No, for sure. I didn't set this up initially; I was kind of brought into the setup. My assumption is, with the snippet Amplitude gives us, if you're saying there are the base events versus us using custom ones, I would guess it's just a matter of the name of the event: add to cart versus our custom event name. I would think the snippet is the same.

254 00:43:24.580 00:43:28.130 Aman Nagpal: So we're just naming the events differently, I believe.

255 00:43:30.810 00:43:33.894 Robert Tseng: Okay. Well, I mean, we don't know for sure.

256 00:43:34.280 00:43:35.350 Aman Nagpal: We don’t know for sure.

257 00:43:38.000 00:43:41.459 Aman Nagpal: You're saying it functions like Analytics, where they have,

258 00:43:41.500 00:43:47.439 Aman Nagpal: you know, the preset stuff. Like I said, I don't know if their snippets are the same or not. But

259 00:43:47.923 00:43:50.969 Aman Nagpal: yeah, this is how we have it set up now.

260 00:43:51.350 00:43:52.370 Robert Tseng: Yeah.

261 00:43:52.820 00:44:01.540 Aman Nagpal: So it sounds like the questions to answer, the next steps, are to figure out:

262 00:44:01.780 00:44:15.300 Aman Nagpal: the identity piece. And sending this stuff into Snowflake, for us, is easy. Whatever we're sending to Amplitude on the code side, we would just make a different snippet and send to Snowflake. The rest of it is the hard part, right, for you guys to figure out.
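The "different snippet" idea, firing the same payload at both Amplitude and a Snowflake ingestion path, might look roughly like this on the code side. All field names and sinks here are illustrative, not a real Amplitude or Snowflake API:

```python
import json
import time
import uuid

def build_event(name, email=None, device_id=None, **props):
    """Build one event payload that would be sent identically to both
    destinations. Field names are invented for illustration, not the
    real Amplitude event schema."""
    return {
        "event_name": name,
        "email": email,
        "device_id": device_id or str(uuid.uuid4()),
        "timestamp": time.time(),
        "properties": props,
    }

def send_everywhere(event, sinks):
    """Fan the same serialized payload out to each sink (in practice:
    the Amplitude HTTP API and a Snowflake loading path). `sinks` is a
    list of callables that each accept the JSON string."""
    payload = json.dumps(event)
    for sink in sinks:
        sink(payload)
    return payload
```

The point of the dual write is that identity fields (email, device ID) arrive in the warehouse in exactly the same shape Amplitude already receives, so the stitching discussed earlier stays consistent.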

263 00:44:16.700 00:44:17.970 Aman Nagpal: And

264 00:44:18.750 00:44:29.660 Aman Nagpal: yeah, is the identity piece good? And then it sounds like, like we said, it's probably not going to be easy to do in Amplitude with the drag and drop. It may come to it where

265 00:44:29.930 00:44:39.130 Aman Nagpal: it has to be in Metabase or something else, and it requires some SQL to make each chart, which I think we can work around. That should be okay.

266 00:44:39.549 00:44:43.899 Aman Nagpal: Not the best scenario, but are we all kind of on the same page with that?

267 00:44:44.450 00:45:07.170 Robert Tseng: Yeah, I mean, I think to me the threshold we're crossing is: if we choose to move these events into Snowflake, then Snowflake will have everything, and there's no reason to put any events directly into Amplitude anymore. And that's a bridge we have to make sure we internally even want to cross, because I've been biased up until this point, saying,

268 00:45:07.190 00:45:20.010 Robert Tseng: yeah, we should keep some of these things going into Amplitude. Let them handle identity resolution. Let you have the drag-and-drop funnel reporting, so we don't have your future analysts, or our team, kind of

269 00:45:20.020 00:45:36.290 Robert Tseng: inundated with, "can you build this funnel with steps one through 5, or 1, 3, 5," or whatever. Because in Metabase, yeah, we can set it up, but every single iteration will be a separate report. And that just seems like

270 00:45:36.310 00:45:46.760 Robert Tseng: not something I want my team to be spending our time on. So even just for the sake of you having an easy tool to do

271 00:45:46.760 00:46:13.950 Robert Tseng: funnel reporting on event streams, that's what Amplitude's core product is about. So that's kind of why I was hoping, with this whole data warehouse thing, that we would take the transactions, the revenue data, out of it. We would pretty much just block all the other data that's not relevant to user behaviors and just keep the event stream going into Amplitude, so that you could do this type of drag-and-drop

272 00:46:14.275 00:46:41.309 Robert Tseng: report building yourself, right? Because otherwise the constraint becomes: someone on your team needs to know SQL to be able to make a change to a report. And it's not complicated, it's just changing filters, but this is more accessible to everyone on the Javy team right now. So I want us to consider that, everyone here on this call, before we make a decision on this.

273 00:46:41.800 00:46:50.420 Aman Nagpal: Yeah, I think it really depends on those limitations, right? If there are certain reports and dashboards that we have now, or that should be easy for us to do,

274 00:46:51.285 00:46:57.600 Aman Nagpal: that for some reason can't be done in warehouse-native Amplitude, then,

275 00:46:57.970 00:47:00.849 Aman Nagpal: you know, that's kind of what we gotta figure out, right?

276 00:47:01.480 00:47:14.909 Robert Tseng: Yeah. So I think we've confirmed you can't do cross-model refund reporting. What you just did, dragging your custom events with your Shopify event and the Recharge event into a single funnel report: warehouse-native cannot do that.

277 00:47:15.020 00:47:21.489 Robert Tseng: I agree with Payas, we could easily do that in Snowflake, because all the data is there; we're just joining it using SQL logic.

278 00:47:21.580 00:47:38.910 Robert Tseng: It doesn't make sense to me why Amplitude doesn't support that, but that's the limitation we live with in warehouse-native Amplitude. And so, to be able to preserve this view that you have right now, it seems like the only way you can do it is if you continue to have direct integrations into Amplitude.

279 00:47:38.940 00:47:51.559 Robert Tseng: So this is kind of where we need to draw the line. Yeah, I don't need to rehash it, but this is the crossroads that we're at right now.

280 00:47:51.780 00:48:04.619 Aman Nagpal: Yeah. And everything you said we can't do for this funnel chart in Amplitude, we can do in Snowflake. Is it possible to do it in Snowflake but then visualize it in Amplitude, or no?

281 00:48:05.820 00:48:13.779 Robert Tseng: I think the way around it is to dump everything into a single model. So we end up, maybe, just,

282 00:48:14.030 00:48:31.019 Robert Tseng: I don't know, I think this is a conversation we need to loop back on and see if it makes sense. But how I can envision it is: yes, we already have this data modeled into different kinds of dimensions currently. But if we have a single event stream that's just

283 00:48:31.090 00:48:47.959 Robert Tseng: 2 or 3 columns, just the timestamp, the event name, and then maybe the user ID or whatever, that could be the single model that we push into warehouse-native Amplitude, and maybe that would support all of these funnel reports

284 00:48:48.920 00:49:03.490 Robert Tseng: at this level. But I think it would just get more complicated if you want more of these properties, because we basically have to put it into a single model in order for warehouse-native Amplitude to be able to read it.
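Robert's single-model idea amounts to normalizing every source into the same narrow shape: timestamp, event name, user ID. A hedged sketch of that normalization, with all row shapes and adapters invented for illustration (in practice this would be a warehouse model, e.g. a dbt union, not Python):

```python
def to_event_stream(sources):
    """Collapse differently shaped source rows into the single 2-3
    column stream described on the call.

    `sources` is a list of (rows, adapter) pairs; each adapter maps a
    source-specific row to (timestamp, event_name, user_id).
    """
    stream = []
    for rows, adapt in sources:
        for row in rows:
            ts, name, user = adapt(row)
            stream.append({"timestamp": ts,
                           "event_name": name,
                           "user_id": user})
    # One time-ordered stream, regardless of origin.
    stream.sort(key=lambda r: r["timestamp"])
    return stream
```

The trade-off Robert names follows directly from this shape: adding per-event properties means widening the one shared model, since warehouse-native Amplitude reads a single table.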

285 00:49:04.480 00:49:05.240 Aman Nagpal: Got it.

286 00:49:05.922 00:49:10.900 Aman Nagpal: Yeah, I guess I'm gonna send you this link. Let me know

287 00:49:11.070 00:49:19.769 Aman Nagpal: whatever you guys need from me. I'm happy to hop on any additional calls or anything, just so we can get this sorted out. Yeah, just let me know.

288 00:49:20.380 00:49:28.314 Robert Tseng: Okay. Yeah, sorry, I know I probably talk too much on this call. One more thing here is,

289 00:49:29.550 00:49:38.380 Robert Tseng: yeah, I guess we want to get this figured out. I just looked back at our original start date; it was like September 10th. So I think,

290 00:49:38.510 00:49:55.880 Robert Tseng: I know that we just got some stuff today, but I think all of our effort now is really just to give you the information you need to make a decision. We're not doing additional

291 00:49:55.940 00:49:59.509 Robert Tseng: work after this week, unless we figure out a renewal.

292 00:50:00.210 00:50:06.160 Aman Nagpal: Yeah, I think if we can just get the information at hand. And also, just, you know,

293 00:50:06.702 00:50:36.059 Aman Nagpal: now we're doing this Portable thing, and I know that's gonna take some time. But compared to how much time we spent on, let's say, the engineering piece the last 3 months, how many hours do we need going forward? Answers to questions like that would be helpful, just for us to make a decision. Maybe it comes to it where we've done the bulk of the engineering piece, and maybe those hours would be reduced, because we've built out a lot of the

294 00:50:36.430 00:50:41.059 Aman Nagpal: foundation already, right? And maybe that's not the case. So if you can let me know about that,

295 00:50:41.782 00:50:44.249 Aman Nagpal: and just, you know, what you envision,

296 00:50:46.010 00:50:51.210 Aman Nagpal: you know, timeline, and yeah, the hours, all that good stuff, so we can figure that out.

297 00:50:51.510 00:51:12.469 Robert Tseng: Yeah, yeah, I'll kind of reference the deck again, and we'll have the line-by-line work to be done, with estimated time. We'll basically zoom back out and go back to the scope-of-work stage to figure out what it will take to do some of these remaining things that we talked about.

298 00:51:13.090 00:51:27.339 Aman Nagpal: Sounds good. And my last question was, I just wanted to confirm: if we decide to go ahead and try things out for the time being with Matthew ad hoc, and we decide we want to bring him on full time and just pay him directly, that's an option?

299 00:51:28.627 00:51:31.480 Robert Tseng: Yeah, I mean, I guess if you,

300 00:51:31.500 00:51:39.745 Robert Tseng: I forgot exactly what we discussed. I thought you were gonna loop me into an email with him or something, and

301 00:51:41.230 00:51:45.310 Aman Nagpal: Oh, so basically what we were thinking was,

302 00:51:45.550 00:51:56.420 Aman Nagpal: you know, maybe we keep looking for someone, or, depending on how it goes with him, we try him out for the time being, get some reports and dashboards made. I think you said you guys are going to train him for a month,

303 00:51:57.496 00:51:58.369 Aman Nagpal: and then

304 00:51:58.410 00:52:03.700 Aman Nagpal: based on how things go, if we decide we want to bring him on full time, then,

305 00:52:03.945 00:52:05.669 Aman Nagpal: I was hoping that would be an option.

306 00:52:07.100 00:52:17.950 Robert Tseng: Got it. Yeah, I mean, let's add that as just another line item, because he would be under us for that month. So it would pretty much be part of,

307 00:52:18.390 00:52:26.750 Robert Tseng: an extension of us at that point, because you're not offering him full time. He's just a trainee; you're testing him out for a month, right?

308 00:52:26.750 00:52:29.580 Aman Nagpal: Right. And then after that, you would be okay with us, if we decide,

309 00:52:29.580 00:52:37.019 Robert Tseng: Yeah, then, if you decide to bring him in house full time, I guess that would just be a contract between you and him. Yeah.

310 00:52:37.020 00:52:44.580 Aman Nagpal: Sounds good. So I'll look out for the updated stuff whenever you guys have it, and we'll make a decision as soon as possible.

311 00:52:45.060 00:52:46.250 Robert Tseng: Okay. Cool.

312 00:52:46.860 00:52:47.910 Aman Nagpal: Thank you. Guys.

313 00:52:48.120 00:52:49.400 Robert Tseng: Alright! Thanks everyone.

314 00:52:50.260 00:52:50.950 Nicolas Sucari: Bye, bye.