Meeting Title: Miguel de Veyra’s Zoom Meeting Date: 2025-05-06 Meeting participants: Luke Daque, Uttam Kumaran, Amber Lin, Demilade Agboola, Miguel De Veyra, Casie Aviles, Awaish Kumar, Caio Velasco


WEBVTT

1 00:01:50.800 00:01:51.620 Miguel de Veyra: Hello!

2 00:01:54.650 00:01:55.220 Casie Aviles: Hey!

3 00:01:56.320 00:02:01.580 Miguel de Veyra: Yeah, bye, hey, Luke, what’s up, mate?

4 00:02:32.110 00:02:32.580 Luke Daque: Hello!

5 00:02:32.580 00:02:34.470 Miguel de Veyra: That’s hey, hey?

6 00:02:34.750 00:02:39.490 Miguel de Veyra: Just 3 for Kyle, because I think he requested this meeting.

7 00:02:40.700 00:02:42.300 Luke Daque: Okay, there, you go.

8 00:02:48.320 00:02:51.780 Miguel de Veyra: Have you guys played the new Expedition yet?

9 00:02:53.530 00:02:54.830 Luke Daque: No expedition.

10 00:02:55.000 00:02:56.779 Miguel de Veyra: Yeah. Expedition 33.

11 00:02:57.510 00:02:58.880 Luke Daque: And what’s that?

12 00:02:59.190 00:03:02.249 Miguel de Veyra: Oh, yeah, game is good. It’s good.

13 00:03:02.250 00:03:03.480 Casie Aviles: Saying it lately.

14 00:03:03.890 00:03:05.330 Miguel de Veyra: You.

15 00:03:06.846 00:03:07.273 Awaish Kumar: Yes.

16 00:03:09.700 00:03:11.979 Casie Aviles: Yeah, that’s October.

17 00:03:16.390 00:03:19.010 Miguel de Veyra: Have you enjoyed, like, turn-based games before, or not yet?

18 00:03:19.810 00:03:20.485 Casie Aviles: Yeah.

19 00:03:21.810 00:03:23.379 Miguel de Veyra: It was my first.

20 00:03:26.240 00:03:27.450 Miguel de Veyra: Yeah.

21 00:03:27.450 00:03:29.169 Casie Aviles: Active, you may enable me active.

22 00:03:30.350 00:03:35.559 Miguel de Veyra: Yeah, my mile is like doing 80 million damage now.

23 00:03:37.140 00:03:40.130 Miguel de Veyra: Oh, hey, guys, hey, guys.

24 00:03:40.130 00:03:40.950 Caio Velasco: Don’t worry.

25 00:03:40.950 00:03:42.089 Luke Daque: Any rooms.

26 00:03:42.840 00:03:47.169 Miguel de Veyra: Good morning, everyone yeah. So

27 00:03:47.790 00:03:52.459 Miguel de Veyra: let’s give it 5 min. Let me check his calendar, actually.

28 00:03:57.360 00:03:59.140 Miguel de Veyra: yeah, he should be able to call.

29 00:04:02.710 00:04:04.400 Miguel de Veyra: Let me just send them a message.

30 00:04:10.860 00:04:12.480 Luke Daque: So it’s turn based.

31 00:04:13.050 00:04:14.420 Miguel de Veyra: Yeah, it’s turn based.

32 00:04:15.230 00:04:17.829 Luke Daque: It’s like final fantasy or something. Okay.

33 00:04:17.839 00:04:18.209 Casie Aviles: Yeah.

34 00:04:18.790 00:04:20.473 Miguel de Veyra: I haven’t played final fantasy.

35 00:04:20.810 00:04:23.749 Miguel de Veyra: Yeah, but I heard it’s inspired by final fantasy.

36 00:04:24.970 00:04:25.920 Luke Daque: Nice.

37 00:04:34.980 00:04:37.820 Miguel de Veyra: Let’s just give him 4 or 5 min.

38 00:04:38.260 00:04:40.000 Miguel de Veyra: I’ll just send him another message.

39 00:04:43.640 00:04:47.960 Miguel de Veyra: my own, and the music is loaded.

40 00:04:49.220 00:04:49.675 Casie Aviles: Yeah.

41 00:04:55.170 00:04:57.559 Luke Daque: I think I need a new GPU too.

42 00:04:58.350 00:05:02.780 Miguel de Veyra: Yeah, you know, my 5080 is like running 90 fps with

43 00:05:02.960 00:05:07.189 Miguel de Veyra: already with everything. I mean, I’m at ultra. But still right.

44 00:05:07.710 00:05:08.440 Luke Daque: Yeah.

45 00:05:09.370 00:05:15.549 Miguel de Veyra: I’m running the what do you call it? The frame generation. 2.

46 00:05:17.100 00:05:18.469 Luke Daque: And it’s only 90.

47 00:05:18.470 00:05:23.140 Miguel de Veyra: Yeah, it’s only 90 and optimizing the game.

48 00:05:26.200 00:05:29.350 Miguel de Veyra: although it doesn’t really matter because it’s turn based.

49 00:05:31.380 00:05:34.489 Luke Daque: Yeah, but still God.

50 00:05:35.270 00:05:37.770 Miguel de Veyra: I might have to go 5090 at this point.

51 00:05:43.160 00:05:43.840 Miguel de Veyra: Okay.

52 00:05:48.204 00:05:55.300 Miguel de Veyra: but yeah, I think, let’s start. I think. Let’s start with the let me just share my screen.

53 00:05:58.630 00:06:00.720 Miguel de Veyra: Okay, let me. Just

54 00:06:06.720 00:06:07.450 Miguel de Veyra: oh, okay.

55 00:06:08.320 00:06:09.579 Miguel de Veyra: Can you guys see my screen?

56 00:06:12.678 00:06:13.749 Casie Aviles: Yeah, you can see.

57 00:06:16.350 00:06:23.119 Miguel de Veyra: Okay, yeah. So Caio, Awaish, are you guys there?

58 00:06:25.090 00:06:25.840 Caio Velasco: Yes.

59 00:06:26.555 00:06:27.060 Awaish Kumar: Yep.

60 00:06:27.400 00:06:28.306 Miguel de Veyra: Okay, great

61 00:06:29.130 00:06:32.297 Miguel de Veyra: So we answered some of it here.

62 00:06:34.430 00:06:40.020 Miguel de Veyra: I think most of it is self-explanatory other than this one. So

63 00:06:40.300 00:06:42.279 Miguel de Veyra: yeah, let’s start with that one.

64 00:06:43.750 00:06:47.449 Miguel de Veyra: So the structure, the new structure that we implemented

65 00:06:47.590 00:06:52.080 Miguel de Veyra: it’s basically one n8n workflow per client.

66 00:06:52.630 00:06:57.680 Miguel de Veyra: For example, for this one we’ll be demoing Yavi, because I think,

67 00:06:57.840 00:07:00.769 Miguel de Veyra: Caio, you’re the one working on this, right?

68 00:07:02.250 00:07:04.710 Caio Velasco: Yeah, I was working with Johnny.

69 00:07:05.500 00:07:07.829 Miguel de Veyra: Okay, yeah, great. So

70 00:07:09.590 00:07:16.290 Miguel de Veyra: the way this works is... wait, sorry. These are all tests, by the way, so we won’t really

71 00:07:16.490 00:07:21.400 Miguel de Veyra: deal with that. Let’s just focus on this blue box. So this is like the main workflow.

72 00:07:21.894 00:07:24.810 Miguel de Veyra: The only way we can communicate with the bot.

73 00:07:25.000 00:07:28.709 Miguel de Veyra: like the only way Uttam at least wants it to be is via Slack.

74 00:07:30.020 00:07:36.730 Miguel de Veyra: So, for example, we go to data AI, what’s the test?

75 00:07:37.050 00:07:42.529 Miguel de Veyra: So, for example, you just do this

76 00:07:44.520 00:07:46.870 Miguel de Veyra: and then send it there. It should.

77 00:07:47.010 00:07:50.830 Miguel de Veyra: The way it should go is executions.

78 00:08:12.980 00:08:14.230 Miguel de Veyra: That’s good, I guess.

79 00:08:14.580 00:08:18.660 Miguel de Veyra: So. Yeah. Wait. Where is I think it’s this one.

80 00:08:24.000 00:08:25.230 Miguel de Veyra: Okay, here you go.

81 00:08:25.340 00:08:29.150 Miguel de Veyra: So the request comes in like the user question.

82 00:08:29.550 00:08:37.100 Miguel de Veyra: And then basically, we add like a reaction to the message, just to notify that, you know,

83 00:08:37.460 00:08:38.780 Miguel de Veyra: that.

84 00:08:38.789 00:08:40.199 Casie Aviles: The bot received the message.

85 00:08:40.200 00:08:45.079 Miguel de Veyra: Yeah, that it received it. And then the variables, it’s just to map it out, basically.

86 00:08:45.960 00:09:03.090 Miguel de Veyra: And then here’s like the main bread and butter of it. So we have an agent where we pass the user input. Here’s like the prompt: the user message, the input of the user from Slack. And then we just find it, you know, very helpful to put the current date and time.

87 00:09:04.320 00:09:07.629 Miguel de Veyra: And then, of course, there’s the system instructions.

88 00:09:08.210 00:09:12.010 Miguel de Veyra: We just added this here for now, this is basically like the slack ids.

89 00:09:13.095 00:09:21.350 Miguel de Veyra: but yeah, basically, it’s just a set of instructions on how best to answer the user. So basically, this is what we call, you know, quote unquote, training.

90 00:09:24.430 00:09:29.479 Miguel de Veyra: So you know the the I mean, the client details are here.

91 00:09:30.820 00:09:35.850 Miguel de Veyra: our team, the data resources. That’s why, this bot has access to basically everything.

92 00:09:36.710 00:09:40.020 Miguel de Veyra: And you know, we basically just got this from notion.

93 00:09:41.260 00:09:56.399 Miguel de Veyra: And then from that, this agent has 3 tools. Eventually it’ll be 4, because of Linear. But basically, the way we treat it is: one data source is one tool, so it doesn’t get very confusing. So in this test it used the Slack tool,

94 00:09:56.580 00:09:59.880 Miguel de Veyra: which is this one.

95 00:10:00.460 00:10:06.980 Miguel de Veyra: because, one, every workflow in n8n can only have one of these execute-workflow triggers.

96 00:10:07.660 00:10:11.080 Miguel de Veyra: So you know, that’s why sorry. Go ahead.

97 00:10:11.550 00:10:19.420 Awaish Kumar: I have a question here like, are these agents, like, trained for specific source.

98 00:10:19.640 00:10:24.810 Awaish Kumar: like a separate one for Slack, a separate one for GitHub, or.

99 00:10:24.810 00:10:26.869 Miguel de Veyra: Yep. So this one is for slack.

100 00:10:27.100 00:10:35.199 Miguel de Veyra: and then we created a separate one, for I don’t think this has to be active, but let’s leave that for now. And this one is for the Github source.

101 00:10:36.110 00:10:43.009 Miguel de Veyra: It’s basically just getting the XML files. And then this one is for Zoom, which is from Supabase.

102 00:10:43.650 00:10:44.400 Awaish Kumar: Okay?

103 00:10:44.830 00:10:45.989 Awaish Kumar: So yeah. So those.

104 00:10:45.990 00:10:47.129 Awaish Kumar: Then how?

105 00:10:47.600 00:10:51.530 Awaish Kumar: So? When I ask, for example, if I ask question

106 00:10:51.680 00:10:57.609 Awaish Kumar: to the agent, like, in Slack directly:

107 00:10:57.740 00:11:05.360 Awaish Kumar: Show me, like, give me the formula for some

108 00:11:05.520 00:11:23.129 Awaish Kumar: specific metric calculation, and maybe the piece of code where it’s being calculated, in the, like, SQL file. So how is it going to, like, collaborate between them? For example, the metric formula is available somewhere in Slack text,

109 00:11:23.250 00:11:28.470 Awaish Kumar: and the calculation is happening in a dbt SQL model.

110 00:11:28.600 00:11:31.530 Awaish Kumar: How is it going to, like, relate both?

111 00:11:33.520 00:11:36.679 Miguel de Veyra: So basically, what you want to know is like a piece of code, right.

112 00:11:37.310 00:11:48.929 Awaish Kumar: Yeah, I want to know piece of code. But I I want to a top level like, I want to see the collaboration between slack in text

113 00:11:49.510 00:11:51.740 Awaish Kumar: with the GitHub code, like.

114 00:11:51.850 00:11:59.350 Awaish Kumar: So Github has the code, but maybe the business questions related to that code are in Slack.

115 00:11:59.500 00:12:06.239 Awaish Kumar: So if I ask a question which can be like, okay, for example, if I say,

116 00:12:08.430 00:12:13.710 Awaish Kumar: okay, let’s calculate revenue

117 00:12:14.380 00:12:25.320 Awaish Kumar: some or something like that. And it’s it’s going to same like in the code. It can show me the piece of code from the Github. And but I want like, how

118 00:12:26.930 00:12:43.800 Awaish Kumar: expect is that slack? Answers me that right like the this is how you calculate this, and the the profit is, is equal to sales minus cost which which the which the bot learns from some slack text in our slack conversation. There is some

119 00:12:44.130 00:12:57.319 Awaish Kumar: conversation about this formula, right? Maybe I’m asking, Robert. Okay, Robert, what is the formula to calculate profit? And he says profit is equal to sales minus cost. And then I put it in the in the Github code.

120 00:12:57.440 00:13:05.560 Awaish Kumar: And when I ask the board, it answers me with the formula, and plus the code. So yeah, I’ll get collaboration between the slack and Gito.

121 00:13:06.360 00:13:14.960 Miguel de Veyra: Yeah, that’s something we can definitely do. Because basically, the instructions are here on how to use the tool,

122 00:13:15.800 00:13:38.289 Miguel de Veyra: right, on basically how the agent should use the tool. That is one of, you know, quote unquote, the trainings: basically, for what specific use cases does the agent have to use this tool? So for your specific use case, I believe what the bot should do is reference Slack, and then reference also Github, and then combine the answers to formulate, like, you know,

123 00:13:38.960 00:13:40.390 Miguel de Veyra: a golden answer based on your.

124 00:13:40.390 00:13:41.949 Awaish Kumar: Yeah. Yeah. Yes.

125 00:13:42.400 00:13:55.330 Miguel de Veyra: But then, yeah, that’s something we have to communicate with the data team, because, you know, we can only train it if we know how it works. So basically, we need like a set of questions that you know, this is for this one, you know.

126 00:13:55.690 00:14:01.779 Miguel de Veyra: Or actually, I think if you do, like, hey, from the last Zoom meeting, yada yada,

127 00:14:01.950 00:14:06.409 Miguel de Veyra: and then connect it. But yeah, we’ll find a way to make it easier for you guys. But yes,

128 00:14:06.720 00:14:12.940 Miguel de Veyra: per message or per run. It’s not really restricted to one tool. It can run multiple tools to get more context.

129 00:14:13.990 00:14:17.890 Awaish Kumar: Yeah, actually, that is that was my question. That, for example, like.

130 00:14:18.100 00:14:30.649 Awaish Kumar: if I say, I see the bot name is Yavi bot, right? It’s based on client, right? Not a specific source. So I can ask

131 00:14:30.750 00:14:31.900 Awaish Kumar: anything.

132 00:14:32.040 00:14:54.150 Awaish Kumar: Maybe my question is something which is going to have an answer in both Slack and GitHub, or maybe I can ask, like, also tell me the link of the Linear ticket, or something like that. So it answers me with the description, but also

133 00:14:54.260 00:15:02.760 Awaish Kumar: attaches some linear tickets links or something like that. So it basically collaborates between multiple.

134 00:15:02.760 00:15:03.619 Miguel de Veyra: Everything else. Yes.

135 00:15:03.620 00:15:04.600 Awaish Kumar: Pushes it up.

136 00:15:05.270 00:15:17.809 Miguel de Veyra: Yes, yeah. Actually, I just realized you weren’t on the AI test channel. I would appreciate it if you guys tested here, because then we can see, you know, what’s happening, and then we can adjust it accordingly.

137 00:15:18.140 00:15:18.860 Awaish Kumar: Okay.

138 00:15:19.010 00:15:25.210 Miguel de Veyra: And then the way you guys test. By the way, so for example, we only have Eden. And

139 00:15:25.970 00:15:31.760 Miguel de Veyra: yeah, for now, we’ve just tagged the bot like this. And then,

140 00:15:32.390 00:15:35.389 Miguel de Veyra: “Hello from the other side,” something like this as the message.

141 00:15:35.390 00:15:35.930 Awaish Kumar: Okay.

142 00:15:36.860 00:15:46.560 Awaish Kumar: Okay, yeah. What I said is not an instruction. I’m just trying to understand how the current agents you are training work.

143 00:15:46.560 00:15:47.429 Miguel de Veyra: Yep, yep.

144 00:15:48.490 00:16:00.989 Miguel de Veyra: So yeah, we can definitely change that. I think for now you have to specify in the message, hey, I need the piece of code from Github, and then based on, you know, the last Slack messages, yada yada.

145 00:16:01.200 00:16:04.879 Miguel de Veyra: So it connects those. Right now it’s very, like, one-to-one,

146 00:16:06.280 00:16:13.960 Miguel de Veyra: because we just want to initialize it to get something out there. But then, you know, once it’s out there, we can then tailor it to best fit the team’s needs.

147 00:16:16.470 00:16:17.410 Awaish Kumar: Okay. Yeah.

148 00:16:18.720 00:16:27.489 Miguel de Veyra: Yeah, okay, thanks, Awaish, great questions. And then, yeah, again, the Slack tool is here. Basically, the way this works is,

149 00:16:27.700 00:16:30.460 Miguel de Veyra: Here’s like all the slack messages like

150 00:16:30.963 00:16:34.389 Miguel de Veyra: I don’t know how many this is like all 3,000 of them.

151 00:16:36.090 00:16:39.360 Miguel de Veyra: So we don’t... yeah, sorry. Go ahead, Demilade.

152 00:16:39.360 00:16:42.769 Demilade Agboola: Oh, I was gonna ask, what’s the refresh cadence on that.

153 00:16:43.650 00:16:44.740 Miguel de Veyra: The what sorry.

154 00:16:45.340 00:16:49.480 Demilade Agboola: Refresh cadence. How often does it get refreshed like the slack messages.

155 00:16:49.994 00:17:01.680 Miguel de Veyra: We do this daily. It’s not really a refresh; basically, we have a scheduled one where we’ll just increment this database and add the new messages every day here,

156 00:17:02.230 00:17:04.030 Miguel de Veyra: and we’re still working on that, for now.

157 00:17:05.859 00:17:07.159 Demilade Agboola: Okay. Great.

158 00:17:08.700 00:17:11.879 Miguel de Veyra: And then that’s the same with zoom. Basically, every meeting

159 00:17:12.329 00:17:16.409 Miguel de Veyra: that that happens, it’s automatically added. So it’s always up to date.

160 00:17:20.247 00:17:21.649 Caio Velasco: Miguel. One thing

161 00:17:22.302 00:17:27.437 Caio Velasco: just so that we can. I can at least understand the flow of the of the meeting.

162 00:17:28.269 00:17:33.669 Caio Velasco: so at least on my end. I’m I’m not only assuming, but it’s also true.

163 00:17:33.779 00:17:40.239 Caio Velasco: The I don’t have a lot of knowledge on AI agents, or whatsoever. So

164 00:17:41.039 00:17:49.039 Caio Velasco: what I’m expecting as well. It’s which I think you’re going through it. But it’s like this is an agent. This is what it does.

165 00:17:49.169 00:17:55.169 Caio Velasco: It comes. Someone gives a prompt, so it receives a message. It does whatever XYZ.

166 00:17:55.379 00:18:10.399 Caio Velasco: Replies. I mean, just the overview, the big picture, and then you go into details and whatever is important. Because the idea I get is, like, treat me as a client as well, because I actually don’t have the knowledge.

167 00:18:10.819 00:18:21.299 Caio Velasco: So yeah, just so that we can have, like, a flow. Because there are questions, and those questions are good, but it’s going into details and assuming that everyone has the same knowledge.

168 00:18:21.920 00:18:26.229 Miguel de Veyra: Okay, yeah. So, okay, yeah, let’s go back. Sorry. I’ve been

169 00:18:27.410 00:18:48.559 Miguel de Veyra: too deep, I guess. But yeah, as a general overview: one of the main reasons we built it is, for example, for someone like Annie. If she wants to hop onto Yavi or any other client, she could basically start asking these questions, so she doesn’t have to ask another person, right?

170 00:18:48.750 00:18:52.010 Miguel de Veyra: Because we’re an asynchronous company, it takes, you know...

171 00:18:52.414 00:18:55.829 Miguel de Veyra: We’re not sure when we’ll get an answer probably today, maybe tomorrow.

172 00:18:56.640 00:19:03.650 Miguel de Veyra: So this agent basically has. This Yavi agent has access to every Yavi data that there is.

173 00:19:03.980 00:19:09.449 Miguel de Veyra: whether it be, you know, Github, whether it be Slack, Zoom, Linear. So the idea is,

174 00:19:09.870 00:19:16.399 Miguel de Veyra: it has access to everything, and you can ask it anything. It’ll give something, it’ll give something out.

175 00:19:16.510 00:19:23.640 Miguel de Veyra: And then what Awaish mentioned earlier was basically how we can then connect these tools to, basically, you know,

176 00:19:24.976 00:19:27.360 Miguel de Veyra: generate a better result.

177 00:19:29.770 00:19:32.780 Miguel de Veyra: Did that answer your question, Caio? Sorry.

178 00:19:34.702 00:19:51.819 Caio Velasco: I was just saying that, like, give the overview and go step by step. That’s why, if you go to the golden data sheet, the questions I made were even theoretical, like, you know, what is happening in the background, etcetera. But yeah, you just nailed it. Yeah, perfect.

179 00:19:52.390 00:19:59.560 Miguel de Veyra: Okay? And then, honestly, the core of this agent building is the tools like

180 00:20:01.330 00:20:11.179 Miguel de Veyra: This is probably where we spend the most time in the ingestion of tools. So slack is over here. Basically it gets from here. And then

181 00:20:12.280 00:20:17.489 Miguel de Veyra: Github is a bit simpler because we have the Repomix.

182 00:20:17.930 00:20:21.090 Miguel de Veyra: And then basically the way it works is, you just...

183 00:20:22.117 00:20:23.919 Miguel de Veyra: wait. Let me just show you guys.

184 00:20:26.520 00:20:32.489 Miguel de Veyra: yeah. So basically just gets from Github and then downloads that file, turns it into

185 00:20:33.110 00:20:36.530 Miguel de Veyra: raw text. And then we pass that to the agent.

186 00:20:37.680 00:20:38.719 Miguel de Veyra: This is wrong.

187 00:20:38.980 00:20:42.730 Miguel de Veyra: But basically, you know, the Xml file will be used as context.

188 00:20:42.970 00:20:47.690 Miguel de Veyra: And then, basically, the what happens is now the agent has access to the entire database.

189 00:20:48.660 00:20:51.079 Miguel de Veyra: I mean the the entire code base.

190 00:20:53.920 00:21:03.859 Miguel de Veyra: And for zoom, it’s pretty straightforward, just like slack. And then we’re adding linear. But we don’t really know how to do it yet. We’re gonna figure that out tomorrow.

191 00:21:05.610 00:21:07.299 Miguel de Veyra: So to recap

192 00:21:08.580 00:21:26.210 Miguel de Veyra: a Slack message comes in, the agent processes the request and decides which tools are best to answer that question. It would really help if you guys mentioned, hey, based on the last messages on Slack, or, based on the last meeting, that’s automatically Zoom, or, hey, I need to find this piece of code, or

193 00:21:26.570 00:21:39.479 Miguel de Veyra: anything code related. Use this. Basically, if you guys know the tools, it’s a lot better to, you know. Just tell them, hey, can you use this tool and this tool to combine, figure out something, and then the agent should be able to do it.

194 00:21:42.790 00:21:44.950 Miguel de Veyra: Casey. Sorry you wanted to add something.

195 00:21:45.590 00:21:49.390 Casie Aviles: No, no, I was just going to, you know. Basically say that

196 00:21:50.113 00:21:51.929 Casie Aviles: the AI agent alone is.

197 00:21:52.340 00:21:56.210 Casie Aviles: it requires, like the enrichment of these data sources.

198 00:21:56.550 00:22:02.559 Casie Aviles: in order to properly, like, have the correct context for your questions. So,

199 00:22:03.220 00:22:08.100 Casie Aviles: yeah, that’s pretty much the gist of it. And yeah.

200 00:22:09.520 00:22:15.450 Miguel de Veyra: And then some of it. By the way, because, for example, something wait. Sorry. Let me. Just I think it’s this one.

201 00:22:15.890 00:22:20.550 Miguel de Veyra: Yeah. Cause for so for clients, for older clients like Yavi, or

202 00:22:20.800 00:22:23.490 Miguel de Veyra: which is the other old client pool parts, I believe.

203 00:22:24.610 00:22:40.549 Miguel de Veyra: like, they just have so many Slack messages that I don’t think we need to get everything. So by default we do 30 days. Of course we can adjust it. But do you guys think 30 days? Or maybe should we adjust to 60 or 90?

204 00:22:40.650 00:22:47.380 Miguel de Veyra: Basically, the past 30 days is the default one that we collect. And eventually we’re planning to add, you know,

205 00:22:48.760 00:22:50.489 Miguel de Veyra: an option to hey?

206 00:22:50.750 00:22:54.370 Miguel de Veyra: Let’s do all time. Let’s do the last 120 days.

207 00:22:56.610 00:23:00.910 Luke Daque: Is there any implication like if we do it all time, like.

208 00:23:01.450 00:23:01.960 Miguel de Veyra: Oh!

209 00:23:01.960 00:23:03.449 Luke Daque: The cost or something.

210 00:23:04.140 00:23:13.210 Miguel de Veyra: Not really the cost. But to give you guys a reference: if we do the past 30 days, that’s 120,000 tokens, which is a lot already.

211 00:23:13.990 00:23:18.510 Miguel de Veyra: But then, if we do all time, it’s like 400,000. It takes like

212 00:23:19.510 00:23:26.429 Miguel de Veyra: for the agent to go through all that context, it takes a lot of time. And then, you know, of course, the more context there is, the

213 00:23:27.560 00:23:32.419 Miguel de Veyra: the more chances of hallucination, though the models are getting better, so we don’t really worry about it too much.

214 00:23:32.620 00:23:38.750 Miguel de Veyra: But then the thing is, do we really need all of it, you know?

215 00:23:39.490 00:23:45.280 Luke Daque: I guess ideally, that would be the best case scenario, because, like I just

216 00:23:45.490 00:23:48.020 Luke Daque: an example for pool parts, which is

217 00:23:48.280 00:23:50.849 Luke Daque: one of, I guess, our older clients.

218 00:23:50.950 00:23:59.280 Luke Daque: There is some business logic that was decided, like, a year or so ago, and then everybody, like,

219 00:23:59.540 00:24:08.409 Luke Daque: probably forgot about that specific business logic, and it came back again last month, and then we had, like, no context

220 00:24:08.590 00:24:11.869 Luke Daque: on the business logic. So probably we could be.

221 00:24:12.110 00:24:16.960 Luke Daque: That would probably be the best case scenario. But I don’t know. Yeah, what do you think, guys?

222 00:24:17.590 00:24:22.280 Miguel de Veyra: But I think that part should be in the documentation, no?

223 00:24:23.470 00:24:32.599 Miguel de Veyra: Cause we’re gonna add, like, another tool here, basically the client documentation, the technical documentation. But that’s for a later date. I think we talked about it last week.

224 00:24:34.395 00:24:38.529 Demilade Agboola: Yes, yes, so we definitely need to add more documentation.

225 00:24:39.150 00:24:44.410 Demilade Agboola: And I think that’s part of why we want to test this out so we can see just

226 00:24:45.010 00:24:54.299 Demilade Agboola: once we have the Github logic, and once we have, like some slack messages, what else do we need to supplement it. So if we discover that, hey?

227 00:24:54.626 00:25:15.099 Demilade Agboola: we can’t have all the Slack messages, which is part of what we’re saying: how do we then document the logic that comes in, in maybe a folder in Notion, so that Notion can also have that context without needing all the Slack messages? So it’s these kinds of things that we’re trying to do, to ensure that we know all the context of the data we should be keeping.

228 00:25:16.433 00:25:21.170 Miguel de Veyra: On that end. Is it possible to do it via

229 00:25:21.450 00:25:25.679 Miguel de Veyra: Google Docs instead of notion? Or do you guys prefer doing it in notion.

230 00:25:27.580 00:25:32.449 Demilade Agboola: Whatever will be easier for the AI team. I think Google Docs should work, I mean.

231 00:25:33.690 00:25:37.269 Miguel de Veyra: Yeah, cause it’s like a lot easier to pull data from Google Docs than

232 00:25:38.060 00:25:40.949 Miguel de Veyra: what do you call it, Notion.

233 00:25:41.150 00:25:46.949 Miguel de Veyra: But I think either way should work. Whatever you guys think, it’s just a bit more complicated to pull data from notion.

234 00:25:49.540 00:25:58.580 Demilade Agboola: Okay, we’ll definitely take that under consideration again. That’s part of why, you know, part of why this call is important.

235 00:25:58.770 00:26:07.909 Demilade Agboola: Again, we just need to figure out, like, okay, so we have what we would ideally want, but let’s get the feedback from the AI team on what exactly

236 00:26:07.910 00:26:08.300 Miguel de Veyra: This fall.

237 00:26:08.300 00:26:10.699 Demilade Agboola: Is, yeah, it’s possible.

238 00:26:12.207 00:26:23.910 Miguel de Veyra: Yeah. And then I think, for the use case that Luke mentioned, I don’t think it’s worth adding 300,000 tokens

239 00:26:24.390 00:26:36.479 Miguel de Veyra: just for, like, you know, a business logic. So I think for that part we can save so much time and tokens if we just move it to documentation. But we can default it for now; it’ll probably, like, take, I don’t know, probably,

240 00:26:37.320 00:26:42.290 Miguel de Veyra: Casie, I think, 2 to 3 min now instead of 10 to 15 seconds.

241 00:26:43.570 00:26:44.130 Casie Aviles: Yeah.

242 00:26:44.130 00:26:53.619 Luke Daque: Yeah, I guess we can. Yeah, I think that makes sense. Because, like, for pool parts, we really didn’t do a very good job on documentation and stuff like that, which is

243 00:26:53.920 00:27:01.350 Luke Daque: also something that we need to improve on. But yeah, I think, for now we can like start

244 00:27:01.490 00:27:03.269 Luke Daque: with the 30 days.

245 00:27:03.550 00:27:06.350 Luke Daque: And then, yeah, we can go from there.

246 00:27:06.630 00:27:20.300 Miguel de Veyra: Or what we can do is, we can make it all time for now, while you guys try to get the context for, you know, the business logic and stuff. So it’ll help you guys document. And then once we get it down, we revert it back to 30 days.

247 00:27:22.350 00:27:24.030 Luke Daque: Yeah, that sounds great. Yeah.

248 00:27:24.030 00:27:24.700 Miguel de Veyra: That’s good.

249 00:27:29.280 00:27:37.910 Caio Velasco: Okay, one thing that I’d like to add, just to follow up on what was mentioned in the recap. So

250 00:27:39.910 00:27:44.570 Caio Velasco: we have text as input, we have text as output.

251 00:27:45.040 00:27:57.830 Caio Velasco: How the agent does the work, or does the magic, comes from whatever it is reading from the sources. Sources are those things down there. And for some reason we started with Slack,

252 00:27:58.280 00:28:04.409 Caio Velasco: because maybe it was good for the clients or easier for us to show that the AI works and sell.

253 00:28:04.890 00:28:07.427 Caio Velasco: This is how I’m I’m picturing this

254 00:28:08.240 00:28:13.159 Caio Velasco: and then we’ll go into what the lab says for us to be able to help you guys.

255 00:28:13.330 00:28:22.080 Caio Velasco: And to understand a bit better what is happening. For example, you just showed that Slack, it’s like a database in the background with

256 00:28:22.380 00:28:23.690 Caio Velasco: daily stuff.

257 00:28:24.130 00:28:32.090 Caio Velasco: And then the AI goes there, does the magic and outputs whatever for whatever question someone is asking. So far, so good, right.

258 00:28:32.540 00:28:33.449 Miguel de Veyra: Yes, yes.

259 00:28:34.050 00:28:41.309 Caio Velasco: Okay. And for example, now you said about Google Sheets, or Google Docs, or whatever. And then, for example, like, when

260 00:28:41.510 00:28:44.559 Caio Velasco: when I also ask questions regarding

261 00:28:44.780 00:28:48.529 Caio Velasco: the logic of the AI. What I meant by that is more like.

262 00:28:49.710 00:28:56.239 Caio Velasco: should we care how we write stuff in one of the sources of the AI agent.

263 00:28:58.110 00:29:03.220 Caio Velasco: Is it better to go like questions, answers just whatever text

264 00:29:03.390 00:29:08.130 Caio Velasco: is this part of the work as well? Or, for you, does it not really matter?

265 00:29:12.540 00:29:14.120 Miguel de Veyra: Sorry. Let me process that.

266 00:29:15.130 00:29:17.420 Casie Aviles: So like, are you saying, are you asking if

267 00:29:17.899 00:29:21.790 Casie Aviles: that, your input, you have to think about the input? Right? Like, how.

268 00:29:22.456 00:29:23.790 Caio Velasco: Exactly. Yeah.

269 00:29:24.676 00:29:25.459 Casie Aviles: Yeah, I think what?

270 00:29:26.540 00:29:26.900 Casie Aviles: Yes, sir.

271 00:29:27.960 00:29:48.119 Casie Aviles: Yeah. What I was just going to say is that, yeah, it would help if you are more specific. Like, for example, like what Miguel said earlier, if you want specifically something from Slack, then you would ask the bot, Hey, can you reference Slack? Or maybe a previous Zoom Meeting, something like that? Or

272 00:29:48.932 00:30:02.690 Casie Aviles: for example, if you have a question regarding the code base, you would say, Can you check the repo? Or something like that. So I guess specificity helps, or like being specific with

273 00:30:03.030 00:30:05.539 Casie Aviles: what you want out of the bot can also help.

274 00:30:06.880 00:30:13.280 Caio Velasco: Okay. But okay, that’s like the interaction, the input output related to the interaction between bot and user.

275 00:30:13.440 00:30:18.729 Caio Velasco: I mean more about the source itself, like when the agent is consuming from the sources, to

276 00:30:20.240 00:30:28.239 Caio Velasco: to like work the magic, and give the the final answer. I assume that the sources have. They have to have like a certain

277 00:30:29.920 00:30:31.960 Caio Velasco: structure, I guess it would.

278 00:30:31.960 00:30:33.910 Caio Velasco: What kind of structure is helpful to you?

279 00:30:35.120 00:30:52.179 Miguel de Veyra: For the sources, it doesn't really matter, because, for example, we're just getting all the text in Slack. We get who the users are, when the message was sent, from which channel it was sent. And then we just pass that to the bot, and structure-wise, we basically process the data on our end

280 00:30:52.410 00:30:55.379 Miguel de Veyra: to make sure the bot is accurate.
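Miguel's description of the Slack preprocessing, who sent each message, when, and in which channel, all flattened into plain text for the bot, can be sketched roughly like this. The field names (`ts`, `user`, `channel`, `text`) are hypothetical, not the team's actual schema:

```python
# Minimal sketch of the Slack preprocessing described above: each raw
# message is rendered as one labeled line so the model sees who said
# what, where, and when. Field names are illustrative assumptions.
from datetime import datetime, timezone

def format_slack_message(msg: dict) -> str:
    """Render one raw Slack message as a single context line."""
    ts = datetime.fromtimestamp(float(msg["ts"]), tz=timezone.utc)
    return f"[{ts:%Y-%m-%d %H:%M}] #{msg['channel']} {msg['user']}: {msg['text']}"

def build_context(messages: list[dict]) -> str:
    """Concatenate formatted messages, oldest first, into one context block."""
    ordered = sorted(messages, key=lambda m: float(m["ts"]))
    return "\n".join(format_slack_message(m) for m in ordered)
```

Sorting oldest-first keeps the conversation readable as a single chronological block.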

281 00:30:56.400 00:31:07.650 Miguel de Veyra: And then for Zoom we have a way to do that. We had Awaish help us there, and then we transformed it. For Github, it's just all the code into one

282 00:31:07.770 00:31:09.640 Miguel de Veyra: file. Linear, we're still figuring it out.

283 00:31:11.440 00:31:12.060 Caio Velasco: Okay.

284 00:31:12.060 00:31:13.130 Miguel de Veyra: The way Arthur.

285 00:31:13.540 00:31:20.159 Caio Velasco: Okay. And the way the bot parses everything, processes everything, is this something that you guys

286 00:31:20.490 00:31:21.809 Caio Velasco: built, or is it

287 00:31:22.100 00:31:32.789 Caio Velasco: already, like, a ChatGPT, that of course we are not reinventing the wheel? Or did you build the AI agent itself to go through the text and do all the

288 00:31:33.240 00:31:34.680 Caio Velasco: the parsing and everything.

289 00:31:34.680 00:31:37.819 Miguel de Veyra: That the data because.

290 00:31:37.820 00:31:49.649 Amber Lin: Sorry, can we pull up the original diagram we created like 2 weeks ago with Uttam? I think that will be very helpful for, like, a visual structure of how the data flows.

291 00:32:01.820 00:32:03.290 Miguel de Veyra: You guys have the link for it.

292 00:32:14.330 00:32:16.830 Amber Lin: I'm not on my computer.

293 00:32:17.780 00:32:19.089 Uttam Kumaran: What is? What is it?

294 00:32:19.620 00:32:20.720 Miguel de Veyra: The FigJam.

295 00:32:23.810 00:32:24.749 Miguel de Veyra: Alright. Let me.

296 00:32:24.750 00:32:32.929 Amber Lin: Diagram that we created a few weeks ago, of how all the sources flow, and what process goes into creating the agents.

297 00:32:38.060 00:32:40.119 Uttam Kumaran: Is this on Miro, or is this on.

298 00:32:40.120 00:32:41.230 Miguel de Veyra: It's a FigJam.

299 00:32:51.620 00:32:57.659 Luke Daque: I think what Caio was asking was related to the documentation. Right, Caio? Like, how?

300 00:32:58.270 00:33:14.660 Luke Daque: What would be the best way to document, let's say, business logic, for example? Would it be like in an FAQ style, or just like a paragraph, bullets and stuff like that? Like, is there any specific way where the AI would be

301 00:33:15.180 00:33:16.230 Uttam Kumaran: Would be best suited.

302 00:33:16.230 00:33:24.670 Uttam Kumaran: That's all in the prompt. Yeah. So Caio, they basically have prompt logic. Can you pull up one of those nodes?

303 00:33:24.920 00:33:28.229 Uttam Kumaran: Miguel, just pull up a prompt node.

304 00:33:29.120 00:33:31.820 Uttam Kumaran: So literally, this is the logic.

305 00:33:32.490 00:33:35.330 Uttam Kumaran: So we have a prompt here that takes an input.

306 00:33:35.490 00:33:37.400 Uttam Kumaran: and then gives an output.

307 00:33:39.540 00:33:51.670 Uttam Kumaran: So yeah, so, for example, if you look here, the prompt includes who the clients are, who our team is. So we fix these as part of the prompt. Imagine you were discussing with ChatGPT:

308 00:33:51.850 00:33:57.339 Uttam Kumaran: you had to write all this. This is like the core part. The core back-end logic is this prompt.

309 00:33:58.610 00:34:00.660 Uttam Kumaran: So this not only dictates

310 00:34:00.840 00:34:04.690 Uttam Kumaran: everything that the prompt does, but it could be output formatting,

311 00:34:05.140 00:34:09.940 Uttam Kumaran: the sequence, how it discusses, the tone, everything. Good question.
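The prompt-node idea Uttam walks through, fixed facts about clients and team baked into a template with the user's question slotted in at call time, might look something like this sketch. The template wording, names, and function are illustrative, not the team's real prompt:

```python
# Sketch of a prompt node: fixed context (clients, team, tone, output
# format) lives in the template; each question is interpolated per call.
PROMPT_TEMPLATE = """\
You are the internal assistant.
Our clients: {clients}
Our team: {team}
Answer in a concise, friendly tone, as a bulleted list.

Question: {question}
"""

def build_prompt(question: str, clients: list[str], team: list[str]) -> str:
    """Fill the fixed template with the per-call question."""
    return PROMPT_TEMPLATE.format(
        clients=", ".join(clients), team=", ".join(team), question=question
    )
```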

312 00:34:09.940 00:34:25.469 Miguel de Veyra: Yeah, yeah. But I think the thing they're asking, Uttam, is, because we were talking earlier about creating documentation, basically technical documentation for each client, how to format that documentation so the bot can read it is what they're asking.

313 00:34:26.600 00:34:38.289 Uttam Kumaran: Yeah, but I think that's a good question for you. Like, do you have requirements on what the documentation needs to look like in order to consume it?

314 00:34:39.583 00:34:54.869 Miguel de Veyra: It's basically the same problem. Same, not problem, the same issue as with the ABC Central Doc: as long as you guys avoid tables and it's raw text, if it's in Google Docs, we can handle it. If it's in Notion.

315 00:34:56.719 00:34:59.739 Uttam Kumaran: I I have been pushing for notion.

316 00:35:01.559 00:35:05.479 Uttam Kumaran: but I can get vetoed, like.

317 00:35:07.470 00:35:13.350 Miguel de Veyra: Yeah, if it’s in notion, we’ll check if we can pull tables, because, as you know, right, notion is a bit.

318 00:35:14.240 00:35:17.700 Uttam Kumaran: So so this is where I think the team just needs those requirements.

319 00:35:18.200 00:35:22.200 Uttam Kumaran: No tables, can't do color.

320 00:35:22.200 00:35:24.330 Miguel de Veyra: No pictures. Yeah. Yeah. Well, for pictures.

321 00:35:24.330 00:35:30.290 Uttam Kumaran: There's like, that's, I don't know. Caio, is that, like, sort of what the ask is?

322 00:35:30.290 00:35:35.410 Caio Velasco: Yes, yes, it was one of the asks, but not just like a random ask.

323 00:35:35.410 00:35:35.920 Uttam Kumaran: That’s why I’m saying.

324 00:35:35.920 00:35:50.490 Caio Velasco: More related to the flow of the presentation. Let's say that you're explaining the AI, you're explaining the agent, and at some point you say, Hey, we have all these sources. These sources are like in the database, and we are consuming them. But for the AI to

325 00:35:50.950 00:36:03.190 Caio Velasco: do whatever we would need this kind of structure to make it happen in a nicer way. So then I would pick up at that time and say, like, Oh, okay. So you need kind of this requirements, and that that’s more like the expectation that I have.

326 00:36:12.030 00:36:12.949 Uttam Kumaran: Yeah. Go ahead.

327 00:36:12.950 00:36:22.259 Miguel de Veyra: Yeah, for sources, I wouldn't worry about it from your end. The sources, we're transforming them so that the bot can use them,

328 00:36:22.640 00:36:33.949 Miguel de Veyra: because, as mentioned earlier, right, we have 4 main sources right now. Github is from an XML file, which we really can't do anything about. Slack is this one.

329 00:36:34.940 00:36:39.900 Miguel de Veyra: And then Zoom is just transcripts.

330 00:36:42.410 00:36:50.579 Caio Velasco: Okay. So the only thing that is seems to be a bit like changeable it, or something, I say.

331 00:36:50.690 00:37:01.629 Caio Velasco: I think they can change the structures. That, for them, is Notion or Google Docs, something that we can do whatever way we think is best, but avoiding tables, avoiding colors, avoiding those things.

332 00:37:01.820 00:37:03.100 Caio Velasco: Okay, got it.

333 00:37:03.650 00:37:14.980 Miguel de Veyra: If it's Notion, or what do you call it, or Google Docs, if you could let us know ahead of time, so we can start looking into it.

334 00:37:16.010 00:37:17.559 Miguel de Veyra: Can you give you.

335 00:37:17.560 00:37:24.180 Uttam Kumaran: Is that in that like in that formatting guideline, like, you know, the one we’re developing for ABC.

336 00:37:24.180 00:37:24.870 Miguel de Veyra: Yeah.

337 00:37:25.340 00:37:33.959 Uttam Kumaran: Can you just like make one? So people, whenever they want something ingested by AI. They know, like what the guidelines are.

338 00:37:33.960 00:37:34.630 Miguel de Veyra: Okay.

339 00:37:34.920 00:37:41.909 Uttam Kumaran: Because this is gonna be the 1st piece of documentation we like move from notion into AI or Google Docs into AI,

340 00:37:42.310 00:37:58.210 Uttam Kumaran: I push for Notion, because all of our stuff is in Notion, and Google Docs organization is really not great. However, I can compromise: if you at least keep a Notion page, and then you could link out to Google Docs if we need that, then that's

341 00:37:58.500 00:38:02.069 Uttam Kumaran: I’m fine. With that. I just like really trying to consolidate.

342 00:38:02.800 00:38:10.570 Miguel de Veyra: Yeah, those ones should be fine. My only worry there is, for example, if there's like the toggle thing,

343 00:38:12.540 00:38:18.219 Miguel de Veyra: because they treat it as like separate pages. So you know, I don’t want to go page deeper page deeper for each.

344 00:38:18.340 00:38:19.679 Miguel de Veyra: But yeah, we will look into it.

345 00:38:19.680 00:38:23.990 Uttam Kumaran: Yeah. So again, write that down. Just say, no nested pages.

346 00:38:23.990 00:38:24.840 Miguel de Veyra: Yep. Yep.

347 00:38:25.230 00:38:25.810 Uttam Kumaran: Yeah?

348 00:38:26.180 00:38:26.840 Uttam Kumaran: Good? Question.

349 00:38:26.840 00:38:28.740 Miguel de Veyra: Use bullet points and headers.
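The do's and don'ts collected here (no tables, no pictures, no nested pages; use bullet points and headers) could be enforced with a small lint pass over a Markdown export, roughly like the following sketch. The rules mirror the discussion; the function itself is illustrative, not the team's actual tooling:

```python
# Sketch of a guideline lint for docs destined for AI ingestion: flag
# Markdown constructs the discussion ruled out (tables, embedded images).
def lint_doc(markdown: str) -> list[str]:
    """Return a list of guideline violations found in a Markdown export."""
    problems = []
    for i, line in enumerate(markdown.splitlines(), start=1):
        stripped = line.lstrip()
        if stripped.startswith("|"):  # Markdown table row
            problems.append(f"line {i}: table detected")
        if "![" in stripped:          # embedded image
            problems.append(f"line {i}: image detected")
    return problems
```

Headers and bullets pass untouched, so a document that follows the guidelines lints clean.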

350 00:38:32.655 00:38:36.369 Miguel de Veyra: Yeah, I think that is pretty much it. I think Demilade hopped off.

351 00:38:37.340 00:38:38.410 Miguel de Veyra: Oh, no, he’s still okay.

352 00:38:39.760 00:38:40.420 Miguel de Veyra: Next week.

353 00:38:40.420 00:38:41.340 Demilade Agboola: I’m still here.

354 00:38:42.410 00:38:45.729 Demilade Agboola: I’m debugging something with a client, but I’m still here.

355 00:38:46.070 00:38:46.900 Miguel de Veyra: Okay? Sure.

356 00:38:47.300 00:38:52.919 Miguel de Veyra: I think that's pretty much it for the presentation. Do you guys have any questions for me or Casie?

357 00:38:56.801 00:39:05.329 Demilade Agboola: Okay, once you create the notion like Do’s and don’ts, I think that’ll be very helpful so that we can integrate it with our documentation.

358 00:39:05.600 00:39:12.409 Demilade Agboola: And then we can push forward like, you know. So we can also maybe create a structure on how

359 00:39:12.865 00:39:20.329 Demilade Agboola: this thing should be done for clients like a template. So we can maybe have a page, for like

360 00:39:20.620 00:39:32.090 Demilade Agboola: the client info, a page for, like, business logic that we've gotten from the clients, maybe another page for, like, general industry information,

361 00:39:32.632 00:39:40.519 Demilade Agboola: so that that way, we can just have a template that people can use, and then they don’t necessarily play with too much, and it doesn’t break too often.

362 00:39:41.070 00:39:41.680 Miguel de Veyra: Yep.

363 00:39:45.442 00:39:47.920 Miguel de Veyra: I’ll get something to you by

364 00:39:48.260 00:39:55.529 Miguel de Veyra: probably after our stand-up. I'll just test stuff out a bit in Notion, then I'll send something over to the data-AI channel.

365 00:39:58.750 00:39:59.319 Demilade Agboola: Sounds good.

366 00:39:59.320 00:40:19.559 Amber Lin: And Caio, thank you for the golden data set, I think it's good stuff. Since we have Github for a few of the agents now, we'll go in and test if it can answer those questions correctly, and then we'll probably get it back to you to say, Hey, is this accurate?

367 00:40:20.103 00:40:24.759 Amber Lin: Miguel, what agents do we have, like, Github on for now?

368 00:40:25.220 00:40:26.040 Miguel de Veyra: Yeah, we.

369 00:40:26.970 00:40:29.469 Amber Lin: Oh, okay. Who's on Javi?

370 00:40:32.970 00:40:34.519 Miguel de Veyra: Sorry. Is that a question for me?

371 00:40:35.470 00:40:39.499 Amber Lin: No, for the data people. Who's on Javi? Who was on Javi?

372 00:40:40.970 00:40:49.249 Caio Velasco: Myself, and, not sure, Demilade, did you work with it? No, I think myself, mainly.

373 00:40:49.620 00:40:55.840 Amber Lin: Okay, awesome, awesome. So I will ask you guys to rate whether the answers are accurate.

374 00:40:58.470 00:40:59.150 Caio Velasco: Okay.

375 00:41:00.000 00:41:00.600 Amber Lin: Okay.

376 00:41:00.810 00:41:01.890 Amber Lin: Awesome.

377 00:41:03.030 00:41:04.899 Amber Lin: Any more questions. Everyone.

378 00:41:04.900 00:41:10.950 Uttam Kumaran: One more question from me. It's like, we know we're producing this documentation for each client. Are you guys planning on using

379 00:41:11.330 00:41:19.009 Uttam Kumaran: the? I assume you’re planning on using this agent to help create that documentation? Am I? Am I correct?

380 00:41:19.481 00:41:25.579 Miguel de Veyra: Yeah, actually, we discussed this earlier with Luke, because basically, what we're doing is,

381 00:41:26.287 00:41:32.880 Miguel de Veyra: we're basically limiting the context for Slack to the last 30 days, because it just takes too many tokens.

382 00:41:33.325 00:41:43.090 Miguel de Veyra: But what we’re gonna do in the meantime is cause one use case that Luke mentioned is that there was some business logic that was discussed over a year ago. For

383 00:41:43.570 00:41:54.399 Miguel de Veyra: yeah, pool parts, was it? And then, you know, basically, they couldn't find it. So documentation-wise, I think, for the foreseeable future, just to help the team document, we'll,

384 00:41:54.900 00:41:57.890 Miguel de Veyra: we’ll mark it as just return everything.

385 00:41:57.890 00:42:01.049 Uttam Kumaran: But what so? What but what is this token problem?

386 00:42:03.530 00:42:11.540 Miguel de Veyra: So if we do everything, for, I think it was Javi, there's like 400,000 tokens alone,

387 00:42:11.920 00:42:12.649 Miguel de Veyra: it’s just.

388 00:42:12.650 00:42:16.910 Uttam Kumaran: But, like, why? Why are there that many tokens?

389 00:42:17.430 00:42:18.710 Miguel de Veyra: It’s like, I think.

390 00:42:18.710 00:42:20.890 Uttam Kumaran: Why are you putting everything into context?

391 00:42:21.790 00:42:25.979 Miguel de Veyra: No, no, that's why we did only the past 30 days, so it doesn't balloon that high.

392 00:42:25.980 00:42:31.789 Uttam Kumaran: No, no, you’re missing my question like, why, why are we putting it all into context like

393 00:42:34.510 00:42:38.759 Uttam Kumaran: Why can't I access more than 30 days' worth of data?

394 00:42:44.370 00:42:46.180 Miguel de Veyra: Because the alternative is RAG.

395 00:42:47.620 00:42:48.660 Uttam Kumaran: Yeah, but like.

396 00:42:49.360 00:43:00.039 Uttam Kumaran: there's, like, 30 days, and we're not gonna have all the Slack messages. So, I guess, tell me why. Yeah, I guess I'm asking more of, like, why didn't we do RAG or another method?

397 00:43:00.360 00:43:02.229 Uttam Kumaran: Because 30 days is not a lot of data.

398 00:43:03.920 00:43:06.479 Miguel de Veyra: Oh, yeah, no. The 30 days is just for, you know.

399 00:43:07.000 00:43:12.430 Miguel de Veyra: for testing, basically to just limit the tokens. We can adjust it however we want.

400 00:43:13.680 00:43:17.999 Uttam Kumaran: But is there a plan to avoid? But I think you know what I’m trying to ask? Right?

401 00:43:18.000 00:43:19.800 Uttam Kumaran: Yeah, yeah, I’m trying to ask, okay.

402 00:43:21.306 00:43:21.800 Miguel de Veyra: For now.

403 00:43:21.800 00:43:33.869 Uttam Kumaran: We need the history. I mean, like, the point of the AI agent is that it has access to everything, right? Like, the benefit, instead of asking someone on the client, is that us as humans, we can't keep,

404 00:43:34.300 00:43:36.009 Uttam Kumaran: we can’t keep like

405 00:43:36.190 00:43:42.160 Uttam Kumaran: thousands and thousands of slack messages in our brain. Right? So the benefit here is that this thing can

406 00:43:42.380 00:43:47.930 Uttam Kumaran: guess what I’m asking is like, what what’s the path towards having it have access to more than 30 days.

407 00:43:51.480 00:43:57.579 Miguel de Veyra: Well, there's 2 ways we can do it. We could just remove the filter, but, you know, give it context to everything,

408 00:44:00.150 00:44:03.930 Miguel de Veyra: or we can do, I wouldn't recommend we do RAG.

409 00:44:07.770 00:44:08.270 Miguel de Veyra: Okay.

410 00:44:08.270 00:44:10.230 Miguel de Veyra: So we're gonna do Contextual on it.

411 00:44:12.110 00:44:15.290 Uttam Kumaran: So there’s so what what I’m hearing is, there’s no plan.

412 00:44:16.040 00:44:19.939 Miguel de Veyra: No, no, not for now. I mean, there is, you know, just remove this 30-day limit.

413 00:44:20.230 00:44:23.110 Uttam Kumaran: But that’s but you, even you just said, that’s not like

414 00:44:23.770 00:44:26.779 Uttam Kumaran: like, what are we gonna pay like $5 per message.

415 00:44:26.780 00:44:30.240 Miguel de Veyra: No, no, not really, I think with Gemini token is like.

416 00:44:31.280 00:44:36.819 Uttam Kumaran: But, like again, this is what I’m asking is like, what what happens if you put all that into context like.

417 00:44:38.530 00:44:43.399 Uttam Kumaran: why do we not. Not all of the messages are relevant to the question. To every question. Right.

418 00:44:43.740 00:44:44.380 Miguel de Veyra: Yeah.

419 00:44:46.480 00:44:49.460 Uttam Kumaran: So there’s no path towards more than 30 days of data.

420 00:44:52.600 00:44:54.200 Miguel de Veyra: Without including more.

421 00:44:54.542 00:44:58.310 Uttam Kumaran: Is putting all the messages into context really the solution here?

422 00:45:04.620 00:45:08.910 Uttam Kumaran: like that doesn’t seem like a well thought out solution

423 00:45:16.740 00:45:23.510 Uttam Kumaran: is there? Is there like you told me, there’s no other path forward, apart from throwing everything into context, like I find that hard to believe.

424 00:45:35.410 00:45:39.009 Miguel de Veyra: We could probably add some more filters. But yeah, let me think about that.

425 00:45:40.650 00:45:45.150 Uttam Kumaran: Okay, this is where I need you guys to like. If because if I go Google.

426 00:45:45.390 00:45:54.090 Uttam Kumaran: I'm telling you, if I Google how to RAG Slack messages behind an AI agent, I'm gonna find 4 or 5 techniques that work.

427 00:45:55.010 00:45:57.940 Uttam Kumaran: I’m just surprised that, like we haven’t googled this problem.

428 00:45:58.120 00:46:00.499 Uttam Kumaran: It doesn't seem like we've googled this problem at all.

429 00:46:00.930 00:46:05.890 Uttam Kumaran: Well, I mean, and I’ll do right like we could do it right now if you Google.

430 00:46:06.120 00:46:08.080 Uttam Kumaran: how do I use? How do I put a

431 00:46:08.210 00:46:21.149 Uttam Kumaran: ingest all my Slack messages into a Slack agent, you're gonna find a few retrieval methods that do message-based summarization, that help you find out whether the message is relevant to the question, and then it retrieves the message.
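One shape of the retrieval approach Uttam alludes to: score every message against the question and put only the top matches into context, rather than everything. Real implementations typically use embeddings or per-message summaries; plain word overlap stands in here to keep the sketch dependency-free:

```python
# Sketch of relevance-based retrieval over Slack history: rank messages
# by how much they overlap with the question, keep the top k, and only
# those go into the model's context. Scoring is a deliberate toy.
def score(question: str, message: str) -> int:
    """Count words the question and message share (stand-in for embeddings)."""
    return len(set(question.lower().split()) & set(message.lower().split()))

def retrieve(question: str, messages: list[str], k: int = 3) -> list[str]:
    """Return up to k messages most relevant to the question."""
    ranked = sorted(messages, key=lambda m: score(question, m), reverse=True)
    return [m for m in ranked[:k] if score(question, m) > 0]
```

The point of the technique is that context size stays bounded by `k` regardless of how many months of history are indexed.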

432 00:46:21.690 00:46:22.500 Uttam Kumaran: 5 Bill.

433 00:46:22.790 00:46:26.879 Uttam Kumaran: I can’t be the one here telling you that that’s the thing like I’m not even on the team.

434 00:46:27.660 00:46:34.609 Uttam Kumaran: So we need a path forward to more than 30 days of Slack messages. Like, this isn't fair, this isn't enough.

435 00:46:37.320 00:46:40.189 Uttam Kumaran: Like, using Contextual is not a solution.

436 00:46:40.640 00:46:45.620 Uttam Kumaran: That’s that’s that’s like, not a software we’re using for our stuff like that’s just for clients.

437 00:46:45.830 00:46:50.339 Uttam Kumaran: Second piece is throwing everything into context is also not a solution. None of those are like

438 00:46:50.730 00:46:52.710 Uttam Kumaran: great techniques for this problem.

439 00:46:53.610 00:46:59.410 Uttam Kumaran: So I need you guys on the AI team to go find a better solution for this that works,

440 00:47:00.480 00:47:10.319 Uttam Kumaran: because this, this is always gonna happen right? We’re always gonna get to this point. So I’m just surprised that we’re here, and there’s no path towards more than 30 days worth of slack messages like, what’s gonna happen.

441 00:47:10.640 00:47:13.110 Uttam Kumaran: you know. So yeah, that’s all from.

442 00:47:13.610 00:47:35.419 Uttam Kumaran: that's all from my side. So I feel like I need to know what's gonna happen there, or, like, this whole crew needs to know how we're gonna get there, because they need to ask questions. Everybody on the data side is gonna ask questions on the entirety of the data set. Every Github file, every Zoom Meeting we've had, every Slack message we've sent needs to be available for questioning.

443 00:47:35.530 00:47:37.729 Uttam Kumaran: So we need to find a way to enable that

444 00:47:38.390 00:47:43.680 Uttam Kumaran: cause. We’re basically at the end. Right? We’ve we’ve ingested all the data. We have all the filters.

445 00:47:44.260 00:47:47.899 Uttam Kumaran: So I think it’s on you guys to go figure out how to do this next piece.

446 00:47:51.200 00:47:52.189 Miguel de Veyra: Okay. Yeah.

447 00:47:55.420 00:47:56.110 Uttam Kumaran: Okay.

448 00:47:59.770 00:48:03.923 Caio Velasco: Okay, and just one final thing, just to recap, like, with Demilade, the

449 00:48:05.010 00:48:11.599 Caio Velasco: from our end. What we need is be for for the documentation part of the data platform thing

450 00:48:11.790 00:48:14.359 Caio Velasco: is to basically structure

451 00:48:14.840 00:48:23.120 Caio Velasco: in a way to avoid the things that would not be nice for the agent, like tables, whatever, and put it in Google Docs, maybe, rather than Notion,

452 00:48:23.320 00:48:26.360 Caio Velasco: If we if we can do that instead of notion.

453 00:48:26.600 00:48:36.909 Caio Velasco: is that basically it? I learned a lot today, for sure. But at the end of the day, our side is basically focusing on the source.

454 00:48:39.379 00:48:44.330 Demilade Agboola: Yeah, I think largely, what we just need to do is get the rules for notion.

455 00:48:44.580 00:48:47.510 Demilade Agboola: And then once we get the rules, we can create the templates.

456 00:48:47.670 00:48:56.750 Demilade Agboola: And once we're creating templates, we can then utilize them across multiple clients. So we can use Eden or UrbanStems, or, you know, whatever clients,

457 00:48:57.528 00:49:05.449 Demilade Agboola: and just Doc, like, populate the document templates based on you know, to different

458 00:49:06.210 00:49:19.561 Demilade Agboola: the different, like, bits of logic that we want to put in. So, okay, like, this is the template for this, and we add that information here, or we add the information to this page, and then we can tell the AI team that we have it ready for

459 00:49:20.260 00:49:28.539 Demilade Agboola: ingestion. And then we can also do the same with, like, the Repomix files, and just kind of see how the

460 00:49:29.050 00:49:33.070 Demilade Agboola: the agents can answer questions based off the information provided.

461 00:49:33.560 00:49:40.099 Demilade Agboola: So yeah, I think once we get the guidelines, we can just use that. So we can try and scale up what what we have

462 00:49:40.320 00:49:42.140 Demilade Agboola: across all the clients as well.

463 00:49:43.940 00:49:44.950 Caio Velasco: Perfect.

464 00:49:54.630 00:49:58.080 Caio Velasco: Okay, I think we’re good. Right?

465 00:49:59.040 00:50:05.200 Uttam Kumaran: Yeah, I mean, Miguel, Amber, this is your meeting. So it's like, what else do we have to talk about?

466 00:50:07.100 00:50:07.630 Miguel de Veyra: Oh, I see!

467 00:50:07.630 00:50:08.030 Amber Lin: Okay.

468 00:50:08.030 00:50:08.580 Miguel de Veyra: 15 months.

469 00:50:08.910 00:50:10.069 Miguel de Veyra: That's it from my end.

470 00:50:11.470 00:50:21.281 Amber Lin: Okay? I guess the one last thing I want to talk about is, what is the timeline that we expect on delivery? When are we gonna test a

471 00:50:21.890 00:50:31.569 Amber Lin: like a scrappy minimum viable product? And when do we expect this to be fully out? Because we're talking about great things. I just want to know when this gets done.

472 00:50:39.272 00:50:41.210 Miguel de Veyra: I'll speak to Casie about it.

473 00:50:43.220 00:50:52.190 Amber Lin: Do we have, like, a rough idea of when things would be workable? Like, what is the state right now? We do have a Javi agent that

474 00:50:52.310 00:50:54.639 Amber Lin: can answer some of the questions right.

475 00:50:55.650 00:51:00.799 Miguel de Veyra: Yeah, I mean, we’ve initialized the agents already, like most of it.

476 00:51:01.810 00:51:07.179 Miguel de Veyra: But yeah, I guess we'll just push through with what we have first.

477 00:51:07.840 00:51:14.559 Miguel de Veyra: And then, yeah, while we're working on a solution, let's have something out there first.

478 00:51:15.550 00:51:27.389 Uttam Kumaran: Can you set a deadline on coming back to this crew with that next solution? That's the question right now. It's okay if you don't have the answer, but, like, we wanna have deadlines,

479 00:51:27.750 00:51:38.890 Uttam Kumaran: because, again, the psychology is just, tasks take up the time that they're given. So I think we can just agree, like, what's the next timeline

480 00:51:39.300 00:51:41.110 Uttam Kumaran: for this crew to check in?

481 00:51:41.350 00:51:48.919 Uttam Kumaran: What should we expect from each side at that point. Right? So I’ve heard. I mean I joined the meeting late. I I know they have the guidelines.

482 00:51:49.030 00:51:50.609 Uttam Kumaran: We have the Slack thing,

483 00:51:50.770 00:51:55.880 Uttam Kumaran: so we can just go through the list of things we talked in this meeting, and and we want to put

484 00:51:56.570 00:51:57.910 Uttam Kumaran: timelines on those.

485 00:52:06.050 00:52:07.349 Uttam Kumaran: So does anyone. I mean.

486 00:52:07.350 00:52:07.890 Amber Lin: And.

487 00:52:08.130 00:52:12.809 Uttam Kumaran: I’m sure do like. Should I leave this meeting, or I mean, I’m just trying to ask questions, but.

488 00:52:12.810 00:52:14.940 Miguel de Veyra: Yeah. Yeah. Oh, my God, sorry.

489 00:52:15.620 00:52:16.020 Uttam Kumaran: Yeah.

490 00:52:16.020 00:52:22.189 Miguel de Veyra: Should we discuss that in this meeting or in the next meeting, for the AI stand-up? Because we already planned out basically this week,

491 00:52:22.440 00:52:23.629 Miguel de Veyra: cause I think that that’s.

492 00:52:23.630 00:52:25.070 Uttam Kumaran: Like. But in

493 00:52:25.760 00:52:43.670 Uttam Kumaran: we talked about 5 things, right? Like, the data team needs to work on the documentation for Javi. They need to test. They need the requirements for the guidelines, for the documentation. The AI team needs to enable Slack beyond 30 days and needs to have a path towards that.

494 00:52:43.950 00:52:48.960 Uttam Kumaran: Those are 3 things, right? Are there any other items beyond that?

495 00:52:50.920 00:52:52.090 Miguel de Veyra: I don’t think so.

496 00:52:52.310 00:52:54.880 Uttam Kumaran: Okay. So then, for those 3, can we each.

497 00:52:55.560 00:53:01.479 Uttam Kumaran: whoever is owning each of those items, just say when you can come back to the team with

498 00:53:01.760 00:53:05.530 Uttam Kumaran: a path forward on those or the deliverable.

499 00:53:08.220 00:53:13.590 Uttam Kumaran: So, Miguel, your thing is the Slack and the guidelines. So when can you have those?

500 00:53:15.200 00:53:22.039 Miguel de Veyra: The guidelines, I can send to Demilade, I think, by end of day.

501 00:53:22.770 00:53:23.430 Uttam Kumaran: Okay.

502 00:53:23.680 00:53:31.250 Miguel de Veyra: And then for the, what do you call this, for the new Slack thing, I'll run a spike,

503 00:53:32.480 00:53:35.420 Miguel de Veyra: and then we’ll come back with you.

504 00:53:36.340 00:53:41.200 Miguel de Veyra: We’ll come back to you and the team on the best path forward tomorrow.

505 00:53:42.540 00:53:43.100 Uttam Kumaran: Okay?

506 00:53:43.450 00:53:44.220 Uttam Kumaran: Right?

507 00:53:44.850 00:53:55.169 Uttam Kumaran: That's all I'm looking for, just ownership over your item. So the 3rd piece, I think, for Caio and Demilade, is the Javi documentation.

508 00:53:57.550 00:54:03.710 Uttam Kumaran: like, how do you guys feel about that? If you get the guidelines today? When do we think we can turn around a version of that.

509 00:54:03.850 00:54:08.419 Uttam Kumaran: so maybe everybody on this crew can review that.

510 00:54:11.440 00:54:17.459 Caio Velasco: So I think it would be. It would depend on agreeing what needs to be there.

511 00:54:17.600 00:54:22.410 Caio Velasco: Because, for example, like, if you ask something for the bot. It goes on slack, and it brings like

512 00:54:22.890 00:54:25.400 Caio Velasco: stuff from the past a lot of things.

513 00:54:25.740 00:54:28.779 Caio Velasco: but if we so it could be like a lot of things. But

514 00:54:28.910 00:54:33.070 Caio Velasco: if we think on a on a lot, on a business logic like.

515 00:54:33.420 00:54:39.450 Caio Velasco: what are the tables, what are the sources, what what are the important fields, or

516 00:54:39.680 00:54:46.839 Caio Velasco: and something has to be in there. And I think it could be so many things, so maybe defining that would be the first

517 00:54:47.180 00:54:48.060 Caio Velasco: step.

518 00:54:49.920 00:54:53.020 Caio Velasco: A couple of days, depends on Demilade as well,

519 00:54:53.856 00:55:00.920 Caio Velasco: Going through everything. And yeah, 2 days, maybe, or something like that, I think, would be at least the 1st draft.

520 00:55:01.920 00:55:07.320 Uttam Kumaran: Well, I know we had this roadmap, right? So are we. Is this roadmap like still accurate?

521 00:55:10.140 00:55:11.500 Uttam Kumaran: Like, is this?

522 00:55:13.310 00:55:17.390 Uttam Kumaran: Yeah? Does does this cover the plan, or

523 00:55:17.590 00:55:20.460 Uttam Kumaran: this is what I I thought we were sort of going off of.

524 00:55:22.660 00:55:28.250 Caio Velasco: Yeah, no, that's a good point. Because from the meeting, sorry, from the meeting,

525 00:55:28.880 00:55:32.310 Caio Velasco: Well, I understood that at the end that we just have to focus on on

526 00:55:33.250 00:55:40.260 Caio Velasco: putting content into Notion, or whatever document we decide.

527 00:55:42.230 00:55:47.780 Caio Velasco: So this roadmap would be something larger, like, including, you know, obviously, the

528 00:55:47.980 00:55:51.640 Caio Velasco: AI team developing their side, etc., and testing.

529 00:55:51.820 00:55:57.616 Caio Velasco: I think we can definitely still keep track of this.

530 00:55:59.820 00:56:07.579 Caio Velasco: Yeah. So let's say that we are in the 1st week, like, validating structure and getting feedback from people. That's what we are doing now.

531 00:56:08.279 00:56:12.269 Caio Velasco: And from next week I think we can have at least the documentation they need.

532 00:56:14.030 00:56:19.720 Uttam Kumaran: Okay, cool. So I think this is fair, like, I mean, again,

533 00:56:19.990 00:56:25.060 Uttam Kumaran: what goes into this document is everything you mentioned, which is like

534 00:56:25.809 00:56:29.199 Uttam Kumaran: right? Cause this piece is like

535 00:56:29.760 00:56:42.779 Uttam Kumaran: more of the documentation. This is more about, like, what the bot has, which the AI team is taking care of, I think. Ultimately, I sort of want to know what is actually in the document itself,

536 00:56:42.970 00:56:46.129 Uttam Kumaran: right? Like we previously had

537 00:56:47.520 00:56:51.709 Uttam Kumaran: like, in Notion we have, what's it called,

538 00:56:52.200 00:56:57.470 Uttam Kumaran: some data platform documentation already, right? So if I go to StackBlitz,

539 00:56:57.630 00:57:02.219 Uttam Kumaran: this is an example of an existing document

540 00:57:02.420 00:57:10.600 Uttam Kumaran: that we've written. But what I want to know is, what is the standardized format for these? Like, is this the format?

541 00:57:11.200 00:57:18.260 Uttam Kumaran: Okay, did we want to change that? Because we basically wanna like, have a structured format for documentation for every client.

542 00:57:18.460 00:57:19.260 Uttam Kumaran: you know.

543 00:57:21.440 00:57:26.200 Uttam Kumaran: So the way we did it here is we have FAQs,

544 00:57:26.410 00:57:33.340 Uttam Kumaran: technical FAQs and then model-level FAQs. And then we have a platform overview.

545 00:57:33.660 00:57:38.829 Uttam Kumaran: But I think what I'm looking for, before I can sort of give sign-off, is, like:

546 00:57:39.030 00:57:56.740 Uttam Kumaran: what is the structure of this document that we can then copy-paste for every client? Because the AI may be able to write 50% of this just based on all those sources, and then there may be another 50% that we have to write as humans, right? That's more like business understanding.

547 00:57:56.910 00:58:05.009 Uttam Kumaran: But without this document we don't have a structured way of templating this across clients.

548 00:58:07.250 00:58:08.370 Demilade Agboola: Exactly. Yeah.

549 00:58:09.940 00:58:11.120 Caio Velasco: Quietly, but.

550 00:58:11.800 00:58:15.390 Demilade Agboola: So that's part of why we want the AI to come up with, like,

551 00:58:15.890 00:58:18.849 Demilade Agboola: rules. And then we plan to create a template

552 00:58:19.300 00:58:25.229 Uttam Kumaran: on how the Notion document should be structured. And once that is structured, we can then

553 00:58:25.628 00:58:41.709 Demilade Agboola: use it across the different clients. That's part of what they said about Notion, like Notion having issues. So once we know what the issues are, we can then create the template that we can use. Because if you go further down this page, we have an idea of how the

554 00:58:41.930 00:58:54.680 Demilade Agboola: Notion should look, like the structure, but we will then have to use what they're saying to implement it in our templates, and then we can start to use that template across multiple clients.

555 00:58:55.350 00:58:58.889 Uttam Kumaran: Okay. So I think one thing that would be great is if we can just

556 00:58:59.070 00:59:05.470 Uttam Kumaran: like make sure that this task ends up in the roadmap

557 00:59:06.560 00:59:09.750 Uttam Kumaran: and then also ends up in Linear.

558 00:59:10.262 00:59:21.340 Uttam Kumaran: I also think that this document highlights not only the roadmap for the AI team, but also for the data platform. But I think it's mainly focused on

559 00:59:21.910 00:59:27.389 Uttam Kumaran: getting the bot to work, where I think one piece that's still important is

560 00:59:28.190 00:59:39.310 Uttam Kumaran: having the structured document. For example, if you think about all of the sources we have, right: we have GitHub, Notion, Linear. We have Zoom meetings. That may have 80%

561 00:59:39.700 00:59:45.549 Uttam Kumaran: of the context. And there may still be 20% that’s needed from like

562 00:59:45.930 00:59:52.530 Uttam Kumaran: this data platform documentation, right? So it becomes another source of information.

563 00:59:52.820 00:59:57.910 Uttam Kumaran: This is where I'm open: if you guys are like, hey, we don't even need the documentation because the bot

564 00:59:58.320 01:00:03.129 Uttam Kumaran: has everything, then that's actually a good pushback. At this point

565 01:00:03.290 01:00:14.350 Uttam Kumaran: I don't know whether all those sources cover everything about a client, so I think we still need this Notion document, because it has

566 01:00:14.580 01:00:23.370 Uttam Kumaran: some information, or it has information that maybe we've never discussed in a meeting. But, yeah, you're right, maybe it doesn't have to be this long. Or

567 01:00:23.540 01:00:33.859 Uttam Kumaran: it could be this long; that way, someone could ask the bot, or they can come read this. The other thing is the form factor: some people may be comfortable reading this, some people may be comfortable asking the bot.

568 01:00:34.200 01:00:37.519 Uttam Kumaran: The other thing is, we want to leave this for clients. So

569 01:00:37.910 01:00:50.250 Uttam Kumaran: maybe they want to have this, and the bot creates this. So I think this is still helpful to have in a written format. But I would like to see that added to this data platform documentation,

570 01:00:50.700 01:00:56.440 Uttam Kumaran: basically just see it here, where it's like, we have these key steps written down.

571 01:00:59.600 01:01:04.619 Uttam Kumaran: And for the data platform team, like, y'all don't need to own the,

572 01:01:04.810 01:01:09.009 Uttam Kumaran: basically, the prompt structure. I think a helpful separation is:

573 01:01:09.160 01:01:23.449 Uttam Kumaran: you guys should give criticism back on the bot, which is like, hey, the bot answered this question and it was wrong, or the format was too long, or too short, or too many emojis. That's the feedback that the AI team needs to fix their prompts.

574 01:01:23.760 01:01:36.769 Uttam Kumaran: And then for the AI team, I think the biggest thing is, we need to make sure that all of the data from all those sources is accessible through the bot: every Zoom meeting, every Linear ticket,

575 01:01:36.950 01:01:45.370 Uttam Kumaran: every Slack message, every GitHub file. That has to happen for this to work; otherwise there's no way this works.

576 01:01:46.160 01:01:52.560 Uttam Kumaran: So I know we wanted to push forward and just limit it to 30 days. But we're there, we finished that. So

577 01:01:53.240 01:02:06.000 Uttam Kumaran: now's the next thing. So that'd be my feedback overall. Cool, anything else before we hop?

578 01:02:09.323 01:02:17.339 Miguel de Veyra: Just a quick one. I just checked the pricing: if we put in the 200K context, it's like $0.10,

579 01:02:17.470 01:02:18.850 Miguel de Veyra: 10 cents.

580 01:02:19.650 01:02:20.959 Uttam Kumaran: I know dude, but like if.

581 01:02:20.960 01:02:23.159 Miguel de Veyra: Yeah, yeah, we’ll still, you know, we’ll still work on it.

582 01:02:23.160 01:02:25.310 Uttam Kumaran: This is what I’m saying. No, no, I just wanna like.

583 01:02:25.750 01:02:31.699 Uttam Kumaran: So you know as much as everyone on this call that throwing everything into context is not the right solution here.

584 01:02:32.610 01:02:41.480 Uttam Kumaran: because just because you need one Slack message doesn't mean you need 10 or 20,000 Slack messages, right?

585 01:02:41.770 01:02:49.489 Uttam Kumaran: I mean, other people can push back, but I don't feel like throwing everything into context is an appropriate solution for this.

586 01:02:49.840 01:02:57.979 Uttam Kumaran: It's not an elegant solution. I doubt it's gonna work, like, I doubt you're gonna actually be able to get the specific answer you need.

587 01:02:59.790 01:03:06.190 Uttam Kumaran: I don't know, this is lazy, like, we gotta think harder.

588 01:03:13.030 01:03:17.120 Uttam Kumaran: I mean, do you feel otherwise? I don't know, that's what I feel like.

589 01:03:17.120 01:03:20.279 Miguel de Veyra: No, no, yeah. Of course, 200K is too much context.

590 01:03:21.340 01:03:24.444 Uttam Kumaran: Yeah, it’s.

591 01:03:25.080 01:03:28.189 Amber Lin: It’s just like, yeah, it’s just it’s inefficient, you know.

592 01:03:28.330 01:03:29.020 Miguel de Veyra: Yep. Yep.

593 01:03:30.738 01:03:46.199 Amber Lin: We'll look into that, and then we'll definitely come up with a solution. Can we meet later this week? Seems like all the items that we want to do have a date of around 2 to 3 days.

594 01:03:46.420 01:03:50.169 Amber Lin: Is end of this week a good time to meet again?

595 01:03:50.910 01:03:57.899 Amber Lin: I don’t want us to have this go on for another week. I think we’re at the point where we can push a lot faster.

596 01:04:02.180 01:04:03.500 Uttam Kumaran: I mean I’ll be there.

597 01:04:05.950 01:04:16.270 Amber Lin: Do you guys, everyone who has a deliverable, think Thursday is too early, or should we do Friday?

598 01:04:18.930 01:04:20.700 Miguel de Veyra: I think Thursday should be fine.

599 01:04:22.000 01:04:25.080 Amber Lin: What about the data platform team? What do you think.

600 01:04:27.450 01:04:28.960 Demilade Agboola: I’m out of office Thursday.

601 01:04:31.800 01:04:35.604 Amber Lin: Oh, I remember, you asked for this.

602 01:04:36.080 01:04:36.680 Demilade Agboola: It’s not.

603 01:04:39.050 01:04:45.100 Amber Lin: Okay, okay. What about you, Kyle? Luke? Awaish?

604 01:04:46.350 01:04:50.080 Awaish Kumar: Yeah, I can. I’m available on Thursday as well.

605 01:04:50.800 01:04:54.660 Caio Velasco: I can work on this Wednesday and Thursday and

606 01:04:55.050 01:05:01.249 Caio Velasco: see what happens. I think we can, yeah, definitely have a draft. If it's not the best, at least we have something.

607 01:05:02.350 01:05:12.529 Amber Lin: Okay. And, Demilade, what tasks are you working on? And would you be able to have someone else represent you at the meeting as well,

608 01:05:12.960 01:05:15.660 Amber Lin: or just tell Kyle what you're working on?

609 01:05:16.440 01:05:24.150 Demilade Agboola: Yeah, I'm working with Kyle tomorrow on this, and then anything I have in mind, I'll let him know.

610 01:05:25.110 01:05:33.419 Amber Lin: Okay, fantastic. I will book a meeting for us on Thursday, then, so that will keep us on our toes, and we’ll get things done.

611 01:05:38.370 01:05:38.940 Uttam Kumaran: Okay.

612 01:05:38.940 01:05:39.610 Amber Lin: Awesome.

613 01:05:39.830 01:05:51.789 Uttam Kumaran: Thanks, everyone. Again, I think we're getting into the meat of this project. We're gonna have some of these tougher questions and design decisions to make. But I just need everyone to kinda

614 01:05:52.210 01:05:57.899 Uttam Kumaran: be very clear with what they need, and know 100% of what's going on. So, good questions today.

615 01:06:00.080 01:06:00.840 Uttam Kumaran: Thank you.

616 01:06:00.840 01:06:01.500 Caio Velasco: Great.

617 01:06:01.770 01:06:02.869 Caio Velasco: Thank you. Thank you, man.

618 01:06:03.100 01:06:04.910 Amber Lin: Awesome bye. Everyone.