Meeting Title: AI Team Standup Date: 2025-05-07 Meeting participants: Miguel De Veyra, Casie Aviles, Amber Lin


WEBVTT

1 00:00:45.360 00:00:46.650 Amber Lin: Alright!

2 00:00:49.110 00:00:50.350 Miguel de Veyra: Hello! Hello!

3 00:00:52.080 00:00:54.310 Amber Lin: Did you stay up until now?

4 00:00:55.260 00:00:58.589 Miguel de Veyra: No, no, I I think I only yeah

5 00:00:58.720 00:01:02.590 Miguel de Veyra: worked until like 4 AM yesterday. Then I just slept.

6 00:01:02.590 00:01:05.582 Amber Lin: Okay. Okay. Good. I was very worried.

7 00:01:06.310 00:01:07.060 Amber Lin: Cool.

8 00:01:09.040 00:01:13.480 Miguel de Veyra: But yeah, I’ll probably I’ll probably head out after.

9 00:01:14.070 00:01:14.450 Amber Lin: Yeah.

10 00:01:14.450 00:01:20.389 Miguel de Veyra: After our meeting with you I was just in a cold case you ate. You probably went to toilet or something.

11 00:01:20.850 00:01:21.250 Amber Lin: Okay.

12 00:01:22.180 00:01:23.090 Amber Lin: Hi.

13 00:01:28.090 00:01:46.480 Amber Lin: Okay. Awish is probably in the Eden standup; maybe we don’t need him to be here yet. Let’s just go through our AI tickets, and then let’s talk about ABC a little bit, because that has been a bit neglected.

14 00:01:46.990 00:01:50.099 Amber Lin: So let me pull up Linear.

15 00:01:53.840 00:01:54.810 Amber Lin: Hmm!

16 00:02:11.420 00:02:13.539 Amber Lin: No way. He didn’t have tickets.

17 00:02:13.920 00:02:15.990 Amber Lin: I I’ll go check.

18 00:02:17.480 00:02:19.400 Amber Lin: I’ll go check with that.

19 00:02:27.480 00:02:29.390 Amber Lin: Right, Casie, let’s start here.

20 00:02:31.420 00:02:34.940 Casie Aviles: Sure. So for AI-188.

21 00:02:35.520 00:02:43.209 Casie Aviles: So yesterday I’ve been looking into the possible approaches that we could implement for our AI.

22 00:02:43.610 00:02:48.949 Casie Aviles: So I think. And also I’ve been talking with Miguel, brainstorming, and

23 00:02:49.080 00:02:52.740 Casie Aviles: I think we’re going to just proceed with RAG.

24 00:02:53.646 00:02:55.230 Casie Aviles: I think that’s the most.

25 00:02:56.700 00:02:57.360 Miguel de Veyra: Articles.

26 00:02:58.230 00:03:00.080 Casie Aviles: Yeah, I guess. So.

27 00:03:00.820 00:03:05.469 Casie Aviles: Yeah, I did. I did flesh out that ticket you could take. You could also click there.

28 00:03:06.200 00:03:09.630 Casie Aviles: Yeah, I did add some stuff here.

29 00:03:12.150 00:03:15.499 Casie Aviles: There’s also this updated diagram.

30 00:03:21.470 00:03:28.329 Amber Lin: Oh, let me go! Do you wanna walk? Just walk us through it quickly. If you want to share screen.

31 00:03:28.760 00:03:30.590 Casie Aviles: Sure. Sure, I’ll I’ll share.

32 00:03:34.830 00:03:42.999 Casie Aviles: Okay. So before I think this was, this part was blank, mostly blank, and we just had this

33 00:03:44.805 00:03:46.869 Casie Aviles: extract, load, transform, part.

34 00:03:48.250 00:03:49.850 Casie Aviles: So we have this.

35 00:03:50.660 00:03:53.990 Casie Aviles: So I added this context, retrieval section here. And

36 00:03:54.720 00:04:03.499 Casie Aviles: I guess we just this is my proposal. So we have like A, we vectorize each

37 00:04:03.610 00:04:12.179 Casie Aviles: of the content. So we 1st transform the raw data. We want to transform them into something that is more useful for the AI,

38 00:04:12.750 00:04:15.261 Casie Aviles: and then basically, we have these,

39 00:04:16.459 00:04:19.340 Casie Aviles: what do you call this? The ideal

40 00:04:19.500 00:04:23.380 Casie Aviles: contents that should be vectorized. So this is how they should look like.

41 00:04:25.360 00:04:32.510 Casie Aviles: Yeah. And then I guess we just need some feedback here, probably from Utam, or

42 00:04:32.960 00:04:39.160 Casie Aviles: even maybe Awish, since he knows about the transforms part.

43 00:04:39.980 00:04:42.439 Casie Aviles: But this is our proposal on how we want to.

44 00:04:45.045 00:04:46.920 Casie Aviles: So how our data would

45 00:04:47.150 00:04:51.040 Casie Aviles: look like for the Supabase part.

46 00:04:52.690 00:04:56.089 Casie Aviles: So yeah, I’ve added a bunch of suggestions here, like.

47 00:04:57.120 00:05:04.770 Casie Aviles: for example, this attaching metadata cleaning transcripts and yeah, Demos.

48 00:05:04.770 00:05:06.800 Casie Aviles: So Miguel has included his own.

49 00:05:07.290 00:05:10.430 Casie Aviles: Input, but yeah, this is, I guess, the plan

50 00:05:10.930 00:05:14.710 Casie Aviles: that we have right now. And it’s not really.

51 00:05:14.710 00:05:24.780 Amber Lin: It will be really helpful to have him, say, help clean the transcripts, do the transformations. It’ll save you guys a lot of time if you just let him do that part.

52 00:05:26.020 00:05:33.199 Miguel de Veyra: Yeah, what’s but I think the best thing to do here is, since we’re we have cycles. Now.

53 00:05:33.420 00:05:33.830 Amber Lin: Hmm.

54 00:05:33.830 00:05:37.659 Miguel de Veyra: Which is 2 weeks. So what I talked to Casie about earlier was that

55 00:05:37.960 00:06:06.819 Miguel de Veyra: we could do, right? Because there’s no point in embedding each message. Cause we tried embedding all 4,000 messages, and some messages just said “deleted message,” and they were still embedded, which doesn’t make sense, right? They shouldn’t be embedded. So what I would suggest for that data is we have, like, a timeframe. Maybe it’s a week, maybe it’s per cycle, maybe 2 weeks, right? And then for that 2 weeks we’ll include all the messages

56 00:06:06.910 00:06:09.540 Miguel de Veyra: and then that’s what we will embed

57 00:06:09.900 00:06:18.309 Miguel de Veyra: That way, you know, there’s a lot of context, but not too much. I’m assuming there’s gonna be like 3 to 5 pages’ worth of context

58 00:06:19.300 00:06:26.289 Miguel de Veyra: of text, right? So then we can RAG it. There’s sufficient context

59 00:06:26.990 00:06:34.470 Miguel de Veyra: for RAG to make sense to use, and then we can also generate, not embeddings, sorry, metadata

60 00:06:34.810 00:06:42.610 Miguel de Veyra: for it, cause one-to-one, I think, is not really the best idea. But if it’s like, you know, 20 messages in the past 2 weeks, then

61 00:06:43.040 00:06:47.460 Miguel de Veyra: you know, then it makes sense. Yeah, at least for Slack.
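
The windowed chunking Miguel describes here, one chunk per fixed time window with junk messages dropped, instead of one embedding per message, could be sketched roughly like this. The message format, the 14-day default, and the "This message was deleted." marker are illustrative assumptions, not the team’s actual pipeline:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def chunk_by_window(messages, window_days=14):
    """Group Slack messages into fixed time windows and drop noise,
    so each window becomes one document to embed, rather than one
    embedding per individual message."""
    # Skip messages with no usable content, e.g. deletion tombstones.
    keep = [m for m in messages
            if m["text"].strip() and m["text"] != "This message was deleted."]
    if not keep:
        return []
    keep.sort(key=lambda m: m["ts"])
    start = keep[0]["ts"]
    window = timedelta(days=window_days)
    buckets = defaultdict(list)
    for m in keep:
        idx = (m["ts"] - start) // window   # which window this message falls in
        buckets[idx].append(m)
    chunks = []
    for idx in sorted(buckets):
        w_start = start + idx * window
        header = f"[{w_start.date()} to {(w_start + window).date()}]"
        body = "\n".join(f'{m["user"]}: {m["text"]}' for m in buckets[idx])
        chunks.append(header + "\n" + body)
    return chunks
```

Each chunk would then be embedded as a single document, giving the retriever conversation-level context rather than isolated one-line messages.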

62 00:06:48.235 00:06:49.010 Amber Lin: See.

63 00:06:50.750 00:07:10.159 Amber Lin: I think, depending on the client and how active they are, we might need to adjust the 2-week window, because some of the clients have a lot of stuff during 2 weeks, especially right now. Say, Urban Stem is in, like, a very intense Mother’s Day period, and so they have a lot of messages. So maybe, like.

64 00:07:10.460 00:07:18.499 Amber Lin: I do agree that, wait, like, what’s the problem if we have deleted messages? Like

65 00:07:18.750 00:07:21.620 Amber Lin: like, what’s the downside of

66 00:07:21.800 00:07:25.180 Amber Lin: vectorizing, like, each one of the messages.

67 00:07:25.180 00:07:28.640 Miguel de Veyra: No, we’ll just no. If we vectorize each of the messages.

68 00:07:29.610 00:07:32.440 Miguel de Veyra: It’s very expensive.

69 00:07:33.195 00:07:33.950 Casie Aviles: Another.

70 00:07:34.060 00:07:39.589 Casie Aviles: Yeah. Sorry. Another, I think drawback is that it’s not really complete. So the

71 00:07:39.810 00:07:42.933 Casie Aviles: so so sometimes like, for example,

72 00:07:43.830 00:07:53.849 Casie Aviles: if you can see here, the query over here on the left. So the query is “Snowflake structure” for Javi. And then what the RAG,

73 00:07:54.630 00:07:58.059 Casie Aviles: or basically what this retrieves is a question.

74 00:07:58.490 00:08:01.990 Casie Aviles: So it’s not really answering the query right?

75 00:08:01.990 00:08:03.110 Amber Lin: Oh!

76 00:08:03.420 00:08:09.077 Casie Aviles: So this is just an individual message sent by Robert, as you can see here, Robert.

77 00:08:09.650 00:08:20.749 Casie Aviles: and it’s not really relevant. So it doesn’t answer. So it’s it’s just another question. And sometimes there are some messages, individual messages that are just. Thank you. Or you know, just random.

78 00:08:20.750 00:08:23.379 Miguel de Veyra: Yeah, if I say “okay,” that’s still gonna be embedded.

79 00:08:23.380 00:08:34.850 Amber Lin: Oh, I see. So if the point is that we get surrounding messages around like a like a message to make sure that we actually cover something that makes sense.

80 00:08:35.080 00:08:40.379 Casie Aviles: Yes, yes, yeah. It’s going to be more rich in terms of information.

81 00:08:40.520 00:08:41.080 Casie Aviles: So.

82 00:08:41.409 00:08:46.350 Amber Lin: So right now, what it retrieve retrieves is not very long, is what I see.

83 00:08:46.350 00:08:52.029 Miguel de Veyra: Yeah, because right now, the way it’s embedded is one message, one embedding. But then

84 00:08:52.680 00:09:03.200 Miguel de Veyra: the problem is, it lacks context, right? Like that part. Maybe it’s like a longer conversation, right? Maybe the 1st message, for example, for that one

85 00:09:04.080 00:09:12.519 Miguel de Veyra: title of Snowflake, yada yada. And then he sent the code after, which didn’t include the word “snowflake.” Then we’re kind of

86 00:09:12.760 00:09:17.880 Miguel de Veyra: stuck after, because, you know, there’s no word for that, so the RAG won’t pick that up.

87 00:09:18.730 00:09:19.430 Amber Lin: And.

88 00:09:19.430 00:09:23.450 Miguel de Veyra: It’s only gonna pick up the word that you know the things that have words on them.

89 00:09:23.820 00:09:26.080 Amber Lin: I see cool.

90 00:09:26.080 00:09:28.790 Miguel de Veyra: But if we group them into one week.

91 00:09:29.180 00:09:35.829 Miguel de Veyra: then it makes sense. I think the only problem there will be. So. For example, I had applied this week.

92 00:09:36.540 00:09:40.659 Miguel de Veyra: and I requested it this week, but then the messages were sent, like,

93 00:09:41.040 00:09:43.579 Miguel de Veyra: the following week, or like 3 weeks after.

94 00:09:44.237 00:09:53.082 Amber Lin: Then it will just collect it from this. It will just have, like 2 weeks worth of messages, one from this week, and one from 3 weeks.

95 00:09:53.880 00:09:55.280 Amber Lin: Okay? I think that.

96 00:09:55.410 00:09:56.460 Miguel de Veyra: But yeah, I think that

97 00:09:56.460 00:10:08.020 Miguel de Veyra: that’s the best. I think. Honestly, I think that’s the only way we could approach it right now. We could use context, but doesn’t want that. So that’s probably our best.

98 00:10:08.980 00:10:13.529 Miguel de Veyra: our best case scenario, although that will take some time.

99 00:10:14.690 00:10:18.035 Amber Lin: Oh, okay. So

100 00:10:19.800 00:10:28.919 Amber Lin: let’s say, if we wanna do like date span of one week, right? How long would that take to get done?

101 00:10:29.379 00:10:32.450 Miguel de Veyra: Cause we’ll forward it to Awish, so we’ll have to bring him in.

102 00:10:32.450 00:10:33.270 Amber Lin: Oh, okay.

103 00:10:33.850 00:10:38.140 Casie Aviles: I think we’ll ask Awish’s help for that, since that’s a transformation step.

104 00:10:38.920 00:10:39.700 Amber Lin: Sure.

105 00:10:39.890 00:10:47.850 Miguel de Veyra: Yeah, because basically, the thing we need to do here is we need to transform the raw Supabase data into

106 00:10:48.130 00:11:02.070 Miguel de Veyra: what do you call this, into basically the chunk, the Supabase data for the 2 weeks. And then after that we’ll do the embedding. The embedding we can do pretty easily, no, Casie?

107 00:11:03.167 00:11:09.929 Casie Aviles: Yeah, it should be doable. I already did I? I know it’s not part of the ticket, but I already did some.

108 00:11:10.090 00:11:21.650 Miguel de Veyra: Yeah, yeah. So that’s probably gonna be like a 5 day. Because, of course, we have to add metadata and stuff just to make it, you know, a bit more accurate.

109 00:11:21.870 00:11:28.120 Miguel de Veyra: So it’s gonna be like a 5 to 8 points, maybe one for embedding, one for metadata filtering Yada Yada.

110 00:11:28.640 00:11:31.430 Amber Lin: Let’s write that. Let’s write that ticket.

111 00:11:33.280 00:11:40.169 Miguel de Veyra: And just add it to the discovery first. Just add it there to the, what do you call it.

112 00:11:41.000 00:11:42.899 Casie Aviles: Where do you want me to add? Sorry.

113 00:11:43.331 00:11:45.489 Miguel de Veyra: The current ticket where it’s

114 00:11:45.920 00:11:48.310 Miguel de Veyra: but then, what’s the name of that ticket?

115 00:11:48.510 00:11:50.969 Miguel de Veyra: The one where you put the reviews?

116 00:11:52.070 00:11:52.980 Miguel de Veyra: Oh, Spike.

117 00:11:52.980 00:11:54.889 Amber Lin: I can share. I can share screen.

118 00:11:55.330 00:11:57.069 Amber Lin: And we can go write that.

119 00:11:57.310 00:12:00.660 Amber Lin: Yeah, yeah, let’s just add it there. So we can decide later on.

120 00:12:02.950 00:12:06.710 Miguel de Veyra: Let’s not create a ticket first, Amber, because we still need to discuss it with Utam.

121 00:12:07.450 00:12:11.500 Amber Lin: Hmm, okay, it’s scary.

122 00:12:11.500 00:12:14.240 Miguel de Veyra: Casie’s ticket, SPIKE-168.

123 00:12:14.780 00:12:15.380 Amber Lin: Hmm!

124 00:12:31.240 00:12:35.169 Amber Lin: Is he gonna help with adding, we’re gonna do adding metadata right?

125 00:12:38.810 00:12:43.099 Amber Lin: It’s going to help with transformations. What else?

126 00:12:43.880 00:12:45.660 Amber Lin: Supabase embeddings.

127 00:12:45.820 00:12:48.620 Miguel de Veyra: No, no, Supabase embeddings, me and Casie.

128 00:12:55.320 00:12:58.380 Amber Lin: And let’s see

129 00:13:01.740 00:13:03.520 Amber Lin: metadata.

130 00:13:09.520 00:13:13.480 Amber Lin: What’s left after that like once we do this, what do we.

131 00:13:13.480 00:13:15.119 Miguel de Veyra: Implementation to the agent.

132 00:13:16.670 00:13:19.519 Miguel de Veyra: But that shouldn’t take long. That’s a 2 point thing.

133 00:13:20.000 00:13:22.110 Casie Aviles: Yeah. The next part is to

134 00:13:22.290 00:13:27.230 Casie Aviles: have the agent be able to actually get those data.

135 00:13:28.090 00:13:29.809 Miguel de Veyra: Isn’t that implementation to the agent?

136 00:13:30.280 00:13:33.260 Casie Aviles: Yeah, yeah. That’s also that. That’s what I was talking about. Sorry.

137 00:13:33.575 00:13:33.890 Miguel de Veyra: Okay.

138 00:13:34.890 00:13:38.320 Amber Lin: So his transformations. I don’t know how long it’s gonna take him.

139 00:13:38.320 00:13:39.749 Miguel de Veyra: It’s probably a second day.

140 00:13:39.750 00:13:40.330 Amber Lin: And.

141 00:13:40.700 00:13:46.499 Casie Aviles: I think he can. It should be a he should be able to do it within a day. But that’s my perspective.

142 00:13:47.710 00:13:48.110 Amber Lin: You too, bye.

143 00:13:48.340 00:13:52.589 Miguel de Veyra: Honestly, we could do this, but we have. We’re bombarded.

144 00:13:52.590 00:14:00.689 Amber Lin: Okay, like, we have other work. He’ll be the fastest at this, and he’s already, like, caught up with Supabase and everything, so it doesn’t take

145 00:14:00.900 00:14:02.679 Amber Lin: him to learn again.

146 00:14:04.270 00:14:07.740 Amber Lin: What about what about these? What are the estimates?

147 00:14:08.940 00:14:09.970 Amber Lin: I just rough.

148 00:14:09.970 00:14:22.899 Miguel de Veyra: Embeddings is 5, cause it takes a lot of time, and then it depends per client. This could be 5 for Javi, for the longer clients; this could be 3 for, like, shorter clients.

149 00:14:25.530 00:14:28.930 Casie Aviles: So the the points are per client.

150 00:14:29.380 00:14:31.719 Casie Aviles: not like the entire thing, right.

151 00:14:31.860 00:14:33.079 Miguel de Veyra: Yeah, yeah, yeah.

152 00:14:33.200 00:14:33.700 Casie Aviles: Okay.

153 00:14:33.700 00:14:37.139 Amber Lin: The transformations for Awish, is that, like, an entire thing?

154 00:14:37.720 00:14:38.620 Miguel de Veyra: Hmm!

155 00:14:39.730 00:14:43.169 Miguel de Veyra: No, no, cause he has to do that separately for each client.

156 00:14:44.570 00:14:53.660 Amber Lin: Oh, isn’t it, like, I mean, if it’s transformation, how does each client differ?

157 00:14:53.860 00:14:54.450 Amber Lin: And so.

158 00:14:54.450 00:14:59.579 Miguel de Veyra: Once he has something. Once he has a template, he should be able to just plug and play it.

159 00:14:59.910 00:15:07.099 Amber Lin: Okay, then, I’ll say keep 5 points. I’ll let him figure it out. And adding metadata.

160 00:15:09.220 00:15:12.599 Miguel de Veyra: But the metadata should be, I don’t know, Casie, 5? 3 points?

161 00:15:15.130 00:15:16.850 Casie Aviles: I’m not sure, like.

162 00:15:17.430 00:15:27.210 Casie Aviles: what exactly do we need to do here on our side? Because, if I understand correctly, the raw data that Awish has already has the metadata included, right?

163 00:15:28.110 00:15:33.530 Casie Aviles: So all he has to do is like, include it to the

164 00:15:33.650 00:15:38.060 Casie Aviles: the the actual text that we want to vectorize.

165 00:15:38.910 00:15:43.799 Miguel de Veyra: Oh, oh, yeah, yeah, that makes sense. So we probably don’t need this, do you think?

166 00:15:43.920 00:15:44.959 Miguel de Veyra: Or let me check.

167 00:15:44.960 00:15:45.460 Amber Lin: For him.

168 00:15:45.460 00:15:49.199 Casie Aviles: I think it’s just going to be, you know, part of the transformation.

169 00:15:49.500 00:15:50.090 Casie Aviles: So like.

170 00:15:50.090 00:15:52.549 Miguel de Veyra: There’s no metadata, there’s no metadata.

171 00:15:53.620 00:15:55.149 Casie Aviles: No metadata. For what?

172 00:15:55.520 00:15:57.780 Miguel de Veyra: For the Slack, the Javi messages,

173 00:16:00.150 00:16:02.789 Miguel de Veyra: cause I think metadata has to be AI generated.

174 00:16:04.480 00:16:07.889 Miguel de Veyra: or, you know, we could do something about it. But currently there is none.

175 00:16:09.680 00:16:17.830 Casie Aviles: Yeah, I mean, my understanding of metadata is just, you know, channel IDs, user IDs, timestamps.

176 00:16:18.340 00:16:23.600 Miguel de Veyra: Tags on what the core messages are; keywords, basically, should be in metadata.

177 00:16:24.260 00:16:27.239 Casie Aviles: Okay, so we have different definitions of metadata. Sorry.

178 00:16:27.860 00:16:31.510 Miguel de Veyra: Yeah, no worries cause what cause? What we can do there is

179 00:16:31.710 00:16:48.070 Miguel de Veyra: for the implementation to the agent, we can add tags, basically, so it doesn’t have to run RAG on the entire thing. So, for example, Snowflake-related ones. Maybe, like, you know, half of the time they didn’t really talk about Snowflake, so the agent doesn’t have to run RAG on those.

180 00:16:53.160 00:16:57.459 Miguel de Veyra: But yeah, I will double check that if that’s accurate, because that’s how I understood it.

181 00:16:57.880 00:17:00.959 Casie Aviles: Okay, yeah, I’m not sure of that part yet. So.
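
The keyword-tagging idea Miguel floats here, filter on metadata first so RAG only scores relevant chunks, might look something like this sketch. The `tags` field and the word-overlap scorer are stand-ins; a real version would filter the same way but score with the stored embeddings:

```python
def retrieve_with_tags(query, query_tags, chunks, top_k=2):
    """Metadata pre-filter before retrieval: only chunks tagged with at
    least one of the query's keywords get scored, so retrieval never
    runs over the whole corpus. Word overlap stands in for the real
    embedding cosine similarity."""
    candidates = [c for c in chunks if set(c["tags"]) & set(query_tags)]
    q_words = set(query.lower().split())
    def score(c):
        # toy relevance: count of query words appearing in the chunk
        return len(set(c["text"].lower().split()) & q_words)
    return sorted(candidates, key=score, reverse=True)[:top_k]
```

The design point is the two-stage shape: a cheap exact-match filter on tags narrows the candidate set before the expensive similarity step ever runs.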

182 00:17:01.440 00:17:08.040 Amber Lin: Okay, and both of these are blocked until Awish does his transformations, right?

183 00:17:08.950 00:17:12.509 Miguel de Veyra: I mean, it’s blocked; well, it’s blocked until we decide with Utam.

184 00:17:12.819 00:17:13.359 Miguel de Veyra: So yeah.

185 00:17:13.369 00:17:19.639 Amber Lin: Okay. So once we decide with Utam, we’re waiting on Awish. So we can do something else while we’re waiting for him.

186 00:17:19.640 00:17:20.380 Miguel de Veyra: Yeah.

187 00:17:21.006 00:17:24.210 Amber Lin: Sounds good. Okay, we’ll confirm what we found them.

188 00:17:25.613 00:17:30.760 Amber Lin: But I know, Miguel, your day is ending after that meeting. Casie, what about you?

189 00:17:32.798 00:17:37.750 Casie Aviles: Yeah, I’m gonna keep working because I’ve started just an hour ago

190 00:17:38.680 00:17:40.630 Casie Aviles: or 2 h ago. Sorry, I mean.

191 00:17:41.083 00:17:44.710 Miguel de Veyra: Casie, sorry, one thing. Can you add the

192 00:17:45.140 00:17:48.699 Miguel de Veyra: can you add the Slack bot for, I think, Pool Parts?

193 00:17:49.470 00:17:50.090 Amber Lin: Oh!

194 00:17:53.020 00:17:55.349 Casie Aviles: Oh, we don’t have one yet for Pool Parts.

195 00:17:55.350 00:17:59.270 Miguel de Veyra: I don’t think so. Let me check. I don’t think so at Pool Parts. No.

196 00:18:00.310 00:18:02.930 Miguel de Veyra: because we couldn’t get all one stamps. And right here, right.

197 00:18:04.070 00:18:06.120 Casie Aviles: Oh, so you just want the Slack bot, right?

198 00:18:06.120 00:18:07.659 Miguel de Veyra: Yeah, yeah, just the Slack one.

199 00:18:08.970 00:18:10.270 Miguel de Veyra: The agent is already done.

200 00:18:12.680 00:18:18.440 Miguel de Veyra: I’ll just switch around some context, and we can, I think, deploy it to Pool Parts.

201 00:18:19.210 00:18:26.409 Casie Aviles: But yeah, all of just to be clear, I guess all of these existing agents are still using the context approach. So.

202 00:18:27.580 00:18:30.740 Miguel de Veyra: Yeah, yeah, yeah, fundamentally.

203 00:18:32.170 00:18:35.120 Amber Lin: Cool. So Pool Parts, I’d say, is in testing.

204 00:18:37.130 00:18:38.710 Amber Lin: We’re still in progress.

205 00:18:38.710 00:18:41.460 Miguel de Veyra: Escalation. I think I moved it to escalation.

206 00:18:42.430 00:18:44.830 Amber Lin: For Urban Stem or Pool Parts?

207 00:18:45.410 00:18:46.550 Miguel de Veyra: Pool Parts.

208 00:18:47.340 00:18:48.490 Amber Lin: Oh, okay.

209 00:18:49.230 00:18:51.400 Miguel de Veyra: Yeah. And then Urban Stem should be in review.

210 00:18:51.920 00:19:02.249 Amber Lin: Oh, awesome. And in review, Notion via n8n.

211 00:19:02.920 00:19:15.570 Amber Lin: Oh, awesome if we did the sample, then we know that it works right. So we’ll just say that this is this is done, and then we’ll make another ticket. Once Demoda sends his stuff.

212 00:19:18.072 00:19:19.159 Miguel de Veyra: Yeah. Yeah. Yeah.

213 00:19:19.430 00:19:21.309 Amber Lin: Yeah, sounds good.

214 00:19:21.900 00:19:23.250 Amber Lin: And

215 00:19:26.520 00:19:35.849 Amber Lin: Okay. And what about Pool Parts? It’s escalated? Okay, what’s left is just, we just need to.

216 00:19:35.850 00:19:36.760 Miguel de Veyra: The Slack bot.

217 00:19:37.410 00:19:38.060 Amber Lin: Oh.

218 00:19:45.740 00:19:46.620 Amber Lin: sounds good.

219 00:19:51.400 00:19:59.229 Amber Lin: Okay? So it seems like it’s mostly getting Awish to do the transformations, and then you guys will be unblocked.

220 00:19:59.927 00:20:03.829 Amber Lin: At the same time I’ll get to

221 00:20:04.170 00:20:09.830 Amber Lin: let me get to Kyle and Demoda. What do we need from them? Let me just ping them right now.

222 00:20:10.545 00:20:11.969 Miguel de Veyra: The documentation stuff.

223 00:20:12.600 00:20:13.020 Amber Lin: Okay,

224 00:20:14.430 00:20:16.189 Miguel de Veyra: I don’t think they’re gonna finish it this week.

225 00:20:17.860 00:20:21.239 Amber Lin: We want something right? Like.

226 00:20:49.860 00:20:54.170 Amber Lin: so just documentation for traffic to clarify.

227 00:20:54.400 00:20:55.140 Amber Lin: Okay.

228 00:21:01.450 00:21:04.320 Amber Lin: say, we figured out the notion part

229 00:21:25.510 00:21:29.730 Amber Lin: alright. So we’ll just check it with Tom. Let’s talk about

230 00:21:31.166 00:21:36.400 Amber Lin: the other ones, for now I know today, Casey, we have the.

231 00:21:36.740 00:21:39.410 Amber Lin: We have the meeting with off the record.

232 00:21:40.160 00:21:40.530 Casie Aviles: Okay.

233 00:21:41.450 00:21:47.730 Amber Lin: So I don’t think there’s immediate work.

234 00:21:48.440 00:22:00.289 Amber Lin: Let’s do like today. We probably have some ABC work which is good, because then we’re also a little bit stuck on other things. Let’s

235 00:22:00.930 00:22:07.249 Amber Lin: let me schedule, Miguel, let me schedule the meeting between us and Utam, like, right after this.

236 00:22:32.260 00:22:33.020 Amber Lin: Okay.

237 00:22:36.070 00:22:41.079 Amber Lin: let’s decide on what we want to do for the trainer. Bot.

238 00:22:41.250 00:22:45.509 Amber Lin: So I don’t know if you saw

239 00:22:46.130 00:22:50.110 Amber Lin: Scott’s message in ABC last time.

240 00:22:52.380 00:22:56.510 Amber Lin: Ultimately, we want to.

241 00:22:56.730 00:22:58.609 Amber Lin: How? How do I put this?

242 00:23:00.260 00:23:04.140 Amber Lin: So we were talking about a knowledge base

243 00:23:04.420 00:23:25.939 Amber Lin: system with Scott. Because currently a lot of their ABC messages are very scattered, and there’s no logic, or, say, schema to all of their messages. And kind of what we were talking about is to

244 00:23:27.320 00:23:30.879 Amber Lin: run, say, 1st step, run the Central Doc

245 00:23:31.010 00:23:51.689 Amber Lin: through AI to see, hey, based on this, what is a common structure that you see, right? And the second step would be, say, based on that, can you suggest any parts of this document that are incomplete? And then we take the parts that are incomplete, and we go to, like,

246 00:23:51.810 00:24:01.940 Amber Lin: Janice and Yvette and say, hey, you guys should complete this, to make sure that it’s, like, a well-rounded document.

247 00:24:04.520 00:24:06.580 Amber Lin: And then

248 00:24:07.858 00:24:19.250 Amber Lin: later on, after that, based on that criteria of what are some basic fields that we need to fill out, when the trainer

249 00:24:19.730 00:24:28.870 Amber Lin: wants to put in new information, we make them fulfill all those different fields. It’s kind of. I believe it’s kind of like a

250 00:24:29.630 00:24:36.410 Amber Lin: schema, a schema of the document, if I can describe it correctly that way.

251 00:24:36.620 00:24:38.970 Amber Lin: Does that make sense to you guys?

252 00:24:49.070 00:24:53.660 Casie Aviles: Not exactly. I’m still. I’m still trying to visualize it.

253 00:24:53.660 00:24:57.779 Miguel de Veyra: Are we? Wait? Are we talking about moving their Doc from Google Doc to somewhere else?

254 00:24:58.130 00:25:03.869 Amber Lin: No, no, so how do I? How do I best explain this?

255 00:25:04.520 00:25:11.970 Amber Lin: So let me find the ABC posts.

256 00:25:18.490 00:25:25.329 Amber Lin: Okay, let’s just go look at what Scott shared. It’s it makes a bit more sense because he did some examples.

257 00:25:26.040 00:25:34.750 Amber Lin: I’m not saying we’re gonna do exactly what he did. But to just to give you guys some ideas of what we were talking about in that meeting.

258 00:25:35.450 00:25:44.080 Amber Lin: So, 1st of all, he took it through the prompt, da-da-da-da.

259 00:25:44.950 00:25:59.250 Amber Lin: So kind of knowledge engineering, just like creating a knowledge architecture for the very long Central Doc. It’s like 200 pages now.

260 00:26:00.060 00:26:08.770 Amber Lin: And kind of this was the example structure that it came up with.

261 00:26:10.231 00:26:13.290 Amber Lin: there’s stuff about service domains.

262 00:26:13.940 00:26:19.370 Amber Lin: There’s like treatment characteristics stuff like that.

263 00:26:19.760 00:26:25.170 Amber Lin: So this is where the knowledge structure that we’re talking about

264 00:26:27.260 00:26:31.710 Amber Lin: Ignore that, I think that diagram is really shitty. And then,

265 00:26:32.030 00:26:43.649 Amber Lin: so after that. Does this make sense? Like, to understand what structure exists in the currently very unstructured document.

266 00:26:45.890 00:26:50.210 Casie Aviles: It’s really just how like the bot helping them format

267 00:26:50.550 00:26:53.729 Casie Aviles: the document, or like the knowledge.

268 00:26:55.557 00:27:00.692 Amber Lin: I think this is even earlier than helping them format things. So.

269 00:27:01.120 00:27:06.649 Miguel de Veyra: It’s just a guideline, I believe, for the bot to know what the ideal structure is. Right.

270 00:27:07.540 00:27:30.129 Amber Lin: Sort of, yeah. And it helps us organize. It’s like organizing a library, right? So right now in the Central Doc it’s just a pile of random books. We’re not organizing by, like, service type; we’re not organizing by necessary steps. So this is more about general organization

271 00:27:30.290 00:27:36.879 Amber Lin: of how we categorize the different knowledge in that Central Doc, because it’s a mess.

272 00:27:37.200 00:27:45.230 Amber Lin: So right now it’s like, they just spit out what’s on their mind. They just keep talking, and then that’s the document.

273 00:27:45.480 00:27:51.190 Amber Lin: So we kind of want to give them a bit more structure. So this is kind of what this is, for

274 00:27:51.880 00:27:53.380 Amber Lin: if that makes sense.

275 00:27:57.080 00:27:57.740 Casie Aviles: Okay.

276 00:27:58.540 00:27:59.210 Amber Lin: Yeah.

277 00:28:00.490 00:28:14.810 Amber Lin: And so once we have, say, like, an ideal structure, which kinda is what we’re talking about, not only for organization but also, say, within each message, then we can say,

278 00:28:14.910 00:28:22.970 Amber Lin: okay, review the original Doc, apply that structure, and tell me

279 00:28:24.070 00:28:27.549 Amber Lin: like, which ones are not complete

280 00:28:28.120 00:28:33.350 Amber Lin: right? And then we can get a list of potential gaps.

281 00:28:34.180 00:28:35.390 Amber Lin: All right.

282 00:28:35.390 00:28:39.099 Miguel de Veyra: For this one, review the entire original Doc?

283 00:28:39.920 00:28:50.960 Amber Lin: I think so, because step one is to figure out structure; step 2 is to figure out, within the current 200-page Central Doc, what’s

284 00:28:51.350 00:28:58.690 Amber Lin: not ideal yet. So what’s bad and what needs fixing.

285 00:29:02.670 00:29:05.359 Casie Aviles: Okay, so it’s like reviewing the doc.

286 00:29:05.520 00:29:06.110 Casie Aviles: Yeah.

287 00:29:06.110 00:29:08.370 Miguel de Veyra: Yeah, it has to go through the doc

288 00:29:08.980 00:29:16.810 Miguel de Veyra: one by one. So we have to divide it by section, because we can’t feed it the entire thing. I don’t think so, right, Casie?

289 00:29:18.280 00:29:21.210 Casie Aviles: Oh, you mean about, like, feeding it the entire Central Doc.

290 00:29:21.400 00:29:30.819 Miguel de Veyra: No, no, we can feed it the entire Central Doc. But then, hey, analyze this one by one, chapter by chapter, and then check if it’s formatted, and if it’s not.

291 00:29:31.300 00:29:36.870 Miguel de Veyra: But I don’t think that’s gonna be accurate.

292 00:29:37.440 00:29:38.150 Amber Lin: Hmm.

293 00:29:41.590 00:29:42.420 Amber Lin: yeah.

294 00:29:42.990 00:29:52.869 Amber Lin: I think, because this is just a 1-time thing, right? We’re just gonna run through it once. So it’s okay if we have to chunk it.

295 00:29:54.390 00:30:03.880 Amber Lin: Like, this doesn’t need to be the most automated, because ideally, if we fix the current ones, then everything else we need to fix will be, like, new,

296 00:30:04.030 00:30:09.619 Amber Lin: one. Say one message, one idea they wanna add in. So we don’t have to do this again.

297 00:30:10.250 00:30:15.320 Miguel de Veyra: Yeah, this, the the one for.

298 00:30:16.240 00:30:29.569 Miguel de Veyra: The suggestion for feedback is already working, we have that. I mean, sorry, the one for ideal structure, we have that already. But I’m not sure how; it’s probably possible.

299 00:30:29.960 00:30:34.659 Miguel de Veyra: I’m just not sure how that will work out yet. I think we have to look into.

300 00:30:35.590 00:30:36.000 Amber Lin: Yeah.

301 00:30:36.000 00:30:41.979 Miguel de Veyra: Or we have to do it manually, like chapter by chapter. If there’s like 40 chapters, we could just run it individually.

302 00:30:43.430 00:30:46.480 Amber Lin: Sure, it might take some time, but it helps.

303 00:30:46.640 00:30:50.310 Amber Lin: It'll help set the general structure.
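The chapter-by-chapter pass Miguel describes, feeding the Central Doc to a model one section at a time rather than all 200 pages at once, could be sketched roughly like this. The heading pattern, prompt wording, and function names are illustrative assumptions, not the team's actual tooling; the review function stands in for whatever LLM call they end up using.

```python
import re

def split_by_chapter(doc):
    """Split the doc into chapters on 'Chapter N' headings.

    The heading pattern is an assumption; the real doc's headings may differ.
    """
    parts = re.split(r"(?m)^(?=Chapter \d+)", doc)
    return [p.strip() for p in parts if p.strip()]

def review_doc(doc, review_fn):
    """Run a review function (e.g. an LLM call) over each chapter
    individually, since one pass over the whole doc tends to be inaccurate."""
    feedback = []
    for chapter in split_by_chapter(doc):
        feedback.append(review_fn(
            "Check if this chapter is formatted correctly; "
            "point out anything that needs fixing:\n\n" + chapter
        ))
    return feedback
```

With, say, 40 chapters this is just 40 independent calls, which fits Amber's point that a one-time manual chunked run is fine.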

304 00:30:50.310 00:30:53.439 Miguel de Veyra: Is that gonna be us, or is that gonna be them?

305 00:30:54.370 00:30:59.180 Amber Lin: Well, if we give them like a

306 00:30:59.810 00:31:03.049 Amber Lin: bot that does that, it could be done.

307 00:31:03.350 00:31:09.179 Amber Lin: If we don’t give them a bot that does that then it will have to be us.

308 00:31:10.840 00:31:14.660 Miguel de Veyra: The bot that we currently have. The one I sent, I think, does this already.

309 00:31:16.098 00:31:40.369 Amber Lin: Well, does it? Because there's two structures we're talking about, right? There's structure and there's formatting. So I think the bot we have now kind of goes over formatting, like, is this the most bot-friendly way? And right now we don't have a structure, so the first step is actually coming up with a structure.

310 00:31:40.750 00:31:48.500 Amber Lin: And then once we come up with that we can confirm with their team, and then

311 00:31:48.880 00:31:57.130 Amber Lin: sort of edit both the formatting and like fill out any missing structures in the current document.

312 00:32:02.120 00:32:09.089 Miguel de Veyra: Okay, yeah. I mean, if it's not easy, then it makes sense. Yeah, I'll agree to it.

313 00:32:09.600 00:32:10.500 Amber Lin: Hmm sorry.

314 00:32:10.500 00:32:11.790 Miguel de Veyra: Both have agreed to it.

315 00:32:12.819 00:32:21.450 Amber Lin: No, no, this is Scott like we were talking about like, I don’t think Utah has much capacity to do any engineering work anymore.

316 00:32:22.308 00:32:28.349 Miguel de Veyra: No, no, I'll ask about this, if this is the way we should do it.

317 00:32:28.660 00:32:35.600 Amber Lin: Yeah, well, we'll talk about it. I think this is what they shared in our meeting,

318 00:32:36.000 00:32:38.220 Amber Lin: of this is a part of how

319 00:32:38.330 00:32:42.290 Amber Lin: how knowledge bases can be organized.

320 00:32:42.808 00:32:51.599 Amber Lin: There are other parts that we can just search on Google that are very specific to customer service center knowledge bases.

321 00:32:52.870 00:33:10.690 Amber Lin: So since we're talking, we'll also talk about this. But Casie, probably later today or tomorrow, after our meeting, we have to do a little bit of work for ABC. We just haven't decided on what we're gonna work on.

322 00:33:13.270 00:33:13.980 Amber Lin: Yeah, anyway.

323 00:33:13.980 00:33:14.380 Casie Aviles: Okay.

324 00:33:14.380 00:33:21.279 Amber Lin: That's the first part. And then this is like additional steps. This is more on the bot. So like,

325 00:33:21.860 00:33:31.440 Amber Lin: right now, we have one that helps with the formatting right? And then we kind of also want when we generate new ideas.

326 00:33:33.160 00:33:46.269 Amber Lin: so the trainer bot can gather new information from subject matter experts. Like, say, Denise wants to update it, the trainer bot will ask different questions to

327 00:33:47.108 00:33:49.149 Amber Lin: figure out what it is.

328 00:33:49.460 00:33:52.129 Amber Lin: Right? So maybe I can ask

329 00:33:52.510 00:34:02.520 Amber Lin: what it is, when and why it applies, how to communicate it clearly, making sure that everything follows this structure, which

330 00:34:02.950 00:34:07.160 Amber Lin: is somewhat what we determined here originally.

331 00:34:07.721 00:34:14.200 Amber Lin: All of that structure. And this is a sample example of the conversation, of,

332 00:34:16.710 00:34:20.370 Amber Lin: "What do you want to add?", then the answer, and it may organize that,

333 00:34:20.710 00:34:24.400 Amber Lin: when it applies, and then "Understood."

334 00:34:25.159 00:34:28.879 Amber Lin: So this is kind of like the conversation flow we're looking for.

335 00:34:30.570 00:34:31.429 Casie Aviles: Okay.
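The intake flow Amber outlines, the trainer bot asking a subject matter expert what the item is, when and why it applies, and how to communicate it, could be captured as a small checklist the bot's prompt walks through. The field names and question wording below are illustrative, not the bot's actual prompt:

```python
# Intake questions in the order Amber lists them.
# Field names and wording are illustrative assumptions.
INTAKE_QUESTIONS = [
    ("what_it_is", "What do you want to add, and what is it?"),
    ("when_why_applies", "When and why does it apply?"),
    ("how_to_communicate", "How should agents communicate it clearly?"),
]

def next_question(answers):
    """Return the next unanswered intake question, or None once the
    subject matter expert (e.g. Denise) has covered the whole structure."""
    for field, question in INTAKE_QUESTIONS:
        if not answers.get(field):
            return question
    return None
```

Since the existing bot already has a UI, this would plausibly land as a prompt adjustment rather than new code, as Casie notes later.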

336 00:34:32.429 00:34:39.109 Amber Lin: Yeah, I mean, we could. I feel like this is pretty,

337 00:34:40.909 00:34:49.739 Amber Lin: like just adding a prompt would be pretty easy. I think the main part would be later on, iterating with

338 00:34:49.989 00:34:59.059 Amber Lin: the people at ABC of, okay, is this actually the best structure, like, is it asking all the necessary questions. And this is sort of like

339 00:35:00.639 00:35:06.469 Amber Lin: the organization depth that they need, to do both

340 00:35:06.689 00:35:14.759 Amber Lin: categorizing and polishing up their current documents. So that's on my side. What's going on here?

341 00:35:17.599 00:35:19.339 Amber Lin: Okay, great.

342 00:35:20.759 00:35:22.189 Amber Lin: And

343 00:35:26.329 00:35:28.419 Amber Lin: okay. Sounds good.

344 00:35:41.419 00:35:42.289 Amber Lin: Yeah.

345 00:35:43.369 00:35:48.999 Amber Lin: Oh, does that make sense to you guys?

346 00:35:52.229 00:35:54.849 Amber Lin: And how do you think that could be done?

347 00:35:55.930 00:35:58.390 Miguel de Veyra: Do we need to deliver this by Friday?

348 00:35:59.476 00:36:09.290 Amber Lin: Well, like a little part of it. I don't think we should complete it, because I feel like if we complete it, it's lots of points.

349 00:36:09.480 00:36:12.670 Amber Lin: I just kinda wanna know what you guys think is

350 00:36:12.860 00:36:17.650 Amber Lin: possible and what feels like a big enough thing.

351 00:36:18.030 00:36:22.069 Miguel de Veyra: Yeah, I think we can get back to it tomorrow. Okay? So you can do something on it today.

352 00:36:22.350 00:36:26.180 Miguel de Veyra: See what’s possible. See what’s not. And then we can reconvene tomorrow.

353 00:36:26.180 00:36:34.969 Casie Aviles: From what I'm seeing, it's going to be a prompt adjustment to the existing bot that you have created.

354 00:36:34.970 00:36:35.650 Miguel de Veyra: Yeah, yeah.

355 00:36:37.230 00:36:43.289 Casie Aviles: Okay. And I guess it should just follow the kind of flow that Scott was

356 00:36:44.670 00:36:45.810 Casie Aviles: suggesting. Okay.

357 00:36:46.570 00:36:55.160 Miguel de Veyra: And then we should probably have some sort of table of contents. Yeah, I think this is something that we need to work closely with them on.

358 00:36:55.420 00:36:56.099 Miguel de Veyra: Here’s your number.

359 00:36:56.100 00:36:57.939 Miguel de Veyra: Yeah, yeah, because it's their

360 00:36:57.940 00:37:01.489 Miguel de Veyra: document. We don't want to structure it without, you know, them.

361 00:37:01.490 00:37:02.910 Amber Lin: Totally, totally.

362 00:37:02.910 00:37:10.670 Miguel de Veyra: So we want to know what the separation is like, because ideally we're gonna create like a table of contents. I think that's the core here.

363 00:37:11.500 00:37:40.250 Amber Lin: Yeah, a table of contents, and then have them finish up what's missing. So you're totally right that a lot of this is on them, thank you for clarifying that. So I guess what we could do is, I could even run the Central Doc through GPT and have it give some sample structure, and then the most important part is confirming with those ABC people to make sure, hey, is this the right one.

364 00:37:40.370 00:37:41.779 Amber Lin: Yeah. And then we

365 00:37:42.590 00:37:50.860 Miguel de Veyra: We can even try to, you know, give it to GPT or Anthropic or Gemini, whatever. Basically, hey, can you generate like a sample table of contents?

366 00:37:51.460 00:37:55.059 Amber Lin: Yeah, yeah, I feel like it’s not.

367 00:37:55.390 00:38:07.169 Amber Lin: oh, cause right now, you technically can do a table of contents even with the current headings in Google Docs. So I think you meant like an organized, structured way.

368 00:38:07.170 00:38:08.040 Miguel de Veyra: Yeah, yeah.

369 00:38:08.040 00:38:08.640 Amber Lin: That’s okay.

370 00:38:08.660 00:38:14.349 Miguel de Veyra: How you can categorize it, because, like, billing is everywhere. So yeah.
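The categorization problem Miguel points at, topics like billing scattered across the whole doc, is essentially grouping headings under categories to draft a table of contents. A minimal sketch, where the category keywords are made-up placeholders for whatever structure the ABC team confirms:

```python
from collections import defaultdict

# Illustrative category keywords; the real categories would come from
# the structure confirmed with the ABC team.
CATEGORIES = {
    "billing": ["billing", "invoice", "refund"],
    "accounts": ["account", "login", "password"],
}

def draft_toc(headings):
    """Group scattered headings under categories to draft a table of
    contents, since a topic like billing can appear all over the doc."""
    toc = defaultdict(list)
    for heading in headings:
        lowered = heading.lower()
        category = next(
            (cat for cat, words in CATEGORIES.items()
             if any(w in lowered for w in words)),
            "uncategorized",
        )
        toc[category].append(heading)
    return dict(toc)
```

In practice the team plans to have an LLM propose the categories themselves; this keyword version just shows the shape of the output a drafted table of contents would take.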

371 00:38:14.350 00:38:20.760 Amber Lin: Hmm, okay, that's awesome. So having that,

372 00:38:22.420 00:38:38.099 Amber Lin: But my problem is that a lot of it is like talk on our end. I just want us to look good in our meeting with them, because that's gonna keep them paying us. Like, what do you think

373 00:38:38.680 00:38:41.350 Amber Lin: we can do on that?

374 00:38:43.300 00:38:46.789 Miguel de Veyra: I'm sorry, what again? I was reading the off the record thing.

375 00:38:47.120 00:38:55.239 Amber Lin: Oh, yeah, let me go read that too, but mostly it's, what do we wanna do this week,

376 00:38:55.610 00:39:02.120 Amber Lin: just to summarize what you feel like, in the meeting, they would also like.

377 00:39:02.390 00:39:07.159 Miguel de Veyra: I think, adjust the prompt, adjust the existing one,

378 00:39:07.640 00:39:11.320 Miguel de Veyra: cause it already has a UI. We worked on that last week, and then.

379 00:39:11.819 00:39:20.620 Amber Lin: The existing one. And then we can just record or screenshot a sample flow of it, yes,

380 00:39:20.620 00:39:22.090 Amber Lin: serving the agents. Okay.

381 00:39:22.090 00:39:25.430 Miguel de Veyra: Yeah, yeah, just a Loom video. Don't demo it, just a Loom video.

382 00:39:25.430 00:39:29.520 Amber Lin: Okay, that is awesome. So we’ll have that. And then.

383 00:39:29.700 00:39:34.210 Amber Lin: like, I'll GPT-generate like a sample structure,

384 00:39:34.860 00:39:38.350 Amber Lin: a sample table of contents. Like, we can also do that.

385 00:39:38.350 00:39:39.010 Miguel de Veyra: Okay.

386 00:39:39.360 00:39:46.090 Amber Lin: Yeah. Okay, so that sounds like good, quick work, right? How long does this sound like it'll be?

387 00:39:48.440 00:39:52.740 Casie Aviles: Yeah, I think 2 points or 3 points.

388 00:39:53.340 00:39:59.169 Amber Lin: Hmm, awesome. Okay, it seems like something we can do today, since, like, AI stuff is a little bit,

389 00:39:59.660 00:40:12.959 Amber Lin: okay, sounds good. And that helps, Miguel, your day is ending, so you don't have to do that once we finish up the ABC part. And, Casie, we have the off the record thing today. So

390 00:40:13.130 00:40:13.769 Amber Lin: should be good.

391 00:40:13.770 00:40:15.500 Miguel de Veyra: Yeah, 2 AM our time.

392 00:40:15.880 00:40:16.520 Casie Aviles: Okay.

393 00:40:17.190 00:40:25.910 Amber Lin: Dang okay. Alright. I’m looking at like what’s left in the AI team. There is like.

394 00:40:25.910 00:40:29.499 Miguel de Veyra: I think the bot, let's initialize that first. So it's done,

395 00:40:29.610 00:40:36.260 Miguel de Veyra: the pool parts. It's just adding it, because for some reason my account can't add bots.

396 00:40:36.790 00:40:37.659 Miguel de Veyra: I don’t know why.

397 00:40:37.930 00:40:39.489 Amber Lin: This, what it is.

398 00:40:39.650 00:40:44.569 Miguel de Veyra: But I think with that one, once we have initialized everything, it should be good.

399 00:40:45.480 00:40:46.120 Amber Lin: Yeah.

400 00:40:46.120 00:40:53.230 Amber Lin: And then there’s like, I am just looking at what’s remaining. So there’s Kyle stuff, and there’s like

401 00:40:53.640 00:41:00.079 Amber Lin: Zoom has to backfill for other client meetings, Linear

402 00:41:00.680 00:41:05.390 Amber Lin: to Supabase, and then adding GitHub to all agents.

403 00:41:05.390 00:41:06.399 Miguel de Veyra: Yeah, yeah, yeah.

404 00:41:07.010 00:41:13.130 Amber Lin: Like, were you able to get, for GitHub, all of the, oh.

405 00:41:13.130 00:41:16.470 Miguel de Veyra: No, no, different focus yesterday and today.

406 00:41:16.470 00:41:18.129 Amber Lin: Okay, that’s okay. That’s okay.

407 00:41:18.130 00:41:25.690 Miguel de Veyra: We can probably do it tomorrow, I think. Depends on if Autumn wants to, you know, prioritize the Slack thing.

408 00:41:26.170 00:41:31.580 Amber Lin: Hmm, okay, so we’ll talk with them. But right now I think we have a good idea of what’s left.

409 00:41:34.260 00:41:38.769 Amber Lin: Alrighty sounds good. Thank you all for the meeting. Miguel. I’ll see you soon.

410 00:41:39.090 00:41:42.469 Miguel de Veyra: Okay, thanks everyone. Will you be the one sending the call, or should I?

411 00:41:43.142 00:41:44.510 Amber Lin: I sent it already.

412 00:41:44.750 00:41:45.750 Miguel de Veyra: Let me check.

413 00:41:48.730 00:41:50.759 Amber Lin: In like 20 min or so.

414 00:41:50.760 00:41:51.699 Miguel de Veyra: Oh, okay. Here.

415 00:41:52.220 00:41:54.560 Amber Lin: Alright, bye guys.

416 00:41:54.560 00:41:55.140 Miguel de Veyra: Okay. Thanks.

417 00:41:55.140 00:41:55.540 Casie Aviles: Thank you.

418 00:41:55.540 00:41:56.130 Miguel de Veyra: Bye-bye.