Meeting Title: Brainforge Final Interview Date: 2026-05-04 Meeting participants: Arish Alam, Samuel Roberts, Pranav, Pranav Narahari, Uttam Kumaran


WEBVTT

1 00:00:21.420 00:00:22.310 Samuel Roberts: Blue.

2 00:00:23.240 00:00:25.819 Arish Alam: Hey, hi, hi Sam, how are you?

3 00:00:25.820 00:00:26.859 Samuel Roberts: How are you doing?

4 00:00:27.110 00:00:28.510 Arish Alam: I'm good, how are you?

5 00:00:28.960 00:00:30.360 Samuel Roberts: Good, good.

6 00:00:32.250 00:00:38.930 Samuel Roberts: Yeah, we should be waiting for at least one other person, maybe two, so, give them… give them a minute to join.

7 00:00:40.560 00:00:41.520 Arish Alam: Yeah, sure.

8 00:00:42.100 00:00:43.130 Samuel Roberts: Yeah, totally.

9 00:00:54.260 00:00:55.050 Samuel Roberts: Excuse me.

10 00:01:05.450 00:01:06.779 Arish Alam: Hi, Pranav.

11 00:01:09.270 00:01:12.900 Pranav: Hey, nice to see you again.

12 00:01:12.900 00:01:14.080 Arish Alam: with you as well.

13 00:01:14.080 00:01:15.230 Pranav: Hey, Sam.

14 00:01:15.710 00:01:16.849 Samuel Roberts: How’s it going?

15 00:01:22.820 00:01:24.139 Samuel Roberts: Sweet 9…

16 00:01:45.200 00:01:55.730 Pranav: Just wanted to give you guys a heads up, my… Internet has been… I just kind of…

17 00:01:55.950 00:02:01.719 Pranav: Got to a new spot today, since I’m driving down to Austin, in the next month or so.

18 00:02:01.720 00:02:02.190 Samuel Roberts: Absolutely.

19 00:02:02.460 00:02:04.999 Pranav: spending a month in Chicago.

20 00:02:05.000 00:02:05.610 Samuel Roberts: Oh, cool.

21 00:02:05.610 00:02:06.599 Pranav: Yeah, can you guys hear me?

22 00:02:07.860 00:02:10.569 Samuel Roberts: Cutting in and out a little bit as you were talking? Yeah, I…

23 00:02:13.090 00:02:15.470 Pranav: I’m gonna see if turning off my video helps.

24 00:02:16.100 00:02:17.000 Samuel Roberts: Yeah, totally.

25 00:02:19.770 00:02:22.550 Samuel Roberts: Yeah, you sound better there.

26 00:02:22.550 00:02:24.409 Pranav: Okay, that’s good, that’s good.

27 00:02:40.890 00:02:42.959 Pranav: Uttam's running a few minutes behind.

28 00:02:43.910 00:02:44.570 Samuel Roberts: Okay.

29 00:02:48.230 00:02:54.099 Samuel Roberts: How, like, did he say… Like, should we wait, or how… what’s a few, you know?

30 00:02:55.210 00:02:56.160 Pranav: Yeah, yeah.

31 00:02:56.420 00:02:58.090 Pranav: Okay. Yeah, let’s get started.

32 00:02:58.090 00:02:58.660 Samuel Roberts: Sounds good?

33 00:03:00.640 00:03:02.400 Pranav: Sam, I’ll let you.

34 00:03:02.400 00:03:06.050 Samuel Roberts: Oh, oh, you want to get started? Sorry, I thought you were saying to hold on, I… you were cutting out a little bit.

35 00:03:06.320 00:03:09.380 Pranav: Sorry, okay, I’ll let you… let’s get started, yeah.

36 00:03:10.310 00:03:16.289 Samuel Roberts: Sure, sure. Okay. Yeah, okay, so welcome. This is the third…

37 00:03:16.530 00:03:19.610 Samuel Roberts: Interview. So, thank you for the submission.

38 00:03:20.830 00:03:25.700 Samuel Roberts: I started going through it a little bit. I saw your video as well. I don’t know, Pranav, did you have a chance to watch the video?

39 00:03:26.700 00:03:27.640 Samuel Roberts: -Oh.

40 00:03:30.040 00:03:32.100 Samuel Roberts: Double… okay, single printout now.

41 00:03:32.210 00:03:35.169 Pranav Narahari: Yeah, sorry. I did get to watch the video, yes.

42 00:03:35.170 00:03:37.750 Samuel Roberts: Okay, okay, cool. So, there’s…

43 00:03:37.910 00:03:54.859 Samuel Roberts: a bunch of stuff there, but I think I just want to start with, you know, you kind of giving us a quick overview, maybe not as quite as, long as that video, because we have plenty of questions, but, if you could just kind of quick intro for us so that it’s, complete today, yeah.

44 00:03:55.130 00:03:57.600 Arish Alam: Sure. Am I audible, right?

45 00:03:58.960 00:03:59.500 Pranav Narahari: Yep.

46 00:03:59.880 00:04:00.250 Arish Alam: Cool.

47 00:04:00.250 00:04:00.850 Samuel Roberts: Yeah.

48 00:04:00.850 00:04:15.160 Arish Alam: Yeah, so, as far as the problem statement was described in the README file, right, this is a product safety compliance where we have an option to upload any kind of media asset, or any kind of product,

49 00:04:16.390 00:04:26.099 Arish Alam: be it in any MIME type, right? Text, PDF, PNG, or JPEG, and then get a verdict whether the ingredients of the product are harmful or not.

50 00:04:26.280 00:04:38.809 Arish Alam: This checks across a frozen set that we have defined, which was given in the README in the repository, and then it's expandable through the UI as well.

51 00:04:39.280 00:04:50.920 Arish Alam: That was the overview of the problem statement. My approach to this problem statement, or the core algorithm, focuses on a three-layered approach. One is direct resolution through the frozen set.

52 00:04:50.920 00:05:00.599 Arish Alam: The second one is basically resolution through PubChem repository, which checks across a specific CID for different chemical formulas or chemical names.

53 00:05:00.650 00:05:16.300 Arish Alam: And the third is kind of a… specifically for tail ends, which is an LLM-based judge which decides whether all of the ingredients listed in the product are safe for consumption or not, basically, yeah.
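[Editor's note] The three-layer cascade Arish describes, frozen-set lookup, then a PubChem-style CID resolution, then an LLM judge for the tail, might be sketched roughly as below. The set contents, the toy CID map, and the `llm_judge` stub are illustrative assumptions, not the actual implementation:

```python
# Sketch of the three-layer resolution cascade (illustrative assumptions,
# not the actual implementation): Layer A is an exact lookup against the
# frozen forbidden set, Layer B resolves via a PubChem-style CID index,
# and Layer C is the probabilistic LLM judge reserved for the tail end.

FORBIDDEN = {"red 40", "potassium bromate"}                   # toy frozen set
CID_INDEX = {"e129": "red 40", "e924": "potassium bromate"}   # toy CID mapping

def llm_judge(ingredient: str) -> bool:
    """Stub for the Layer C LLM call; assume safe unless flagged."""
    return False

def resolve(ingredient: str) -> tuple:
    """Return (verdict, layer) for one ingredient, cheapest layer first."""
    key = ingredient.strip().lower()
    if key in FORBIDDEN:                                   # Layer A: exact hit
        return ("blocked", "A")
    canonical = CID_INDEX.get(key)
    if canonical in FORBIDDEN:                             # Layer B: CID hit
        return ("blocked", "B")
    return (("blocked" if llm_judge(key) else "safe"), "C")  # Layer C: judge

print(resolve("Red 40"))   # ('blocked', 'A')
print(resolve("E129"))     # ('blocked', 'B')
print(resolve("water"))    # ('safe', 'C')
```

Ordering the layers cheapest-first keeps most resolutions deterministic and at millisecond latency, which matches the cost and reproducibility argument made below.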

54 00:05:17.960 00:05:23.820 Samuel Roberts: Great. So, maybe we’ll start with what, drove you to do a kind of three-layer approach like that?

55 00:05:24.150 00:05:36.089 Arish Alam: Sure. So, if you think about, approaching this problem statement with just a single vanilla LLM call, right, there are two problems here. One is, cost and latency.

56 00:05:36.090 00:05:49.749 Arish Alam: So, LLMs are very expensive for this kind of operation, right? And latency-wise, also, you'll see that it takes up to around 1 or 2 seconds for something like this to process. Whereas something like,

57 00:05:49.990 00:05:58.069 Arish Alam: a direct lookup or a resolution through the chemical CID layer, right? The PubChem CID layer. It's…

58 00:05:58.130 00:06:14.169 Arish Alam: sub-second, a very optimal, millisecond-latency route. And we also want to make sure that all of the results are reproducible, which an LLM does not guarantee. So the LLM is specifically kept for solving

59 00:06:14.210 00:06:22.780 Arish Alam: the tail end of the distribution… the tail end of the data distribution here, and the other two layers are practically for faster iteration and for,

60 00:06:23.160 00:06:30.650 Arish Alam: Better auditing, or better, you know, getting better reasoning out of why this was rejected.

61 00:06:33.900 00:06:34.879 Samuel Roberts: Sorry, excuse me.

62 00:06:34.880 00:06:38.780 Pranav Narahari: I have a quick question, Sam, unless you have something else.

63 00:06:39.040 00:06:40.619 Samuel Roberts: No, no, no, go ahead, go ahead.

64 00:06:40.640 00:06:46.420 Pranav Narahari: Yeah, Arish, the three-layer approach sounds great, in terms of, you know.

65 00:06:46.850 00:06:52.629 Pranav Narahari: The… assessing the data on different levels, making sure if one layer doesn’t…

66 00:06:53.090 00:07:00.869 Pranav Narahari: process to extract the data that you’re looking for. It makes sense how, like, the future layers can also assist with, analysis.

67 00:07:02.240 00:07:12.859 Pranav Narahari: one question I have is, what did you do for guardrails? So, the prompt was kind of for… imagine this is a production application that’s supposed to mimic a lot of the…

68 00:07:12.880 00:07:27.829 Pranav Narahari: clients that we may have internally, and when you’re putting something into production and shipping it to a client, you’re not in full control of how they’re going to use it. So, what are certain ways that you thought about that and shipped it with certain guardrails?

69 00:07:27.830 00:07:47.599 Arish Alam: Yes. So for the purpose of guardrails, right, they actually would be built upon the instrumentation layer which I've integrated. So this entire three-layered approach is instrumented, right? And specifically, if we have to look at guardrails, we'll obviously look at layer A, layer B, and layer C, all three of them.

70 00:07:47.600 00:08:03.999 Arish Alam: But, more on the layer C as well. So, what ideally we’ll do is that we’ll keep on instrumenting these, layers, and then probably have some kind of human-in-the-loop process to actually, have a judgment on whether that, specific,

71 00:08:04.470 00:08:20.009 Arish Alam: ingredient was actually present in the forbidden list, right? So that is one way. Another approach is that we can run another LLM on top of these audit trails, which we are curating through instrumentation, right?

72 00:08:20.160 00:08:24.399 Arish Alam: So, those are the two approaches where we can put in the guardrails here.

73 00:08:25.870 00:08:29.119 Pranav Narahari: Okay, so kind of a follow-up to that is,

74 00:08:29.220 00:08:44.930 Pranav Narahari: a specific guardrail that I’m thinking of, and I don’t know if you gave this any thought for your submission, was what if a user uploads something that… and the nutrition facts are not extracted?

75 00:08:45.340 00:08:56.770 Pranav Narahari: Are you aware of, like, what happens with, the application that you deployed? Like, what happens with that? Or just, you know, if a user just uploads any type of, file?

76 00:08:57.040 00:08:59.490 Pranav Narahari: Yeah, just wondering what you thought about that.

77 00:08:59.910 00:09:16.219 Arish Alam: So if you see, the very first phase is the extraction router process, right, which extracts text, and then the text is ideally transformed to a Pydantic schema, which is structured output for getting specifically the ingredients out of the product list, right?

78 00:09:16.220 00:09:23.010 Arish Alam: So we have enforced that any text that does not contain any kind of specific ingredient

79 00:09:23.040 00:09:31.890 Arish Alam: in the image or in the text that we are passing, right? That’s going to produce a null output, and it’s going to exit out early.

80 00:09:31.900 00:09:51.539 Arish Alam: from that loop. So that is one thing which is in place there. But, you’re right to question, that we need to extend the capability of the guardrail here in a sense that, the OCR in itself might be, flaky or might be faulty here, right? So let’s suppose if I, upload

81 00:09:51.850 00:09:58.470 Arish Alam: Oh… A specific product which has something, let’s suppose, marked as,

82 00:09:59.270 00:10:13.749 Arish Alam: a C12H15 kind of compound, right? But it gets detected as chlorine there. So we don't actually want to send that, you know, misidentified ingredient downstream to the three-layered approach.

83 00:10:14.030 00:10:18.290 Arish Alam: So, yeah, that is something we should actually look at evaluating as well.
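[Editor's note] The early-exit behavior described here, extract into a schema and bail out when no ingredients are recovered, could look roughly like this; a stdlib dataclass stands in for the actual Pydantic model, and the `ingredients` field name is an assumption:

```python
# Sketch of the extraction early exit: structured output is validated into a
# schema; if no ingredients were recovered from the media, we return None and
# skip the downstream three-layer pipeline entirely.
from dataclasses import dataclass, field

@dataclass
class ExtractionResult:
    # Stand-in for the Pydantic structured-output schema.
    ingredients: list = field(default_factory=list)

def extract_or_exit(raw: dict):
    """raw: parsed structured output from the extraction step."""
    result = ExtractionResult(ingredients=raw.get("ingredients", []))
    if not result.ingredients:   # nothing recognisable on the label
        return None              # early exit: no downstream resolution
    return result.ingredients

print(extract_or_exit({"ingredients": []}))          # None
print(extract_or_exit({"ingredients": ["sugar"]}))   # ['sugar']
```

A null result here is what should drive a "nothing extracted" state in the UI rather than a default "accepted" verdict, which is the gap Pranav probes next.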

84 00:10:20.110 00:10:28.240 Pranav Narahari: Okay, one, I guess, question I have for… in terms of when you say it exits early, is…

85 00:10:28.760 00:10:30.549 Pranav Narahari: How have you thought about…

86 00:10:30.600 00:10:42.249 Pranav Narahari: the system end-to-end when… because I did a little bit of testing, when I just uploaded a screenshot of, like, you know, something that wasn't a Nutrition Facts label, it actually did end up saying that it was accepted.

87 00:10:42.250 00:10:52.440 Pranav Narahari: Instead of rejected. So, you know, I mean, this is just for an interview, it’s not pushing it to production, you had a limited amount of time, so no worries on that. Is this…

88 00:10:52.450 00:10:57.890 Pranav Narahari: something that you thought of, in terms of, like, a use case, or if…

89 00:10:58.300 00:11:02.419 Pranav Narahari: And if you did, like, do you know why this must… might have failed?

90 00:11:03.050 00:11:18.420 Arish Alam: Right. So I did think through this problem, but because this was for a POC-level implementation, I kind of skipped it. So the very first layer, which would be here, is the classification node, or the classification layer, which actually classifies the type of media that you are passing through, right?

91 00:11:18.420 00:11:27.899 Arish Alam: So whether it’s actually a product, or whether it’s something junk or not, right? That is the kind of verdict we also want to get out from the structured output extraction here.

92 00:11:27.950 00:11:49.610 Arish Alam: So, in this scenario where you have uploaded a screenshot, right? Ideally, what would have happened is that there might be no ingredients present in the structured output, or hallucinated ingredients which are present in the structured output generation, right? So, we ideally want to avoid those cases, and specifically, we can avoid those cases by adding a category layer to that

93 00:11:49.670 00:11:55.450 Arish Alam: Structured output detection, where we determine the category of the media that has been uploaded.

94 00:11:57.540 00:12:01.019 Pranav Narahari: Yeah. Yeah, that’s one way to do it. One thing I thought, too, is just,

95 00:12:01.120 00:12:13.929 Pranav Narahari: within that, too, is, you know, categories could be important. Whether or not you just were able to extract nutrition information is probably what’s most important, right?

96 00:12:13.930 00:12:14.250 Arish Alam: Correct.

97 00:12:14.360 00:12:15.530 Pranav Narahari: So…

98 00:12:16.270 00:12:28.709 Pranav Narahari: Yeah, here, like, what type of output would you want to give to the user? So, I think right now you probably only have two states. What other states potentially may be interesting from a user’s perspective to know what is going on?

99 00:12:29.520 00:12:44.220 Arish Alam: So I think we already have a list of detected ingredients and whatever has been classified as, you know, harmful ingredient, or whatever is present on the forbidden list. We kind of also show where the

100 00:12:45.890 00:13:05.429 Arish Alam: where practically this resolution has happened. One thing which we might want to add here is a kind of audit trail, which is, again, the instrumentation which is present in the Langfuse space, where it shows you the layered approach on how that reasoning, or how that entire process has happened. So this increases the…

101 00:13:06.000 00:13:10.320 Arish Alam: Basically, clarity in whatever we are doing in this process.

102 00:13:10.980 00:13:15.050 Pranav Narahari: Gotcha. Yeah, sorry, Sam, just one last question. I don’t want to take up… But.

103 00:13:15.050 00:13:16.380 Samuel Roberts: No, no, you’re good, these are good.

104 00:13:16.380 00:13:29.419 Pranav Narahari: Okay, cool, yeah. So, Arish, too, like, I guess, what my question is more directed towards is, from a user’s perspective, it’s a little bit confusing to see accepted when the information that I gave was,

105 00:13:29.790 00:13:52.179 Pranav Narahari: the… the screenshot that I gave, you know, didn’t have anything to do with, nutrition. You know, you mentioned how, like, you know, it’s something you thought of, it’s just a POC, you can’t really expect the whole world here, right? I’m guessing, just based on, you know, me focusing on this specific feature, how would you think about, you know, you talked about the AI analysis part of it, like, the categorization part, or just…

106 00:13:52.210 00:14:07.849 Pranav Narahari: doing a fact check to see if nutrition information was, extracted. What information would you then give back to the user? And, what are other edge cases that are in the similar vein, where it might have to…

107 00:14:07.850 00:14:14.049 Pranav Narahari: you know, you might have to give a different output to the user. So, like, right now, for example, you just give accepted or rejected to the user.

108 00:14:14.050 00:14:19.160 Pranav Narahari: What are some other, like, outputs that may be relevant here?

109 00:14:20.200 00:14:31.770 Arish Alam: So, right now, this is a binary, verdict that we are outputting, right? Maybe if we want to expand the scope of this, we can do that. Out of the detected ingredients in this

110 00:14:33.240 00:14:41.460 Arish Alam: This is slightly changing the problem statement as well, that, instead of showing detected ingredients, we can flag

111 00:14:41.470 00:14:58.070 Arish Alam: how harmful the detected ingredients are as well. But that is something which is different from the problem statement which is mentioned, which is, we already have, like, a direct lookup of blocked ingredients present here, right? So, we might not want to do that.

112 00:14:58.070 00:15:03.830 Arish Alam: One thing we are already showing is that we are highlighting the, blocked ingredients.

113 00:15:03.870 00:15:09.760 Arish Alam: inside the detected ingredients block there. So that is already there.

114 00:15:10.540 00:15:18.699 Arish Alam: Another thing which I might show here is, the confidence score of the detection.

115 00:15:19.460 00:15:32.550 Arish Alam: Since layer A and B both are deterministic, the confidence should ideally amount to 1, but since layer C is probabilistic, we might want to introduce some kind of a confidence score to the label that we’re assigning here.
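[Editor's note] The per-layer confidence Arish proposes, 1.0 for the deterministic layers A and B, the judge's own score for the probabilistic layer C, could be carried on the verdict like this (the record shape and field names are illustrative assumptions):

```python
# Sketch of a verdict record carrying per-layer confidence: layers A and B
# are deterministic, so their confidence is pinned to 1.0; layer C attaches
# the LLM judge's probability to the label it assigns.
from dataclasses import dataclass

@dataclass
class Verdict:
    label: str        # "accepted" or "rejected"
    layer: str        # "A", "B", or "C" — which layer resolved it
    confidence: float

def make_verdict(label: str, layer: str, judge_confidence: float = 1.0) -> Verdict:
    conf = 1.0 if layer in ("A", "B") else judge_confidence
    return Verdict(label, layer, conf)

print(make_verdict("rejected", "A"))           # deterministic: confidence 1.0
print(make_verdict("accepted", "C", 0.82))     # probabilistic: judge's score
```

Surfacing the resolving layer alongside the score also gives the user the "where did this verdict come from" context discussed earlier.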

116 00:15:35.600 00:15:49.860 Arish Alam: Another list of information, I think, which we can show is that, the confidence of the detected ingredients, right, since this is an OCR-based strategy, and we are also doing structured output generation through LLM here.

117 00:15:50.420 00:16:02.440 Arish Alam: on top of the text that we have extracted. We also might want to show that the detected ingredients which have been captured by us, how confident are we that it’s also present in the

118 00:16:02.870 00:16:05.010 Arish Alam: Product image that you’ve uploaded.

119 00:16:06.280 00:16:07.689 Pranav Narahari: Yeah, that makes sense.

120 00:16:11.090 00:16:17.610 Samuel Roberts: I’m curious, say there was an enormous list of, forbidden ingredients.

121 00:16:18.110 00:16:20.549 Samuel Roberts: How do you think that would change…

122 00:16:21.060 00:16:28.060 Samuel Roberts: One, how it would behave right now, and if that’s changing a lot, what would you think about differently?

123 00:16:28.860 00:16:34.830 Arish Alam: Got it. So if it's enormously large, the problem that we might end up facing is…

124 00:16:34.940 00:16:35.960 Arish Alam: Oh, but…

125 00:16:36.450 00:16:48.590 Arish Alam: of latency, so the direct lookup part actually would be a little more expensive, then we might want to look at candidate selection process there, right? Specifically for this problem statement, I’ve…

126 00:16:48.590 00:16:58.890 Arish Alam: reduced the scope and not introduced semantic search, because there are a lot of problems, and false positives and negatives, actually, both, that we might encounter here.

127 00:16:58.890 00:17:06.599 Arish Alam: But let’s suppose if this forbidden list ingredients span across a million or 10 million records, right?

128 00:17:06.920 00:17:10.799 Arish Alam: Then we might want to build a candidate selection process.

129 00:17:10.920 00:17:12.230 Arish Alam: Sorry.

130 00:17:12.930 00:17:19.880 Arish Alam: It does not have to be, strictly semantic search over the, entire pool of,

131 00:17:19.880 00:17:30.860 Arish Alam: blocked ingredients. We can do a hybrid approach. Maybe we can do fuzzy plus semantic, or maybe we use embeddings which are specifically created for chemical compounds.

132 00:17:30.860 00:17:43.300 Arish Alam: I have not read through those kind of semantic embeddings, but I’m sure that somebody might have a corpus out there for this specific purpose. So it might be interesting to look at those,

133 00:17:43.450 00:17:50.060 Arish Alam: embedding space as well. Yeah, that is how I think I would approach this problem.
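[Editor's note] The hybrid candidate-selection idea, a cheap fuzzy pre-filter narrowing millions of forbidden records down to a handful before any exact or LLM resolution runs, might be sketched as below. The stdlib `difflib` here stands in for a real fuzzy index or chemical-compound embedding search, which this sketch does not implement:

```python
# Sketch of candidate selection over a very large forbidden list: a fuzzy
# pre-filter (difflib standing in for a production fuzzy/embedding index)
# returns the few closest forbidden entries for a queried ingredient.
import difflib

def select_candidates(query: str, forbidden: list, n: int = 3,
                      cutoff: float = 0.6) -> list:
    """Return up to n forbidden-list entries similar to the query."""
    return difflib.get_close_matches(query.lower(), forbidden,
                                     n=n, cutoff=cutoff)

blocked = ["potassium bromate", "red 40", "titanium dioxide"]
# A misspelled OCR extraction still surfaces the right candidate:
print(select_candidates("potasium bromate", blocked))  # ['potassium bromate']
print(select_candidates("water", blocked))             # []
```

The `cutoff` threshold is where the false-positive/false-negative trade-off Arish mentions lives: too low and unrelated compounds surface, too high and misspelled OCR output slips through.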

134 00:17:51.980 00:17:53.109 Samuel Roberts: Great, thank you.

135 00:17:53.320 00:17:55.410 Samuel Roberts: For not the other, or…

136 00:17:56.670 00:17:59.810 Pranav Narahari: Yeah, that’s interesting, like, what…

137 00:17:59.810 00:18:00.320 Samuel Roberts: Yeah.

138 00:18:00.940 00:18:14.430 Pranav Narahari: What other, limitations did you notice with this implementation, kind of just going off of Sam’s, Sam’s question? You know, yeah, maybe the record size, or the number, the list of

139 00:18:14.510 00:18:28.930 Pranav Narahari: the acceptable compounds, whatever, is one that may affect the design of this implementation? What are some other things that may affect, you know, improvements in this design?

140 00:18:29.960 00:18:31.380 Arish Alam: Hmm, okay.

141 00:18:31.380 00:18:51.090 Arish Alam: So one thing is that, again, since the last layer is a probabilistic layer, right, we might end up catching false positives or false negatives there, so we have to build a harness over that layer to ensure that most of the resolution actually happens at layers A and B. If it's throughput that we're optimizing for,

142 00:18:51.090 00:19:02.830 Arish Alam: we should actually optimize for most of the resolution to happen at layer A and layer B, and only the tail end of the data distribution to go towards layer C. Since all of the

143 00:19:02.830 00:19:12.269 Arish Alam: accepted criteria goes through Layer C. We have to ensure that there’s enough guardrail built around Layer C to

144 00:19:14.380 00:19:30.420 Arish Alam: to avoid any misclassifications. So that is one thing. Another thing is that, as I mentioned, right, we might have slight inaccuracies in the OCR detection as well. So maybe it’s better to look at,

145 00:19:30.710 00:19:33.879 Arish Alam: how accurate our OCR strategy is.

146 00:19:34.000 00:19:50.530 Arish Alam: Right? So, let’s suppose if I’m right now using a vision-based model to detect text from input images or from any labels, right? I’d want to benchmark that against different OCR strategies, so that is the reason why I created this in a factory manner.

147 00:19:50.530 00:19:54.079 Arish Alam: That we can extend to any OCR-based strategy.

148 00:19:54.080 00:20:06.879 Arish Alam: So, as of now, we have EasyOCR and Gemini Vision. We might also want to benchmark Tesseract, maybe Reducto as a parser as well, right? So, there, I think it's,

149 00:20:07.410 00:20:10.589 Arish Alam: It’s got to be fruitful for us to benchmark on that front.

150 00:20:10.610 00:20:29.669 Arish Alam: Then again, we also have, like, an extraction layer post-detection. Post text detection, we have, you know, structured output detection, so that is, again, a node in our entire algorithm, so that needs to be validated as well. So, yeah, I'll spend my time doing that.
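[Editor's note] The factory pattern Arish says he used for swappable OCR strategies could look roughly like this; the class names and the registration mechanism are illustrative assumptions (a real backend would wrap EasyOCR, Gemini Vision, etc.):

```python
# Sketch of an OCR strategy factory: each backend registers under a name,
# so new strategies can be added and benchmarked without touching call sites.
class OCRStrategy:
    registry = {}  # name -> strategy class

    def __init_subclass__(cls, name, **kwargs):
        super().__init_subclass__(**kwargs)
        OCRStrategy.registry[name] = cls  # auto-register each subclass

    def extract_text(self, image_bytes: bytes) -> str:
        raise NotImplementedError

class StubOCR(OCRStrategy, name="stub"):
    """Stand-in backend; a real one would wrap EasyOCR or a vision model."""
    def extract_text(self, image_bytes: bytes) -> str:
        return image_bytes.decode("utf-8", errors="ignore")

def make_ocr(name: str) -> OCRStrategy:
    """Factory entry point: look up and instantiate a registered strategy."""
    return OCRStrategy.registry[name]()

print(make_ocr("stub").extract_text(b"INGREDIENTS: water, sugar"))
```

Because every backend shares `extract_text`, a benchmark harness can loop over `OCRStrategy.registry` and score each strategy on the same labelled images.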

151 00:20:31.960 00:20:40.239 Arish Alam: As I said, right, I would actually want to optimize for layer A and layer B as well. So, layer B is, again, the

152 00:20:40.240 00:20:55.550 Arish Alam: repository which is being maintained by PubChem, right? So we need to see what is the hit ratio for that repository. Are there any compounds which are missing in the PubChem repository? How much overlap do we have with their repository or database? Is it…

153 00:20:55.550 00:21:03.810 Arish Alam: viable for us to look at, you know, other databases as well. Do we want to include other databases as well? So, yeah.

154 00:21:05.640 00:21:06.950 Pranav Narahari: Great. Yeah.

155 00:21:08.200 00:21:18.250 Pranav Narahari: Yeah, we just kind of, talked about a few different, you know, limitations to this certain POC.

156 00:21:18.410 00:21:26.490 Pranav Narahari: And so, yeah, if you have any questions, Uttam, you can pop in, but, yeah, one,

157 00:21:26.550 00:21:44.699 Pranav Narahari: one question that I have for you, Arish, is, like, we talked about this in a very technical manner so far. You know, you gave the multiple-layered analysis that this product does. You also talked about some of the technical implementation for how we would enhance this. Now, imagine you are…

158 00:21:45.720 00:21:54.349 Pranav Narahari: presenting this to a client as a deliverable for, like, the beginning of a product engagement.

159 00:21:54.660 00:22:04.820 Pranav Narahari: How would you kind of describe what this… what this, solution does from a more, business perspective, not just based on, like, a computer science technical implementation perspective?

160 00:22:05.370 00:22:06.170 Arish Alam: Okay.

161 00:22:06.850 00:22:15.639 Arish Alam: Cool. So, I'd frame it as a problem statement where you can bring in any kind of a product, right? Be it any image, text, PDF.

162 00:22:15.710 00:22:21.059 Arish Alam: Any, any kind of data that we support to this platform, right?

163 00:22:21.060 00:22:43.240 Arish Alam: And through that platform, basically, you get a verdict whether your product is harmful or not for consumption, right? And the way to extend, or the way to classify whether that product is harmful or not, is practically by designing or defining lists of blocked ingredients, which is expandable at runtime, or can be configured at your backend.

164 00:22:43.240 00:22:44.170 Arish Alam: So…

165 00:22:44.170 00:22:51.320 Arish Alam: that is, like, a high-level overview of how I would explain this to the client. If he wants to, you know,

166 00:22:51.320 00:23:05.870 Arish Alam: run him through the entire algorithm, I'd possibly take him to the Langfuse traces, where I'd explain everything by showing him the input and the output of the system, and not diving deep into the technical details of how this all works.

167 00:23:07.990 00:23:08.590 Arish Alam: So…

168 00:23:08.590 00:23:09.660 Pranav Narahari: Okay. Yep.

169 00:23:12.680 00:23:29.949 Pranav Narahari: I’m just kind of giving you feedback based on, like, how things work, at, like, Brainforge, and how maybe you would want to enhance the solution going forward. So for one of our clients, you know, we’ll have humans be able to give feedback on how the automation performed.

170 00:23:32.860 00:23:45.969 Pranav Narahari: what… how would you design… because right now, there’s no way for me to give, like, a thumbs up or thumbs down on the feedback, right? Or on the output, I should say. So, how would you design that solution as… on top of what is currently built here?

171 00:23:47.130 00:23:51.549 Arish Alam: Cool. Actually, is it alright if I share my screen for this purpose?

172 00:23:52.120 00:23:53.569 Pranav Narahari: Yeah, that’d be great.

173 00:23:56.310 00:24:14.719 Arish Alam: Cool. So, I don't currently have thumbs up or thumbs down in the UI section itself, but probably it's a good idea to integrate feedback directly here. But there's a second point where we can integrate feedback, right? Since we already have instrumentation built in on this.

174 00:24:20.750 00:24:38.020 Arish Alam: Yep. This becomes a point of contact for the product owners to come and annotate the data here itself, right? So let’s suppose if I’m walking through this, structured output generation, right? And if I’m evaluating the accuracy or the precision of my output extraction system.

175 00:24:38.610 00:24:44.179 Arish Alam: I’ll see the media here as well here, and then I’ll see the output here.

176 00:24:46.190 00:24:52.460 Arish Alam: there’s a way for me to annotate or add it to the dataset, so let’s suppose if I can add it to a particular dataset.

177 00:24:53.330 00:25:04.350 Arish Alam: or, you know, annotate it as well. Then I should be able to mark that whether this extraction was successful for me or not.

178 00:25:05.140 00:25:24.310 Arish Alam: And in successive iterations, I can set up an LLM as a judge here. I can create an evaluator and then run it against the same dataset that we have… against the same trace which I've added, actually. So, as my product gets better, it should actually reflect across the scores I'm computing for that dataset.
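[Editor's note] The evaluator loop Arish describes, scoring each annotated trace and aggregating so improvements show up in the dataset score, might be sketched as follows; the item shape and the `exact_match` judge are illustrative assumptions, not the actual Langfuse evaluator:

```python
# Sketch of an evaluator run over an annotated dataset: a judge function
# scores each trace's output against its human annotation, and the mean
# score tracks whether the product is getting better across iterations.
def evaluate(dataset: list, judge) -> float:
    """dataset items: {'output': ..., 'expected': ...}; judge returns 0..1."""
    scores = [judge(item["output"], item["expected"]) for item in dataset]
    return sum(scores) / len(scores)

# Simple deterministic judge; an LLM-as-judge call would slot in here.
exact_match = lambda out, exp: 1.0 if out == exp else 0.0

data = [
    {"output": "rejected", "expected": "rejected"},  # annotated trace, correct
    {"output": "accepted", "expected": "rejected"},  # annotated trace, wrong
]
print(evaluate(data, exact_match))  # 0.5
```

Re-running the same dataset after each change gives the regression signal the human-in-the-loop annotations are meant to enable.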

179 00:25:26.940 00:25:29.469 Pranav Narahari: Yeah, definitely.

180 00:25:30.910 00:25:34.640 Uttam Kumaran: Yeah, I had maybe a question, Arish, maybe just a slightly different, like.

181 00:25:34.860 00:25:39.819 Uttam Kumaran: Can you… I don’t know if you can walk us through even, like, how you went from, like.

182 00:25:39.940 00:25:56.460 Uttam Kumaran: a challenge to maybe, like, a plan to finally, like, executing? Like, how do you think about kicking off a project like this? You know, I saw some of the stuff you did on the whiteboard, but, like, can you walk us through, like.

183 00:25:57.070 00:26:04.519 Uttam Kumaran: you’re given this, like, challenge. What is, like, your end-to-end process? You know, like, what… and… but also more than just, like, how you’re thinking about it, like.

184 00:26:04.650 00:26:14.180 Uttam Kumaran: Are you using AI for planning? Are you using AI just for execution, and you’re doing your own planning? Like, even just talk to us on a meta level, like, how you even approached it.

185 00:26:15.150 00:26:22.420 Arish Alam: Cool. I use AI, as my buddy, or, you know, as my, co-buddy for planning.

186 00:26:22.600 00:26:34.070 Arish Alam: So I ideally start with whiteboarding the solution. So, I start with the problem description, and then I want AI to think through edge cases for me.

187 00:26:34.180 00:26:50.369 Arish Alam: I try to resolve all the edge cases according to the best, aligned plan that I can come up with. Then I practically draw this whiteboard, or the entire flow of how it should look like, and then I feed it to the LLM to, come up

188 00:26:50.630 00:27:02.809 Arish Alam: with gaps in the implementation, right? I also kind of used cross-model, planning as well, so I don’t really rely on just one model for

189 00:27:02.810 00:27:22.120 Arish Alam: generating an implementation plan, right? I, in fact, use Claude Code and Codex both to sort of set up a debate kind of a structure, where it's able to argue through pros and cons of different approaches. And then once I fix the plan, I start with the implementation phase.

190 00:27:23.660 00:27:35.670 Uttam Kumaran: And then when you're working on the plan, are you… are you literally, like, planning this out, or what is, like, what are the steps that you're… you're sort of… you're doing? Are you have… are you using, like, a skill for that? Do you have typically, like.

191 00:27:35.670 00:27:36.300 Arish Alam: No, actually.

192 00:27:36.300 00:27:37.840 Uttam Kumaran: between… yeah.

193 00:27:38.230 00:27:58.620 Arish Alam: So, whatever from my experience, as far as I know, right? So, let's suppose the very first point here was to use an OCR strategy. So, I kind of know the benchmarks across different OCR strategies. So, I'd say that, hey, we do need an OCR strategy. Now, how do we want to integrate that? We want to integrate that in a factory manner.

194 00:27:58.620 00:28:01.849 Arish Alam: What would that enable us to do?

195 00:28:01.850 00:28:18.750 Arish Alam: implementing it in a factory manner will basically help us to extend to any kind of OCR strategy we might want to add in future, right? So I'll say that, hey, let's start with the easiest one out there, which is EasyOCR, and then we'll keep on adding different strategies to this.

196 00:28:18.820 00:28:31.399 Arish Alam: So I'll say that, hey, I need you to implement this OCR strategy in a factory pattern method. And then after you're done implementing the EasyOCR strategy, I would want you to add Reducto to the…

197 00:28:31.400 00:28:40.510 Arish Alam: factory pattern as well. So that is how I, practically, give very verbatim, instructions to AI.

198 00:28:40.510 00:28:55.059 Arish Alam: So that it knows what tools to use, and it knows what skills to use as well. So, for instrumentation as well, there are official skills for Langfuse, but there are some self-curated Langfuse skills as well. I use those to instrument my code and…

199 00:28:55.080 00:28:59.780 Arish Alam: Other engineering practices, but yeah, that is my development setup.

200 00:29:00.610 00:29:16.490 Uttam Kumaran: And then once you have, like, an MVP working, are you then just, like, trying it out, and then making fixes? Are you… do you have the AI actually test itself and iterate? Like, how do you think about that, like, test-based iteration loop? Yeah.

201 00:29:16.490 00:29:36.460 Arish Alam: I build a small test suite once I have an MVP ready. I ask AI to iterate on the test suite so that, you know, at least the minimal viable or the happy path is covered. Then we look for edge cases together, and then we keep on adding different test cases to the eval pipeline.

202 00:29:36.460 00:29:40.409 Arish Alam: Once, you know, there’s enough coverage in the, eval…

203 00:29:40.410 00:29:51.479 Arish Alam: once there’s enough data coverage in the eval pipeline that I’ve written, I can lie rest assured that, you know, at least I know where the gaps are in the pipeline.

204 00:29:52.930 00:29:53.640 Uttam Kumaran: Okay.

205 00:29:54.040 00:29:54.810 Uttam Kumaran: Great.

206 00:29:55.790 00:30:00.760 Pranav Narahari: Now, for, like, future enhancements to the product,

207 00:30:00.960 00:30:15.780 Pranav Narahari: say, let’s go back to that example where you’re presenting to, like, a client. You’re likely gonna get some feedback on, you know, future features, or enhancements, or bug fixes to what you currently have. How would you go about using that feedback to then

208 00:30:15.980 00:30:28.149 Pranav Narahari: integrate that with, like, the system that you just presented, you know, starting from scratch. Like, what does your process look like when you’re working with an existing project and you need to make these enhancements, bug fixes, additional features?

209 00:30:29.280 00:30:46.569 Arish Alam: Right. So A, I’ll introduce a feedback system in the UI itself, and it’s going to be nested feedback system, so let’s suppose if you’re starting with binary classification here, so I’ll probably have a drop-down whether it’s correct or not, so it could be thumbs up or thumbs down, so let’s suppose if it’s a thumbs up.

210 00:30:46.570 00:30:55.420 Arish Alam: We can… we can stop there, because we know that, since it’s a binary classification problem, there’s no need to extend the feedback list.

211 00:30:55.420 00:31:10.540 Pranav Narahari: I think my question is actually not so… not just, like, the features you would build, but, you know, given you might be getting feedback from the client, how would you then… you know, just kind of going off of, like, Uttam’s question of, like, your…

212 00:31:10.540 00:31:15.960 Pranav Narahari: You’re… when you’re putting your developer hat on, and, like, what is your… your flow to, like, building?

213 00:31:15.990 00:31:30.119 Pranav Narahari: how does that flow change, if at all? Or how is it, how is it furthered when you’re just kind of not creating something from scratch, but you’re taking in feedback from a client and then building something out? Does that question make sense?

214 00:31:30.810 00:31:38.010 Arish Alam: Yes, it does. So basically, you want me to answer that, how do I utilize the feedback and then iterate on the product, right?

215 00:31:38.380 00:31:39.290 Pranav Narahari: Exactly, yeah.

216 00:31:39.810 00:31:55.180 Arish Alam: Cool. So basically, it’s first about identifying, or categorizing, the feedback into certain buckets, right? So, is it something which requires a major refactor to the product that we’ve built? Or is it a bug, or…

217 00:31:55.180 00:32:08.010 Arish Alam: Can it be an extension to the already running algorithm? Let’s suppose it’s a bug; then, I mean, it’s a straight-up fix to our code, right? And let’s suppose it’s something which requires,

218 00:32:08.510 00:32:17.799 Arish Alam: change in the algorithm, right? So that is something, we can add to our… we’ll probably wire it up in a way that,

219 00:32:18.030 00:32:29.399 Arish Alam: each of the independent nodes are replaceable, right? So, we’ll want to keep the product flexible so that we can easily replace the pieces or the core algorithm piece, without affecting the product that much.

220 00:32:29.400 00:32:43.660 Arish Alam: Again, then, whatever we are doing needs to be thoroughly tested, so we already have an eval pipeline in place as well. So any… any change we are doing, we are basically passing it through a regression pipeline as well.
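
The triage step described here (bug vs. algorithm change vs. major refactor) could be sketched as a rule-based classifier; the keyword rules below are purely illustrative placeholders for whatever categorization is actually used:

```python
from enum import Enum


class FeedbackBucket(Enum):
    BUG = "bug"                            # straight-up fix to the code
    ALGORITHM_CHANGE = "algorithm_change"  # swap one of the replaceable nodes
    MAJOR_REFACTOR = "major_refactor"      # bigger structural change


# Hypothetical keyword rules; a real system might use an LLM or human triage.
_RULES = {
    FeedbackBucket.BUG: ("crash", "wrong", "error"),
    FeedbackBucket.ALGORITHM_CHANGE: ("accuracy", "misses", "detect"),
}


def triage(feedback: str) -> FeedbackBucket:
    """Drop a piece of client feedback into the first matching bucket."""
    text = feedback.lower()
    for bucket, keywords in _RULES.items():
        if any(k in text for k in keywords):
            return bucket
    return FeedbackBucket.MAJOR_REFACTOR
```

Whichever bucket a change lands in, it would then go back through the eval/regression pipeline before shipping, as described above.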

221 00:32:43.880 00:32:48.660 Arish Alam: So, yeah, that is how I would, approach this.

222 00:32:51.680 00:32:55.960 Samuel Roberts: If, if you have evals, but maybe you’re not

223 00:32:56.090 00:33:03.929 Samuel Roberts: confident enough in all of them yet, but say the client is really pushing to ship faster.

224 00:33:04.610 00:33:19.220 Samuel Roberts: but you’re not yet confident that you’ve really resolved, you know, false positive, false negatives kind of thing. How much would you feel comfortable pushing back on that? Where do you feel is the line that, like, okay, no, we’re at a good point? Talk to me about, kind of, that thought process.

225 00:33:20.350 00:33:22.070 Arish Alam: Okay,

226 00:33:22.840 00:33:31.620 Arish Alam: So, I think this is tied to two coverage, right? So, what we are trying to do is basically get a sense of

227 00:33:31.730 00:33:43.920 Arish Alam: the data the client is exposing to us, right? Or what client actually wants this product to run on as well. And if we have enough coverage on the data, we can show him, the traces of the,

228 00:33:45.790 00:33:56.299 Arish Alam: exact coverage that we are, running through. And if the false positive or the false negative rate is, at the very tail end of the distribution, we can,

229 00:33:56.690 00:34:02.610 Arish Alam: We can probably hint towards an iterative improvement for that, which is going to take a slightly longer time.

230 00:34:09.230 00:34:13.840 Pranav Narahari: One other, you know, common situation you’re gonna fall into here is…

231 00:34:13.960 00:34:21.919 Pranav Narahari: You’re going to be asked to do something from the client who’s likely not technical, and you’re going to need to…

232 00:34:22.260 00:34:37.060 Pranav Narahari: give… let them know whether or not a certain implementation is feasible. So for example, like, someone that may not, you know, have, like, a software engineering background, they may look at your implementation and think that,

233 00:34:37.770 00:34:46.880 Pranav Narahari: hey, it’s very easy, like, I can upload an image here, and I’m able to get all the nutrition facts. You should be able to pull in any other facts as well, right?

234 00:34:47.000 00:34:57.019 Pranav Narahari: Now, our back-end process, like how you built these things with those 3 layers, is very specific to Nutrition Facts.

235 00:34:57.940 00:35:07.220 Pranav Narahari: So, yeah, let me try to think of an example here. So… maybe…

236 00:35:08.480 00:35:13.920 Pranav Narahari: Are you… maybe, you know, since you built something in the vein of nutrition, maybe,

237 00:35:14.490 00:35:19.629 Pranav Narahari: The client may think, oh, it’s… why don’t we add… like, what is the complexity of…

238 00:35:19.780 00:35:32.440 Pranav Narahari: adding a integration with a food database to fill in the gaps of all the other nutrition information that is likely a part of this product.

239 00:35:32.960 00:35:36.470 Pranav Narahari: How would you explain, like, the…

240 00:35:36.860 00:35:44.950 Pranav Narahari: the technical complexity of that to somebody, or I guess what a better question is, how would you explain to a client

241 00:35:45.640 00:35:47.499 Pranav Narahari: What would need to be done?

242 00:35:47.860 00:35:56.579 Pranav Narahari: On our end, or how long it may take you, to complete something like that with various levels of technical complexity.

243 00:35:56.870 00:36:08.839 Pranav Narahari: And just assuming that they’re not, you know, technical, you wouldn’t want to speak in terms of, like, you know, we had to build a pipeline, we have to build embeddings, things of that nature. You have to talk about it in terms of,

244 00:36:09.150 00:36:12.359 Pranav Narahari: You know, business impact, or very high-level technical approach.

245 00:36:13.510 00:36:22.269 Arish Alam: Okay, I’d ideally want to iterate, on this problem, after giving it some thought.

246 00:36:22.380 00:36:36.319 Arish Alam: So once I’ve… once I’ve thought about it, I’ll ideally investigate what kind of tools need to be integrated to satisfy the client’s requirement. And so, I’m not going to,

247 00:36:36.810 00:36:48.689 Arish Alam: overwhelm the client with all the technical implementation details, right? So, I’m probably going to walk him over all of the tools that we, that would re… that would need integration into our system, right?

248 00:36:48.690 00:36:59.919 Arish Alam: And the complexity of adding those tools as well. So that is how I would, you know, want… walk… walk the client through all of these details. If it’s something…

249 00:37:00.030 00:37:10.640 Arish Alam: not feasible, right? If there’s a blocker in getting access to the tool, or maybe, integrating it to our current pipeline, we’d ideally want to flag that out as well.

250 00:37:15.630 00:37:16.300 Arish Alam: Nope.

251 00:37:17.100 00:37:17.730 Samuel Roberts: Excuse me.

252 00:37:18.030 00:37:21.900 Uttam Kumaran: Yeah, there’s one… one question… oh, sorry, Sam, go ahead.

253 00:37:21.900 00:37:23.480 Samuel Roberts: No, no, no, go ahead, go ahead.

254 00:37:23.740 00:37:27.570 Uttam Kumaran: Yeah, I was gonna ask you how you… I was… I saw your… your,

255 00:37:27.750 00:37:33.179 Uttam Kumaran: that you were working on, sort of, LiteLLM, like, as an OSS maintainer. Like, how did you get that?

256 00:37:33.280 00:37:36.819 Uttam Kumaran: How did you get that gig, or, like, how did you end up, like, in their world?

257 00:37:37.580 00:37:50.889 Arish Alam: Yes, sure. So actually, I’ve been integrating LiteLLM into different client portfolios, client platforms, like, for the past 6 to 8 months now. So, there have been, like,

258 00:37:51.100 00:37:59.649 Arish Alam: consumer-based companies where I’m integrating LightLam, right? So I was kind of contributing, since past 6 to 8 months, and they kind of…

259 00:37:59.670 00:38:14.319 Arish Alam: reached out to me that, hey, we see that you’re already contributing to the Lytium. Do you want to, like, do a sort of part-time maintenance job here, so that you can triage the APRs, or you can review them for us? So that is how I set it up there.

260 00:38:14.850 00:38:15.640 Arish Alam: Yep.

261 00:38:15.680 00:38:22.549 Arish Alam: Also in most of the client projects I’ve done, right, specifically for the enterprise clients, since we kind of did a…

262 00:38:22.560 00:38:40.340 Arish Alam: SaaS product as well as on-prem deployment. Due to basically hopping between different clients and different cloud providers, we effectively wanted to switch models between different providers, right? For AWS, it’s Bedrock, for…

263 00:38:40.340 00:38:45.139 Arish Alam: GCP, it’s Gemini-based models, whereas for Azure, it’s OpenAI, so…

264 00:38:45.140 00:38:55.649 Arish Alam: We had to do the integration all over again if we were switching the cloud provider, so we basically thought it’s better to use an LLM gateway for this purpose, so that it makes all of our lives easier.
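
The gateway idea maps cleanly onto LiteLLM-style provider-prefixed model strings, so the call site stays identical across clouds. A sketch of the routing piece (the specific model names and cloud keys are illustrative):

```python
# Map each deployment target to a provider-prefixed model string,
# in the style LiteLLM uses to route one API across providers.
MODEL_BY_CLOUD = {
    "aws": "bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
    "gcp": "gemini/gemini-1.5-pro",
    "openai": "gpt-4o",
}


def pick_model(cloud: str) -> str:
    """Resolve the deployment target to its configured model string."""
    if cloud not in MODEL_BY_CLOUD:
        raise ValueError(f"No model configured for cloud {cloud!r}")
    return MODEL_BY_CLOUD[cloud]


# With litellm installed, the call site never changes between clouds:
#   from litellm import completion
#   completion(model=pick_model("aws"),
#              messages=[{"role": "user", "content": "hi"}])
```

Swapping providers then becomes a config change rather than a re-integration.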

265 00:38:55.870 00:38:56.640 Arish Alam: Yep.

266 00:38:56.640 00:38:57.230 Uttam Kumaran: Great.

267 00:38:58.000 00:39:02.789 Uttam Kumaran: And the last two places you work, those are both kind of, like, consultancies, like Future Path.

268 00:39:03.030 00:39:07.589 Uttam Kumaran: And, you know, you’re at scale focus. Like, both are kind of consultancies.

269 00:39:07.590 00:39:09.389 Arish Alam: Yes, both are consultancies.

270 00:39:09.770 00:39:15.440 Uttam Kumaran: Okay, okay. Like, what kind of clients, and like, sort of, like, what is a team sort of structure internally?

271 00:39:15.700 00:39:30.740 Arish Alam: For Future Path, it was mostly Fortune 200 to 500 clients, so we dealt with clients like Gilead and Johnson & Johnson as well. For Scale Focus, we worked with family offices, or startups, and there’s also

272 00:39:30.740 00:39:36.200 Arish Alam: this startup called Little, Little Bird as well, for whom we worked with.

273 00:39:36.200 00:39:41.339 Arish Alam: So, at school focus, it was mostly startups, but at Future Path, it was enterprise companies.

274 00:39:41.940 00:39:42.570 Uttam Kumaran: Okay.

275 00:39:46.870 00:39:53.110 Samuel Roberts: Great. Well, we’re getting close to time. My last question, I guess, is, what…

276 00:39:53.290 00:39:57.539 Samuel Roberts: Part of this whole… Either the… the…

277 00:39:57.750 00:40:05.210 Samuel Roberts: the program itself or the process, are you most proud of? And what part do you think your…

278 00:40:05.890 00:40:13.289 Samuel Roberts: I don’t want to say, like, you know, embarrassed by or least proud of, but, like, where do you kind of see, like, this part I nailed, this part I could have used a few more hours on, or something?

279 00:40:14.120 00:40:26.049 Arish Alam: Right, so I think, the very first part of the product, where I think Pranav also called out that we’re not doing classification on top of the media that we’re uploading, or we’re not essentially, you know, detecting the

280 00:40:26.130 00:40:38.309 Arish Alam: type of input, which would not get classified. That is a part I think I would want to mature more. The thing which I’m proud of is that, I’ve tried to complete the,

281 00:40:38.400 00:40:54.399 Arish Alam: the loop, so practically from, you know, passing the input to getting the output to evaluating it to, you know, making it usable. So, I’ve tried to complete the loop, and core architecture or the core algorithm is something we can, you know, iterate upon, so…

282 00:40:54.460 00:41:02.330 Arish Alam: Ideally would like to work that way only, so that we can build the entire loop first, and then we can keep on hammering the center of the loop.

283 00:41:06.780 00:41:09.019 Samuel Roberts: Are there other, kind of, last questions here?

284 00:41:09.730 00:41:16.679 Uttam Kumaran: Yeah, I guess I want to know if you’re, like, playing around on a personal level with, like, any agents or any architecture, like.

285 00:41:16.900 00:41:24.399 Uttam Kumaran: OpenClaw, or Hermes Agent, or, like, Nemo Claw, or any of that, if you’ve tried it out, or, like, what your perspective is.

286 00:41:24.670 00:41:34.540 Arish Alam: Yes, I’ve played around with Hermes Agent and OpenClaw both. Actually, I’ve had Hermes Agent deployed as well. I’ve integrated it with Slack and with WhatsApp as well.

287 00:41:35.440 00:41:53.089 Arish Alam: For both of the agents, I think that context management is really a problem here. So, for OpenClaw, thread context management is a problem, and for Hermes Agent, it’s, again, the thread context problem. So, like, for example, on WhatsApp, it does not

288 00:41:53.090 00:42:07.980 Arish Alam: gets the information from the existing chats we have done with it, right? So it only relies on the session-based information. So let’s suppose if I have, like, 4 to 5 messages in a fresh session, that’s the only context that the homeless agent is getting.

289 00:42:08.130 00:42:22.819 Arish Alam: So, that is kind of a bottleneck in Homeies Agent and OpenCloud both. So, that is the reason, actually, we kind of started building our own agent harness as well, which is built around Cloud Code.

290 00:42:23.460 00:42:39.300 Arish Alam: this, this, agent practically tries to solve for a multi-tenancy problem. So, OpenClaw and Hermes both are sort of personal assistants, right? They cannot be used across an organization with,

291 00:42:39.660 00:42:49.040 Arish Alam: n number of team members. So let’s suppose if you want a product which can be used across different team members, one agent which can be used across different team members, right?

292 00:42:49.210 00:42:54.239 Arish Alam: How do you bet for that product? So, that is what I’ve been actually working for the past 3 months.

293 00:42:55.050 00:43:10.799 Uttam Kumaran: Yeah, so internally, we’re tackling a similar thing. We’re gonna wrap OpenCode or something and deploy our own version, but that’s where I think we’ll probably start to implement some type of memory across the business, across teams, and individuals, right? So we’ll have some type of provisioned memory.

294 00:43:11.050 00:43:19.610 Uttam Kumaran: Where, like, you can kind of pick it up wherever you are, and it starts to build that sort of stuff, but yeah, we’re just sort of also thinking about that now, so…

295 00:43:19.970 00:43:25.180 Arish Alam: Yes, right now, I think memory is a very interesting piece to work upon, and I mean…

296 00:43:25.180 00:43:44.989 Arish Alam: Cloud Code made it very clear that you can use file system as a memory system… memory system for yourself, and now there are different syncs as well, S3 file sync available as well, which is, like, very fast to, you know, boot up the entire file system memory to a sandbox system or something, so that… that makes it a very clean system.

297 00:43:46.450 00:43:47.130 Uttam Kumaran: Nice.

298 00:43:49.510 00:43:51.330 Uttam Kumaran: Any questions for us?

299 00:43:51.560 00:43:52.530 Samuel Roberts: Yeah, totally.

300 00:43:52.830 00:44:07.259 Arish Alam: Yes, I do. So, this is on the client side of things, right? So, if you would ideally have to categorize all of the different projects that you’re doing, right?

301 00:44:07.570 00:44:13.109 Arish Alam: Would you say that how much of your work is divided between, you know, different categories of the

302 00:44:13.240 00:44:31.410 Arish Alam: client problems that you’re trying to solve. So, is it something, let’s suppose, are you creating generic agents so that your clients can use those generic agents, and then can, you know, get any outcome in the future from those agents? Or is it, like, you’re solving a specific workflow problem for that?

303 00:44:31.410 00:44:38.990 Uttam Kumaran: Fairly scoped. Yeah, it’s fairly scoped. It’s not really, like… Generic agents without clear outcomes.

304 00:44:39.210 00:44:41.050 Uttam Kumaran: I mean, part of that is, like.

305 00:44:41.340 00:44:45.469 Uttam Kumaran: we have to be under constraints of security. It’s oftentimes also our clients

306 00:44:45.480 00:45:04.809 Uttam Kumaran: don’t really know what’s possible, so part of our job is to share, like, this is what’s possible, and to have very tight scope so that we can really over-deliver, versus, like, we’re just gonna give you agents and MCPs. A lot of our clients have already done that through ChatGPT, right? But what they’re finding is, like, you can’t do complex workloads

307 00:45:05.000 00:45:15.909 Uttam Kumaran: that are unique to my business. Like, writing me an email is one thing, but, like, taking an existing workload and moving it to… to something that they own is a different, you know, issue.

308 00:45:16.710 00:45:31.489 Arish Alam: Cool. Along the same lines, actually, I have another question as well. So, since there are so many agents already out there, right, like Hermes Agent, OpenClaw agent, and Claude Code also kind of ships features at a very, you know,

309 00:45:31.620 00:45:39.660 Arish Alam: frequent pace. What is the… what is one kind of problem, recurring problem, that you keep on seeing that you might want to solve for?

310 00:45:42.070 00:45:47.369 Uttam Kumaran: Yeah, I mean, I think in our business, we’re seeing, like, we need to create probably something like a skill registry.

311 00:45:47.650 00:45:57.829 Uttam Kumaran: And it’s… I don’t think it’s, like, kind of a solved problem, which is, like, wherever you go, you maybe have personal skills, you have team skills, you have global skills.

312 00:45:57.990 00:46:12.079 Uttam Kumaran: I don’t think that’s solved, and I don’t think there’s, like, a great software for that. Right now, you have to figure out, like, submodules and… and local things, I don’t know. So that’s probably one thing that, you know, it would be great for us to kind of have a perspective.
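
The scoping problem described here (personal vs. team vs. global skills) reduces to a precedence lookup; a minimal sketch of the resolution rule (the scope names and registry shape are hypothetical):

```python
# Scope precedence for skill lookup: personal overrides team overrides global.
SCOPES = ("personal", "team", "global")


def resolve_skill(name: str, registry: dict):
    """registry maps scope -> {skill_name: skill_path}; nearest scope wins.

    Returns the winning skill path, or None if no scope defines the skill.
    """
    for scope in SCOPES:
        skills = registry.get(scope, {})
        if name in skills:
            return skills[name]
    return None
```

The hard part the speakers point at is not this lookup but distribution: keeping the three registries synced across machines and repos, which is what submodules and local copies are standing in for today.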

313 00:46:12.710 00:46:20.509 Uttam Kumaran: Apart from that, I think we want to have… give clients, like, options. Like, some clients, they’re okay with using Cloud code. Some clients, they want the full telemetry.

314 00:46:20.520 00:46:32.769 Uttam Kumaran: like, in our business, I’m interested in having all of the telemetry of all of the interactions, so we need to move to an open code style model, or some type of service that we own that’s built on open source.

315 00:46:32.900 00:46:40.079 Uttam Kumaran: Some people also, they’re security-minded, so they want to own everything, they want to deploy something on-prem or locally.

316 00:46:40.110 00:46:53.400 Uttam Kumaran: And so those are, I think, some of the interesting things. Like, I’m sure some clients are gonna start asking us to have things that only run locally on machine, or locally within their servers, using open source models, right? So, like, how do we support that?

317 00:46:54.510 00:46:56.010 Arish Alam: Got it, okay.

318 00:46:56.670 00:46:57.430 Arish Alam: Okay.

319 00:46:58.200 00:47:13.339 Arish Alam: Cool, yeah, I think just one last question. The clients that you work with, do they have a technical team in-house, or is it, like, the folks that you work with do not have, like, technical expertise, or in-house technical expertise?

320 00:47:14.480 00:47:29.379 Uttam Kumaran: I think it kind of depends. I think for the most part, we’re… there are, like, technical… there’s a technical organization. Whether or not they’re able to do anything in AI is, like, a different story. So we’re not coming in through the technical org, we’re coming in through the business side.

321 00:47:29.910 00:47:39.399 Uttam Kumaran: So we’re solving, like, a business problem, so we’re not coming in as, like, okay, requirements are here, go build this. We’re helping shape, actually, the requirements.

322 00:47:39.640 00:47:48.480 Uttam Kumaran: And then also moving into basically building as well. So there’s a mix of, like, strategy and, like, product design, and then sort of moving into…

323 00:47:49.850 00:48:05.270 Uttam Kumaran: you know, actually, like, developing a solution, but for the AI stuff, I feel like rarely do we have a counterpart that’s, like, as sophisticated as us, or as far as we are, right? So oftentimes, we’re teaching, and we’re trying to drive them towards making decisions, you know?

324 00:48:05.810 00:48:06.580 Arish Alam: Okay.

325 00:48:07.160 00:48:11.609 Samuel Roberts: A lot of the technical people we deal with might be IT, you know, they’re managing…

326 00:48:11.610 00:48:27.689 Samuel Roberts: Maybe there are instances on certain things, and so they… we need to work with them to provision certain things or get access, and they might be the most technical people that we talk to, and they have maybe a little bit more of an idea of what the stuff can do, but they’re not necessarily the ones that we’re, you know, like he said, we’re coming in from a business angle more, so…

327 00:48:28.510 00:48:30.170 Arish Alam: Cool. Okay, alright.

328 00:48:31.970 00:48:32.500 Samuel Roberts: Sure.

329 00:48:33.970 00:48:34.940 Samuel Roberts: Yep. Weird.

330 00:48:35.560 00:48:39.500 Samuel Roberts: Kind of at time, but are there any last thoughts or questions from, I guess, any of us?

331 00:48:39.870 00:48:46.810 Arish Alam: Yes, I can. Just, I think, just one last question. Sorry for keeping you.

332 00:48:47.620 00:48:48.450 Samuel Roberts: Like, we’re all good.

333 00:48:48.900 00:49:02.009 Arish Alam: So if I start, I mean, if I start working with you guys, how are my first few weeks gonna look like? What… what… I mean, what should I expect, or what do you expect out of me? That is what I want to ask, yes.

334 00:49:02.440 00:49:08.189 Uttam Kumaran: Yeah, I think, like, from the… from, like, sort of onboarding side, we typically will do, like, 30, 60, 90,

335 00:49:08.190 00:49:32.579 Uttam Kumaran: like, plans for everybody. So ultimately, like, you’re joining the delivery team, so we will have a client in which, like, we’ll see how you can fit in to actually deliver. But really, part of this is, like, making sure your whole setup is great, like, you have all the necessary, like, resources to start. Also, we have our, like, a platform and a little bit of, like, learning and development process already set up, so you’ll kind of go through our course.

336 00:49:32.630 00:49:45.350 Uttam Kumaran: internally, which is, again, if you’re on the AI team, I feel like it should be pretty easy. It’s like creating your first skill at Brainforge, understanding our platform and how to interact with all the integrations and MCPs in order to execute work.

337 00:49:45.350 00:50:02.000 Uttam Kumaran: And then ultimately, it’s like working with, you know, like, within your specific client to find out where you fit. And so usually, it’s like, within the first week, you’ll find there’s one client where you’ll start doing some work for, and then over that 30, 60, 90 day period, it’s just expanding, you know, your scope.

338 00:50:02.000 00:50:14.819 Uttam Kumaran: But you’ll also find that, like, we’re using every tool possible to speed up, you know, development work, so we’re also looking for your feedback, and, like, the reason why we’re here talking to you is you bring a lot of expertise and, like.

339 00:50:14.820 00:50:33.380 Uttam Kumaran: using codecs, cursor, whatever, to start building. And so we’ll be looking for, like, exactly what you’re saying. How do you scale some of these development practices across our organization? And so how are you actually contributing back to, like, how all of us develop for clients? Which, for example, even today, I’m like.

340 00:50:33.420 00:50:38.149 Uttam Kumaran: I was literally gonna tell the team, we should start using Excalibraw, because it’s been a while since I, like.

341 00:50:38.200 00:50:49.619 Uttam Kumaran: checked out the MCP to draw diagrams, maybe we should just start using that, right? So that would be a great example. I’d be like, Arish, can you show a bunch of people, like, how they can start using this for technical design diagrams?

342 00:50:49.620 00:50:59.249 Uttam Kumaran: Or for, like, sales wants it for something, marketing wants it, right? So that’d be an explanation of, like, contributing back to, like, the entire organization.

343 00:50:59.380 00:51:06.949 Uttam Kumaran: But it would sort of just be, like, jumping kind of headfirst into things. But you’ll find that I think we’re on your same wavelength of, like.

344 00:51:07.060 00:51:19.400 Uttam Kumaran: really forward on using AI for development. Part of the ask, though, are not just developing AI systems, it’s, like, pretty classic back-end, front-end, DevOps, CI-style work, in addition to, like.

345 00:51:19.670 00:51:33.240 Uttam Kumaran: you know, LLM-related capabilities. So, like, for example, we may have one client on GCP, we have to learn, like, some Gemini situation, or there’s some client where we’re, like, rolling our own stuff, and so being able to sort of roll with the punches.

346 00:51:33.530 00:51:37.310 Uttam Kumaran: That’s sort of probably how it would look like, I think.

347 00:51:38.240 00:51:40.350 Arish Alam: That sounds interesting, yeah.

348 00:51:42.610 00:51:45.620 Uttam Kumaran: Cool. Okay, well, thank you for the time, Arish, I appreciate it.

349 00:51:45.820 00:51:47.470 Arish Alam: Yeah, thanks as well.

350 00:51:48.540 00:51:49.140 Arish Alam: Who’s name?

351 00:51:49.430 00:51:50.450 Uttam Kumaran: Talk to everyone, too.

352 00:51:51.090 00:51:52.399 Pranav Narahari: Same, guys.