Meeting Title: Brainforge Interview w- Pranav Date: 2026-03-18 Meeting participants: Marcus, Pranav
WEBVTT
1 00:00:53.440 ⇒ 00:00:54.270 Pranav: Hey, Marcus.
2 00:00:54.870 ⇒ 00:00:56.270 Marcus: Hey, hello, can you hear me?
3 00:00:56.980 ⇒ 00:00:58.340 Pranav: Yeah, I can hear you. Can you hear me?
4 00:00:58.580 ⇒ 00:00:59.560 Marcus: Yeah, I can.
5 00:01:00.630 ⇒ 00:01:01.340 Pranav: Perfect.
6 00:01:01.970 ⇒ 00:01:03.169 Pranav: How’s your day, then?
7 00:01:04.120 ⇒ 00:01:07.600 Marcus: Yeah, it’s just morning, it’s 7am, so that just started.
8 00:01:07.600 ⇒ 00:01:10.470 Pranav: Okay. You’re, where are you based out of?
9 00:01:10.900 ⇒ 00:01:16.739 Marcus: So, I am based out of Virginia, but currently I am in, California, with my parents, so…
10 00:01:18.320 ⇒ 00:01:23.960 Pranav: That’s awesome. Yeah, so,
11 00:01:24.890 ⇒ 00:01:28.349 Pranav: It sounds like you’ve already spoke to Sam, right, and Kayla?
12 00:01:28.670 ⇒ 00:01:29.300 Marcus: Yep.
13 00:01:30.490 ⇒ 00:01:35.389 Pranav: Great. How did the conversation with Sam go? Because he’s another AI engineer here.
14 00:01:36.590 ⇒ 00:01:40.670 Marcus: Yeah, I don’t think it was Sam, I think it was Martin…
15 00:01:40.780 ⇒ 00:01:44.260 Marcus: Something, he was Director of Engineering.
16 00:01:46.720 ⇒ 00:01:48.460 Marcus: Let me just check.
17 00:01:49.460 ⇒ 00:01:51.160 Marcus: The name of the person.
18 00:02:00.310 ⇒ 00:02:05.310 Marcus: oh, okay, yeah, sorry, it was Sam.
19 00:02:05.430 ⇒ 00:02:06.740 Marcus: I didn’t remember.
20 00:02:07.570 ⇒ 00:02:19.529 Pranav: Yeah, all good, all good, yeah. So he is, yeah, he does have, like, that senior role of, like, an AI engineer here at Brainforge. I kind of wanted to start off there, just,
21 00:02:19.830 ⇒ 00:02:25.590 Pranav: what has he kind of told you? If not, if you guys haven’t already talked about it, we can talk about it a little bit as well, about, like.
22 00:02:26.050 ⇒ 00:02:31.490 Pranav: what you have heard about, like, being an AI engineer at Brainforge, and what, like, excites you about that?
23 00:02:32.090 ⇒ 00:02:37.609 Marcus: Sure thing. So, I just got a gist of it. So, I basically talked about
24 00:02:37.880 ⇒ 00:02:46.379 Marcus: the projects, the number of projects they are going to be, so I can be a part of a team where, if I remember it correctly.
25 00:02:46.380 ⇒ 00:03:04.569 Marcus: So, I would be a part of a team where there are 4 or 5 different AI engineers, too, working in that particular team, and then there would be some itch and Tech workflows projects. There can be some machine learning model trainings as well, where we can be a part of it. So, yeah, it…
26 00:03:04.570 ⇒ 00:03:08.710 Marcus: Would be a very diverse kind of a role, where we have to just
27 00:03:08.740 ⇒ 00:03:23.940 Marcus: put ourselves in, and just work, around with what the requirements of the team are, and what everything… So, if that’s exactly what I remember, and if it’s correct. So, currently, I’m, interviewing with a lot of companies right now, so I… if that’s correct.
28 00:03:24.000 ⇒ 00:03:25.639 Marcus: And I would love to know a bit more.
29 00:03:26.910 ⇒ 00:03:32.440 Pranav: Yeah, totally. So A little bit about…
30 00:03:32.560 ⇒ 00:03:43.709 Pranav: what’s, I think, unique about Brainforge is that, yeah, you do work kind of, like, on… in small teams, and now that we are… you’re kind of joining the AI service line,
31 00:03:44.740 ⇒ 00:03:57.150 Pranav: you’re going to be working full stack. So the… you should be able to kind of be working full stack, but then even on top of that, working with clients as well, like, it should be something that you’re comfortable with.
32 00:03:57.530 ⇒ 00:04:09.259 Pranav: And so, I also would just kind of like to know, where is your, like, current standing with, like, using, like, different AI tools? What AI tools do you use currently, like, in your workflow?
33 00:04:09.840 ⇒ 00:04:10.910 Marcus: So,
34 00:04:10.910 ⇒ 00:04:30.049 Marcus: I think the AI tools are amazing. They’re doing an amazing job. I have leveraged, started with, the co-pilot, then moved towards Cursor, and now I’m currently leveraging Cloud Code. I think Cursor and Cloud Code are the groundbreakers. The reason is they do…
35 00:04:30.060 ⇒ 00:04:34.050 Marcus: A lot of stuff that you… So let’s just say.
36 00:04:34.050 ⇒ 00:04:56.370 Marcus: Previously, we had 3, 4 points, 5… sorry, 3 or 5 points of tickets that we would have done over the week, but now that we can do it in a day, in less than a day, the reason is you don’t have to code everything yourself, so that’s amazing. Just one thing that I try to be really careful when using these tools, so these tools try to implement a lot.
37 00:04:56.370 ⇒ 00:04:57.060 Marcus: Dope.
38 00:04:57.060 ⇒ 00:05:15.149 Marcus: sometimes they don’t really get, to the solution, to the architecture level solution, or the nitty-gritties of the solution that you are looking for, and you have to make sure that those things are intact. The reason is, you have to make sure that your application scales. So, it works for 5 users, it’s worked for 100 users, it
39 00:05:15.150 ⇒ 00:05:22.680 Marcus: it’s not sure that it would work for 100,000 or million users. So basically, those are the things that I try to be really careful about.
40 00:05:22.700 ⇒ 00:05:45.870 Marcus: whenever I get a problem, I don’t just copy-paste that problem to Cursed and just try to get the whole code out of it, and then just make a PR and just ping the channel that the PR is ready for review. What I do is that first I try to brainstorm that solution with Curset or with codecode. So, I think that’s really what helped me throughout, so I try to spend an hour or so
41 00:05:45.870 ⇒ 00:06:01.830 Marcus: with that, tool, brainstorming the solution, so if I have one thing in mind, I try to spit that solution out, and I try to get the solution of that. And then, once we come to a proper solution, then I try to make that solution optimized
42 00:06:02.050 ⇒ 00:06:18.849 Marcus: as much as possible. If that there’s a function that I can optimize, if there’s a query that I can optimize, if there’s somewhere on the front-end side of things, if we can implement ego loading or lazy loading, so just to load the data out of it, so our application’s experience remains intact, everything. So yeah, the N plus 1 queries, duplications.
43 00:06:18.910 ⇒ 00:06:43.810 Marcus: anything like that. So, yeah, I try to make sure, that these kind of things are there, and then once, I’m pretty sure with, with the solution that we have come up with during our brainstorm session. So, yeah, I then try to implement it and then, raise a PR. And Cloud Code, if you know, it has a slap backslash to view feature too, so it also reviews your PR. It’s a bit expensive, but it is really
44 00:06:43.810 ⇒ 00:06:47.730 Marcus: code, review of your code, so I try to use that, too.
45 00:06:47.730 ⇒ 00:06:48.380 Pranav: Right.
46 00:06:48.750 ⇒ 00:06:52.450 Pranav: Yeah. I kinda wanna…
47 00:06:52.560 ⇒ 00:07:04.500 Pranav: talk a little bit about, like, any LLM or RAG project that you’ve shipped, preferably, like, end-to-end, or, you know, tell me about your… your stake in that project.
48 00:07:04.500 ⇒ 00:07:13.069 Pranav: And let’s just start high level from, like, just, like, a customer’s perspective, like, what was the user workflow, just so I can get an understanding of, like, what the project was.
49 00:07:13.070 ⇒ 00:07:18.799 Pranav: And then, like, what were you trying to accomplish in terms of, like, a feature set?
50 00:07:19.470 ⇒ 00:07:42.360 Marcus: Sure thing. So, I think the top of my mind for RAG and LM, so there are two different projects that I would love to share. Those are really cool projects, I think, from my perspective. So, the most recent one was, an interviewer. So, as, as you know, there are, usually the companies have a recruiter, or a recruiting firm, who takes the first step interviews and vets the candidates for the next rounds.
51 00:07:42.360 ⇒ 00:07:57.370 Marcus: Just to make sure that everything is intact and everything is in place. So, what we did is, our company, where I was working, so they had another company who had a lot of job postings that they… so they wanted… so, of course, they had more than 10, 20,
52 00:07:57.380 ⇒ 00:08:08.349 Marcus: 30 jobs a day, so which means that there would be a huge set of candidates, and of course, any company, would not have that kind of manpower to interview that kind of candidate, in the
53 00:08:08.770 ⇒ 00:08:32.960 Marcus: time, just to make sure that every candidate gets the equal opportunity. So they wanted it to be automated. So, yeah, this is what we discussed in our hands, all-hands meeting. I came up with the idea that why not just build an AI bot, which can take that interview? You don’t have to vet the candidates from the start, you just can feed those questions that you would have. So every recruiter has all set, same set of questions that they would be asking all the candidates.
54 00:08:32.960 ⇒ 00:08:33.889 Marcus: In those 30-minute.
55 00:08:33.890 ⇒ 00:08:42.399 Pranav: So is this asking it, like, in a chat interface, or was it asking it in a, like, a video call? Kind of tell me, like, what the product looked like, yeah.
56 00:08:42.559 ⇒ 00:09:07.439 Marcus: Yeah, sure, like, so what happened is, so whenever a candidate applied to a particular role, or a job posting, so he would send with an AI interviewer link on their email. So whenever they click on that email, so there would be a screen open to them, that’s where they will be basically joining that interview. So, the interview was… basically, you have to make sure that you are enabled with your video and audio.
57 00:09:07.529 ⇒ 00:09:15.299 Marcus: just to… the audio and video was to… just to make sure that the candidate is a real candidate, they are not just, any other…
58 00:09:15.299 ⇒ 00:09:36.349 Marcus: a tool talking, or a recording talking, just like that. So, audio of that, and then we had a set of questions that we were providing, the AI to ask. So let’s just say this interview had 5 questions, okay? So we had… on the corner, we had a question, a progress bar, which was basically making sure, how to, or when to end the, end that interview.
59 00:09:36.349 ⇒ 00:09:47.729 Marcus: So, basically, this experience was quite important for us. The reason is, if we… so they… firstly, what we were doing is that we were ending the interview based on the time, but we’re in… sometimes in…
60 00:09:47.799 ⇒ 00:09:49.259 Marcus: Here, can you hear me?
61 00:09:52.759 ⇒ 00:09:54.159 Marcus: Hello, can you hear me?
62 00:09:59.369 ⇒ 00:10:00.419 Marcus: Hello.
63 00:10:02.090 ⇒ 00:10:03.030 Pranav: Yes, I can still…
64 00:10:03.800 ⇒ 00:10:04.710 Marcus: Hello, hello.
65 00:10:11.170 ⇒ 00:10:12.379 Marcus: Hello, can you hear me?
66 00:10:12.480 ⇒ 00:10:13.960 Marcus: Yeah, you’re screened.
67 00:10:14.940 ⇒ 00:10:15.860 Marcus: Icon.
68 00:10:16.950 ⇒ 00:10:18.309 Pranav: You can’t, or you can?
69 00:10:18.310 ⇒ 00:10:20.449 Marcus: Yeah, I can now, yep.
70 00:10:20.930 ⇒ 00:10:25.830 Pranav: Okay, perfect. Yeah, one thing I was just gonna ask you was,
71 00:10:27.060 ⇒ 00:10:31.670 Pranav: So, was this a… was this a personal project? Was this part of, like, a previous company you were working at?
72 00:10:31.670 ⇒ 00:10:33.679 Marcus: Previously, yeah, right.
73 00:10:33.680 ⇒ 00:10:38.880 Pranav: Okay. So, okay. I was gonna ask if you could demo something, but no, that’s fine.
74 00:10:38.880 ⇒ 00:10:40.490 Marcus: Yeah, it’s a better product.
75 00:10:41.000 ⇒ 00:10:50.659 Pranav: Yeah, no, no, no, that totally makes sense. Okay, that sounds great, and then one just question I have before we kind of do more of a deep dive was just, like…
76 00:10:51.850 ⇒ 00:10:54.150 Pranav: Yeah, so one question I had before we…
77 00:10:54.830 ⇒ 00:10:56.769 Pranav: I can hear you. Yeah, yeah, can you hear me?
78 00:10:57.100 ⇒ 00:10:57.939 Pranav: You’re just a little bit chunky.
79 00:10:57.940 ⇒ 00:10:58.930 Marcus: Yeah, I can hear you.
80 00:11:00.220 ⇒ 00:11:00.900 Pranav: Okay.
81 00:11:01.340 ⇒ 00:11:03.220 Marcus: Yes, sorry.
82 00:11:03.820 ⇒ 00:11:04.800 Marcus: Internet.
83 00:11:05.860 ⇒ 00:11:11.729 Marcus: One thing that I can demo is, another personal project.
84 00:11:13.080 ⇒ 00:11:19.780 Pranav: I think we need to move on a little bit, to just kind of go into a little bit more of a deep dive of that project, is what I’d prefer.
85 00:11:19.780 ⇒ 00:11:20.200 Marcus: Nothing.
86 00:11:20.700 ⇒ 00:11:21.579 Marcus: Sure, sure, sure.
87 00:11:22.500 ⇒ 00:11:23.240 Pranav: Yeah
88 00:11:23.550 ⇒ 00:11:30.280 Pranav: So, yes, I think… I’m not sure whose internet it is, it’s just a little bit choppy, but I think hopefully it gets a little bit better.
89 00:11:30.540 ⇒ 00:11:42.350 Pranav: Yeah, so I kind of wanted to just ask one more question, just, like, on the product itself. So, was there a audio component from the AI as well, or was it just asking these questions in the chat?
90 00:11:43.220 ⇒ 00:11:56.550 Marcus: No, it was from the audio component from the OpenAI itself, so there’s a real-time API. OpenAI provides a real-time API, which basically allows you to do text-to-speech and speech-to-text, so basically that is what we were leveraging.
91 00:11:57.450 ⇒ 00:12:05.379 Pranav: Gotcha. And so, if you were to kind of just describe this architecture in, like, about a minute, how would you describe the architecture of this entire app?
92 00:12:05.890 ⇒ 00:12:20.039 Marcus: So, basically, this application, this was a feature of a complete application. So, basically, what happened is, so we had the front end in Next.js, also the backend was in Next.js, so, client-side-in server was there, and then we had OpenAI,
93 00:12:20.040 ⇒ 00:12:44.910 Marcus: ahead of it. So we were leveraging TypeScript, that was the language, that we had intact, throughout the project that we were having. And, yeah, basically the, the complete architecture would be that, the screen appeared, the front end was just, your video, and a mic button was there, just to make sure that you were speaking, and just for the user experience. And then on the back-end side of things, again.
94 00:12:44.910 ⇒ 00:12:45.540 Marcus: engine.
95 00:12:45.540 ⇒ 00:13:05.430 Marcus: we had the code in Next.js itself, and there was an OpenAI key, real-time API that we were leveraging for this particular thing, and then there was a WebSocket connection that had to be made, and that WebSocket was a real, important thing. The reason is we had to make sure that this particular WebSocket connection is to be maintained in
96 00:13:05.430 ⇒ 00:13:29.380 Marcus: us too, let’s just say we are… you are the AI interviewer, and I am the candidate who is interviewing, we have to make sure that no other candidate can access this particular interview, right? So a WebSocket is an open connection, so there was an authentication also implemented on that, in that particular thing, so basically just, like, a room ID that we provide in a WebSocket. So, yeah, that was intact. And then finally, as I mentioned, there was another component, which was one, the progress bar of
97 00:13:29.380 ⇒ 00:13:38.499 Marcus: questions that we had. So we implemented another agent. I would… I call it a small agent that we implemented, which was basically keeping the track of the questions asked.
98 00:13:38.500 ⇒ 00:13:57.859 Marcus: it was basically the progress of the questions asked. So, we had those set of questions, I provided those set of questions to that particular agent, and basically asked to evaluate if the other agent, the interviewer itself, has asked those questions or not. So, if that particular agent has asked that question.
99 00:13:57.860 ⇒ 00:14:03.360 Marcus: So basically, the progress would be increased, so 1 would be moved towards 2, and then…
100 00:14:03.420 ⇒ 00:14:28.110 Marcus: And once, we have… we are 5 out of 5. So, basically, I provided with an instruction that you have to say this, concluding line, so thank you for taking the time. Our interviewer team will be processing your, or basically taking the notes of this interview and reaching back to you with the next steps. So yeah, basically, this was the entire structure that we had. Our application itself was in, a Dockerized application, and we had it, on a remote containerization service, which is Fly.io.
101 00:14:28.260 ⇒ 00:14:34.010 Marcus: So, they had a huge set of credits there, that’s why they were leveraging Fly. So, yeah.
102 00:14:34.420 ⇒ 00:14:35.200 Marcus: Definitely.
103 00:14:35.200 ⇒ 00:14:44.260 Pranav: That’s great, yeah. Let’s maybe just dive a little bit deeper into the AI component of things. So you mentioned, like, you have this progress bar, yeah.
104 00:14:44.600 ⇒ 00:14:51.030 Pranav: Were all the… were the questions basically… were they static? Or were they dynamically asking? Okay.
105 00:14:51.220 ⇒ 00:14:54.550 Marcus: Yeah, questions were static, so we had this,
106 00:14:54.580 ⇒ 00:15:08.260 Marcus: prompt given to the AI, that if there’s some technical question in this particular set of questions, then you can ask a follow-up question, or you, if you don’t understand the… or you don’t think that this particular thing or answer is
107 00:15:08.260 ⇒ 00:15:18.710 Marcus: accurate or elaborative enough, so you can ask a follow-up question, but there’s no need to ask follow-up questions if everything’s correct, going good. Basically, it was disrupting our flow.
108 00:15:18.710 ⇒ 00:15:19.849 Marcus: Entirely.
109 00:15:19.850 ⇒ 00:15:20.450 Pranav: I see.
110 00:15:20.840 ⇒ 00:15:27.289 Pranav: So, for this, specific application, what model did you use, and why did you end up choosing that model?
111 00:15:27.680 ⇒ 00:15:30.229 Marcus: Yeah, so started with GPT-4.0.
112 00:15:30.230 ⇒ 00:15:53.960 Marcus: So, and afterwards, we moved towards GPT-5. The reason is, we wanted, to make sure that we get the technologies as, the latest technologies. The reason is, just wanted to get… and GPT, the reason of choosing GPT was this company already had a huge contract with OpenAI itself, so they were working directly with OpenAI, and they had,
113 00:15:53.970 ⇒ 00:16:18.909 Marcus: Yeah, they were working with Anthropic too, but they had, huge contracts with OpenAI, so OpenAI was giving them other side projects too, so that’s why they were leveraging OpenAI, and we had to use that OpenAI. And I think, for this type of conversation, where you don’t have to make a lot of researching decisions or some, simple, yeah, simple textual, outputs and all of that, so I think GPD works perfectly fine, there’s no issue.
114 00:16:18.910 ⇒ 00:16:19.530 Marcus: doing that.
115 00:16:20.440 ⇒ 00:16:39.739 Pranav: Yeah, I agree. In terms of actually analyzing the answers after the fact, you know, after the screening, what type of models did you use there? What did you do to actually assess whether this person can go past AI screening into future rounds with, like, a real engineer?
116 00:16:40.150 ⇒ 00:16:41.520 Marcus: Yeah, so, yeah.
117 00:16:41.520 ⇒ 00:17:01.050 Marcus: A very good question. So, this is what we implemented afterwards. So, we made sure that now we are getting the transcript of that interview, now we are getting the audio, now we are getting the video. So, basically, this cut down the time of the recruiter itself from 30 minutes to around 3 to 5 minutes, right? So, that was more than enough for them. But what I…
118 00:17:01.050 ⇒ 00:17:03.870 Marcus: Another thing that we proposed is that why not just
119 00:17:03.870 ⇒ 00:17:23.790 Marcus: pass that transcript to another agent, and ask that agent to evaluate that, the answers. So, you know, so you have a very good gist of, that this particular person is scoring 80%. So, let’s just see his answers first, and not any other who scored 50% of the, out of 100%.
120 00:17:23.790 ⇒ 00:17:30.210 Marcus: So, this is what we implemented, and what we did is that we provided the job description.
121 00:17:30.210 ⇒ 00:17:54.819 Marcus: as a prompt… as a part of a prompt to that agent, then we provided the ground truth of the answers. So ground… by ground truth is what, I mean, is that if… what is a SQL, or what is SQL query? So, we have a set of definition provided, right? So, what we, what we provided to… in the prompt is that, this is a ground truth, and what you have to do is that you don’t have to be
122 00:17:54.820 ⇒ 00:18:04.579 Marcus: basically evaluate that if that particular person answered exactly this particular thing. You have to make sure that the semantic, the context of their answer is exactly similar to this. So if they miss.
123 00:18:04.580 ⇒ 00:18:09.649 Pranav: And so how did you make this a semantic comparison? Was it, was the…
124 00:18:10.010 ⇒ 00:18:18.720 Pranav: the ground truth answer is just part of the system prompt? Was it… were you guys using RAG? Okay. So how many questions did you guys have?
125 00:18:19.640 ⇒ 00:18:24.079 Marcus: Yeah, so they were usually, ranging from 6 to 10 questions.
126 00:18:24.690 ⇒ 00:18:29.060 Pranav: I see, so the system prompt only had max 10 questions. Okay, that makes sense.
127 00:18:29.480 ⇒ 00:18:33.129 Pranav: Okay. That sounds great. What,
128 00:18:33.600 ⇒ 00:18:38.479 Pranav: what did you feel about, like, the analysis? Is there anything here where,
129 00:18:38.480 ⇒ 00:18:54.180 Pranav: you know, 10 questions, let’s say you wanted to roll this out to the entire company, where maybe you’re not just onboarding certain types of roles, maybe you’re onboarding every type of role. And you can think even bigger at, like, you know, a company like Microsoft or something, where they’re hiring hundreds and hundreds of people.
130 00:18:54.180 ⇒ 00:19:01.439 Pranav: you won’t be able to fit that much context into a system prompt. How would you redesign that evaluation part of things?
131 00:19:01.980 ⇒ 00:19:15.310 Marcus: Okay, so we won’t have to redesign anything. So, one part that I missed is templating of that interview. So, we had templates based on a job, on a particular job. So, for example, one job is for AI,
132 00:19:15.310 ⇒ 00:19:30.789 Marcus: AI engineer, the other job is for a full-stack engineer. You won’t, you don’t need a full-stack engineer to know anything about AI, right? Yeah, you might need, but not that he should know anything about RAG, or langchain, or implementations of that.
133 00:19:30.790 ⇒ 00:19:55.760 Marcus: So, what we did is that we had this templating for a particular job itself. So, for AI, the questions would be AI questions. For a full-stack engineer role, those questions would be full-stack engineer roles. Then, if there’s any SQL moderator, if there’s a research programmer, or a research student, or if there was a mathematician’s role, if there were chemistry teacher roles, so basically, these were the kind of roles we would
134 00:19:55.760 ⇒ 00:20:20.439 Marcus: dealing with, research, physics researcher, mathematics researcher, chemistry researcher, so these kind of roles that we were having. And as I mentioned, this was directly, they had a huge contract with OpenAI itself, so these researchers were basically directly working with the OpenAI in just making sure that the data is correct, and the validation of the models, are they performing really good or not? So, basically, these kind of things that we were working with, and that’s where they were
135 00:20:20.440 ⇒ 00:20:36.869 Marcus: hiring a lot of people on the research side of things. So, yeah, every job had different set of questions, and based out of those templates, based out of those questions, we were providing those contexts to the AI itself, and basically that’s where they were getting answered.
136 00:20:38.190 ⇒ 00:20:44.500 Pranav: Gotcha. Yeah, one other question that I just want to get through quick, is,
137 00:20:44.670 ⇒ 00:20:59.939 Pranav: With this product, when it was in production, what failures did you run into? What things did you have to patch, and what was the process to… to get a full cycle back working, or, that specific issue fixed in production?
138 00:21:00.650 ⇒ 00:21:05.979 Marcus: Sure thing. So, it was… I don’t want to say it, but it was pain.
139 00:21:06.010 ⇒ 00:21:28.890 Marcus: literal pain. The reason is when you work on a local host, it’s just one host taking up your requests and processing it, performing it, everything is done perfectly and pretty smoothly. But once I shipped the first version of it, so I was facing a lot of issues. There were WebSocket persistence issues.
140 00:21:28.890 ⇒ 00:21:30.499 Marcus: The reason is,
141 00:21:30.500 ⇒ 00:21:42.169 Marcus: And one issue was that there were two ways to end the interview. One way is whenever the progress bar completes, the interview ends automatically. Then there was another way, which is basically end interview button there. So.
142 00:21:42.170 ⇒ 00:21:52.119 Marcus: if you don’t want to continue, you should have that opportunity, right, to end that interview, and if you are feeling good that now is a good time to end the interview, we should be there. So what I did is that I…
143 00:21:52.120 ⇒ 00:22:15.700 Marcus: on end interview, I implemented an API call, which was basically ending that interview, but the issue here was that we had a WebSocket connection open, and I didn’t take that in mind, that if we have a WebSocket connection, we have to end that connection, not an API call to end that interview manually. So, it was working perfectly fine on the local host. The reason is there’s one host working on one particular operation, right?
144 00:22:15.700 ⇒ 00:22:17.449 Marcus: Which is my, one…
145 00:22:17.450 ⇒ 00:22:39.180 Marcus: interview with a candidate and an interviewer. So, it was ending the interview, but when it went to production, so there are different hosts, any host can pick up any request. So, if I… if one… my interview or my WebSocket connection is up on host number one, host number four can pick up the request of API call. So, that’s not where the interview, interview is host… getting hosted.
146 00:22:39.220 ⇒ 00:22:49.249 Marcus: So, yeah, that was the first issue. I didn’t, get it, until the day two. The reason is no AI tool can tell you about this, no, nothing,
147 00:22:49.250 ⇒ 00:23:04.179 Marcus: I was just brainstorming, I just went through a whole lot of googling, whole lot of paper reading, and just at one time I was sitting, I just had my lunch, and I was just discussing it with my friend, it with my friend, and just got
148 00:23:04.180 ⇒ 00:23:13.479 Marcus: Gotcha. But why not, this particular thing that, I think I might be missing this? And yeah, that was the only, thing that, basically, yeah, so yeah, this was the…
149 00:23:15.080 ⇒ 00:23:27.480 Pranav: kind of move on… yeah, so that’s great. I want to kind of move on to, like, some specific scenarios that, we’ve run into at Brainforge. So for one of my clients, this is kind of like a scenario that I like to bring up.
150 00:23:27.480 ⇒ 00:23:35.239 Pranav: is that, for a chatbot where they would… where we had integrated MCP servers and knowledge bases.
151 00:23:35.240 ⇒ 00:23:49.419 Pranav: We noticed that the data was faulty, that it was using to make its analysis claims on. So to give a little bit more context, let’s, let’s say it was, like, a Shopify MCP, and so…
152 00:23:49.510 ⇒ 00:23:58.759 Pranav: how it should be functioning, it should be pulling the, let’s say, yesterday’s orders, and it should be giving you a report of those orders.
153 00:23:58.850 ⇒ 00:24:14.739 Pranav: based on some system prompt, or based on the prompt you provide in the chat, right? For whatever reason, the data that was coming up was… actually, it was just generating sample data instead of actually fetching the real data from Shopify.
154 00:24:15.780 ⇒ 00:24:17.279 Pranav: Can you give me, like.
155 00:24:17.510 ⇒ 00:24:25.859 Pranav: Given that information, what would you do to, like, further look into the issue? How would you diagnose it, and then how would you kind of try to, fix it?
156 00:24:27.450 ⇒ 00:24:28.800 Marcus: Sure. So…
157 00:24:29.140 ⇒ 00:24:42.390 Marcus: the thing that is happening is that the AI is not getting the exact data out of… if it’s a database, it should be getting the data out of the database, just like a SQL agent is doing. So basically, it should query the… the…
158 00:24:42.600 ⇒ 00:24:59.519 Marcus: orders from yesterday, and it should have that response out of it, and then should be basically processing that, right? If that is the use case we are. And now what’s happening is that it’s not getting the data of the yesterday, of yesterday, so basically it might be getting any random data or dummy data, right?
159 00:25:00.340 ⇒ 00:25:04.870 Pranav: Yeah, so it’s generating dummy data. It’s not fetching it from anywhere.
160 00:25:05.950 ⇒ 00:25:06.660 Marcus: Okay.
161 00:25:08.500 ⇒ 00:25:14.470 Marcus: So, basically, I would be… Checking…
162 00:25:16.480 ⇒ 00:25:28.650 Marcus: So, I would be checking, if the system prompt, if it’s the exact system prompt, or the user prompt that we have provided, is it going there? It is getting that exact prompt.
163 00:25:28.650 ⇒ 00:25:41.610 Marcus: Or the user prompt and the system prompt, both, and then we are accessing the database, or if we are not… so you mentioned that we are not accessing the database, it is generating random data, so it should be based out of any
164 00:25:41.610 ⇒ 00:25:43.759 Marcus: Ground truth, or any…
165 00:25:44.000 ⇒ 00:25:53.559 Marcus: additional data, just like yesterday’s orders. So, basically, that is what the ground truth is, and that is what we would be having on the… on the database side of things. Got it.
166 00:25:53.560 ⇒ 00:26:02.670 Pranav: So, system prompt updates, definitely. I guess, let me give you, like, a little bit more information, kind of a different scenario now, though.
167 00:26:03.310 ⇒ 00:26:06.290 Pranav: Let’s say that… with…
168 00:26:06.520 ⇒ 00:26:25.820 Pranav: this data, it’s just… it’s not giving very deterministic outcomes on, on the report. What are, like, what’s, like, a quick parameter fix that you can use to, like, update that so that you can feel like your LLM is being a little less wide-scoped in its thinking, and it’s just more deterministic?
169 00:26:26.800 ⇒ 00:26:33.209 Marcus: So, it’s the temperature that we set, usually. So, we don’t want the LLM to hallucinate.
170 00:26:33.210 ⇒ 00:26:54.220 Marcus: On any kind of data, or we don’t want it to just bring up things. It should make… we want to make it… we make… want to make sure that it is getting exact things that we are providing, not any other things. So we have the data of a Shopify store, it shouldn’t tell you anything about AI services that we provide. So basically, that’s what I do, is we set the temperature.
171 00:26:55.330 ⇒ 00:26:59.070 Pranav: Yeah, totally, totally. That’s great.
172 00:26:59.150 ⇒ 00:27:12.529 Pranav: You mentioned how there was, like, one other project as well that, you wanted to talk about. So, yeah, we just have 2-3 minutes left. If you want to talk about that a little bit, we can. If you want to ask any questions, we can.
173 00:27:12.530 ⇒ 00:27:19.700 Pranav: Also, I am interested, just kind of, like, what you’re excited to build here, based on your conversation with Sam.
174 00:27:19.700 ⇒ 00:27:26.650 Pranav: With Kayla, if anything came up in this interview where you’re like, oh, that would be cool to, like, build something like that.
175 00:27:26.660 ⇒ 00:27:29.450 Pranav: Yeah, all of those topics go whatever direction you want.
176 00:27:30.200 ⇒ 00:27:31.289 Marcus: Sure thing. So…
177 00:27:31.290 ⇒ 00:27:56.089 Marcus: I would just give a 30 seconds brief of the other project that I was talking about, and it was just a simple RAG implementation. So basically, it was a services company who wanted an audio bot to be there, and so if any customer comes in and asks about the services that they provide, so that audio bot, just like a customer support, guy, or the team, just two, so basically, if they wanted to know, basically, if a user comes in and asks
178 00:27:56.090 ⇒ 00:28:03.270 Marcus: So, do you provide any services in healthcare? So, basically, the audio bot would be getting the data out of the vector DB and telling, yeah, we…
179 00:28:03.290 ⇒ 00:28:22.310 Marcus: do the data, do, sport, healthcare, platforms. So, this, this, this particular, for example, this, this, this particular product is what we build in healthcare. So, with examples, with other things. So, yeah, this was a complete and easy implementation of RAG, what I built. So, yeah, so this, was the project.
180 00:28:22.310 ⇒ 00:28:47.210 Marcus: On the other side of things, so you mentioned, so I didn’t get into any specific details of the projects that we are going to work in here. So, yeah, I don’t have any gist of it. So, just the thing that I know is that this is a newer team that I’m going to work with, so, there are people, experienced people, working in the AI team, so I’m going to work with them. So we… I am going to interact with the clients directly, directly, so you… I might be
181 00:28:47.210 ⇒ 00:29:03.579 Marcus: getting assigned to, on a project which is directly client-facing, or might be, client-facing, or I’ll be getting directly feedbacks from the clients, and I’ll have to integrate. So basically, these kind of questions, was, answered by, Sam, but, yeah, I don’t have really…
182 00:29:03.580 ⇒ 00:29:11.910 Marcus: enough idea of the products that we are going to build in here. The thing that fascinates me, or allows me to interview more here, is
183 00:29:11.910 ⇒ 00:29:13.180 Marcus: the set of…
184 00:29:13.190 ⇒ 00:29:24.919 Marcus: or the set of growth that I see, the technical growth I see in here. So, I have been, for quite some time in Intelligent, and I think I’ve worked a lot there. I’ve worked on more than 7 different projects, so…
185 00:29:24.920 ⇒ 00:29:35.529 Marcus: it was a huge opportunity working there, but now I think it’s time for me to work on something different, with some different people, and some different,
186 00:29:35.810 ⇒ 00:29:48.509 Marcus: set of, or I would say technical set, skill set. So, yeah, that’s what I’m looking for. I’ve been, working on AI agents for past 2 years, completely on, on these agents. I think, I possess a…
187 00:29:48.730 ⇒ 00:30:03.570 Marcus: a bit of good knowledge of there, so yeah, implementing those things, working directly with clients, this is what I’m looking for, and I think it would be really good for me, at this point of my career. I think in the next 5 years, I’ve been looking myself
188 00:30:03.570 ⇒ 00:30:09.400 Marcus: in this… in the senior hierarchy, so I think client-facing roles or client-interacting roles would be really good for me.
189 00:30:10.830 ⇒ 00:30:20.809 Pranav: Well, that’s great. Yeah, so, yeah, like I said before, that would be something here at Brainforge that is, like, definitely, like, necessary in most, instances for applicants, so…
190 00:30:20.810 ⇒ 00:30:32.490 Pranav: It’s good to know, yeah. Yeah, if there’s no, like, questions, we can hop off here, but yeah, you have my email, so feel free to shoot anything over. And then, yeah, Kayla should be in touch with you.
191 00:30:33.080 ⇒ 00:30:34.430 Marcus: Sure thing. Sounds good.
192 00:30:34.840 ⇒ 00:30:36.660 Pranav: Cool. Thanks, Marcus. See ya.
193 00:30:37.030 ⇒ 00:30:37.800 Marcus: Right.