Meeting Title: Brainforge Interview w/ Sam Date: 2026-04-21 Meeting participants: Naveen Karuppasamy, Samuel Roberts
WEBVTT
1 00:02:41.980 ⇒ 00:02:43.830 Naveen Karuppasamy: Hi Sam, how are you?
2 00:02:43.830 ⇒ 00:02:47.149 Samuel Roberts: Good, good, how are you? Sorry to switch up my audio a little bit here.
3 00:02:49.890 ⇒ 00:02:51.240 Samuel Roberts: How are you, Naveen?
4 00:02:51.910 ⇒ 00:02:54.300 Naveen Karuppasamy: Yeah, good, good, Sam.
5 00:02:54.430 ⇒ 00:02:56.560 Naveen Karuppasamy: Let me turn on my video as well.
6 00:02:56.810 ⇒ 00:02:57.670 Samuel Roberts: Sure, sure.
7 00:02:59.050 ⇒ 00:03:00.039 Naveen Karuppasamy: Oops, yeah, hi.
8 00:03:00.810 ⇒ 00:03:02.680 Samuel Roberts: Great. Well, thanks for taking the time.
9 00:03:03.970 ⇒ 00:03:12.379 Samuel Roberts: So, just a quick intro, my name is… sorry, one sec. My name is Sam Roberts. I am the AI tech lead here at Brainforge.
10 00:03:12.960 ⇒ 00:03:21.929 Samuel Roberts: The way I guess this will go today, we have about 30 minutes, so, I’ve got some questions for you. I imagine you might have questions for me about…
11 00:03:21.930 ⇒ 00:03:35.300 Samuel Roberts: the role of the company, so I want to kind of make sure we have time for that, so I’ll probably stop my questions about halfway, let you ask some questions, maybe jump back to my questions if you don’t have too many, and we’ll just kind of converse from there.
12 00:03:35.300 ⇒ 00:03:41.759 Samuel Roberts: But I think to start, I would love a quick intro from you, in your own words. That’d be great.
13 00:03:42.720 ⇒ 00:03:44.410 Naveen Karuppasamy: Sure, yeah, sure.
14 00:03:45.620 ⇒ 00:03:58.930 Naveen Karuppasamy: So, Sam, like, I’m an AI/ML engineer with around 4 years of experience, building end-to-end machine learning and GenAI solutions. In my current role at AdventHealth, I worked on
15 00:03:58.970 ⇒ 00:04:16.599 Naveen Karuppasamy: developing LLM-based applications using RAG, embeddings, fine-tuning techniques like LoRA, QLoRA. I built, like, systems that combine structured and unstructured data to deliver real-time insights, especially, like, in healthcare use cases.
16 00:04:16.640 ⇒ 00:04:32.979 Naveen Karuppasamy: I have strong experience in, like, taking solutions from idea to production, like building data pipelines, training models, optimizing performance, and deploying them using FastAPI, Docker, and AWS.
17 00:04:32.980 ⇒ 00:04:46.720 Naveen Karuppasamy: I’ve also worked on improving model accuracy, reducing latency, and setting up monitoring and evaluation to ensure reliability. And, like, overall, like, I enjoy working on, you know, like,
18 00:04:46.720 ⇒ 00:05:01.200 Naveen Karuppasamy: problems where I can combine AI with real-world applications. Recently, I also built, like, a system called, like, an AI Interview Coach, using LLMs and vector databases.
19 00:05:01.200 ⇒ 00:05:14.920 Naveen Karuppasamy: Which can… it was a personal, project, by the way, which can, like, generate… Oh, okay. Yeah. Which can, generate questions and, give personalized feedback, based on your performance, based on your confidence, and everything.
20 00:05:14.920 ⇒ 00:05:28.969 Naveen Karuppasamy: So, I’m really, like, interested in this role because, like, I enjoy building user-focused GenAI products, you know, like, experimenting with LLMs and working in fast-paced environments, like, where I can take ownership and create impact.
21 00:05:28.980 ⇒ 00:05:30.999 Naveen Karuppasamy: Yeah, that’s pretty much about me.
22 00:05:31.260 ⇒ 00:05:42.469 Samuel Roberts: Great, thank you so much. Yeah, that’s good context. Okay, so let’s, jump in. I’ve got a whole bunch of things here, but I’m trying to figure out where best to start based on that.
23 00:05:42.640 ⇒ 00:05:56.550 Samuel Roberts: Okay, let’s talk about… so you kind of started to talk a little bit about one of those projects. Let’s jump in a little bit more. So, either that project or anything else that you’ve mentioned, I want you to just talk about the project that was shipped to production.
24 00:05:56.550 ⇒ 00:06:02.690 Samuel Roberts: And, talk me through the problem that it solves, and tell me more about that feature or product.
25 00:06:02.690 ⇒ 00:06:06.499 Samuel Roberts: So, just kind of dig in a little bit to one of those.
26 00:06:07.040 ⇒ 00:06:16.570 Naveen Karuppasamy: So, like, you want me to, like, explain about the project that, like, deployed by me in a production, and explain about that?
27 00:06:17.060 ⇒ 00:06:29.720 Samuel Roberts: Yeah, so, I mean, you mentioned the personal project, you mentioned a couple other ones that work. I just, any of those, just like, I just want to hear about going from, like, you know, starting to production, and talk about the problem that it solved, and how it solved it.
28 00:06:30.120 ⇒ 00:06:38.899 Samuel Roberts: Not necessarily too technical, we don’t have to get too in the weeds, I’m just kind of looking for the, like, high-level, like, what was it solving, what was the solution, and how did it go?
29 00:06:39.440 ⇒ 00:06:48.090 Naveen Karuppasamy: So, yeah, like, one of the important projects I had, like, worked on was, like, building a production-ready LLM-based system.
30 00:06:48.130 ⇒ 00:07:06.780 Naveen Karuppasamy: For clinical decision support, like, where I work at AdventHealth. So, we faced a major issue with inaccurate and inconsistent responses from the model, because, initially, the system was, like, directly using LLM outputs without proper context,
31 00:07:06.780 ⇒ 00:07:20.539 Naveen Karuppasamy: which caused hallucinations, you know, like, and reduced trust from users. And also, clinicians were confused about the results and the responses of the, like, LLMs.
32 00:07:20.540 ⇒ 00:07:34.970 Naveen Karuppasamy: To solve this, I redesigned the system using a RAG approach. I created a pipeline where we converted clinical documents into embeddings and stored them in a vector database, and then,
33 00:07:34.970 ⇒ 00:07:50.409 Naveen Karuppasamy: retrieved the most relevant context before passing it to the LLM. Even after implementing the RAG, you know, we noticed performance issues, like, like, slow response time, and inconsistent answer quality.
34 00:07:50.410 ⇒ 00:08:06.049 Naveen Karuppasamy: So I worked on optimizing embeddings and improving, like, chunking strategies and, refining prompts and everything. So, I also, like, introduced evaluation, mechanisms, using, like, metrics and, test datasets to.
35 00:08:06.180 ⇒ 00:08:13.550 Naveen Karuppasamy: measure, like, response quality, and consistently improve. So, I try to consistently improve it.
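The retrieval step described in this answer — embed clinical documents, store the vectors, and fetch the most relevant chunks before prompting the LLM — can be sketched roughly as below. This is an illustrative toy, not the system from the interview: `embed` is a bag-of-words stand-in for a real embedding model, a plain list stands in for the vector database, and the sample chunks are invented.

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": lowercase bag-of-words counts.
    # A real pipeline would call an embedding model here.
    return Counter(text.lower().replace(".", "").split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    # Rank stored chunks by similarity to the query and return the
    # top-k, to be prepended to the LLM prompt as grounding context.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    "Metformin is a first-line treatment for type 2 diabetes.",
    "Patients with sepsis require rapid antibiotic administration.",
    "Annual flu vaccination is recommended for most adults.",
]
context = retrieve("first-line drug for type 2 diabetes", chunks, k=1)
```

The chunking strategies and prompt refinements mentioned next would sit on top of this: how documents are split determines what `retrieve` can return.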
36 00:08:13.620 ⇒ 00:08:33.360 Naveen Karuppasamy: So, we had to go through a lot of calls with, clinicians, a lot, like, a lot of discussion with clinicians and, with my teams, because it’s, healthcare data, you know? Like, small, mistake or, small, like, inappropriate data response will, give a big impact on the…
37 00:08:33.380 ⇒ 00:08:39.530 Naveen Karuppasamy: And, on the production side, like, we had, like, latency challenges.
38 00:08:39.970 ⇒ 00:08:57.290 Naveen Karuppasamy: So, I optimized the API layer using, like, FastAPI and Docker, which helped, pretty much helped reduce the response time significantly, I’ll say. Like, I also, like, implemented monitoring and a retraining pipeline, to handle model drift.
39 00:08:57.340 ⇒ 00:09:01.980 Naveen Karuppasamy: And, ensuring, stable performance, over time.
40 00:09:02.110 ⇒ 00:09:13.769 Naveen Karuppasamy: So, as a result, we improved, like, prediction accuracy, reduced hallucinations, and made the system reliable for real-time usage by clinicians. That’s it.
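The drift monitoring described here can be sketched in its simplest form: compare a window of live feature values against the training baseline and alert when the shift exceeds a threshold. The threshold, window sizes, and numbers below are illustrative only, not values from any production system (real setups often use tests like PSI or KS instead of a plain mean-shift check).

```python
import statistics

def drift_score(baseline, live):
    # Absolute shift of the live mean, measured in baseline standard deviations.
    mu, sigma = statistics.mean(baseline), statistics.stdev(baseline)
    return abs(statistics.mean(live) - mu) / sigma if sigma else 0.0

def check_drift(baseline, live, threshold=2.0):
    # True when the live window has drifted beyond the threshold,
    # i.e. when a retraining/alerting pipeline should kick in.
    return drift_score(baseline, live) > threshold

baseline = [0.9, 1.0, 1.1, 1.0, 0.95, 1.05]  # feature values at training time
stable   = [1.02, 0.98, 1.0]                  # recent window, no drift
shifted  = [2.0, 2.1, 1.9]                    # recent window, clear drift
```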
41 00:09:14.380 ⇒ 00:09:32.300 Samuel Roberts: Great, great. So actually, that kind of dovetails to my next question is. So if you’re working with, medical professionals, maybe they’re non-technical, but, they probably have heard about AI and LLM, so they have some idea, but who knows how accurate that is for them. What would be…
42 00:09:32.320 ⇒ 00:09:41.529 Samuel Roberts: How would you go about explaining the limitations of these tools and these, models to someone who’s non-technical? Like, a non-technical stakeholder?
43 00:09:41.990 ⇒ 00:09:46.540 Naveen Karuppasamy: Do you mean, like, how do I approach to explain the non-technical stakeholders?
44 00:09:46.540 ⇒ 00:09:48.760 Samuel Roberts: Yeah, especially when it comes to the limitations.
45 00:09:49.190 ⇒ 00:09:51.099 Samuel Roberts: about AI, and they think it can do everything.
46 00:09:51.100 ⇒ 00:10:00.120 Naveen Karuppasamy: Yeah, they don’t… so, for example, with clinicians, I had to say, like, with the… we had a lot of discussion with clinicians about the…
47 00:10:00.120 ⇒ 00:10:22.980 Naveen Karuppasamy: So, we had a thing called, like, SHAP explainability. I know, like, we know most of us are, like, aware of this, SHAP explainability. So, like, with medical professionals, I explained the, like, limitations of the AI tool in a very, like, very clear and practical way, so they can understand, right?
48 00:10:22.980 ⇒ 00:10:43.689 Naveen Karuppasamy: how to use it safely. And I usually, like, tell them that the model is a support tool, not, like, a decision maker entirely. It can, like, provide suggestions based on the available data. But it may not, like, always be 100% accurate, especially in complex or rare cases in healthcare.
49 00:10:43.760 ⇒ 00:10:57.739 Naveen Karuppasamy: So, I also explained that the model depends on the data it was trained on, and so if the input is incomplete or, like, unclear, sometimes, we get those kinds of situations, the
50 00:10:57.740 ⇒ 00:11:06.209 Naveen Karuppasamy: output may not be reliable. For LLM-based systems, I mentioned that sometimes the model can generate confident but
51 00:11:06.210 ⇒ 00:11:17.850 Naveen Karuppasamy: incorrect answers as well. So, validation is always important from our side. So, to make it easier, I gave, like, I give, like, real examples of where the system works well, and…
52 00:11:17.850 ⇒ 00:11:18.250 Samuel Roberts: Okay.
53 00:11:18.250 ⇒ 00:11:37.040 Naveen Karuppasamy: where it might fail. So I also, like, encourage them to treat the outputs as a second opinion, and always combine it with the clinical judgment. Overall, like, my goal is to build trust while making the… sure they clear and understand the boundaries of the system. That’s it, yeah.
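The SHAP-style explainability mentioned above — showing clinicians *why* a patient was scored the way they were — can be illustrated with a leave-one-feature-out ablation on a toy linear risk score. Real SHAP values come from the `shap` package run against an actual model; the ablation below is a named stand-in, and the features, weights, and baseline are all invented for illustration. (For a purely linear score like this one, the ablation happens to equal the exact per-feature contribution.)

```python
def risk_score(features):
    # Toy linear risk model; weights are invented placeholders.
    weights = {"age": 0.02, "bp": 0.01, "prior_admissions": 0.3}
    return sum(weights[k] * v for k, v in features.items())

def attributions(features, baseline):
    # Contribution of each feature = score drop when that feature
    # alone is reset to its baseline value.
    full = risk_score(features)
    return {k: full - risk_score({**features, k: baseline[k]})
            for k in features}

patient  = {"age": 70, "bp": 140, "prior_admissions": 3}
baseline = {"age": 50, "bp": 120, "prior_admissions": 0}
contrib  = attributions(patient, baseline)
# contrib ranks prior_admissions as the biggest driver of the flag.
```

A per-feature breakdown like `contrib` is also what makes the stale-data incident described next debuggable: a surprising flag can be traced to the one feature driving it.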
54 00:11:37.040 ⇒ 00:11:48.810 Samuel Roberts: Great. Was there a time that a user misunderstood what the feature was doing, or how… or, like, that you had to kind of re… realign with them, or anything like that?
55 00:11:48.810 ⇒ 00:11:56.579 Naveen Karuppasamy: Yeah, so we had a situation that I had to, to, like, highlight. So, the,
56 00:11:56.830 ⇒ 00:12:08.929 Naveen Karuppasamy: Patient was not, flagged as a high-risk patient, but he was, previously, like, in a previous, like, you know, previous session of this, like, attending.
57 00:12:09.020 ⇒ 00:12:19.270 Naveen Karuppasamy: So, what happened, he was flagged already, but again, when he was cured and he was not a high, like, risk patient, he was flagged again.
58 00:12:19.330 ⇒ 00:12:38.380 Naveen Karuppasamy: So, everyone was confused. We just treated him well, and every data of his was very clear, so why was he flagged again, why is he still flagged? So, sometimes systems will have those issues, like, we had to, like, figure out what could be the reason.
59 00:12:38.380 ⇒ 00:12:55.060 Naveen Karuppasamy: So we searched on the, like, feature side… I showed them, with the, like, SHAP explainability, we got the data, so we had to go through each and everything, and there we found that it was,
60 00:12:55.090 ⇒ 00:13:05.089 Naveen Karuppasamy: his, clinic, like, one of the, what do I say, like, where he attend the, like, attend the session previously.
61 00:13:05.130 ⇒ 00:13:13.170 Naveen Karuppasamy: the data was not updated. So data was, like, there was inconsistent data, like, relayed from their side.
62 00:13:13.310 ⇒ 00:13:30.140 Naveen Karuppasamy: So, we had to go through, particularly for that, because it’s not a big thing, because it’s, it’s like, you know, he was flagged as a high-risk patient, but it’s a bigger, it would be a bigger problem if the high-risk patient was flagged as non-high-risk patient.
63 00:13:30.140 ⇒ 00:13:30.760 Samuel Roberts: Sure.
64 00:13:30.760 ⇒ 00:13:36.639 Naveen Karuppasamy: So, it’s not a big problem, but it’s still a problem, so we had to go through these issues, sometimes.
65 00:13:36.860 ⇒ 00:13:41.569 Samuel Roberts: Alright, good, good. Let’s talk about,
66 00:13:42.480 ⇒ 00:14:02.009 Samuel Roberts: the trends in AI and LLMs and models and frameworks and all these other things. So there’s a lot, things are changing quickly. Is there a trend or anything that you’ve seen that you were initially excited about, but for some reason decided not to adopt when you actually tried to use it in a kind of production environment or something?
67 00:14:03.040 ⇒ 00:14:07.580 Naveen Karuppasamy: Mmm… Yeah, I can think of it, like,
68 00:14:07.980 ⇒ 00:14:15.099 Naveen Karuppasamy: Trend… trending tool that, you meant to say, like, so… Yep.
69 00:14:23.680 ⇒ 00:14:33.950 Naveen Karuppasamy: So, yeah, I couldn’t recall it, anything that, there are so many, like, tools that I wanted to use.
70 00:14:33.950 ⇒ 00:14:34.370 Samuel Roberts: Yeah.
71 00:14:34.370 ⇒ 00:14:49.140 Naveen Karuppasamy: Like, what would you say, like, currently the… one of the biggest trends in AI/ML, what I recall now, is using generative AI and LLMs to build real-world applications.
72 00:14:49.140 ⇒ 00:15:07.400 Naveen Karuppasamy: So, many companies are using techniques like RAG to improve accuracy by combining models with external data sources. That’s what we try to do also in our system. There is also a strong focus on making models more efficient using, like.
73 00:15:07.400 ⇒ 00:15:12.620 Naveen Karuppasamy: Methods, like, quantization, like, and fine-tuning.
74 00:15:12.620 ⇒ 00:15:17.199 Naveen Karuppasamy: So they can run faster and at a lower cost as well. At the same time.
75 00:15:17.290 ⇒ 00:15:21.559 Naveen Karuppasamy: Not all the AI solutions are ready for production use.
76 00:15:22.660 ⇒ 00:15:26.090 Naveen Karuppasamy: One major challenge is, reliability, because, like.
77 00:15:26.270 ⇒ 00:15:43.340 Naveen Karuppasamy: LLMs can sometimes generate incorrect, as, like, I just told you, incorrect or misleading answers. There are also concerns around, like, data privacy, security, and bias, especially in sensitive domains like healthcare and finance.
78 00:15:43.340 ⇒ 00:15:47.779 Naveen Karuppasamy: In addition, maintaining consistent, performance in production.
79 00:15:47.790 ⇒ 00:15:55.389 Naveen Karuppasamy: It’s, difficult due to the, like, due to issues like model drift and changing data. Because of these challenges.
80 00:15:55.400 ⇒ 00:16:09.820 Naveen Karuppasamy: Companies are, like, focusing more on evaluation, monitoring, and building, like, guardrails, before deploying AI systems in production.
81 00:16:10.120 ⇒ 00:16:20.839 Naveen Karuppasamy: So, the trend’s shifting from the just building models to make them safe, reliable, and, like, scalable for, real-world use. So, yeah.
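The guardrails mentioned in this answer typically mean screening an LLM's output before it reaches the user. A minimal toy version is sketched below; the blocked phrases and the appended disclaimer are invented placeholders, not any product's actual policy, and real deployments usually layer classifier-based checks on top of pattern rules like these.

```python
import re

# Invented example patterns for unsafe clinical advice.
BLOCKED = [r"\bguaranteed cure\b", r"\bstop taking\b.*\bmedication\b"]

def apply_guardrail(response):
    # Returns (allowed, text): blocked content is replaced with a
    # refusal, allowed content gets a standard safety disclaimer.
    for pattern in BLOCKED:
        if re.search(pattern, response, flags=re.IGNORECASE):
            return False, "This response was withheld; please consult a clinician."
    return True, response + " (Not a substitute for clinical judgment.)"

ok, text = apply_guardrail("Evidence suggests rest and fluids may help.")
blocked, _ = apply_guardrail("Stop taking your medication immediately.")
```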
82 00:16:21.810 ⇒ 00:16:26.369 Samuel Roberts: Okay, cool. So, how do you decide when, like, a model or framework is production ready?
83 00:16:28.250 ⇒ 00:16:34.420 Naveen Karuppasamy: So, like, first, like, we try to, like,
84 00:16:34.920 ⇒ 00:16:42.599 Naveen Karuppasamy: there are, like, a different kind of, problem statement, right? Like, in our case, it was a classification type of problem.
85 00:16:42.600 ⇒ 00:16:42.990 Samuel Roberts: Okay.
86 00:16:42.990 ⇒ 00:16:52.929 Naveen Karuppasamy: So, when it comes to classifications, we have to go with, like, classification metrics like AUC, recall, and precision.
87 00:16:53.270 ⇒ 00:17:08.920 Naveen Karuppasamy: So, what our team will do is, like, we will use multiple different kinds of metrics, and we try to, like, check the accuracy, overall accuracy.
88 00:17:08.920 ⇒ 00:17:17.369 Naveen Karuppasamy: We try to check the AUC score, our recall score, that’s very, very important. It’s not, it’s not always about accuracy, right?
89 00:17:17.369 ⇒ 00:17:20.419 Naveen Karuppasamy: So, we try to focus on,
90 00:17:20.940 ⇒ 00:17:26.110 Naveen Karuppasamy: The metrics, the important metrics for the particular problem.
91 00:17:26.300 ⇒ 00:17:29.610 Naveen Karuppasamy: Like, first,
92 00:17:30.060 ⇒ 00:17:36.580 Naveen Karuppasamy: first, like, it should, like, perform consistently well, not just on training data, but also, like.
93 00:17:36.580 ⇒ 00:17:51.910 Naveen Karuppasamy: Validation and, real-world test data as well. Like, next, first I do, look at the metric side. Next, I look at the reliability and, stability. The models should give, like, consistent, outputs to handle…
94 00:17:51.910 ⇒ 00:17:58.999 Naveen Karuppasamy: Edge cases properly, and, not fail, with, like, unexpected inputs.
95 00:17:59.000 ⇒ 00:18:10.889 Naveen Karuppasamy: For an LLM-based system, I also evaluate if the responses are accurate, relevant, and, like, free from hallucinations as much as possible.
96 00:18:10.890 ⇒ 00:18:23.150 Naveen Karuppasamy: Also consider performance factors like, latency, scalability, the model should, like, respond within, like, acceptable, time limits.
97 00:18:23.190 ⇒ 00:18:35.710 Naveen Karuppasamy: And, handle, real-time or high-volumes without breaking, right? So, another important factor that I can think of is, like, monitoring and, maintainability, like.
98 00:18:36.490 ⇒ 00:18:53.639 Naveen Karuppasamy: There should be, like, proper logging, monitoring, and alerts in place to track, like, performance and detect issues like model drift over time. So, that’s a very important thing, right? So, finally.
99 00:18:54.010 ⇒ 00:19:13.129 Naveen Karuppasamy: I check business readiness, so the model should solve the actual user problem, be easy to integrate into, like, existing systems, and follow security and, mainly, compliance requirements.
100 00:19:13.130 ⇒ 00:19:19.180 Naveen Karuppasamy: If all these aspects are, like, satisfied, I consider the model ready for production.
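The point in this answer — that AUC, precision, and recall can tell a different story than raw accuracy — can be shown on toy data. In practice one would use `sklearn.metrics`; the hand-rolled versions below just keep the sketch self-contained, and the labels and scores are invented.

```python
def roc_auc(y_true, y_scores):
    # Mann-Whitney formulation of ROC AUC: the probability that a
    # random positive outranks a random negative (ties count half).
    pos = [s for y, s in zip(y_true, y_scores) if y == 1]
    neg = [s for y, s in zip(y_true, y_scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def precision_recall(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

y_true   = [0, 0, 1, 1]
y_scores = [0.1, 0.4, 0.35, 0.8]              # model scores
y_pred   = [int(s >= 0.5) for s in y_scores]  # thresholded at 0.5

auc = roc_auc(y_true, y_scores)                       # 0.75
precision, recall = precision_recall(y_true, y_pred)  # 1.0, 0.5
```

Here accuracy is 75%, yet recall is only 0.5: half the positives are missed, which is exactly the failure mode that matters for a high-risk-patient flag.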
101 00:19:20.210 ⇒ 00:19:28.950 Samuel Roberts: Alright, well, I got one more question, I know we’re halfway, but I want to make sure we leave time, but if you had 6 months with no obligations, what would you work on?
102 00:19:31.030 ⇒ 00:19:39.129 Naveen Karuppasamy: So, a 6… a 6-month, with, no obligations? You mean, like, what system I work on?
103 00:19:39.130 ⇒ 00:19:42.340 Samuel Roberts: Well, yeah, what would you do? How would you spend your time? What would you, you know…
104 00:19:42.340 ⇒ 00:20:02.310 Naveen Karuppasamy: Exactly, so that we had a… so, my friend is there, you know, like, he was my college mate in my Masters, at Clark University. So, we always wanted to build up, build some projects, and, I recently watched an Indian movie, where they build a…
105 00:20:02.310 ⇒ 00:20:08.109 Naveen Karuppasamy: AI system for, love, I mean, dating.
106 00:20:08.610 ⇒ 00:20:09.030 Samuel Roberts: Okay.
107 00:20:09.030 ⇒ 00:20:27.339 Naveen Karuppasamy: So it’s kind of dating app, and it’s like a fun project that we are planning to do. So what it actually do is, it will, like, collect the data from the both sides, like, the one guy, or girl, or whatever it is, like, collect, they have to log in.
108 00:20:27.450 ⇒ 00:20:41.139 Naveen Karuppasamy: they have to… she has to log in, and, it will collect the data from their social medias. I don’t know how… how big… like, it’s not… I don’t know whether it’s gonna happen or not, but in the future, you know, maybe, like, it’s just…
109 00:20:41.140 ⇒ 00:20:41.810 Samuel Roberts: Yeah, yeah, yeah.
110 00:20:41.810 ⇒ 00:20:53.520 Naveen Karuppasamy: Any idea. So, it will collect the data from the both sides, like a social media, like, their day-to-day life, and how they spend their time. Because most of us are, like, we are…
111 00:20:53.520 ⇒ 00:21:05.430 Naveen Karuppasamy: now showing what we are doing to the people, and we are now excited about the future, and excited about what we are gonna do. We are… everybody are, like, excited about their, like, dreams.
112 00:21:05.430 ⇒ 00:21:13.210 Naveen Karuppasamy: Everything. So, it can collect the data, yeah, it can collect the data from the both sides, and it can give the score, based on their,
113 00:21:13.210 ⇒ 00:21:25.689 Naveen Karuppasamy: how he acts in a situation, like, how he’s handling, whatever. He’s, like, a polite guy, or he’s, like, this kind of guy. So… so it gives a report, kind of a great report, for the guy.
114 00:21:25.690 ⇒ 00:21:34.259 Naveen Karuppasamy: on the same, same time, the other, like, other girl, like, where they are gonna get into relationship. So, based on that data, it will give you a score.
115 00:21:34.340 ⇒ 00:21:41.379 Naveen Karuppasamy: It’s called, like, love score. So, how much score you will have for a relationship to get in.
116 00:21:41.380 ⇒ 00:21:42.470 Samuel Roberts: Right, right, right. Oh, cool.
117 00:21:42.470 ⇒ 00:21:46.610 Naveen Karuppasamy: Good score, so we get into relationships, kind of, like, it’s a fun project that…
118 00:21:46.610 ⇒ 00:21:57.649 Samuel Roberts: Yeah, yeah, that sounds fun, that sounds fun. Cool, cool. Alright, so yeah, I think, let’s shift gears a little bit. So any questions you have for me, role, company, whatever I can answer, I’m happy to do that now.
119 00:21:57.650 ⇒ 00:22:15.750 Naveen Karuppasamy: So, Sam, like, I wanted to ask, like, what kind of project our team is now working on? So, to, to, like, I need to be prepared, right, when I’m gonna join you guys, so that I’ll come up with, great, ideas, and I’ll be ready for my challenges, so…
120 00:22:15.990 ⇒ 00:22:35.340 Samuel Roberts: Yeah. So, yeah, so we… just a little background on Brainforge, we started as a data consultancy, so there’s a big data, engineering side of the company, we have a lot of data clients, some of whom become AI clients, some of whom, AI clients come in separate from data, so we have a number of different projects that we kind of work on, and we kind of,
121 00:22:35.460 ⇒ 00:22:41.049 Samuel Roberts: we kind of run the whole spectrum of, like, of projects. So we’ve done things as simple as,
122 00:22:41.050 ⇒ 00:22:57.679 Samuel Roberts: you know, I’ve been using Claude, but I have to copy and paste all the time. We put together an automation for you that, you know, iterates through that and refines things for people. We’ve done things like RAG pipelines for a chat agent, for customer service agents, so we have done similar things there.
123 00:22:57.680 ⇒ 00:22:58.790 Naveen Karuppasamy: That’s interesting.
124 00:22:58.790 ⇒ 00:23:01.589 Samuel Roberts: So, we’ve done other things now,
125 00:23:01.750 ⇒ 00:23:08.679 Samuel Roberts: where we’ve built kind of a whole UI for, people to talk over the data of the,
126 00:23:09.170 ⇒ 00:23:12.929 Samuel Roberts: accounts that they were managing. Imagine, like, ad accounts, they’re managing lots of different ad accounts.
127 00:23:12.930 ⇒ 00:23:13.570 Naveen Karuppasamy: Okay.
128 00:23:13.570 ⇒ 00:23:25.340 Samuel Roberts: Using MCP, ingesting some data, using the LLMs to do the, you know, analysis, things like that. Basically, like, trying to give them a nicer ChatGPT clone that connects right to their data and is more tuned to them.
129 00:23:25.340 ⇒ 00:23:29.399 Naveen Karuppasamy: So, on the financial side, like, for the account side, like, how the.
130 00:23:29.400 ⇒ 00:23:32.839 Samuel Roberts: Yeah, so we were doing things for that project, it was,
131 00:23:33.110 ⇒ 00:23:49.809 Samuel Roberts: they were… they were managing ads for a number of different brands, and so you can imagine they wanted to know what was going on currently, what was the historical like, how does it look like projected out, and so there was some amount of just processing the data, but also some amount of surfacing that to the LLM and trying to draw insights.
132 00:23:50.180 ⇒ 00:24:04.299 Samuel Roberts: So, a lot of different stuff, you know, we’re… we… we’re not really… okay, we only do this one RAG thing, we only do this one chat thing, we do lots of different stuff, so it’s an exciting place to be because we get to play with lots of different… lots of different tools that way.
133 00:24:05.120 ⇒ 00:24:09.750 Naveen Karuppasamy: Yeah, it sounds great, Sam. Thank you for explaining it, yeah?
134 00:24:09.750 ⇒ 00:24:10.360 Samuel Roberts: Sure.
135 00:24:11.360 ⇒ 00:24:11.880 Samuel Roberts: How’s it?
136 00:24:11.880 ⇒ 00:24:12.359 Naveen Karuppasamy: Oh, gosh.
137 00:24:12.360 ⇒ 00:24:13.030 Samuel Roberts: Yeah.
138 00:24:15.240 ⇒ 00:24:23.070 Naveen Karuppasamy: So, like, is there, like, any time, like, you, you want me to… how fast you want me to join, like…
139 00:24:23.200 ⇒ 00:24:37.039 Samuel Roberts: Yeah, so, the process is, you know, you submitted a Loom, that got screened and got you here; if you pass through this gate, you go to a second interview that’s more role-focused, a little more technical.
140 00:24:37.040 ⇒ 00:24:49.949 Samuel Roberts: But, if you pass that one, we then give you a technical challenge to, like, take home and work on, and then after that, it would be a third interview with a few of us and a little presentation about the work you did.
141 00:24:49.970 ⇒ 00:25:09.380 Samuel Roberts: And then after that would be an offer or not. So, a few different things there. We like to move relatively quickly, so assuming scheduling works in terms of getting with the right interviewers, it should only take a few weeks that way, hopefully. But yeah, we don’t want to drag this out. We try to keep it… keep it as tight as possible. Yeah.
142 00:25:09.380 ⇒ 00:25:12.580 Naveen Karuppasamy: Yeah, thank you, thank you, Sam, yeah. That’s it from my side, yeah.
143 00:25:12.580 ⇒ 00:25:24.879 Samuel Roberts: Okay, okay, great. Yeah, we’ve got a few more minutes here. I’m just trying to think if there’s anything else I didn’t get to, or… Yeah. I think overall, I feel like I’ve got most of it.
144 00:25:25.090 ⇒ 00:25:32.890 Samuel Roberts: Kind of covered a bunch. Is there anything I didn’t ask you about that you want to, you know, kind of let us know, or anything like that? I hadn’t, you know…
145 00:25:32.890 ⇒ 00:25:45.689 Naveen Karuppasamy: I don’t remember, like, I’m ready to, like, once I… once I’m shortlisted, I’m ready to join, like, in 2 or 3 weeks, if you want me to… if you want to know how fast I can, like…
146 00:25:45.690 ⇒ 00:25:48.370 Samuel Roberts: Oh, yeah, yeah, you’re currently working, where did you say? I’m sorry, I forgot.
147 00:25:48.370 ⇒ 00:25:53.390 Naveen Karuppasamy: or three, weeks, so I can speak with my manager and make a smooth transition.
148 00:25:53.390 ⇒ 00:25:54.539 Samuel Roberts: Okay, cool, cool.
149 00:25:54.760 ⇒ 00:26:00.239 Naveen Karuppasamy: So, any other question if, people ask, like, is, it’s a remote,
150 00:26:00.550 ⇒ 00:26:03.200 Samuel Roberts: Yes, oh yeah, so sorry, the whole company is fully remote.
151 00:26:03.350 ⇒ 00:26:20.510 Samuel Roberts: Yeah. We work across a number of time zones, kind of work U.S. hours, but, I’m on the East Coast, some people are on the West Coast, some people are in Central, so we kind of have to have that overlap. But, yeah, everything’s remote, very Slack-heavy, meetings on Zoom, you know.
152 00:26:20.510 ⇒ 00:26:22.290 Naveen Karuppasamy: So we work on, EST hours?
153 00:26:23.420 ⇒ 00:26:24.520 Samuel Roberts: It’s kind of…
154 00:26:25.010 ⇒ 00:26:31.960 Samuel Roberts: across the, you know, the U.S. So, like, some people… I work earlier, so I get offline before some people on the West Coast are fully done.
155 00:26:31.960 ⇒ 00:26:32.280 Naveen Karuppasamy: Whoa!
156 00:26:32.280 ⇒ 00:26:34.570 Samuel Roberts: So there’s, like, as long as there’s an overlap, kind of, there with each.
157 00:26:34.570 ⇒ 00:26:39.310 Naveen Karuppasamy: Oh, okay, okay, got it, got it. Yeah, yeah, yeah. I live in Massachusetts now.
158 00:26:39.310 ⇒ 00:26:44.079 Samuel Roberts: Okay, cool, cool. I’m from Boston, actually, originally, so yeah, where are you in? Are you still in, Worcester?
159 00:26:44.460 ⇒ 00:26:45.980 Naveen Karuppasamy: Yeah, I live in Worcester, yeah.
160 00:26:45.980 ⇒ 00:26:47.320 Samuel Roberts: Okay, yeah, yeah.
161 00:26:47.320 ⇒ 00:26:52.400 Naveen Karuppasamy: That’s why I had to go with Boston, so that they can, get the idea.
162 00:26:52.400 ⇒ 00:26:58.879 Samuel Roberts: Yeah, yeah, totally. I recognize Clark University, so I, I mean, I don’t know it very well, but I knew the name, yeah. Alright, great.
163 00:26:58.880 ⇒ 00:27:00.569 Naveen Karuppasamy: I completed my master’s here in computers.
164 00:27:00.570 ⇒ 00:27:18.890 Samuel Roberts: Congratulations, yeah, that’s exciting, that’s exciting. Cool, cool. Alright, yeah, so then you’re set then. You know, we have some people in, you know, Philippines, some people in Pakistan and India, and they work kind of different hours for them, but, you know, if you’re in the US, your hours should be fine then.
165 00:27:18.890 ⇒ 00:27:19.410 Naveen Karuppasamy: Thank you.
166 00:27:19.410 ⇒ 00:27:31.860 Samuel Roberts: And it’s relatively flexible, you know, we work with clients that have business hours, so we kind of have to match them a little bit to be able to communicate, but, you know, if you’re working late or working early, it’s not too much of a problem, so it’s a little flexible that way.
167 00:27:32.060 ⇒ 00:27:34.539 Samuel Roberts: But yeah, anything else then? I think…
168 00:27:35.580 ⇒ 00:27:39.670 Naveen Karuppasamy: That’s it, Sam. Yeah, thank you. It was nice meeting you.
169 00:27:39.670 ⇒ 00:27:40.160 Samuel Roberts: You as well.
170 00:27:40.160 ⇒ 00:27:55.630 Naveen Karuppasamy: speaking with you, and we had, like, a great conversation, and we shared ideas, and you explained about the projects and what you’re working on. It was great, totally. Like, thank you for the opportunity, Sam.
171 00:27:55.630 ⇒ 00:28:12.009 Samuel Roberts: Awesome, yeah. Alright, well, yeah, so you should hear back from the recruitment team, pretty soon, and then, hopefully moving forward with, you know, whatever. And, there’s one other thing I was gonna say, and I forgot what it was now, but, we hit everything there, but da-da-da-da-da. Yeah, alright, well, thank you for the time. Appreciate it.
172 00:28:12.380 ⇒ 00:28:14.299 Naveen Karuppasamy: Yeah, thank you. Thank you, Sam.
173 00:28:14.300 ⇒ 00:28:15.080 Samuel Roberts: Yeah, have a good rest of your…
174 00:28:15.080 ⇒ 00:28:16.380 Naveen Karuppasamy: Have a good one, man. Thank you.
175 00:28:16.380 ⇒ 00:28:17.660 Samuel Roberts: You too. Bye-bye.