Meeting Title: Brainforge Interview w- Pranav Date: 2026-03-06 Meeting participants: Pranav Narahari, Srinivas Saiteja Tenneti
WEBVTT
1 00:00:49.010 ⇒ 00:00:49.950 Srinivas Saiteja Tenneti: Hello?
2 00:00:53.100 ⇒ 00:00:54.170 Srinivas Saiteja Tenneti: Hello?
3 00:00:54.170 ⇒ 00:00:56.119 Pranav Narahari: Hey, Srinivas, how’s it going?
4 00:00:56.300 ⇒ 00:00:58.450 Srinivas Saiteja Tenneti: Am I audible and clearly visible?
5 00:00:58.450 ⇒ 00:01:05.819 Pranav Narahari: Yeah, yeah, it’s great. My laptop’s behaving weird, but that’s fine. Oh, okay. Yeah, no, everything’s… everything’s fine for right now.
6 00:01:06.090 ⇒ 00:01:12.499 Srinivas Saiteja Tenneti: Yeah, yeah. Basically, like, yeah, I was running something, so it got stuck a few times today, so that’s why.
7 00:01:12.790 ⇒ 00:01:16.710 Pranav Narahari: Gotcha, gotcha. Just give me one minute, if you don’t mind.
8 00:01:16.710 ⇒ 00:01:17.360 Srinivas Saiteja Tenneti: grocery.
9 00:01:29.530 ⇒ 00:01:31.720 Pranav Narahari: Perfect. Yeah, I think I’m ready to go.
10 00:01:31.990 ⇒ 00:01:35.270 Pranav Narahari: Nice to meet you.
11 00:01:35.510 ⇒ 00:01:47.779 Pranav Narahari: we have, like, 30 minutes for this interview, and there’s, like, a few topics that I want to cover, but I think just starting off, can you just give me, like, a brief background in, like, 2-3 minutes about yourself, and…
12 00:01:47.780 ⇒ 00:01:48.150 Srinivas Saiteja Tenneti: Sure.
13 00:01:48.320 ⇒ 00:01:50.739 Pranav Narahari: What brought you to interviewing here at Brainforge?
14 00:01:51.230 ⇒ 00:02:06.399 Srinivas Saiteja Tenneti: Yeah. So, yeah, hi, my name is Srinivas Saiteja, and of course, like, you can call me Srinivas or Srini, whichever suits you. So, like, I’m an AI/ML engineer right now, like, with around, like, I would say 4-plus years of experience, including my
15 00:02:06.400 ⇒ 00:02:16.920 Srinivas Saiteja Tenneti: data scientist experience as well, working with machine learning and also, like, gen AI solutions. So, most of my work has been in, like, you know, fin- I mean, fintech and also, like, healthcare domains.
16 00:02:16.920 ⇒ 00:02:27.549 Srinivas Saiteja Tenneti: Where I focus on, like, building practical AI systems that solve real business problems. So, currently, I work at UnitedHealthcare, which is, like, a big firm where I develop a
17 00:02:27.550 ⇒ 00:02:37.699 Srinivas Saiteja Tenneti: GPT-4-based healthcare assistant for internal use, not for, like, external. That was the main use of it, like, it helps clinical staff and also, like, you know.
18 00:02:37.700 ⇒ 00:02:57.100 Srinivas Saiteja Tenneti: it was actually first designed for the clinical staff to quickly, like, you know, find medical guidelines and all… and also, like, internal policy documentation, and also, like, policies and information, a few other things, which are internal. And later on, like, we updated it, you know, for the new joiners also.
19 00:02:57.100 ⇒ 00:03:09.210 Srinivas Saiteja Tenneti: So, basically, what happens in big firms, like, big companies, like, whenever new joiners, or else, like, you know, already existing people want to refer to any docs or anything,
20 00:03:09.210 ⇒ 00:03:18.609 Srinivas Saiteja Tenneti: a few companies used to go to, like, SharePoint, which used to have a lot of files, or otherwise their database, their own, like, data store where they have all the files.
21 00:03:18.610 ⇒ 00:03:20.840 Srinivas Saiteja Tenneti: So, it will take time manually checking
22 00:03:21.290 ⇒ 00:03:28.599 Srinivas Saiteja Tenneti: Excuse me. Yeah, manually checking all the documents, which ones are the relevant ones. So, instead of that,
23 00:03:28.860 ⇒ 00:03:46.760 Srinivas Saiteja Tenneti: this bot simply, like, if you ask anything, of course, it accesses you, it authenticates you, and then it will give you whatever the documents which is required, including the citation, like, from this source, from this document, or anything. So, I built this solution using, like, RAG, Langchain, and also the vector search.
24 00:03:46.820 ⇒ 00:04:03.890 Srinivas Saiteja Tenneti: Which, like, you know, improved, like, you know, how doctors and staff access important information, and it gradually reduced, like, onboarding time for new employees. It used to take time to give them the relevant docs and everything; instead of that,
25 00:04:03.890 ⇒ 00:04:09.150 Srinivas Saiteja Tenneti: they used to, like, simply follow those docs, like, quickly get access to those docs.
26 00:04:09.150 ⇒ 00:04:22.729 Srinivas Saiteja Tenneti: So, before this, as I’ve told, I have worked as a data scientist in India, where I built a churn prediction system, which is, like, a model using, like, machine learning. The goal was to help the business identify customers who might be, like, you know.
27 00:04:22.840 ⇒ 00:04:34.789 Srinivas Saiteja Tenneti: I mean, might leave early and take action on them, like, improve retention to keep their valued customers. So, technically, I mainly work with Python, machine learning models, LLMs.
28 00:04:34.860 ⇒ 00:04:52.480 Srinivas Saiteja Tenneti: And, like, I’m… I’m used to, like, AWS and, of course, Azure. GCP sometimes, like, I did for a few projects. Azure also, like, I did for a few projects, and I’m good with, like, you know, hands-on use of those tools. But AWS, I have been, like,
29 00:04:52.480 ⇒ 00:05:08.010 Srinivas Saiteja Tenneti: using it for such a long time, and of course, MLOps tools, like, such as Docker for containerization and MLflow. I enjoy working on projects where I can, like, you know, take an idea from data analysis all the way to deploying a real AI solution.
30 00:05:08.010 ⇒ 00:05:26.530 Srinivas Saiteja Tenneti: And you have asked me about why Brainforge, right? So, I know I took a lot of time for the question, but yeah. The… why Brainforge, is something I have actually spoken about in my earlier call as well. See, this is, like, not such a huge team, nor a small team. It is like a small-to-medium team, which, like,
31 00:05:26.530 ⇒ 00:05:37.460 Srinivas Saiteja Tenneti: Which helps me, you know, to work much more collaboratively, and I feel like, in these scenarios, I could deliver, like, if there is any chance of me, like, building any product.
32 00:05:37.460 ⇒ 00:05:47.259 Srinivas Saiteja Tenneti: I want to be a part of it. At the end of the day, I will be feeling like, okay, this is my baby product, or I was a part of that product, so that…
33 00:05:47.410 ⇒ 00:05:57.519 Srinivas Saiteja Tenneti: You know, at some point of time, this is… this has been, like, this is the ownership of Srinivas, or something like that, so I will be very happy for that.
34 00:05:57.520 ⇒ 00:06:04.870 Pranav Narahari: That’s great. So, I’ll just take a minute to kind of just tell you about my background, just so we have a little bit more.
35 00:06:04.870 ⇒ 00:06:06.070 Srinivas Saiteja Tenneti: Sure, sure, of course.
36 00:06:06.070 ⇒ 00:06:17.689 Pranav Narahari: And so, yeah, specifically at Brainforge, what I do is I’m part of the delivery team, and so, like you said, building these products from, like, the ground up, making it feel like, you know, it’s our baby.
37 00:06:17.690 ⇒ 00:06:18.680 Srinivas Saiteja Tenneti: Yeah.
38 00:06:18.680 ⇒ 00:06:29.460 Pranav Narahari: It’s a very exciting thing, and it’s, I’m glad that you said that, because that is, like, a feeling that you’ll have here, because you will be, most of the time, building something from the ground up. Sure.
39 00:06:29.780 ⇒ 00:06:42.099 Pranav Narahari: And the teams are pretty small, so you’ll have a great impact on whatever you’re building. Prior to Brainforge, I’ve been working as, like, an AI engineer for the… for the last, like, year and a half.
40 00:06:42.990 ⇒ 00:06:54.690 Pranav Narahari: And then prior to that, working as a cloud engineer for a regional bank in the U.S. And so, I have a pretty traditional, like, software engineering background, and then in the last two-ish years,
41 00:06:54.690 ⇒ 00:07:07.609 Pranav Narahari: really taking advantage of the AI dev tools that are out there, and understanding, like, what we can do with these LLMs. And that kind of transitions me into, like, my first, like, set of questions, which is, like.
42 00:07:07.670 ⇒ 00:07:20.910 Pranav Narahari: how do you assess whether or not an LLM is the right solution for the problem at hand? So basically, when do you know it’s the right solution? When do you think it’s, like, the wrong solution?
43 00:07:21.600 ⇒ 00:07:39.159 Srinivas Saiteja Tenneti: That’s actually a critical question, because, like, most of the people don’t ask this, they simply, like, do their… I mean, ask for the normal, basic questions, but I feel like this is really a great question, because when we use LLM, or let’s say when we use RAG, okay, simple… in simple terms, when we use LLM or RAG,
44 00:07:39.160 ⇒ 00:07:49.170 Srinivas Saiteja Tenneti: So, when the, like, the data which we have, which is, like, unpredictable or changes every time, okay, and, you know, it is constantly, like, evolving.
45 00:07:49.170 ⇒ 00:07:57.839 Srinivas Saiteja Tenneti: So, at that time, we use an LLM or RAG solution. So, I feel like the first thing, I look at the type of problem we are trying to solve.
46 00:07:57.840 ⇒ 00:08:14.360 Srinivas Saiteja Tenneti: So, because, like, let’s take my experience. When I was a data scientist, for that, I used machine learning models. Why? Because the data was, like, pretty, you know, neatly aligned. You can say it was, like, properly
47 00:08:14.560 ⇒ 00:08:18.980 Srinivas Saiteja Tenneti: set, not, like, you know, evolving.
48 00:08:19.010 ⇒ 00:08:36.929 Srinivas Saiteja Tenneti: So, I would say the LLMs are very useful when the problem involves, like, unstructured or unclear data. So, like, documents, like, text, or, like, conversations, or knowledge retrieval, so these kind of scenarios.
49 00:08:36.950 ⇒ 00:08:46.920 Srinivas Saiteja Tenneti: In my role, like, right now, which I was… I’m doing, like, I built a GPT-4-based healthcare assistant that helps, like, you know, clinical staff with medical guidelines
50 00:08:47.020 ⇒ 00:09:06.770 Srinivas Saiteja Tenneti: and policy documents. So, already the documents are there, but in future, we’ll be, like, uploading more documents, whatever there are. So, the model should be, like, you know, it should be, like, understanding the new, I mean, new documents as well. And it is also responding to the chat-to-chat interface. So, at that time, LLMs are really good, I feel.
51 00:09:06.820 ⇒ 00:09:13.930 Srinivas Saiteja Tenneti: And, I also evaluate whether the task requires reasoning, summarization, or, like, you know,
52 00:09:14.120 ⇒ 00:09:29.019 Srinivas Saiteja Tenneti: question answering, because those are areas where LLMs usually perform really well, I feel like. So, on the other end, I think LLMs are the wrong solution when the task is very structured, or requires highly, you know.
53 00:09:29.030 ⇒ 00:09:40.590 Srinivas Saiteja Tenneti: what do we say? Deterministic outputs. So, for example, like, if the problem is simple data processing, no need to, like, stuff everything in that. Sorry for that. Yeah, no need to stuff every, like…
54 00:09:40.590 ⇒ 00:09:42.469 Pranav Narahari: That’s exactly right. Yeah, yeah, so…
55 00:09:42.470 ⇒ 00:09:43.330 Srinivas Saiteja Tenneti: Boom.
56 00:09:43.330 ⇒ 00:09:47.670 Pranav Narahari: I love that answer. Now, moving on to, kind of.
57 00:09:47.920 ⇒ 00:09:53.809 Pranav Narahari: if you just look at the landscape of all the LLMs out there and all the providers,
58 00:09:53.920 ⇒ 00:09:57.660 Pranav Narahari: Even each provider, they have different models.
59 00:09:58.030 ⇒ 00:09:58.620 Srinivas Saiteja Tenneti: Yes, of course.
60 00:09:58.620 ⇒ 00:10:07.850 Pranav Narahari: focus on, you know, you can probably just look at one provider, whatever example you want to use. How do you decide which model to use for which use case?
61 00:10:08.540 ⇒ 00:10:13.370 Srinivas Saiteja Tenneti: That’s really, like, one of, like, you know, the very important things if you’re building, like, something
62 00:10:13.430 ⇒ 00:10:28.540 Srinivas Saiteja Tenneti: using LLM. I feel like, see, there are a few, like, OpenAI, I have… I’m… right now, I’m using Claude also for my, like, personal project for athletic performance, so where we upload videos of athletic performance or gym videos, it will give you the…
63 00:10:28.540 ⇒ 00:10:48.440 Srinivas Saiteja Tenneti: like, personalized, like, you know, monthly or weekly plans, like, how to improve your athletic performance. That is one personal project I’m doing. I’m using Claude for that. So, basically, like, if I look at OpenAI models, for example, like GPT-4, GPT-3.5, or, like, let’s say GPT-5 also.
64 00:10:48.490 ⇒ 00:10:59.840 Srinivas Saiteja Tenneti: So, let’s say I would choose GPT-4 for tasks that require strong reasoning, like complex question answering or handling sensitive information. I feel like GPT is also quick.
65 00:10:59.840 ⇒ 00:11:13.910 Srinivas Saiteja Tenneti: Like, for giving answers, responses. Like, in my current project at UnitedHealthcare, we used, like, GPT-4 with a RAG pipeline, because the system needed to understand detailed medical guidelines and, like, policy documents.
66 00:11:13.910 ⇒ 00:11:25.570 Srinivas Saiteja Tenneti: And accuracy, of course, was very important to clinical staff, because, of course, it is in medical field. We need to follow HIPAA rules. So, on the other hand, like, let’s say Claude. Claude is also very good.
67 00:11:25.750 ⇒ 00:11:47.970 Srinivas Saiteja Tenneti: to analyze… like, you know, I feel like Claude analyzes things completely, you know? Let’s say if you’re coding, or if you’re doing anything, like, if you give any, like, document, or if you give any video, or anything, it analyzes end-to-end. It is a little bit, like, slow compared to GPT, I feel like.
68 00:11:47.980 ⇒ 00:11:54.860 Srinivas Saiteja Tenneti: But, it analyzes really well. On the other hand, it gives you… it gives you, like, a whole context.
69 00:11:54.880 ⇒ 00:11:58.329 Srinivas Saiteja Tenneti: thorough reasoning and a lot, like, more information.
70 00:11:58.360 ⇒ 00:12:02.039 Srinivas Saiteja Tenneti: For, for the chatbot, like, from my personal experience.
71 00:12:02.310 ⇒ 00:12:19.020 Srinivas Saiteja Tenneti: For the chatbot, you don’t need, like, that much of reasoning, I feel like. Only, it should be, like, chat-to-chat, end-to-end, like, on a chat-to-chat basis. It should give the relevant documents and the relevant summary to the people, not, like…
72 00:12:19.020 ⇒ 00:12:24.850 Srinivas Saiteja Tenneti: not needed, like, unnecessary… I felt like Claude was unnecessary for that, like, that particular case.
73 00:12:24.850 ⇒ 00:12:32.130 Srinivas Saiteja Tenneti: But yeah, on the other hand, if the use case is, like, I have told you, like, basic summarization, simple, like, these kind of scenarios.
74 00:12:32.160 ⇒ 00:12:51.139 Srinivas Saiteja Tenneti: There are, like, multiple, yeah, but these ones I use more, like, Claude and OpenAI. These are really, like, good, depending on the use cases, but I’m trying to use, like, much more different ones. Like, I have once used Gemini also, like, for a similar case, and those were also good, yeah.
75 00:12:51.440 ⇒ 00:12:56.879 Pranav Narahari: Gotcha, okay. So, yeah, you’ve kind of talked a little bit about, like, some projects that you’ve built in the past.
76 00:12:57.440 ⇒ 00:13:01.460 Pranav Narahari: These projects, can you tell me about an evaluation framework that you built?
77 00:13:02.030 ⇒ 00:13:19.460 Srinivas Saiteja Tenneti: Yeah, that’s great, because, like, you know, in my graduation projects, when I built, at that time, while I was learning, I didn’t know, like, about evaluation frameworks. I knew about some, but a few I missed. Because of that, I learned again and did that in my graduation days, but of course,
78 00:13:19.460 ⇒ 00:13:26.009 Srinivas Saiteja Tenneti: in my, like, you know, in my, previous work experience, in my… of course, in my recent, UnitedHealthcare.
79 00:13:26.010 ⇒ 00:13:31.870 Srinivas Saiteja Tenneti: where we built the GPT-4-based medical assistant. We also created a simple evaluation framework
80 00:13:31.870 ⇒ 00:13:37.390 Srinivas Saiteja Tenneti: to make sure the responses from the LLM were, like, you know, accurate and reliable.
81 00:13:37.390 ⇒ 00:13:43.700 Srinivas Saiteja Tenneti: The first step was, like, creating a test dataset, which we call, like, you know, of real user questions.
82 00:13:43.700 ⇒ 00:13:58.429 Srinivas Saiteja Tenneti: Questions that, like, you know, the doctors and clinical staff usually ask. So, we set a meeting with them, and we, like, you know, built that dataset with a set of queries, which we kept as, like, you know,
83 00:13:58.440 ⇒ 00:14:07.750 Srinivas Saiteja Tenneti: As a test data set, such as medical questions, like, medical guidelines, asking about patient policies, and, you know, like, few procedures.
84 00:14:07.910 ⇒ 00:14:12.519 Srinivas Saiteja Tenneti: For each question, we also define what a correct and helpful answer should look like.
85 00:14:12.600 ⇒ 00:14:27.100 Srinivas Saiteja Tenneti: So, this helpful… I mean, this helped us, you know, like, create a baseline to evaluate the system. We used to treat it as, like, a gold set or something like that. So, next, we evaluated the system in a few ways.
86 00:14:27.100 ⇒ 00:14:41.379 Srinivas Saiteja Tenneti: We checked the retrieval quality from the RAG pipeline, meaning whether the system was, like, pulling the correct documents from the vector database or not. That is, like, I feel like retrieval, I mean, retrieval is actually very important.
87 00:14:41.430 ⇒ 00:14:44.669 Srinivas Saiteja Tenneti: If it is wrong, the whole system goes wrong, so…
88 00:14:44.700 ⇒ 00:14:57.949 Srinivas Saiteja Tenneti: Yeah, before, like, generating the answer. Then, like, we evaluated the LLM, like, you know, response quality, looking at things like, accuracy, completeness, and also, like, of course, hallucination risk and also drift.
89 00:14:57.950 ⇒ 00:15:16.180 Srinivas Saiteja Tenneti: So, I took… I took care of, like, you know, hallucination for this, like, medical field. I took care of, like, keeping the temperature value of the LLM model from 0 to 0.3, which is pretty much required, because it keeps grounded. If you keep it, like, more than 0.3, it will be, like, behaving nuts. Sorry, my…
90 00:15:16.180 ⇒ 00:15:20.649 Pranav Narahari: It’s funny that you mention that right now, because that was going to be my next question.
91 00:15:20.650 ⇒ 00:15:21.630 Srinivas Saiteja Tenneti: Oh, yeah.
92 00:15:21.630 ⇒ 00:15:38.560 Pranav Narahari: So that’s great. I think temperature is one thing that, you can use to modulate and really refine the output that you get. I’m curious, what are some other ways that you’ve… that you can think of, maybe you’ve used it in the past or not, like, to handle hallucinations?
93 00:15:38.990 ⇒ 00:15:43.579 Srinivas Saiteja Tenneti: Of course, like, see, hallucinations happen, like,
94 00:15:44.040 ⇒ 00:16:00.479 Srinivas Saiteja Tenneti: if the model, you know, like… hallucination is something where the model, you know, goes vague, I would say, like, it thinks, like, over… out of the box, and it gives you much more, like, unnecessary or irrelevant information.
95 00:16:00.480 ⇒ 00:16:18.110 Srinivas Saiteja Tenneti: So, I feel like, with LLM-based systems, in my projects, I try to reduce them using a few, like, different approaches, not only, like, based around temperature. Sometimes, like, one of the most effective methods I have used is RAG, like, instead of letting the model
96 00:16:18.120 ⇒ 00:16:22.200 Srinivas Saiteja Tenneti: Like, you know, generate answers purely from its training data.
97 00:16:22.340 ⇒ 00:16:28.250 Srinivas Saiteja Tenneti: We retrieve, like, relevant documents from a vector database and provide the context to the model.
98 00:16:28.530 ⇒ 00:16:39.689 Srinivas Saiteja Tenneti: And, before generating the response. And, in the healthcare assistant project at UnitedHealthcare, we used, like, LangChain with the vector search, so the model could answer questions
99 00:16:39.690 ⇒ 00:16:55.989 Srinivas Saiteja Tenneti: based on, like, verified, clinical documents and policy documents. Of course, one more thing I want to say is, like, you need to keep the prompt, you need to, like, have the prompt well that only refer to those documents and answer it. If it is going out of context, you need to, like.
100 00:16:56.000 ⇒ 00:17:14.099 Srinivas Saiteja Tenneti: you know, like, refusing, like, we say as a refusal, so if it is going out of context, you should tell that, sorry, this is, we don’t have the information. You can, like, it is like a re-asking, so I would say, like, strong prompt design and guardrails.
101 00:17:14.109 ⇒ 00:17:26.359 Srinivas Saiteja Tenneti: For example, we instruct the model to only answer based on the provided context, and to say something like, I don’t have enough information or something, to answer this question. If you need, like, a…
102 00:17:26.660 ⇒ 00:17:34.530 Srinivas Saiteja Tenneti: Or something like that, with proper citations. This is what helps us. Otherwise, even the user who will be using will be confused.
103 00:17:34.650 ⇒ 00:17:36.940 Pranav Narahari: So, yeah, that is there, and .
104 00:17:36.960 ⇒ 00:17:47.919 Srinivas Saiteja Tenneti: like, this prevents the model, you know, from making up information, this is what’s necessary, I feel. I also like to use, like, you know, confidence checks and validation steps, so sometimes we add an extra step.
105 00:17:48.030 ⇒ 00:18:05.860 Srinivas Saiteja Tenneti: Where the system verifies whether the answer actually matches the retrieved documents. So, I feel like halogenation can be checked through multiple processes, like, another thing is, if the confidence is low, the system can either, like, you know, ask the user for multiple clarification or return a safer response.
106 00:18:05.880 ⇒ 00:18:20.069 Srinivas Saiteja Tenneti: Because, in few scenarios, like, healthcare or, like, government-related scenarios, these kind of things will be, like, pretty critical. Like, these are, like, secure… security, you need to follow HIPAA rules, you need to follow a few rules, so…
107 00:18:20.070 ⇒ 00:18:30.329 Srinivas Saiteja Tenneti: for that, you can’t, like, you know, trade off these kind of scenarios. It should be pretty grounded. So, finally, I feel like human feedback and monitoring are also very important.
108 00:18:30.420 ⇒ 00:18:46.549 Srinivas Saiteja Tenneti: So, we can use, like, Langfuse, which is, like, a great tool to, like, check it, and a few other tools, like Evidently AI, which is also a great tool. So, we track, of course, like, user feedback. We also, like, we can also track user feedback and review some
109 00:18:46.770 ⇒ 00:18:56.270 Srinivas Saiteja Tenneti: responses regularly to identify patterns, whether… Right. I mean, where a hallucination might happen. So, this helps us improve, like, yeah, overall.
110 00:18:56.270 ⇒ 00:19:14.989 Pranav Narahari: So, I have a question about… and so, this is kind of leading me down, like, kind of how you think about systems design with AI systems, and any type of, maybe even specifically RAG systems that you’ve built. You mentioned this UnitedHealthcare GPT-4, chatbot that you created.
111 00:19:15.080 ⇒ 00:19:17.090 Pranav Narahari: Were you working with sensitive data?
112 00:19:17.890 ⇒ 00:19:21.429 Srinivas Saiteja Tenneti: I mean… If you ask me clearly.
113 00:19:21.640 ⇒ 00:19:41.429 Srinivas Saiteja Tenneti: I would say it is not sensitive data, but it was, like, you know, university… I mean, sorry, not universities, sorry, it was, like, the company’s internal, referral process, so I feel like it was sensitive data, like, it can’t be gone into the public, but it was not people’s data, not, the…
114 00:19:41.430 ⇒ 00:19:41.880 Pranav Narahari: So.
115 00:19:41.880 ⇒ 00:19:42.529 Srinivas Saiteja Tenneti: Outside you…
116 00:19:42.530 ⇒ 00:19:50.860 Pranav Narahari: to ask that question is, I’m wondering how you think of, in a design for a system like that, did you have to…
117 00:19:50.860 ⇒ 00:20:04.559 Pranav Narahari: scrub certain parts of the data before having it, as, like, create embeddings on the text, or… how did you think about anonymizing or scrubbing the data so that it was ready to be used by the chatbot?
118 00:20:04.960 ⇒ 00:20:10.190 Srinivas Saiteja Tenneti: So basically, the… I mean, to handle this, like,
119 00:20:10.350 ⇒ 00:20:17.040 Srinivas Saiteja Tenneti: I mean, to handle the sensitive data. I would say it is not sensitive data of the people who use UnitedHealthcare.
120 00:20:17.040 ⇒ 00:20:24.879 Srinivas Saiteja Tenneti: I mean, not the customers, but the internal people’s data, so it is, like, semi-sensitive, I would say. Not completely no, or not completely yes.
121 00:20:24.880 ⇒ 00:20:38.759 Srinivas Saiteja Tenneti: But that was definitely an important part of the system design as a whole. Since we were, like, working in the healthcare domain, we had to be very careful with sensitive data, and you need to, like… you will be having a set of, like, things
122 00:20:38.760 ⇒ 00:20:53.959 Srinivas Saiteja Tenneti: we need to follow. So, before creating embeddings or storing documents in the vector database, we made sure the data went through a data cleaning and anonymization process, as you said. So, the first step was, like, removing or, like, you know, masking,
123 00:20:53.960 ⇒ 00:21:01.399 Srinivas Saiteja Tenneti: personally identifiable information, which is known as PII, such as, like, patient names,
124 00:21:01.520 ⇒ 00:21:12.320 Srinivas Saiteja Tenneti: IDs, addresses, or any direct identifiers, if present in the documents. Right. If those are there, then we try to redact it, or, like, remove it.
125 00:21:12.320 ⇒ 00:21:26.659 Srinivas Saiteja Tenneti: In most cases, we did not include raw, like, patient records in the system, because it was, like, an internal document, so there will be, like, for internal policies and internal, like, refer… internal, like, learning,
126 00:21:26.660 ⇒ 00:21:45.160 Srinivas Saiteja Tenneti: kind of scenario. So, we tried not to include, I mean, raw patient records. Instead of, like, we focused around embedding clinical guidelines, policy documents, and also, like, approved, knowledge-based content. For those scenarios, you won’t be having, like, patient, I mean, patient details, right?
127 00:21:45.160 ⇒ 00:21:59.899 Srinivas Saiteja Tenneti: It is, like, the knowledge… I mean, we usually use documents for KT, right, knowledge transfer. So those kind of documents, which are safer sources for a RAG-based chatbot. In cases where documents might contain sensitive information,
128 00:21:59.900 ⇒ 00:22:08.479 Srinivas Saiteja Tenneti: So, for that scenario, we designed, like, you know, we had… we use the data preprocessing pipelines, like, to scrub or redact those particular fields.
129 00:22:08.770 ⇒ 00:22:09.160 Pranav Narahari: Gotcha.
130 00:22:09.160 ⇒ 00:22:24.099 Srinivas Saiteja Tenneti: Before generating embeddings. So, this ensured that the whole, like, you know, that the vector store only contained, like, sanitized and approved content, not, like, you know, semi-sensitive or sensitive content. So…
131 00:22:24.410 ⇒ 00:22:27.660 Srinivas Saiteja Tenneti: We also added access controls and logging, so…
132 00:22:27.810 ⇒ 00:22:36.789 Srinivas Saiteja Tenneti: only authorized users, like, internal users can use it, like doctor or internal staff could query the system, and the people who are, like, new joiners.
133 00:22:36.940 ⇒ 00:22:54.130 Srinivas Saiteja Tenneti: we give them access, like, I mean, we means, of course, the company gives them access based on that, so… So, overall, the approach was to combine data anonymization, document filtering, and, like, sec- secure system design before the data ever reached the embedding.
134 00:22:54.230 ⇒ 00:23:04.719 Srinivas Saiteja Tenneti: Okay. Instead of, like, you know, having that in the embedding, so yeah. Which helped us safely build the chatbot while respecting, of course, HIPAA rules and healthcare data regulations.
135 00:23:04.990 ⇒ 00:23:16.300 Pranav Narahari: Right, that makes a lot of sense. Zooming out a little bit, just kind of talking about RAG systems in general, how do you think about embedding updates and just vector store maintenance?
136 00:23:16.810 ⇒ 00:23:19.829 Srinivas Saiteja Tenneti: Oh, yeah, that is actually something, like, you know.
137 00:23:19.930 ⇒ 00:23:28.030 Srinivas Saiteja Tenneti: day-to-day embedding updates or something, like, I mean, not day-to-day, but some embedding updates or anything might lead to, like, you know, embedding drift.
138 00:23:28.250 ⇒ 00:23:43.249 Srinivas Saiteja Tenneti: Which, most people tell, like, you know, we can face sometimes. So, when I think about embedding updates and vector store maintenance in a RAG system, I usually focus on keeping the knowledge base fresh, like, you know.
139 00:23:43.400 ⇒ 00:24:01.349 Srinivas Saiteja Tenneti: it is, like, you know, I keep it, like, recent and in that way, like, the documents, whatever we have, and also, like, accurate and efficient for retrieval. First, for embedding updates, I design a pipeline that can detect when, you know, new documents or updated content are added, and
140 00:24:01.350 ⇒ 00:24:11.899 Srinivas Saiteja Tenneti: You know, for example, if new clinical guidelines or policy documents are published, the system usually, like, I mean, automatically processes those documents.
141 00:24:11.900 ⇒ 00:24:24.940 Srinivas Saiteja Tenneti: and create new embeddings, and stores them in the vector database. This will help us to keep everything in loop. So, this ensures the chatbot always, you know, retrieves the most recent information.
142 00:24:25.150 ⇒ 00:24:31.559 Srinivas Saiteja Tenneti: So, another thing I consider the… in re-embedding, like, when the re-embedding model changes.
143 00:24:31.560 ⇒ 00:24:50.449 Srinivas Saiteja Tenneti: if we switch to a better embedding model or, like, something like that, we usually, like, regenerate embeddings for the existing documents so that semantic search quality improves, because sometimes a few documents are outdated, we need to remove them, and a few new updated documents we need to add. At that time, like, re-embedding really works.
144 00:24:50.480 ⇒ 00:24:58.620 Srinivas Saiteja Tenneti: For vector store maintenance, I try to keep the databases, like, clean and efficient. So, that includes, like,
145 00:24:58.840 ⇒ 00:25:02.770 Srinivas Saiteja Tenneti: Removing outdated documents, as I’ve told, and avoiding,
146 00:25:02.930 ⇒ 00:25:08.040 Srinivas Saiteja Tenneti: I feel like duplicate embeddings, like, sometimes the same content, little bit of updates.
147 00:25:08.110 ⇒ 00:25:22.209 Srinivas Saiteja Tenneti: like, have duplicate… will cost to duplicate embeddings. So, I feel like… and also, like, organizing data with metadata filters, like document type, and also, like, version or department.
148 00:25:22.260 ⇒ 00:25:29.240 Srinivas Saiteja Tenneti: and IDs, or something like that. So, this helps, like, improve retrieval accuracy and filtering during search.
149 00:25:29.370 ⇒ 00:25:46.840 Srinivas Saiteja Tenneti: I also like to, like, you know, I mean, monitor retrieval performance, such as checking whether the system is consistently retrieving the most relevant, I mean, documents for user queries or not. If we see issues, of course, we… I mean, we may adjust things like chunk size.
150 00:25:46.850 ⇒ 00:25:56.819 Srinivas Saiteja Tenneti: or indexing strategy, or metadata filters, something like that. So overall, I treat the vector database like, you know, a knowledge system.
151 00:25:56.940 ⇒ 00:26:15.400 Srinivas Saiteja Tenneti: Where, I mean, where we regularly update embeddings, manage document versions, and also, like, monitor retrieval quality to make sure the RAG works well, and the system, like, not just for the particular moment, but it continues to perform well
152 00:26:15.460 ⇒ 00:26:17.419 Srinivas Saiteja Tenneti: I mean, for the future as well.
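The retrieval-quality monitoring mentioned here is often reduced to a metric like recall@k over a small labeled query set. A minimal sketch — the labeled data and k value are illustrative assumptions:

```python
def recall_at_k(results: list[list[str]], relevant: list[set[str]], k: int = 5) -> float:
    """Fraction of queries whose top-k retrieved IDs contain a relevant document."""
    if not results:
        return 0.0
    hits = sum(1 for res, rel in zip(results, relevant) if set(res[:k]) & rel)
    return hits / len(results)

# Two labeled queries: the first is answered within the top-2, the second is missed.
score = recall_at_k([["a", "b", "c"], ["d", "e"]], [{"b"}, {"z"}], k=2)
```

Tracking this number over time is what tells you whether a chunk-size or indexing change actually helped.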
153 00:26:18.220 ⇒ 00:26:25.789 Pranav Narahari: That’s great. Yeah, just last question, and then we can kind of get into any questions you may have for me. Sure, sure.
154 00:26:26.370 ⇒ 00:26:30.750 Pranav Narahari: How do you, in your experience, feel LLM systems usually break under load?
155 00:26:31.520 ⇒ 00:26:35.210 Srinivas Saiteja Tenneti: Oh, sorry, when LLM systems break under load?
156 00:26:35.450 ⇒ 00:26:47.289 Pranav Narahari: Like, when they’re having a lot of load, like, what is usually the cause for the break? What have you felt, you can change to mitigate that? Has that happened in your experience?
157 00:26:48.030 ⇒ 00:26:51.720 Srinivas Saiteja Tenneti: Okay, yeah, so sometimes, you know…
158 00:26:51.950 ⇒ 00:26:59.319 Srinivas Saiteja Tenneti: I feel like when an LLM system is overloaded, I feel like when… due to…
159 00:26:59.740 ⇒ 00:27:06.970 Srinivas Saiteja Tenneti: I’ve seen few situations, I will tell you from my personal experience also. Sure. Like, a project experience or something like that.
160 00:27:06.970 ⇒ 00:27:07.460 Pranav Narahari: Yeah.
161 00:27:07.460 ⇒ 00:27:16.709 Srinivas Saiteja Tenneti: Sometimes LLM systems struggle under, like, you know, high load, especially when, like, many users are sending requests at the same time.
162 00:27:16.930 ⇒ 00:27:22.179 Srinivas Saiteja Tenneti: And, even, like, if you, in my experience, the most common, like,
163 00:27:22.310 ⇒ 00:27:40.419 Srinivas Saiteja Tenneti: reason, like, systems usually, like, break under load is, like, latency and, of course, API bottlenecks, which are there. And LLM models are, like, you know, computationally heavy, so if too many requests hit them, like, you know, simultaneously,
164 00:27:40.420 ⇒ 00:27:49.520 Srinivas Saiteja Tenneti: response time can increase significantly because they can't think at that time, like, that's something which is there, even with the personal GPTs which we use.
165 00:27:49.780 ⇒ 00:27:50.520 Pranav Narahari: Right.
166 00:27:50.520 ⇒ 00:27:51.190 Srinivas Saiteja Tenneti: No.
167 00:27:51.270 ⇒ 00:28:09.449 Srinivas Saiteja Tenneti: So, of course, the service can start failing. So, another common issue is with the retrieval pipeline in RAG systems. I would say, like, when the system needs to perform, like, vector search, document retrieval, and LLM generation for every request, the overall pipeline can become slow.
168 00:28:09.450 ⇒ 00:28:16.709 Srinivas Saiteja Tenneti: If the, like, vector database or, I would say, the embedding search is not properly optimized, it will take time.
169 00:28:16.730 ⇒ 00:28:20.629 Srinivas Saiteja Tenneti: So, these two scenarios might be, and another…
170 00:28:20.730 ⇒ 00:28:31.439 Srinivas Saiteja Tenneti: Yeah, the way to improve it, yeah, of course, to answer it: one way we can mitigate this is by adding, like, you know,
171 00:28:31.760 ⇒ 00:28:44.390 Srinivas Saiteja Tenneti: I would say, request queues and rate limiting. So, it will, like… which helps, I mean, it actually helps to control the number of requests, you know, being processed at the same time.
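The rate limiting described here is often a sliding-window check in front of the LLM call: admit a request if fewer than N requests landed in the last window, otherwise queue or reject it. A minimal sketch — the class, thresholds, and behavior on rejection are illustrative; a real deployment would usually use an API gateway or an existing library:

```python
import time
from collections import deque

class RateLimiter:
    """Sliding-window limiter: at most `max_requests` per `window` seconds."""

    def __init__(self, max_requests: int, window: float):
        self.max_requests = max_requests
        self.window = window
        self.timestamps = deque()  # admission times of recent requests

    def allow(self, now=None) -> bool:
        now = time.monotonic() if now is None else now
        # Drop admissions that have aged out of the window.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_requests:
            self.timestamps.append(now)
            return True
        return False  # caller should queue or reject the request
```

Requests that return `False` go into a queue (or get a 429), which is what keeps latency bounded during peak usage instead of letting every request hit the model at once.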
172 00:28:44.390 ⇒ 00:28:56.530 Srinivas Saiteja Tenneti: So, it can, like, you know, keep the requests in a queue, or something like that. So, this keeps the system, you know, stable even during, like, you know, peak usage. You can also optimize performance by, you know, caching
173 00:28:56.530 ⇒ 00:28:57.600 Pranav Narahari: So… Yep.
174 00:28:57.600 ⇒ 00:29:09.869 Srinivas Saiteja Tenneti: frequent, like, you know, caching frequent queries to the LLM, that is, like, something useful. If you ask me, like, it will be, like, kept in a cache, like, cached into a temporary DB.
175 00:29:09.870 ⇒ 00:29:18.340 Srinivas Saiteja Tenneti: So that, when a user asks the same question, it will be retrieving the same answer. I mean, it will be giving the same answer instead of, like, hitting the LLM.
176 00:29:18.340 ⇒ 00:29:25.589 Srinivas Saiteja Tenneti: it would usually, like, you know, call the LLM, like, again to check, so instead of that, we can, like, do this, so that,
177 00:29:25.640 ⇒ 00:29:31.539 Srinivas Saiteja Tenneti: Of course, for similar responses, the load will be, like, I mean, lessened on the LLM.
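The query caching described here can be sketched as a small LRU cache keyed on a normalized form of the question, so repeated questions skip the LLM call entirely. The normalization and class shape are illustrative assumptions (a real system might also cache on a semantic-similarity key, which exact-match caching does not cover):

```python
from collections import OrderedDict

class ResponseCache:
    """LRU cache so identical questions are answered without hitting the LLM."""

    def __init__(self, max_size: int = 128):
        self.max_size = max_size
        self.data = OrderedDict()

    @staticmethod
    def _key(query: str) -> str:
        # Normalize case and whitespace so trivially different phrasings collide.
        return " ".join(query.lower().split())

    def get(self, query: str):
        key = self._key(query)
        if key in self.data:
            self.data.move_to_end(key)  # mark as recently used
            return self.data[key]
        return None  # cache miss: caller falls through to the LLM

    def put(self, query: str, answer: str) -> None:
        key = self._key(query)
        self.data[key] = answer
        self.data.move_to_end(key)
        if len(self.data) > self.max_size:
            self.data.popitem(last=False)  # evict the least-recently-used entry
```

On a miss the caller queries the LLM and `put`s the answer; on a hit, the LLM is never touched, which is where the load reduction comes from.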
178 00:29:32.180 ⇒ 00:29:34.310 Pranav Narahari: Yeah, sounds great.
179 00:29:35.900 ⇒ 00:29:44.599 Pranav Narahari: How about we hop into any questions you may have for me? I know it’s technically 1 minute left, but I can stay on for a little bit extra if needed.
180 00:29:44.600 ⇒ 00:29:48.119 Srinivas Saiteja Tenneti: Sure, sure, sure. One question, like, I want to ask…
181 00:29:48.250 ⇒ 00:30:03.989 Srinivas Saiteja Tenneti: Like, what are the tools, if you don't mind, what are the tools you use? Because right now, I use a lot of tools, like, you know, like, Supabase. I use Supabase for my present project, and also I'm using, like, Voyage, I'm using, like, Cohere.
182 00:30:03.990 ⇒ 00:30:11.960 Srinivas Saiteja Tenneti: I’m using, like, few latest technologies, I mean, even Langfuse for, I mean, of course, for the evaluation and all.
183 00:30:12.550 ⇒ 00:30:12.880 Pranav Narahari: Yeah.
184 00:30:12.880 ⇒ 00:30:21.310 Srinivas Saiteja Tenneti: How do you… like, what are the tools you guys are using right now which are really cool, like, in the LLM stack?
185 00:30:21.490 ⇒ 00:30:22.610 Srinivas Saiteja Tenneti: So…
186 00:30:22.610 ⇒ 00:30:23.290 Pranav Narahari: Yeah.
187 00:30:23.290 ⇒ 00:30:24.370 Srinivas Saiteja Tenneti: Yeah. So…
188 00:30:24.370 ⇒ 00:30:41.480 Pranav Narahari: One, since we’re moving so quickly, right, another thing that we also want to move just as quickly is the actual deploy process. So we want to have, like, an environment where we can, like, the infrastructure, essentially, that supports the speed that we’re moving at.
189 00:30:41.480 ⇒ 00:30:48.970 Pranav Narahari: And so, sometimes having applications hosted in AWS and GCP can take a lot of time.
190 00:30:48.990 ⇒ 00:30:55.240 Pranav Narahari: And so, we've opted to use Railway for a lot of our new projects.
191 00:30:55.360 ⇒ 00:30:57.700 Pranav Narahari: I know there's also Render as an option.
192 00:30:57.700 ⇒ 00:30:58.629 Srinivas Saiteja Tenneti: Oh, Render, yeah.
193 00:30:58.630 ⇒ 00:31:00.850 Pranav Narahari: Yeah, Fly.io.
194 00:31:00.980 ⇒ 00:31:01.770 Pranav Narahari: These are all options.
195 00:31:01.770 ⇒ 00:31:05.069 Srinivas Saiteja Tenneti: Fly.io is cool. I mean, it is free, right, to host?
196 00:31:05.590 ⇒ 00:31:09.260 Pranav Narahari: I, I think, yeah, at small, like, there’s, like, a free tier, and then…
197 00:31:09.260 ⇒ 00:31:10.360 Srinivas Saiteja Tenneti: That is sweet, dude.
198 00:31:10.700 ⇒ 00:31:19.309 Srinivas Saiteja Tenneti: Well, scaling, of course, takes money, but yeah, the free tier is cool, and Render and Railway, both are also good. I have tried all three of them, by the way.
199 00:31:19.310 ⇒ 00:31:28.099 Pranav Narahari: Yeah. So, those are some tools that we use. The tools you mentioned, too, like, we've used Langfuse on a few different projects as well.
200 00:31:28.830 ⇒ 00:31:37.740 Pranav Narahari: In terms of being able to support a lot of different models, for, let's say, like, a chat interface, we've taken advantage of the Vercel AI SDK.
201 00:31:37.740 ⇒ 00:31:38.950 Srinivas Saiteja Tenneti: What’s it, yeah.
202 00:31:39.130 ⇒ 00:31:39.610 Pranav Narahari: Yeah.
203 00:31:39.610 ⇒ 00:31:40.080 Srinivas Saiteja Tenneti: Great.
204 00:31:40.500 ⇒ 00:31:41.460 Srinivas Saiteja Tenneti: I… yeah.
205 00:31:41.650 ⇒ 00:31:48.760 Pranav Narahari: That's been really good, and for issues like hallucinations and stuff, too. It allows us to really…
206 00:31:49.090 ⇒ 00:31:52.899 Pranav Narahari: track, or A/B test with, a bunch of different
207 00:31:52.900 ⇒ 00:31:54.889 Srinivas Saiteja Tenneti: So, yeah, every desktop, yeah.
208 00:31:54.890 ⇒ 00:32:05.879 Pranav Narahari: variations in parameter sets based on the model. Obviously, supporting so many different providers as well is super useful, and it's very developer-friendly.
209 00:32:05.880 ⇒ 00:32:07.359 Srinivas Saiteja Tenneti: That’s what I’m thinking.
210 00:32:07.810 ⇒ 00:32:17.850 Pranav Narahari: kind of, like, on the niche end of things, of, like, what we do, is we really try to build on top of the infrastructure that companies already are using, and so…
211 00:32:17.850 ⇒ 00:32:31.340 Pranav Narahari: all companies, they have some type of communication channel, right? Whether it’s Teams, whether it’s Slack, whether it’s email, and so we built a lot of Slack bots. We’re thinking about building, like, Teams bots.
212 00:32:31.380 ⇒ 00:32:35.050 Pranav Narahari: We’ve built, like, Chrome extensions in the past.
213 00:32:35.070 ⇒ 00:32:36.010 Srinivas Saiteja Tenneti: Oh.
214 00:32:36.230 ⇒ 00:32:48.960 Pranav Narahari: So, just like that point of interaction with the user is something that we also think really deeply about, so that’s where you kind of need to put your product hat on when you’re thinking about system design as well. The back end is usually…
215 00:32:49.020 ⇒ 00:32:58.930 Pranav Narahari: more or less the same, based on, your experience. It’ll sound pretty similar, but that front-end interface and, how that looks could differ a lot.
216 00:32:59.110 ⇒ 00:32:59.950 Srinivas Saiteja Tenneti: Yeah, yeah.
217 00:33:00.080 ⇒ 00:33:14.150 Srinivas Saiteja Tenneti: So, I have, like, one more question. Of course, the tool stack you have mentioned was really, like, you know, it aligned with me, because all of that I have used, and of course, Vercel, I have used it for my portfolio. It is the first place, so, yeah.
218 00:33:14.150 ⇒ 00:33:23.489 Srinivas Saiteja Tenneti: And, like, I want to ask one question, like, how does your team evaluate, like, LLM performance and response quality in production?
219 00:33:23.730 ⇒ 00:33:40.199 Srinivas Saiteja Tenneti: Okay, like, do you have any specific evaluation frameworks or monitoring tools in place? Because I have been, like, exposed to many monitoring tools, like, a lot of, like, Datadog, CloudWatch, a lot of, I mean, tools. So, for logging and also for monitoring, what tools do you guys use?
220 00:33:40.970 ⇒ 00:33:59.239 Pranav Narahari: Yeah, so… a little bit of background on me, too. So, I started… I joined at Brainforge in December, and so, for the few clients that we’ve used so far, that we’ve had so far, we haven’t gotten too far in depth with, like, the monitoring tools for things that are in production.
221 00:33:59.410 ⇒ 00:34:14.159 Pranav Narahari: So it’s great that you’re asking that question, because we talk about that a little bit internally as well. We move so fast, and we push, like, features, so quickly, that, you know, some things don’t keep up, but, like, these are the type of things that, like, we are…
222 00:34:14.320 ⇒ 00:34:19.069 Pranav Narahari: you know, we need to find time for, so I’m really glad that you asked this question.
223 00:34:20.600 ⇒ 00:34:24.460 Pranav Narahari: But, yeah, right now, I don’t have, like, a ton of information for you there.
224 00:34:24.909 ⇒ 00:34:38.159 Srinivas Saiteja Tenneti: I feel like Datadog is really cool, because you can, like, create dashboards for monitoring, and upon that, you can create, you know, a few metrics, which can give alerts if something breaks for the pipelines.
225 00:34:38.159 ⇒ 00:34:47.889 Srinivas Saiteja Tenneti: And simultaneously, you can check logs also, so Datadog is pretty good overall. And apart from that, CloudWatch is also, like, a go-to, AWS CloudWatch, so…
226 00:34:47.909 ⇒ 00:34:50.709 Srinivas Saiteja Tenneti: These tools are great. Even I have used Splunk.
227 00:34:50.809 ⇒ 00:34:53.219 Srinivas Saiteja Tenneti: So, Splunk is also really great.
228 00:34:53.439 ⇒ 00:34:54.329 Srinivas Saiteja Tenneti: And yeah.
229 00:34:55.209 ⇒ 00:34:59.039 Srinivas Saiteja Tenneti: And yeah, a lot of, like, monitoring tools overall.
230 00:34:59.040 ⇒ 00:35:00.530 Pranav Narahari: That’s a lot, right? Yeah.
231 00:35:00.530 ⇒ 00:35:07.279 Srinivas Saiteja Tenneti: Of course. Splunk is good for, like, you know, responses. I mean, you can check for, like,
232 00:35:07.280 ⇒ 00:35:24.560 Srinivas Saiteja Tenneti: the customer data, not like, I mean, you know, let's say any customer account breaks, or let's say any user test account breaks, you can check the logs there using the test IDs. It really works, and of course, for CloudWatch and,
233 00:35:24.570 ⇒ 00:35:29.390 Srinivas Saiteja Tenneti: Datadog, it will be going by correlation IDs, so yeah.
234 00:35:29.390 ⇒ 00:35:33.269 Pranav Narahari: Have you ever used Sentry for, monitoring Sentry?
235 00:35:33.670 ⇒ 00:35:34.710 Srinivas Saiteja Tenneti: Sentry?
236 00:35:34.710 ⇒ 00:35:35.610 Pranav Narahari: Yeah.
237 00:35:35.610 ⇒ 00:35:42.209 Srinivas Saiteja Tenneti: No, but I have a brief idea. I have used other monitoring tools, and also… Yeah.
238 00:35:42.210 ⇒ 00:35:47.039 Pranav Narahari: I’m curious, because I’ve been hearing about it, so I’m wondering if it’s another thing to, like, do a spike on and…
239 00:35:47.040 ⇒ 00:35:52.469 Srinivas Saiteja Tenneti: But thank you for telling me, because after this meeting, immediately, I will check that. Okay, sure.
240 00:35:52.470 ⇒ 00:35:56.259 Pranav Narahari: Yeah, yeah, if you can let me know, like, what you think about it compared to Datadog, that would be…
241 00:35:56.260 ⇒ 00:36:09.889 Srinivas Saiteja Tenneti: Yeah, because I have a keen habit: if you tell me some new tool, you know, if I don't know it, it keeps bugging me the whole night, so… I will even open it on my phone in… I have a bad habit
242 00:36:09.890 ⇒ 00:36:18.180 Srinivas Saiteja Tenneti: of, researching late nights, like, checking in Wikipedia for, like, you know, a lot of current affairs, or a lot of, AI newsletters.
243 00:36:18.180 ⇒ 00:36:22.640 Srinivas Saiteja Tenneti: And of course, checking those, like, you know, new tools or technologies.
244 00:36:22.640 ⇒ 00:36:40.899 Srinivas Saiteja Tenneti: So, in that way, I have been, like, you know, learning about lots of, like, Nano Banana, and a few tools I really love, and right now, I'm using Cursor and Claude, so these tools are cool. So, that's why, thank you for telling me. Of course, I will… I will research properly which is better, and I will tell you.
245 00:36:41.040 ⇒ 00:36:44.679 Srinivas Saiteja Tenneti: What are the topics and everything? Because I really love, like, comparing.
246 00:36:44.680 ⇒ 00:36:52.909 Pranav Narahari: I mean, it’s a good habit to have, especially at a company like this, and in an industry like this, where, you know, things are changing every single day, it seems like.
247 00:36:52.910 ⇒ 00:36:56.020 Srinivas Saiteja Tenneti: Every single day. Honestly, every single day.
248 00:36:56.180 ⇒ 00:36:56.620 Pranav Narahari: Yeah.
249 00:36:56.620 ⇒ 00:37:11.769 Srinivas Saiteja Tenneti: Every single day. And yeah, I have a habit of, like, comparing which tool is better, because I want to, like, okay, spend my money where I could get the… more out of it, so… Right. And I could feel an ease of using it, right? So, that’s why. Right. Yeah.
250 00:37:11.770 ⇒ 00:37:13.700 Pranav Narahari: Oh, this is great, Srinivas.
251 00:37:13.700 ⇒ 00:37:14.450 Srinivas Saiteja Tenneti: Thank you, Pranav.
252 00:37:14.450 ⇒ 00:37:21.249 Pranav Narahari: If you have any additional questions, feel free to email them to me. But yeah, the Brainforge team will reach out to you shortly.
253 00:37:21.530 ⇒ 00:37:37.820 Srinivas Saiteja Tenneti: Sure, sure. Thank you, Pranav, and happy to, like, have a conversation with you, and it was really brilliant. It was… it went like a peer-to-peer conversation, like a colleague-to-colleague conversation, rather than, like, an interview. It was, like, me telling my experience, you telling your experience.
254 00:37:37.830 ⇒ 00:37:42.080 Srinivas Saiteja Tenneti: It was brilliant, I would say, and have a great day ahead, and
255 00:37:42.100 ⇒ 00:37:44.239 Srinivas Saiteja Tenneti: Of course, have a great weekend as well.
256 00:37:44.850 ⇒ 00:37:46.840 Srinivas Saiteja Tenneti: Appreciate it. Thank you so much. Have a good one.
257 00:37:47.220 ⇒ 00:37:48.280 Srinivas Saiteja Tenneti: Have a great day.