Meeting Title: Brainforge Interview w- Awaish Date: 2026-02-13 Meeting participants: Godwin Ekainu, Awaish Kumar


WEBVTT

1 00:02:02.440 00:02:03.190 Awaish Kumar: Hi.

2 00:02:03.610 00:02:04.600 Godwin Ekainu: Hi, Awaish.

3 00:02:05.020 00:02:06.660 Awaish Kumar: Oh, hi, how you doing?

4 00:02:06.990 00:02:08.539 Godwin Ekainu: I’m doing good, how are you doing?

5 00:02:09.169 00:02:10.239 Awaish Kumar: I’m good as well.

6 00:02:11.549 00:02:20.789 Awaish Kumar: Yeah, so… In this interview, we are going to talk a little bit about…

7 00:02:21.129 00:02:24.559 Awaish Kumar: Brainforge, and then we are going to explore more…

8 00:02:24.899 00:02:29.929 Awaish Kumar: About your past experiences and projects

9 00:02:30.129 00:02:35.729 Awaish Kumar: that you have worked on. So, my name is Awaish, and I’m leading the data engineering

10 00:02:36.089 00:02:44.289 Awaish Kumar: service here at Brainforge, and Brainforge is a data and AI consultancy service,

11 00:02:45.159 00:02:49.139 Awaish Kumar: We are operating remotely, with most of

12 00:02:49.469 00:02:51.629 Awaish Kumar: With, like, most of the clients.

13 00:02:51.979 00:02:54.049 Awaish Kumar: I’m based in the US.

14 00:02:54.989 00:03:01.699 Awaish Kumar: But the employees are across the world, from… Asia,

15 00:03:02.509 00:03:12.329 Awaish Kumar: Like, the USA, so… That’s basically how we work remotely, so there’s a lot of collaboration.

16 00:03:12.939 00:03:18.699 Awaish Kumar: async collaboration, like, we write documentation, post in Slack.

17 00:03:18.899 00:03:25.609 Awaish Kumar: And obviously, since the clients are in the US, the expectation is that we are

18 00:03:26.139 00:03:29.429 Awaish Kumar: overlapping some hours with our clients.

19 00:03:29.639 00:03:34.039 Awaish Kumar: So that’s basically it for… an overview of Brainforge.

20 00:03:34.040 00:03:34.840 Godwin Ekainu: Beautiful.

21 00:03:34.840 00:03:36.999 Awaish Kumar: If you can introduce yourself.

22 00:03:37.920 00:03:46.449 Godwin Ekainu: Okay, so my name is Godwin Ekainu, and I’m a data engineer. I have a background in computer science and artificial intelligence.

23 00:03:46.560 00:04:01.110 Godwin Ekainu: I have over 5 years of experience in data engineering, helping teams and organizations build data pipelines, ETL pipelines, and set up the data analytics stack from the ground up.

24 00:04:01.320 00:04:16.940 Godwin Ekainu: Collaborating with stakeholders across the organization to build out their analytics needs, monitor their data based on their requirements, refine their requirements, and also build scalable data products.

25 00:04:17.010 00:04:27.379 Godwin Ekainu: I currently work at Quidax. I’ve been at Quidax for 2 plus years. Quidax is a crypto exchange company focused on the West African region.

26 00:04:27.470 00:04:34.360 Godwin Ekainu: And I’ve been in data engineering there for about… for over 2 years now. 2 years and 10 months, to be precise.

27 00:04:34.540 00:04:52.609 Godwin Ekainu: As a data engineer at Quidax, I helped build out their data analytics stack from the ground up, which is setting up the entire data warehouse, ingesting data from various sources into our warehouse, discussing with stakeholders, and

28 00:04:52.690 00:04:58.600 Godwin Ekainu: Refining requirements, understanding those requirements, and building models based on those requirements on top of our warehouse.

29 00:04:58.660 00:05:17.300 Godwin Ekainu: Quidax currently works mainly with GCP, so GCP for the data stack, Python for ETL, SQL for transformation. We build our warehouse on top of BigQuery, and we ingest data from AWS, the production database, into GCP

30 00:05:17.300 00:05:31.589 Godwin Ekainu: Using Datastream, and from there, we write our transformations on top of dbt, to fit the business use cases. We also have external ETL pipelines that we build to extract data from external sources using

31 00:05:31.630 00:05:35.620 Godwin Ekainu: On top of, we write those using serverless functions.

32 00:05:35.720 00:05:41.699 Godwin Ekainu: And also, our entire data stack is deployed using

33 00:05:41.940 00:05:43.120 Godwin Ekainu: Terraform.

34 00:05:43.200 00:06:01.890 Godwin Ekainu: To be precise. Prior to joining Quidax, I worked at MetricStyle. MetricStyle was a blockchain analytics company, and we focused more on helping blockchain companies to derive insights, build models, basically. As a data engineer at MetricStyle, I helped them build

35 00:06:01.930 00:06:05.699 Godwin Ekainu: data models for various blockchain protocols,

36 00:06:05.970 00:06:09.960 Godwin Ekainu: Or companies, per se. So companies like,

37 00:06:10.080 00:06:15.600 Godwin Ekainu: built models, basically, to analyze transactions, analyze,

38 00:06:15.780 00:06:25.509 Godwin Ekainu: customers, transactions, customers, understand customers’ behavior on top of their platforms, and I worked mainly with Snowflake,

39 00:06:26.560 00:06:39.680 Godwin Ekainu: Python, SQL, then dbt too, at MetricStyle. Prior to MetricStyle, I worked at Lestego as a solo engineer, so Lestego was, a…

40 00:06:40.010 00:06:49.430 Godwin Ekainu: anti-money laundering company, platform as a service, basically, so I helped them to build the entire stack, and at Lestego I worked mainly with AWS, so…

41 00:06:49.430 00:07:06.020 Godwin Ekainu: Since it was a small team, I did a lot of hands-on roles, from back-end engineering to data engineering to infrastructure engineering, at Lestego. Prior to Lestego, I worked at Hamoya. Hamoya was the first professional company I worked at, and at Hamoya, I was doing mainly data

42 00:07:06.020 00:07:23.309 Godwin Ekainu: and platform engineering roles. So, I did the data side and also the platform engineering role. Hamoya was a consulting company, too. So we were building internal tools, but we were also working with clients externally to help them with their data and AI needs.

43 00:07:23.310 00:07:40.310 Godwin Ekainu: And I led the entire architecture and platform team, the data and platform team, and helped the team, and our clients, to build data tools, data solutions, and also AI solutions, so from machine learning operations,

44 00:07:40.310 00:07:53.720 Godwin Ekainu: data platform engineering, platform engineering, DevOps, I was hands-on throughout, leading the architectural decisions, implementation decisions, and also implementing some of our

45 00:07:53.740 00:07:58.329 Godwin Ekainu: work, all kinds, so that’s, like, a basic rundown of my experience.

46 00:08:00.420 00:08:09.590 Awaish Kumar: Okay. So, like… Can you give me an example of a…

47 00:08:10.100 00:08:14.169 Awaish Kumar: project, where… You have,

48 00:08:15.760 00:08:20.649 Awaish Kumar: Like, you think that was the most complex project in terms of data pipelines and all of that?

49 00:08:23.370 00:08:34.620 Godwin Ekainu: So… I think I’ve worked on a lot of complex projects, but I think the most recent one was… I’ll say I worked with my team to build out, like,

50 00:08:34.640 00:08:45.410 Godwin Ekainu: a fraud analytics pipeline for our organization, at Quidax currently, and my role in that was designing the architecture,

51 00:08:45.440 00:08:47.560 Godwin Ekainu: basically, and also,

52 00:08:47.820 00:08:54.770 Godwin Ekainu: building, like, the streaming processing part of it, so it was a two-phase thing. So it started this way:

53 00:08:54.960 00:08:59.460 Godwin Ekainu: We noticed… so we had just finished a migration, and when we did that migration

54 00:08:59.490 00:09:01.759 Godwin Ekainu: from our old system,

55 00:09:01.780 00:09:13.050 Godwin Ekainu: we noticed there was a bug in the… in the platform, so we called it the precision bug, basically, and we noticed some customers were trying to take advantage of that.

56 00:09:13.070 00:09:23.050 Godwin Ekainu: So, in the first stage, we hopped on a call, like an on-call, to try and figure out how to stop this, because we did not have any reconciliation on the

57 00:09:23.240 00:09:31.399 Godwin Ekainu: back-end side of things, so we decided to create, like, a simple, fraud detection pipeline quickly, and in…

58 00:09:31.520 00:09:33.999 Godwin Ekainu: in a short amount of time.

59 00:09:34.050 00:09:43.589 Godwin Ekainu: So I worked with my team to design the architecture for that, so we did it in two phases. The first phase was more of a quick process, which was

60 00:09:43.590 00:09:56.679 Godwin Ekainu: doing everything on top of GCP, so it wasn’t really… it wasn’t really real time. We were already ingesting our data into BigQuery using CDC, so we just worked based on the transactions coming into the warehouse.

61 00:09:56.680 00:10:14.450 Godwin Ekainu: In real time, we created, like, a simple, what do you call it, materialized view on top of BigQuery, based on rules. So we worked with our market operations team and our finance team to build out rules, fraud detection rules, that could help us detect when,

62 00:10:14.700 00:10:19.020 Godwin Ekainu: what do you call it, when

63 00:10:19.250 00:10:34.370 Godwin Ekainu: users were trying to take advantage of the systems. So we came up with about 15 rules, and based on those rules, we built out a materialized view for our transactions to help us detect when transactions like this are happening.
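
A minimal sketch of the kind of rule-based flagging described here. The rules and field names below are hypothetical stand-ins (the conversation does not spell out the actual rule set); the real check ran as a materialized view on top of BigQuery rather than in Python:

```python
# Minimal sketch of rule-based transaction flagging. Rules and field
# names are illustrative, not the production rule set.

def flag_transactions(transactions, rules):
    """Return each transaction that trips at least one rule, with the rule names."""
    flagged = []
    for txn in transactions:
        hits = [name for name, check in rules.items() if check(txn)]
        if hits:
            flagged.append({"txn_id": txn["id"], "rules_hit": hits})
    return flagged

# Two illustrative stand-ins for the ~15 production rules:
RULES = {
    "oversized_trade": lambda t: t["type"] == "trade" and t["amount_usd"] > 50_000,
    "rapid_withdrawal": lambda t: t["type"] == "withdrawal" and t["seconds_since_trade"] < 60,
}

transactions = [
    {"id": 1, "type": "trade", "amount_usd": 120_000, "seconds_since_trade": 0},
    {"id": 2, "type": "withdrawal", "amount_usd": 900, "seconds_since_trade": 12},
    {"id": 3, "type": "trade", "amount_usd": 40, "seconds_since_trade": 0},
]
print(flag_transactions(transactions, RULES))
# → [{'txn_id': 1, 'rules_hit': ['oversized_trade']},
#    {'txn_id': 2, 'rules_hit': ['rapid_withdrawal']}]
```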

64 00:10:34.370 00:10:42.530 Godwin Ekainu: And it helped us to immediately stop or block out about $2 million in losses, prevented, because

65 00:10:42.530 00:10:54.469 Godwin Ekainu: users could trade, but we blocked out withdrawals, so when those trades were happening, we could see them in real time, as they were happening, and we also prevented those users from drawing the money out of the

66 00:10:54.510 00:11:12.810 Godwin Ekainu: platform. So once that side was done, we then sat down and we architected the entire thing to be more scalable. So in this second phase, we built out a more real-time system that enabled the platform to detect it on its own and block out those transactions.

67 00:11:12.820 00:11:21.959 Godwin Ekainu: We did it on top of GCP, but this time we followed the streaming route, so we were streaming data from the production database into

68 00:11:22.080 00:11:28.019 Godwin Ekainu: Google Cloud Pub/Sub, and once that got into Google Cloud Pub/Sub,

69 00:11:28.190 00:11:36.080 Godwin Ekainu: we also, what do you call it, used Apache Beam on top of Dataflow,

70 00:11:36.230 00:11:54.200 Godwin Ekainu: as our streaming processing pipeline. So we wrote out all the rules we needed for that process on top of the streaming processing pipeline, which was Apache Beam. For the end destination… so, it happened in two ways. So, one side of the,

71 00:11:54.360 00:12:02.000 Godwin Ekainu: how do you call it? One delivery destination was a Slack channel, to alert us of any,

72 00:12:02.660 00:12:09.270 Godwin Ekainu: what do you call this? Any deals that were happening that were… that were caught by our…

73 00:12:09.380 00:12:13.959 Godwin Ekainu: what do you call it, our rules. Then the other side was to, the,

74 00:12:14.110 00:12:18.570 Godwin Ekainu: The platform itself to help prevent this,

75 00:12:18.860 00:12:22.059 Godwin Ekainu: what do you call? This, this…

76 00:12:22.440 00:12:32.310 Godwin Ekainu: bug from being exploited by our customers. So, when that was done, we migrated from the previous method we had been running it

77 00:12:32.420 00:12:39.379 Godwin Ekainu: to this new method, and that has helped us to block over $3 million in losses
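
The two delivery destinations described here (a Slack alert channel plus a signal back to the platform so it can block the transaction) can be sketched in pure Python; the real pipeline ran on Pub/Sub and Apache Beam on Dataflow, so the rule and both sinks below are stand-ins:

```python
# Sketch of fanning a flagged event out to two destinations: an alerting
# channel and the platform itself. Rule, sinks, and field names are
# illustrative stand-ins for the Pub/Sub + Beam pipeline.

def route_event(event, is_suspicious, alert_sink, platform_sink):
    """Send a suspicious event to both destinations; pass clean events through."""
    if is_suspicious(event):
        alert_sink.append({"alert": event["id"]})      # e.g. a Slack message
        platform_sink.append({"block": event["id"]})   # e.g. a block instruction
        return True
    return False

alerts, blocks = [], []
is_suspicious = lambda e: e["amount_usd"] > 10_000     # placeholder rule

stream = [{"id": "t-1", "amount_usd": 25_000}, {"id": "t-2", "amount_usd": 50}]
for event in stream:
    route_event(event, is_suspicious, alerts, blocks)

print(alerts, blocks)  # → [{'alert': 't-1'}] [{'block': 't-1'}]
```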

78 00:12:39.560 00:12:48.639 Godwin Ekainu: Based on our new systems. And I… that’s, like, one of the most recent complex projects I’ve worked on. Prior to that, I helped

79 00:12:48.880 00:13:04.470 Godwin Ekainu: I mentioned we did the migration, so prior to that, I worked with my team, basically, because when we did our migration, we had to re-architect our entire data pipeline to fit the new business logic of our new system.

80 00:13:04.550 00:13:23.769 Godwin Ekainu: So I worked with my team to design the architecture for that. Then, after I designed that, I handled the entire ingestion and transformation phase of that project. So I built… I’ll say I built out our entire warehouse, our entire data stack, from the ground up, as in…

81 00:13:24.070 00:13:29.199 Godwin Ekainu: In the past couple of months, I’ve worked with my team, collaborated with the team.

82 00:13:29.200 00:13:29.850 Awaish Kumar: Perfect.

83 00:13:30.460 00:13:31.440 Awaish Kumar: Sorry. Whoops.

84 00:13:31.740 00:13:35.019 Awaish Kumar: What are the tools and technologies that you used?

85 00:13:37.420 00:13:42.519 Godwin Ekainu: So, for ingestion, I used… I used Datastream.

86 00:13:42.780 00:13:46.440 Godwin Ekainu: As I mentioned earlier, we’re a GCP shop.

87 00:13:46.900 00:13:56.810 Godwin Ekainu: So, most of our entire data stack… So, the data is on GCP, the application is on AWS, so we use two platforms for that.

88 00:13:56.810 00:14:11.759 Godwin Ekainu: So on GCP, we use Datastream for ingestion, that is, from the production database into the warehouse. For other external ingestion… for that kind of ingestion, we use either Airbyte,

89 00:14:11.920 00:14:16.159 Godwin Ekainu: or we use, custom Python scripts.

90 00:14:16.370 00:14:25.919 Godwin Ekainu: on top of our Cloud Run, Google Cloud Run, to ingest it into our warehouse. So, we currently ingest in multiple ways, basically, from production.

91 00:14:26.930 00:14:29.369 Awaish Kumar: How do you orchestrate your Python scripts?

92 00:14:30.220 00:14:41.090 Godwin Ekainu: So we just, use scheduling. So we just schedule it to run. So, there are… it’s in two ways, basically. There are scripts where we… where we schedule them to run.

93 00:14:42.870 00:14:50.210 Godwin Ekainu: on Google Cloud, basically, if you’re using Cloud Run, you can schedule your script to run at certain intervals, or you could…

94 00:14:50.210 00:14:55.029 Awaish Kumar: What service do you use in Google Cloud to schedule it?

95 00:14:55.380 00:14:56.810 Godwin Ekainu: Cloud Scheduler.

96 00:14:57.820 00:14:58.520 Awaish Kumar: Okay.

97 00:14:58.520 00:14:59.869 Godwin Ekainu: Yeah, Cloud Scheduler.

98 00:15:03.520 00:15:20.500 Godwin Ekainu: Then, the other type of serverless script we use is event-based, so once an event comes into the endpoint we’re watching, it triggers the Cloud Run job to run and send data to the warehouse. So, that is,

99 00:15:20.650 00:15:37.500 Godwin Ekainu: I’d say that is a real-time, event-based system we use, so those are, like, the three ways we ingest into the warehouse. Then once it gets into the warehouse, depending on the business need or logic, we build out analytics data models

100 00:15:37.500 00:15:47.029 Godwin Ekainu: for various teams. Personally, I build out models for… we have the business intelligence team, which powers…

101 00:15:47.030 00:15:49.020 Awaish Kumar: How do you structure your dbt project?

102 00:15:50.330 00:15:53.000 Godwin Ekainu: Sorry, can you expand on that?

103 00:15:53.830 00:15:56.370 Awaish Kumar: How do you structure your dbt project?

104 00:15:56.920 00:15:59.299 Godwin Ekainu: Yeah, yeah. Structuring in what way?

105 00:16:00.030 00:16:05.769 Awaish Kumar: No, no, I mean, how do you… like, there are… in the dbt project, you must have some.

106 00:16:05.770 00:16:07.170 Godwin Ekainu: Okay, okay, okay, okay.

107 00:16:07.170 00:16:07.980 Awaish Kumar: folders…

108 00:16:08.940 00:16:10.060 Godwin Ekainu: Yeah, yeah, so…

109 00:16:10.710 00:16:17.460 Godwin Ekainu: So, we have, we structure it in a very simple way, so we have,

110 00:16:17.600 00:16:31.059 Godwin Ekainu: the staging layer, and once data gets into the data warehouse, we build out the staging layer, which aligns with the raw layer in our warehouse. So in our… in our staging layer, we have all the raw data in that layer.

111 00:16:31.060 00:16:40.570 Godwin Ekainu: And from that staging layer, we have the, what do you call it? The intermediate layer, so it follows the marts… staging, intermediate, and marts layers.

112 00:16:40.660 00:16:51.070 Godwin Ekainu: Which can also be described as, bronze, silver, and gold layer. So in the intermediate layer, we break out the models into, several teams.

113 00:16:51.070 00:17:03.550 Godwin Ekainu: So we have the models for finance, we have the models for marketing, we have the models for the other teams, market operations or business intelligence. So for each team, in that intermediate layer, we build out separate

114 00:17:03.720 00:17:19.189 Godwin Ekainu: directories, and build out their models using either the star schema method or the Snowflake… Snowflake schema method, basically, to build out models for the separate teams.
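
Sketched as a directory tree, the layout described here would look something like the following (folder names are illustrative, not taken from the actual project):

```
models/
├── staging/                 # raw layer (bronze)
│   └── stg_*.sql
├── intermediate/            # silver, one directory per team
│   ├── finance/
│   ├── marketing/
│   └── market_operations/
└── marts/                   # gold
    └── business_intelligence/
```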

115 00:17:19.880 00:17:24.389 Awaish Kumar: How do you ensure in dbt… how would you ensure that your…

116 00:17:25.030 00:17:29.169 Awaish Kumar: Project structure matches with your data warehouse structure.

117 00:17:31.220 00:17:33.610 Godwin Ekainu: I don’t understand the question.

118 00:17:34.770 00:17:40.539 Awaish Kumar: As you mentioned, in your dbt project, you have a raw layer, staging, intermediate parts,

119 00:17:40.960 00:17:49.950 Awaish Kumar: and in the marts, multiple folders. How would you ensure that the same structure follows in the database, that you have a raw schema,

120 00:17:50.060 00:18:02.950 Awaish Kumar: a raw database… like, for example, if we take Snowflake as an example, we have raw, marts, intermediate, these different databases, and in marts, you have maybe multiple folders, like, multiple datasets, like,

121 00:18:03.150 00:18:15.620 Awaish Kumar: marketing, for finance, right, accounting, analytics, sales. So, how would you ensure that the project structure reflects

122 00:18:16.020 00:18:17.799 Awaish Kumar: database structure.

123 00:18:19.280 00:18:24.119 Godwin Ekainu: So, I don’t know if I’m getting your question correctly, so in…

124 00:18:24.700 00:18:28.719 Godwin Ekainu: Just like how dbt does it, so we use the naming convention

125 00:18:28.870 00:18:35.450 Godwin Ekainu: for that, so, for the marketing team’s datasets, we have the marketing…

126 00:18:35.450 00:18:39.370 Awaish Kumar: Let me explain a little bit. So, whenever you run a dbt job, right,

127 00:18:39.370 00:18:39.830 Godwin Ekainu: Yeah.

128 00:18:39.830 00:18:44.180 Awaish Kumar: it figures out the database, what database to use, what…

129 00:18:44.430 00:18:54.030 Awaish Kumar: Right? When you run a dbt job, basically, it figures out that whatever query you have in your dbt model.

130 00:18:54.260 00:18:57.539 Awaish Kumar: So that will be a table, for example. Then where

131 00:18:58.070 00:19:06.359 Awaish Kumar: table should live. So, based on some rules, it figures out what should be the database, what should be the schema, and what should be the name of the table.

132 00:19:06.690 00:19:07.540 Godwin Ekainu: Yeah.

133 00:19:07.700 00:19:12.540 Awaish Kumar: How would you… Where would you write those rules? Where would you ensure?

134 00:19:12.740 00:19:13.410 Awaish Kumar: that,

135 00:19:13.410 00:19:14.160 Godwin Ekainu: Okay.

136 00:19:14.750 00:19:22.469 Godwin Ekainu: So, you first define the project’s settings in the, I think, dbt_project

137 00:19:22.580 00:19:37.510 Godwin Ekainu: YAML file, then you can build out your, what do you call it, your macros to help you with your naming and your other conventions. So basically, macros help us with defining the names, the structure, and everything

138 00:19:37.650 00:19:41.150 Godwin Ekainu: For how everything is done in the warehouse.
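
For illustration, per-directory schema configs in dbt_project.yml are one common way to make the project tree mirror the warehouse layout; the project and folder names below are invented:

```yaml
# dbt_project.yml (fragment), with hypothetical project/folder names
models:
  my_project:
    staging:
      +schema: raw            # staging models land in the raw schema
    intermediate:
      finance:
        +schema: finance
      marketing:
        +schema: marketing
    marts:
      +schema: marts
```

Note that by default dbt concatenates the target schema with the custom schema (e.g. `analytics_finance`); overriding the `generate_schema_name` macro is the usual way to use the custom schema name exactly as written.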

139 00:19:42.680 00:19:46.399 Awaish Kumar: What is the concept of dbt seeds?

140 00:19:47.470 00:19:51.490 Godwin Ekainu: dbt seeds… it is basically for storing static data.

141 00:19:51.680 00:19:55.930 Godwin Ekainu: So you just… you have static data.

142 00:19:56.080 00:19:58.210 Godwin Ekainu: That helps you to,

143 00:19:58.420 00:20:10.380 Godwin Ekainu: how would I put it… that you can reference from your other models, so that data doesn’t change; it just contains information that maybe can change, let’s say, once in two months, or never changes.

144 00:20:10.660 00:20:11.580 Godwin Ekainu: To be precise.
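
A minimal illustration of a seed (file, model, and column names are invented): a small CSV checked into the project under `seeds/`, loaded with `dbt seed`, and then referenced like any other model:

```sql
-- seeds/country_codes.csv contains, e.g.:
--   code,country
--   NG,Nigeria
--   GH,Ghana

-- models/staging/stg_users_with_country.sql
select u.*, c.country
from {{ ref('stg_users') }} as u
left join {{ ref('country_codes') }} as c
  on u.country_code = c.code
```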

145 00:20:12.400 00:20:13.210 Awaish Kumar: Okay.

146 00:20:13.810 00:20:18.310 Awaish Kumar: I think I’m good with the interview, like,

147 00:20:18.770 00:20:25.320 Awaish Kumar: And these are all the questions I had, like, so, do you have any questions for me?

148 00:20:25.930 00:20:35.600 Godwin Ekainu: Okay, so my questions are basically, based on Brainforge and the team and the projects, basically. So, what kind of projects do…

149 00:20:35.800 00:20:38.029 Godwin Ekainu: Brainforge handle?

150 00:20:38.030 00:20:43.489 Awaish Kumar: Yeah, we normally have similar projects, as you said, like, we have…

151 00:20:44.420 00:20:46.680 Awaish Kumar: For example, projects,

152 00:20:47.100 00:21:03.280 Awaish Kumar: To build the data foundations for some companies, so we maybe have medium to large-scale enterprises where some people will have some data warehouse, and we might help them optimize it, and we… some people might…

153 00:21:03.450 00:21:08.049 Awaish Kumar: They will not have the data foundation, and we just help them build foundations for their data.

154 00:21:08.160 00:21:14.120 Awaish Kumar: So… these are the kinds of projects which Brainforge has.

155 00:21:15.100 00:21:17.329 Godwin Ekainu: Okay, so.

156 00:21:18.250 00:21:27.830 Awaish Kumar: So, like… similarly, we will ingest data from somewhere, use Snowflake as the warehouse, use dbt for transformation. We have Dagster, or…

157 00:21:27.940 00:21:36.409 Awaish Kumar: maybe some teams might use Airflow, but we internally use Dagster, so… to orchestrate our…

158 00:21:36.910 00:21:39.450 Awaish Kumar: Custom scripts and things like that.

159 00:21:40.110 00:21:51.130 Godwin Ekainu: Yeah, Dagster is nice, I like it because it’s lightweight, it’s easy to use, yeah. So that’s, like, a great choice. So for,

160 00:21:51.180 00:22:00.519 Godwin Ekainu: for the clients, now, how does the team… how do the team interact with the clients? You mentioned you can… you overlap, with the US time zones,

161 00:22:00.520 00:22:05.510 Awaish Kumar: So teams work remotely, they work async, they communicate in Slack.

162 00:22:05.570 00:22:09.950 Awaish Kumar: We write documentation in Notion, or whatever, on GitHub,

163 00:22:10.000 00:22:24.690 Awaish Kumar: whatever is the best for the team, and then you… then we have to collaborate, right? We have stand-ups, and we have client meetings, right? So… and then you might be also working with your colleagues, so you have to collaborate with them.

164 00:22:24.690 00:22:36.749 Awaish Kumar: But for that to happen, you might have to overlap some… some Eastern… some hours in Eastern time zone, so that you can attend your meetings, attend… talk to your colleagues, and after that, I think,

165 00:22:37.170 00:22:43.440 Awaish Kumar: That’s all, right? Once you are clear on what you do, you can work on your own time zone until the…

166 00:22:43.590 00:22:44.940 Awaish Kumar: The next stand up.

167 00:22:45.630 00:22:52.749 Godwin Ekainu: Okay, that’s… that’s great. At a time, do you handle multiple projects, or are you only focused on a single project?

168 00:22:54.580 00:22:56.430 Awaish Kumar: Yeah, normally each team member is…

169 00:22:56.720 00:22:58.800 Awaish Kumar: working on 2 to 3 projects.

170 00:22:59.400 00:23:00.210 Godwin Ekainu: Okay.

171 00:23:00.330 00:23:01.850 Awaish Kumar: So I just…

172 00:23:01.940 00:23:04.040 Godwin Ekainu: How do you structure your time around that?

173 00:23:05.330 00:23:17.700 Awaish Kumar: So we have multiple, like, clients. For each client, we might need few hours of your data engineering work, few hours of data analyst work, few hours of PMing, right? So…

174 00:23:17.910 00:23:29.099 Awaish Kumar: That’s how we structure, like, okay, someone might be working 20 hours in a week on one project, for one client, but… and 10 for another, and another 10 for some other clients.

175 00:23:29.100 00:23:29.980 Godwin Ekainu: backgrounds.

176 00:23:29.980 00:23:35.319 Awaish Kumar: They divide their time between 2 to 3 clients.

177 00:23:35.680 00:23:39.159 Godwin Ekainu: Okay, yeah. See, that’s great, that’s fine.

178 00:23:39.460 00:23:45.260 Godwin Ekainu: So, based on the interview process, what are the stages like? So, after this stage, what’s next?

179 00:23:45.260 00:24:02.549 Awaish Kumar: I think after this interview, I’m going to provide my feedback to Rico, and Rico from operations will get back to you with the next steps, but normally steps are, like, you might have, some take-home test, or then you might have a technical interview, and after that, you might also meet with one…

180 00:24:02.880 00:24:04.270 Awaish Kumar: other team member.

181 00:24:05.270 00:24:05.870 Godwin Ekainu: Okay.

182 00:24:06.240 00:24:08.420 Awaish Kumar: And then, yeah, after 2 to 3…

183 00:24:08.900 00:24:12.560 Awaish Kumar: more rounds, I will have the answer.

184 00:24:13.340 00:24:15.770 Godwin Ekainu: Okay. That’s cool, that’s fine.

185 00:24:16.000 00:24:18.590 Godwin Ekainu: I think that’s all for my questions.

186 00:24:19.000 00:24:22.079 Awaish Kumar: Yeah, okay, great. Yeah, thank you for your time. Thank you.

187 00:24:22.820 00:24:26.410 Awaish Kumar: Yeah, as I mentioned, Rico will get back to you. Thank you.

188 00:24:26.410 00:24:28.169 Godwin Ekainu: Okay, thank you, nice meeting you.