Meeting Title: Brainforge x CTA: Weekly! Date: 2026-03-20 Meeting participants: Kyle Wandel, Chi Quinn, Uttam Kumaran, Katherine Bayless, Ashwini Sharma


WEBVTT

1 00:00:43.490 00:00:44.520 Uttam Kumaran: Hello.

2 00:00:45.310 00:00:46.500 Kyle Wandel: Morning.

3 00:00:46.940 00:00:47.860 Chi Quinn: Mornings!

4 00:00:48.230 00:00:49.099 Kyle Wandel: How’s it going?

5 00:00:49.840 00:00:50.580 Uttam Kumaran: Good.

6 00:00:53.230 00:00:54.550 Katherine Bayless: Happy Friday!

7 00:00:58.080 00:00:58.690 Kyle Wandel: That was a couple.

8 00:00:59.560 00:01:01.399 Kyle Wandel: I was gonna say, how’s the conference yesterday, Katherine?

9 00:01:01.630 00:01:10.249 Katherine Bayless: It was good. I mean, I… so I’ve gone for a number of years now, because it’s free, and it’s in the city, right? It’s… I mean…

10 00:01:10.250 00:01:21.850 Katherine Bayless: it’s not the most useful, to be totally honest, but it’s nice to just get out of the office and talk about things with other nerds. So I don’t think I came home with any super exciting takeaways, although,

11 00:01:21.960 00:01:36.800 Katherine Bayless: the roundtable portion of the thing that I went to for, like, the, like, the little association corner that our account’s in, there was a guy there from a nonprofit called Compassion International that I’ve met at a few of these before, actually. He’s really nice.

12 00:01:36.800 00:01:43.939 Katherine Bayless: But, he was talking about their AI rollout, and, like, their company has, like, 4,000 people.

13 00:01:43.940 00:01:58.899 Katherine Bayless: But they have everything done through, like, bedrock, so, like, they’re, you know, using the models through there, making them available… accessible to staff that way, and so they had to put their own sort of, like, AI ethics policy in place, and they decided to use Jesus’ Sermon on the Mount.

14 00:01:59.790 00:02:07.860 Katherine Bayless: And, it’s working! And I’m like, fascinating. Interesting. Makes a lot of sense, you’re a religious organization.

15 00:02:07.990 00:02:10.270 Katherine Bayless: Doesn’t really do the rest of the room much good.

16 00:02:10.270 00:02:11.000 Uttam Kumaran: Yeah.

17 00:02:11.000 00:02:18.350 Katherine Bayless: But, yeah, yeah, it was interesting. It’s interesting. But yeah, overall, I mean, you know, it is what it is.

18 00:02:19.840 00:02:44.099 Katherine Bayless: I think it’ll be more interesting, honestly, next year, when more of these, like, nonprofits have had, like, the support from companies like AWS to really build stuff out using AI, because, like, I mean, the development barriers are a lot lower than they used to be. So there are a few that are kind of, like, really getting out ahead of it, like us, perhaps. But, but yeah, I think next year could stand to be really interesting.

19 00:02:45.620 00:03:03.099 Kyle Wandel: Yeah, the, I don’t know if you’ve seen the, stuff for ServiceNow Control Center, AI Control Center? I thought that was interesting. So, essentially, their whole thing is, being able to automate your agents, basically, or control your agents from one control, tower, basically, landing zone. I thought that was interesting.

20 00:03:03.930 00:03:12.289 Katherine Bayless: Yeah, I think… so the AWS ProServe guys, they were talking a lot about that, too, with Jay. Like, I think that sort of…

21 00:03:12.930 00:03:25.200 Katherine Bayless: provisioning access for agents is starting to, like, get a little bit of traction. Like, the idea that you can define an agent that would have, you know, up to your permissions level, right, but you could also scope it more narrowly.

22 00:03:26.120 00:03:27.360 Katherine Bayless: I don’t know, it’s gonna be.

23 00:03:27.360 00:03:34.580 Uttam Kumaran: Interesting. Yeah, interesting. Yeah, we’re experimenting with a lot of, like, broader agent execution of, like.

24 00:03:34.840 00:03:45.369 Uttam Kumaran: tickets, like, a plan, for example. It’s been interesting, we’re testing, like, Codex, cloud, and, like, Kirscher Cloud to, like, take on different activities.

25 00:03:45.880 00:03:52.179 Uttam Kumaran: very weird. But, like, it’s working here and there, so yeah, we’ve just been testing some stuff internally, just trying to, like.

26 00:03:52.300 00:03:53.469 Uttam Kumaran: Learn more.

27 00:03:54.010 00:04:11.500 Katherine Bayless: Yeah. I have not gotten as far into the agent, like, true, like, agent-to-agent type stuff, like, sub-agents as part of a, you know, process I’m working on, sure, but, like, something that is just running around in the background all the time, I haven’t… haven’t moved too far into that rabbit hole.

28 00:04:13.450 00:04:16.190 Katherine Bayless: I know Jay has, though.

29 00:04:16.190 00:04:26.019 Uttam Kumaran: Yeah, he told me he, like, has 8 chat windows up. I’m sure he’s, like, using, OpenClaw and, like, trying a bunch of stuff.

30 00:04:26.380 00:04:40.440 Katherine Bayless: Actually, okay, so, I think he has been smart enough to stay out of OpenClaw, but he was looking at the open brain thing earlier this week, because I was like, dude, if you could, like, find a way to use your little Dark Factory thing to mint a digital twin for, like, staff, right, that’d be awesome.

31 00:04:41.280 00:04:55.459 Katherine Bayless: And so he went and looked at the OpenBrain repo, and the first thing he did was file a Git issue, because, like, step one is, like, put your passwords in an Excel spreadsheet, and he lost his mind. Which is kind of fair, honestly. But yeah, so, he’s.

32 00:04:55.460 00:04:57.789 Uttam Kumaran: I never, I never heard about Open Brain.

33 00:04:57.790 00:04:58.316 Katherine Bayless: Come on.

34 00:04:58.860 00:05:03.130 Katherine Bayless: It was another Nate Jones, Kool-Aid sip for me. He did a…

35 00:05:03.130 00:05:03.660 Uttam Kumaran: Okay.

36 00:05:03.660 00:05:18.200 Katherine Bayless: about putting it together, and like, I mean, it seems interesting. I feel like you could probably do it all within AWS, versus he’s kind of got, like, a bunch of different components. But yeah, I kind of… I don’t know. I would like to put it together for myself, to be honest.

37 00:05:18.660 00:05:19.730 Uttam Kumaran: Interesting.

38 00:05:21.440 00:05:22.250 Katherine Bayless: Yeah.

39 00:05:25.070 00:05:32.529 Uttam Kumaran: Okay, cool. I know Awash is out today, but I wanted to just try to take some time, Kyle, to talk about any of the modeling

40 00:05:32.750 00:05:36.459 Uttam Kumaran: You know, feedback that…

41 00:05:37.000 00:05:50.439 Uttam Kumaran: you know, you had, and I think I have some Cortex changes in a good place that I’m gonna… I can kind of demo Monday, so that’s sort of my goal, and then I am still working on getting the…

42 00:05:50.610 00:05:54.349 Uttam Kumaran: Getting our, like, rollout plan into something, like, visual as well.

43 00:05:54.590 00:05:56.500 Uttam Kumaran: But maybe we can…

44 00:05:57.280 00:06:01.580 Uttam Kumaran: talk about some of the modeling changes, or Ashwin, if you have anything in particular we want to review.

45 00:06:03.540 00:06:15.820 Ashwini Sharma: Not exactly. Like, I’ve completed the dimensional modeling for the ExpoCAD data. It’s there in a PR; I’ll get it internally reviewed with Avish, and then I’ll merge it.

46 00:06:15.980 00:06:20.580 Ashwini Sharma: But if you want to take a look at it, I can share the link to the PR as well.

47 00:06:21.400 00:06:30.139 Kyle Wandel: Yep, I just… I just saw it right before this call, so I was… I have it open, I was taking a look at it now, so… it’s probably fine, it’s probably good to go, but I’ll review it really quickly, and then commit it.

48 00:06:31.090 00:06:31.850 Ashwini Sharma: Cool.

49 00:06:33.990 00:06:45.970 Ashwini Sharma: Yeah, and then I thought of, you know, the question that I wrote yesterday on Slack, and then I just checked how effective it is, and there’s an additional 40 records

50 00:06:46.270 00:06:49.980 Ashwini Sharma: that we are able to map to a particular member.

51 00:06:51.230 00:07:03.079 Ashwini Sharma: It does not increase the overall percentage; around 90% of records in the ExpoCAD data are correctly mapped to the correct organization in Remembers.

52 00:07:06.790 00:07:21.179 Kyle Wandel: Katherine, do you think that we should cover that today? Like, maybe, like, check in on, like, a more detailed approach to identity stitching? Like, what it looks like? Not, like, an update, but, like, what does it look like, and how do we kind of use it to match, basically?

53 00:07:21.570 00:07:30.469 Katherine Bayless: Yeah, I think… I think that makes actually good sense to dig in on. So, context-wise, too, for benefit of the group, so…

54 00:07:30.470 00:07:33.370 Katherine Bayless: Kyle’s working with the Innovation Awards team

55 00:07:33.370 00:07:58.090 Katherine Bayless: those awards will open July 1st, and that’s going to be our first kind of, like, real chance to get it right, because when a company applies for those awards, they get a discount if they are a member and an exhibitor, and so we’ll need to be able to sort of, like, consume the company name as provided by the person applying for the award, match it into those lists, and then, you know, reliably return the true-false

56 00:07:58.090 00:08:04.700 Katherine Bayless: Around whether or not they meet the discount criteria, which will save us money, because in the past, we’ve had to issue lots of refunds.
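The true/false discount check Katherine describes reduces to set membership once company names are normalized. A minimal sketch, with made-up names and lists (the real inputs would be the joined membership and exhibitor data):

```python
import re

def normalize(name: str) -> str:
    """Crude company-name normalization: lowercase, drop punctuation and common suffixes."""
    n = re.sub(r"[^a-z0-9 ]", "", name.lower()).strip()
    for suffix in (" inc", " llc", " ltd", " corp", " co"):
        if n.endswith(suffix):
            n = n[: -len(suffix)]
    return n.strip()

def discount_eligible(applicant_company: str, members: set, exhibitors: set) -> bool:
    """Discount applies only when the applicant's company is both a member and an exhibitor."""
    key = normalize(applicant_company)
    return key in members and key in exhibitors

# Illustrative lists; the real ones would come from the joined membership/exhibitor models.
members = {normalize(n) for n in ["Acme Corp", "Globex LLC"]}
exhibitors = {normalize(n) for n in ["Acme, Inc.", "Initech"]}

print(discount_eligible("ACME Corp.", members, exhibitors))  # prints True: Acme is in both lists
```

The hard part, as the rest of the call makes clear, is the name matching feeding those sets, not the boolean itself.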

57 00:08:04.700 00:08:21.889 Katherine Bayless: And so that July 1st deadline, right, is kind of, you know, working backwards from there. The exhibitor and membership data, getting those all joined up, would be useful. And also, I mean, Kyle’s also trying to do, like, the historical analysis with Bobby in market research, so…

58 00:08:21.930 00:08:25.810 Katherine Bayless: Yeah, I think if we want to dig in there, it makes sense to me.

59 00:08:27.900 00:08:28.650 Katherine Bayless: Okay.

60 00:08:33.049 00:08:37.109 Uttam Kumaran: Anything, Kyle, on the Fortune 500 piece?

61 00:08:37.609 00:08:38.569 Uttam Kumaran: Yeah.

62 00:08:38.570 00:08:39.780 Katherine Bayless: Seeds ready, finally.

63 00:08:39.789 00:08:57.399 Kyle Wandel: Yeah, yeah, it is. Like, both, that is in there, that is in the seeds file, and then also we added another one, because we added the TWICE Top Retailers, they look at electronics retailers, and so I actually went through that and pulled all of…

64 00:08:57.579 00:09:05.379 Kyle Wandel: the information from 1988, I believe, to 2025. So that should be updated and good.

65 00:09:05.819 00:09:11.539 Kyle Wandel: The library had a pretty extensive, throwback, and I just cleaned everything up, basically, so…

66 00:09:11.540 00:09:14.979 Katherine Bayless: Nice, nice, nice. Thank you for jumping in on that one again.

67 00:09:15.330 00:09:24.940 Kyle Wandel: Yep, and that’s in the raw archive, and then I saw… I wasn’t sure if it was correct, not correct, but… I saw that you guys created that seed schema, or that extra schema.

68 00:09:24.940 00:09:40.309 Kyle Wandel: And I wasn’t sure if I should put it there or not, but, I… essentially what I did is I integrated the S3 integration to that Seeds S3, or created a new integration to that bucket, and then just did it how I normally do it. If I should do something different, just let me know.

69 00:09:40.680 00:09:44.710 Uttam Kumaran: Okay, that’s fine. I think what I’ll do is I’ll create a ticket just to go confirm that.

70 00:09:45.150 00:09:46.530 Uttam Kumaran: see if it’s legacy.

71 00:09:46.950 00:09:47.660 Kyle Wandel: Okay.

72 00:09:52.780 00:09:53.800 Uttam Kumaran: Okay…

73 00:09:54.450 00:10:01.679 Uttam Kumaran: I think another piece that Catherine and I were talking about was maybe taking a first pass at, like, some…

74 00:10:01.790 00:10:16.089 Uttam Kumaran: Streamlit views of the data together. I don’t know, Chi, since you’ve been going through the dashboards, wondering, like, if maybe we could do, like, a session where we map out some, like, dashboard requirements together?

75 00:10:16.720 00:10:26.829 Uttam Kumaran: I think if… we have, like, a… I have a pretty decent format for, like, collecting those requirements, like, some of the questions, any styling, that way I can…

76 00:10:26.950 00:10:33.319 Uttam Kumaran: I’m gonna take a first pass at, like, a Streamlit version, using… but I just think, like.

77 00:10:33.440 00:10:38.989 Uttam Kumaran: it would help me a lot if I could just, like, get some of the requirements from you, and that way you can be a good, like.

78 00:10:39.170 00:10:46.439 Uttam Kumaran: barometer for if we hit the mark, and then I’m gonna figure out all the Streamlit stitching that needs to go into executing on that.

79 00:10:46.600 00:10:49.579 Uttam Kumaran: Do you think that’d be fine?

80 00:10:50.110 00:11:03.360 Chi Quinn: Yeah, that would be, fine, because what I will say is I have the reports, like, the inventory is basically done, and I have it sorted out just based on what could… I guess, the best

81 00:11:03.490 00:11:12.770 Chi Quinn: reports that could be added to Streamlit, just based on the recent request. It’s mostly just CES data, very…

82 00:11:12.770 00:11:32.099 Chi Quinn: basic, high-level, count the number of attendees who participated in this, that, and the other. What I can do is I’ll send, like, just an example of the few reports that I think would be… would be qualified to go to Streamlit. I’ll send that in the chat, the Brainforge chat.

83 00:11:32.230 00:11:47.469 Chi Quinn: And also, I’ll add the requirements, like, what I… just from what I gathered from the data, requirements, and how it’s, the design of the report, so I’ll add that information if that’s helpful.

84 00:11:47.470 00:11:48.170 Uttam Kumaran: Yes.

85 00:11:48.380 00:11:50.730 Chi Quinn: Yeah, yeah, so I’ll do that today, actually.

86 00:11:51.170 00:11:54.460 Uttam Kumaran: Okay. Yeah, I don’t know, do you… I kind of am like…

87 00:11:55.070 00:11:58.940 Uttam Kumaran: Do you think you’d be down to still do, like, a 30-minute thing on Monday?

88 00:11:59.040 00:12:10.579 Uttam Kumaran: just so I can get you to… I’m kind of just interested to see… get your thoughts overall about the Power BI mechanics, and, like, things about, like, filtering, and, like, I can kind of give you…

89 00:12:10.640 00:12:20.290 Uttam Kumaran: like, filtering, like, things about, like, should we put in descriptions of, like, metrics? Like, I think if we spend, like, 30 minutes on Monday, I can probably just, like.

90 00:12:20.610 00:12:27.610 Uttam Kumaran: I think I get a lot of value so that I can try to drive a couple of these out. So I just don’t want to make… I don’t want to make too many,

91 00:12:28.070 00:12:30.140 Uttam Kumaran: I don’t want to take too much creative liberty.

92 00:12:30.240 00:12:36.420 Uttam Kumaran: Because, like, I’ll… I’m gonna just be trying to figure out the end-to-end streamlit piece.

93 00:12:36.650 00:12:41.989 Uttam Kumaran: But if I can have a target to hit, then that’ll be one thing off of my brain.

94 00:12:42.520 00:12:46.800 Chi Quinn: Absolutely, yeah, so let me do that. I’ll send you the Power BI information.

95 00:12:47.180 00:12:47.870 Uttam Kumaran: Okay, okay.

96 00:12:47.870 00:12:49.760 Chi Quinn: Items for that, and the requirements.

97 00:12:50.440 00:12:51.630 Uttam Kumaran: Okay, okay, great.

98 00:12:55.560 00:12:56.360 Uttam Kumaran: Okay.

99 00:12:57.720 00:13:09.090 Uttam Kumaran: Cool. So I feel pretty good about getting Streamlit up. I also feel pretty good about our Monday call; I have a bunch of changes I need to push for

100 00:13:09.210 00:13:17.070 Uttam Kumaran: Cortex, it’s just sitting in my… in my local. So that I can plan to take that time on Monday to do a demo.

101 00:13:17.220 00:13:19.319 Uttam Kumaran: Of more items.

102 00:13:20.660 00:13:25.810 Uttam Kumaran: That’ll be changes to the role, that’ll be a lot of the feedback that came from our Wednesday call.

103 00:13:26.020 00:13:38.880 Uttam Kumaran: And then, yeah, I’m gonna see… I’m gonna see if I can… we have a… we have a dashboarding use case too, right? For, like, the observability piece. So for that, I… I’m gonna also leverage as much in Streamlit as possible.

104 00:13:38.990 00:13:41.769 Uttam Kumaran: So that’s what I’ll present on Monday.

105 00:13:43.830 00:13:44.860 Katherine Bayless: That’ll be awesome.

106 00:13:45.250 00:13:51.070 Uttam Kumaran: Cool yeah, the other piece is, like,

107 00:13:51.660 00:13:55.579 Uttam Kumaran: even for developing on Snowflake,

108 00:13:55.740 00:14:04.199 Uttam Kumaran: I’m sort of thinking that maybe once after next week, I can also do a little bit of a demo of, like, just the process of potentially using

109 00:14:04.880 00:14:21.410 Uttam Kumaran: Cortex CLI, like, within VS Code or Cursor or whatever. It’s just gonna help you interact with Snowflake, issue queries, and there is actually a lot of really helpful logic in dbt.

110 00:14:21.570 00:14:29.190 Uttam Kumaran: So if you’re able to, like, use an agentic, like, IDE, like, like any of those, asking questions.

111 00:14:29.330 00:14:39.499 Uttam Kumaran: to Snowflake alongside the dbt code, I think is going to give you, like, a much richer response about the models. So I think, Kyle, like, some of the… some of the pieces in…

112 00:14:40.020 00:14:59.630 Uttam Kumaran: in Cortex, in the UI, questions about, like, models and logic. I’m not yet 100% confident it’s gonna be able to answer some of those. I’m gonna try to pack a lot of the context full, but, like, most of the context in the semantic views is gonna be all business-facing. So if you have a question of, like, how does this logic define what models to define.

113 00:14:59.830 00:15:05.039 Uttam Kumaran: I still feel like… doing it in, like, an IDE is gonna be, like.

114 00:15:05.720 00:15:09.669 Uttam Kumaran: the best shot, so I wanna try to make sure everybody has that set up.

115 00:15:10.080 00:15:17.830 Uttam Kumaran: So you can see, like, it’ll reference Snowflake, it’ll… to query something, and then it’ll reference the dbt models, and it’ll give you, like, a really rich

116 00:15:17.960 00:15:25.310 Uttam Kumaran: response about how something’s defined. And then I also think, Kyle, for your development process, if you’re like, oh, change this definition.

117 00:15:25.690 00:15:28.770 Uttam Kumaran: and open a PR, it’s also, like, super, super quick.

118 00:15:30.680 00:15:31.770 Katherine Bayless: That sounds awesome.

119 00:15:31.770 00:15:51.510 Kyle Wandel: That makes sense. I mean, I have the Snowflake client on mine. Cool. If we want to do it for IT or something, that’d be cool. I mean, I don’t know, if we want to do it with, like, even Ana and them, I don’t know if they would be willing, but they could be. Yeah, I didn’t think so, but yeah, I have it going. I actually used it the other day to do, the staging and stuff, so that was pretty cool.

120 00:15:52.020 00:15:53.210 Uttam Kumaran: Cool. Cool.

121 00:15:57.550 00:16:00.679 Uttam Kumaran: Great. Okay, that’s a… I mean, that’s the majority of what

122 00:16:01.210 00:16:06.189 Uttam Kumaran: I had to present on. I mean, I feel like we haven’t had any issues with dbt running.

123 00:16:06.250 00:16:24.990 Uttam Kumaran: seems like we’re sort of okay with… with sort of how Snowflake’s running. I think one thing I want to try to also present on Monday is… is taking our notes and the, sort of, the rollout plan. But I feel pretty… I feel like I can confirm that I’ll be there that week, so Katherine, I don’t know if we can also, maybe on Monday.

124 00:16:25.220 00:16:25.620 Katherine Bayless: Yeah, yeah.

125 00:16:25.620 00:16:29.310 Uttam Kumaran: We can decide to… Put something on the books, but that’d be great.

126 00:16:31.910 00:16:38.100 Uttam Kumaran: And yeah, I think that, like, Monday we can maybe just plan out, like, you know, sort of April a bit more. That’d be awesome.

127 00:16:38.540 00:16:43.449 Katherine Bayless: Yeah, yeah, that’s kind of what I would like to do as well, is just, like, figure out what’s the…

128 00:16:43.730 00:16:45.759 Katherine Bayless: What’s the up next on the docket?

129 00:16:47.530 00:16:49.819 Katherine Bayless: Okay, I’ll make note so I don’t forget.

130 00:17:06.990 00:17:11.139 Katherine Bayless: Okay, cool. Dig in on this identity stitching stuff, then?

131 00:17:12.540 00:17:15.280 Uttam Kumaran: Yeah, Ashwini, do you wanna…

132 00:17:15.810 00:17:17.170 Uttam Kumaran: Do you want to chat about it?

133 00:17:17.329 00:17:20.679 Ashwini Sharma: Sure, sure, one second, let me just open my file over here.

134 00:17:30.219 00:17:34.029 Ashwini Sharma: Let me share my screen, and then I’ll just walk you through what I’ve done.

135 00:17:36.349 00:17:52.829 Ashwini Sharma: I think it might be a little bit overwhelming to go through the code, so what I’m going to do is just walk you through the documentation of what has been done, right? So it’s easier to understand. So we have this exhibitors data, ExpoCAD data, for,

136 00:17:54.210 00:17:57.620 Uttam Kumaran: Can you zoom in one… just one tick, Ashwini?

137 00:18:00.890 00:18:02.150 Ashwini Sharma: Sure, plus…

138 00:18:02.480 00:18:04.090 Uttam Kumaran: I think it’s, like, Command Plus, or…

139 00:18:04.090 00:18:04.710 Kyle Wandel: There we go.

140 00:18:04.710 00:18:05.400 Ashwini Sharma: Yeah, yeah.

141 00:18:05.590 00:18:07.899 Ashwini Sharma: Is it… is it okay now?

142 00:18:09.370 00:18:14.170 Uttam Kumaran: Maybe… yeah, maybe… I think that’s fine, sorry, maybe that’s just me being annoying.

143 00:18:15.130 00:18:17.800 Ashwini Sharma: No, I, I can… Add one more.

144 00:18:18.880 00:18:20.620 Uttam Kumaran: Yeah, okay, that’s great. Thank you.

145 00:18:21.550 00:18:27.949 Ashwini Sharma: Okay, alright, yeah, so, there was this staging,

146 00:18:28.630 00:18:32.960 Ashwini Sharma: Basically, file, which is, just the exhibitor’s data, right?

147 00:18:33.380 00:18:44.479 Ashwini Sharma: this one, and it maps directly to the exhibitors.json file that’s there in several folders, right? And that has been cleaned, and the JSON has been

148 00:18:44.680 00:18:47.769 Ashwini Sharma: Transformed into column names, and

149 00:18:48.660 00:18:59.419 Ashwini Sharma: you know, just a standardization of that JSON into tabular data, renamed, and then exposed into the table. That’s what’s happening in the staging layer.

150 00:18:59.650 00:19:07.300 Ashwini Sharma: And then in the int layer, it’s going… intermediate layer, it’s going to, do the identity resolution, right?

151 00:19:07.540 00:19:12.080 Ashwini Sharma: So, what I do is, first of all, you take the exhibitor name.

152 00:19:12.290 00:19:16.170 Ashwini Sharma: Whatever is there, and then try to map it to an organization name.

153 00:19:16.930 00:19:27.279 Ashwini Sharma: And if it is mapping, well and good, we have the organization. If it is not mapping, we look into the aliases of those organizations, and then try to do the mapping.

154 00:19:27.730 00:19:30.040 Ashwini Sharma: If it maps, well and good. If not…

155 00:19:30.150 00:19:34.779 Ashwini Sharma: You normalize the website that’s associated with each of the

156 00:19:35.200 00:19:43.199 Ashwini Sharma: exhibitor in the raw data, and then try to map it with the customer links, which is where Remembers stores its data.

157 00:19:43.840 00:19:49.519 Ashwini Sharma: For those that match, good. If it is not matching, we’ll look into the email now.

158 00:19:49.700 00:19:54.360 Ashwini Sharma: If it is matching with email, good. Otherwise, go with the phone.

159 00:19:54.480 00:19:57.969 Ashwini Sharma: If it is matching with the phone.

160 00:19:58.190 00:20:06.679 Ashwini Sharma: Good, otherwise go with sort of a fuzzy matching between the organization name and the organization in Remembers.

161 00:20:06.680 00:20:14.380 Uttam Kumaran: Is everyone familiar with, like, with, like, for example, like, edit distance? Like, basically, some of these, some, yeah, some of these functions we can swap.

162 00:20:14.570 00:20:19.099 Uttam Kumaran: And I think what we can also do is kind of produce a report with, like, match rates by.

163 00:20:19.910 00:20:30.820 Uttam Kumaran: like, sort of what part of the waterfall, if helpful. Like, edit distance as well, and there’s a couple of others that we could do. I also… yeah, maybe you can continue, Ashwini, yeah.
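On Uttam's point about swappable similarity functions: edit distance is the classic one (Snowflake also exposes an EDITDISTANCE SQL function). A standalone sketch, with an illustrative acceptance threshold:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits (insert/delete/substitute) to turn a into b."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,                 # deletion
                curr[j - 1] + 1,             # insertion
                prev[j - 1] + (ca != cb),    # substitution (free if chars match)
            ))
        prev = curr
    return prev[-1]

def close_enough(a: str, b: str, max_ratio: float = 0.2) -> bool:
    """Accept a fuzzy match when the distance is a small fraction of the longer name."""
    return levenshtein(a.lower(), b.lower()) <= max_ratio * max(len(a), len(b))
```

The 0.2 ratio is an assumption for illustration; the "couple of others" Uttam mentions (e.g. Jaro-Winkler) would drop in behind the same `close_enough` signature.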

164 00:20:31.820 00:20:34.860 Ashwini Sharma: Yeah, and there is also, finally.

165 00:20:35.080 00:20:41.300 Ashwini Sharma: email domain, the domain that’s associated with the email ID, that is being matched to the website domain.

166 00:20:41.400 00:20:52.869 Ashwini Sharma: Right? And after all this match, I’m getting around, 90% of matches. If you want to, see something that’s,
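The cascade Ashwini just walked through (name, then alias, then website, then email, then phone, then fuzzy/domain matching) amounts to a first-match-wins waterfall. A sketch in Python with illustrative field names, assuming websites are stored as bare domains; the actual logic lives in the dbt intermediate layer:

```python
def resolve(exhibitor: dict, orgs: list) -> tuple:
    """First-match-wins cascade: try each rule in priority order against every org.

    Mirrors the order described on the call. Field names are illustrative, and
    websites are assumed to be stored as bare domains like "acme.com".
    """
    def norm(s):
        return (s or "").strip().lower()

    rules = [
        ("name_match",    lambda ex, o: norm(ex.get("name")) == norm(o.get("name"))),
        ("alias_match",   lambda ex, o: norm(ex.get("name")) in {norm(a) for a in o.get("aliases", [])}),
        ("website_match", lambda ex, o: norm(ex.get("website")) == norm(o.get("website")) != ""),
        ("email_match",   lambda ex, o: norm(ex.get("email")) == norm(o.get("email")) != ""),
        ("phone_match",   lambda ex, o: norm(ex.get("phone")) == norm(o.get("phone")) != ""),
        # a fuzzy name rule (e.g. edit distance under a threshold) would slot in here
        ("domain_match",  lambda ex, o: norm(ex.get("email")).split("@")[-1] == norm(o.get("website")) != ""),
    ]
    for method, rule in rules:
        for org in orgs:
            if rule(exhibitor, org):
                return org["id"], method
    return None, None
```

Recording which rule fired, as the real model does with its method column, is what makes the match-rate-by-method breakdown shown later possible.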

167 00:20:54.660 00:20:56.210 Ashwini Sharma: One second…

168 00:21:04.820 00:21:20.009 Ashwini Sharma: I know one thing frustrating about this Okta is… I don’t know if it’s frustrating for you, but for me, it is. You go over here in the tiles dashboard, and then you click on this icon, right? And then you think, okay, let it open, I’ll do something else while it opens.

169 00:21:20.650 00:21:23.189 Ashwini Sharma: And then you come back after some time, it didn’t open, right?

170 00:21:23.190 00:21:27.839 Uttam Kumaran: Wait, but so for GitHub, though, I just go to GitHub, and I log in.

171 00:21:29.050 00:21:30.619 Uttam Kumaran: Yeah, the SSO login.

172 00:21:30.620 00:21:31.699 Ashwini Sharma: Yeah, but…

173 00:21:31.700 00:21:32.649 Katherine Bayless: No, it doesn’t work right now.

174 00:21:33.220 00:21:38.520 Katherine Bayless: It is really annoying. It’s like, you have to stay actively on that tab for it to complete.

175 00:21:39.630 00:21:41.680 Katherine Bayless: Yeah, it drives me fucking crazy.

176 00:21:43.520 00:21:47.870 Katherine Bayless: Because it means you can’t just come in and go, bop, bop, bop. Like, that’s what I want to do. I want to say, open these 3 things.

177 00:21:47.870 00:21:50.159 Ashwini Sharma: Yeah, it doesn’t work that way.

178 00:21:50.160 00:21:51.110 Uttam Kumaran: Yeah.

179 00:21:51.110 00:21:53.880 Katherine Bayless: I feel your pain. I feel your pain.

180 00:21:55.260 00:22:00.290 Ashwini Sharma: Okay, I’m just going to show you how we can,

181 00:22:00.800 00:22:05.070 Ashwini Sharma: You know, there are a few columns over here in the end that I’ve added.

182 00:22:07.440 00:22:08.290 Katherine Bayless: Can I ask the…

183 00:22:08.290 00:22:08.949 Ashwini Sharma: Oh, yeah.

184 00:22:08.950 00:22:10.970 Katherine Bayless: Tiniest people question, actually.

185 00:22:10.980 00:22:12.110 Ashwini Sharma: Sure.

186 00:22:12.410 00:22:18.959 Katherine Bayless: I just… I noticed, like, your role is public, but it let you execute the query. I thought public didn’t have any, like, permissions.

187 00:22:18.960 00:22:25.270 Ashwini Sharma: No, I’m also account admin, so it’ll just take up, the account admin role, if…

188 00:22:25.830 00:22:26.819 Katherine Bayless: Oh, so…

189 00:22:26.820 00:22:29.170 Ashwini Sharma: I have not selected it, yeah.

190 00:22:29.380 00:22:30.109 Uttam Kumaran: A warehouse.

191 00:22:30.110 00:22:30.520 Ashwini Sharma: What?

192 00:22:30.520 00:22:31.200 Uttam Kumaran: Yeah.

193 00:22:32.120 00:22:33.220 Katherine Bayless: Interesting.

194 00:22:35.490 00:22:40.380 Ashwini Sharma: But somebody who does not have an account admin, it’ll just, you know, not allow them to query this.

195 00:22:40.380 00:22:44.360 Uttam Kumaran: I should just delete the public role, I feel like I don’t usually do that, but,

196 00:22:44.830 00:22:49.289 Uttam Kumaran: I actually don’t know if it’s a system thing, but that’s a good point. Let me just look at that.

197 00:22:50.210 00:22:50.660 Katherine Bayless: Yeah, that’s true.

198 00:22:50.660 00:22:51.560 Ashwini Sharma: Yes.

199 00:22:51.690 00:22:58.899 Ashwini Sharma: So, right, yeah, in this, if you see, like, there are these two columns, right, where,

200 00:22:59.180 00:23:02.459 Ashwini Sharma: This one indicates whether it’s a match or not.

201 00:23:02.660 00:23:10.180 Ashwini Sharma: And then this one indicates, like, by what method did we match it, right? So,

202 00:23:10.420 00:23:16.239 Ashwini Sharma: Maybe if I just do a… Sorry.

203 00:23:18.360 00:23:19.310 Ashwini Sharma: Boom.

204 00:23:28.400 00:23:30.429 Ashwini Sharma: Yeah, so this is… this is the…

205 00:23:31.050 00:23:35.670 Ashwini Sharma: Around 98,000 matched, and 11,000 didn’t match.

206 00:23:37.840 00:23:42.939 Ashwini Sharma: So close to 90% is matching, and if we want to see the…

207 00:23:43.090 00:23:46.040 Ashwini Sharma: Method through which it matched.

208 00:23:47.030 00:23:54.039 Kyle Wandel: And this is the number of exhibitors that match to the… the members’ database, right?

209 00:23:54.450 00:23:57.940 Ashwini Sharma: Yes. Okay. That’s…

210 00:24:08.950 00:24:10.849 Ashwini Sharma: CRM resolution method.

211 00:24:36.400 00:24:37.110 Ashwini Sharma: Yeah.

212 00:24:37.230 00:24:38.570 Ashwini Sharma: So we have a…

213 00:24:43.250 00:24:49.419 Ashwini Sharma: So, name match is maximum, right, and then alias match, phone match are… are the least,
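From (org_id, method) pairs like the ones on screen, the match-rate-by-stage report Uttam suggested earlier is a few lines with a Counter; the numbers below are illustrative, chosen to echo the roughly 90% figure:

```python
from collections import Counter

# Illustrative resolution output: (org_id, method) per exhibitor row, (None, None) when unmatched.
results = (
    [("o1", "name_match")] * 70
    + [("o2", "alias_match")] * 5
    + [("o3", "email_match")] * 10
    + [("o4", "phone_match")] * 5
    + [(None, None)] * 10
)

matched = [method for _, method in results if method is not None]
rate = len(matched) / len(results)          # 90 / 100 -> 0.9
by_method = Counter(matched)

print(f"match rate: {rate:.0%}")            # prints "match rate: 90%"
for method, n in by_method.most_common():
    print(f"{method:>12}: {n}")
```

In practice this would be a GROUP BY on the method column Ashwini added, but the shape of the report is the same.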

214 00:24:51.580 00:25:11.479 Kyle Wandel: It would make sense that those three would be the highest for this exhibitor, I believe, because the name should be pretty darn close to what is in members, and then alias as well would go to the subsidiary, and then email as well, because usually these emails would almost be guaranteed in Impexium, because they’re reps, basically.

215 00:25:12.270 00:25:12.710 Katherine Bayless: Yeah…

216 00:25:12.710 00:25:13.590 Ashwini Sharma: Good.

217 00:25:14.280 00:25:31.469 Katherine Bayless: I was actually wondering about the email and phone, though, Kyle, like, in the case of the aggregators, where we know, like, one person will manage multiple exhibit booths, like, you would kind of almost hope that they just wouldn’t… wouldn’t wind up matching to Impexium, but I could see where we might

218 00:25:32.570 00:25:38.299 Katherine Bayless: If those records did match, we would wind up potentially with, like, a many-to-many join.

219 00:25:39.480 00:25:41.680 Kyle Wandel: I mean, if we… if you remove the email…

220 00:25:41.870 00:25:58.810 Kyle Wandel: match. I wonder how many people match afterwards, meaning down the chain, and then if it’s still at around that 90%, or if it’s not, because I… yeah, that makes sense. And you only… and you… you would just have to remove that, because everything else should be fine, because everything else is on the company name.

221 00:25:59.180 00:26:00.860 Kyle Wandel: Yeah. Yeah, yeah.

222 00:26:02.770 00:26:05.819 Ashwini Sharma: Should I remove this email and then see how many matches?

223 00:26:05.820 00:26:15.279 Kyle Wandel: I would remove… I don’t know, this is up to you guys, but I would remove, I think, email and phone, because I think that’s a… I think that’s individual-based and not company-based.

224 00:26:15.930 00:26:35.540 Katherine Bayless: Yeah, that’s… I would… I would… I would agree. I would say, like, maybe remove it in the sense of, like, comment it out in case we want to put it back sort of thing, but, like, yeah, I think, like Kyle said, because those are for a person, we might wind up matching the person in remembers, but that might not actually be the correct company match.

225 00:26:35.800 00:26:42.099 Ashwini Sharma: It’s not matching a person, it’s matching… okay, yeah, yeah, primary representative is still a person, right, yeah.

226 00:26:42.100 00:26:43.839 Katherine Bayless: Yeah, exactly, yeah.

227 00:26:43.840 00:26:44.430 Ashwini Sharma: I know.

228 00:26:44.690 00:26:59.499 Katherine Bayless: Yeah, so, like, I mean, we could wind up with false positives where people, like, switched companies, but I think, really, the more, like, likely issue we’d run into is there are these, we call them aggregators, but it’s, like, one person who will contract with multiple companies to help them do their CES.

229 00:26:59.500 00:27:00.720 Ashwini Sharma: Okay.

230 00:27:00.720 00:27:20.160 Katherine Bayless: Yeah, and so, like, that… that would be potentially the pitfall there. I mean, I think Kyle is right, like, the name match is strong for this, because the two teams have done a really good job keeping them in sync, so probably the things that are matching further down, like, that are winding up matching on email and phone.

231 00:27:20.160 00:27:27.979 Katherine Bayless: Yeah, they’re probably edge case companies that we don’t really deal with that often, to be honest, but I would take out the email and phone match for the moment.

232 00:27:28.820 00:27:30.060 Ashwini Sharma: Okay. Beautiful.

233 00:27:30.060 00:27:30.410 Kyle Wandel: I wouldn’t…

234 00:27:30.410 00:27:31.240 Ashwini Sharma: I’ll roll back.

235 00:27:31.440 00:27:40.660 Kyle Wandel: I would definitely leave it for the CES identity matching, because that is more individual-based, but, this, I would remove it for this.
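The tiered match being discussed — keep the company-name tiers, comment out the person-level email/phone tiers so an aggregator contact can’t pull a booth to the wrong company — could be sketched like this. This is a minimal illustration only; the field names (`company_name`, `email`, `phone`) and the tier order are assumptions, not taken from the actual pipeline:

```python
# Hypothetical sketch of the waterfall match discussed above.
# All field names and tiers are illustrative assumptions.

def waterfall_match(exhibitor, members):
    """Return (member, tier) for the first tier that matches, else (None, None)."""
    tiers = [
        ("exact_name", lambda e, m: e["company_name"].lower() == m["company_name"].lower()),
        # Person-level tiers commented out for now: one aggregator contact
        # can represent many booths, so email/phone can match the person
        # without that being the correct company match.
        # ("email", lambda e, m: e["email"].lower() == m["email"].lower()),
        # ("phone", lambda e, m: e["phone"] == m["phone"]),
    ]
    for tier_name, predicate in tiers:
        for member in members:
            if predicate(exhibitor, member):
                return member, tier_name
    return None, None

members = [{"company_name": "Acme Corp", "email": "jo@acme.com", "phone": "555-0100"}]
exhibitor = {"company_name": "ACME CORP", "email": "agg@agency.com", "phone": "555-0199"}
match, tier = waterfall_match(exhibitor, members)
```

Commenting the tiers out rather than deleting them matches the suggestion in the conversation: the email/phone tiers stay in the file in case they need to come back for the person-level (CES identity) matching.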

236 00:27:42.570 00:27:47.720 Katherine Bayless: Yeah, which actually… that actually gets into my question, which is…

237 00:27:50.340 00:28:00.720 Katherine Bayless: It’s like 8 questions inside of one thought. Like, mmm… How do we want to…

238 00:28:01.710 00:28:13.189 Katherine Bayless: do the identity stitching in the sense of, like, okay, so this is doing exhibitors and members, then if we want to also do this for Innovation Award applicants.

239 00:28:13.190 00:28:29.200 Katherine Bayless: And members, would we basically, like, the… this block of the code that runs through the logic, like, that’s kind of almost like a… almost like a function, in that we would just, wherever we want to do identity stitching between data sets, we’re running that same logic.

240 00:28:29.790 00:28:44.419 Ashwini Sharma: Yeah, so there are a bunch of other data files in the ExpoCAD data, so the way I did was I took the main exhibitors list, did the identity stitching for them, and rest all are just looking up to this data, and then

241 00:28:44.590 00:28:56.780 Ashwini Sharma: taking it directly from there, right? So as long as there is an exhibitor ID associated with any raw data set, we can directly use this main table as a lookup table, and then do the identity stitching. We don’t have to run that

242 00:28:56.990 00:28:59.520 Ashwini Sharma: You know, it’s high-compute code.

243 00:28:59.760 00:29:08.880 Ashwini Sharma: Because there are multiple joins involved, right? There are multiple filterings, and those things, and we don’t want to do that for every other dataset. So, as long as there is a…

244 00:29:09.090 00:29:19.949 Ashwini Sharma: some kind of identifier. If it is exhibitor ID, well and good. If it is something else, then we’ll have to think of some other approach. But as long as there is an exhibitor ID, we can always use this,

245 00:29:20.220 00:29:22.380 Ashwini Sharma: table to do the stitching part.
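The lookup approach being described — run the expensive stitching once for the main exhibitor list, then join every other raw dataset to that result by exhibitor ID instead of re-matching — might look roughly like this. The record shapes and key names here are assumed for illustration:

```python
# Hypothetical sketch of the lookup pattern: the heavy stitching job runs
# once and produces this table; everything else does a cheap key lookup.

stitched = {  # exhibitor_id -> matched member record (output of the heavy job)
    "EX-1": {"member_id": "M-77", "company_name": "Acme Corp"},
    "EX-2": {"member_id": "M-12", "company_name": "Globex"},
}

def attach_identity(rows, stitched_lookup):
    """Cheap pass: enrich any dataset that carries an exhibitor_id."""
    out = []
    for row in rows:
        match = stitched_lookup.get(row.get("exhibitor_id"))
        out.append({**row, "member_id": match["member_id"] if match else None})
    return out

booth_scans = [{"exhibitor_id": "EX-2", "booth": "1041"},
               {"exhibitor_id": "EX-9", "booth": "2203"}]  # EX-9: no match yet
enriched = attach_identity(booth_scans, stitched)
```

As noted in the conversation, this only works when the raw dataset carries an exhibitor ID; datasets without one need a different identifier.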

246 00:29:22.750 00:29:27.170 Katherine Bayless: Okay, so then I guess, on that one, unfortunately.

247 00:29:27.170 00:29:27.570 Ashwini Sharma: No.

248 00:29:28.140 00:29:33.009 Katherine Bayless: Well, yeah, because only so many companies are going to be exhibitors, and, like.

249 00:29:33.590 00:29:44.289 Katherine Bayless: we will need a different identifier, which I think we’ve known from the start, is, like, we’ll have to determine a canonical, kind of, canonical company ID, because we have…

250 00:29:44.480 00:30:03.339 Katherine Bayless: we just don’t have anything else that is in all of the contexts. But I think what we can do, though… because, like, this is a huge piece of the puzzle, right? Like, this, to your point, compute-intensive, but it’s also done, we don’t need to repeat it. So, like, if we take all of these matches.

251 00:30:03.610 00:30:12.729 Katherine Bayless: reduce them to the distinct list of companies, and assign those canonical IDs. That ID we could put into remembers.

252 00:30:12.730 00:30:31.339 Katherine Bayless: because that company alias table does… I mean, like, we can have different field labels in there, and so if we put canonical company IDs into there, then that can be the table that’s driving matches, so that, like, if they have one, great, use it. If not, and

253 00:30:32.270 00:30:42.399 Katherine Bayless: That alias table, I think, does also have, all of the exhibitor IDs, that the team is sort of manually catalogued as well.

254 00:30:43.480 00:30:51.170 Katherine Bayless: We could also opt to not keep the canonical IDs in remembers, like, we don’t have to do, like, we can keep them wherever we want. I was…

255 00:30:51.170 00:31:03.980 Katherine Bayless: kind of thinking it made sense to use that system, because they’ve already been capturing identifiers for companies in that table, but that’s not to say that we have to use it for that. I think as long as we are somewhere persisting it, you know…

256 00:31:04.370 00:31:05.360 Katherine Bayless: Flexible.
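The canonical-ID step sketched in this exchange — reduce the matches to the distinct list of companies, mint one stable canonical company ID per company, and persist it somewhere (the alias table or elsewhere) — could look like this. The ID format and field names are illustrative assumptions:

```python
# Hypothetical sketch: collapse match output to distinct companies and
# assign each a canonical_company_id that can be written back out.

import uuid

matches = [
    {"exhibitor_id": "EX-1", "member_id": "M-77", "company_name": "Acme Corp"},
    {"exhibitor_id": "EX-5", "member_id": "M-77", "company_name": "Acme Corp"},
    {"exhibitor_id": "EX-2", "member_id": "M-12", "company_name": "Globex"},
]

def assign_canonical_ids(match_rows):
    """One canonical ID per distinct member company, reused across rows."""
    canonical = {}
    for row in match_rows:
        key = row["member_id"]
        if key not in canonical:
            # "CCID-" prefix is an assumption, not an existing convention.
            canonical[key] = f"CCID-{uuid.uuid4().hex[:8]}"
        row["canonical_company_id"] = canonical[key]
    return canonical

ids = assign_canonical_ids(matches)
```

The `canonical` dict here stands in for whatever table ends up persisting the IDs; the point is that the compute-intensive matching is done once and its result is reduced to a durable mapping.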

257 00:31:07.070 00:31:09.290 Ashwini Sharma: Is there a reverse ETL system to…

258 00:31:09.730 00:31:12.770 Ashwini Sharma: To put these canonical IDs into remembers.

259 00:31:12.770 00:31:34.619 Katherine Bayless: Yeah, so right now, the sort of brute force option is there is, like, a, you know, like a data import tool kind of thing on the admin side of the platform, but we also have a Power Automate key, that I just have not had time to, like, explore and play with, but we would totally be able to use that Power Automate key to send the data back to remembers, and if not.

260 00:31:34.890 00:31:37.910 Katherine Bayless: We can get the membership team to help us with using the import tool.

261 00:31:40.790 00:31:46.900 Uttam Kumaran: Yeah, if we can get that… I was gonna send a note anyways after this, that would be… we’ll just… we can test it out.

262 00:31:47.250 00:31:48.250 Katherine Bayless: Yeah, okay.

263 00:31:48.250 00:31:53.740 Uttam Kumaran: I think, Ashwini, we could also do some stuff with Snowflake Tasks to, like, kick that off, too.

264 00:31:54.070 00:31:54.730 Katherine Bayless: Hmm.

265 00:31:55.770 00:31:57.049 Ashwini Sharma: Yeah, sure.

266 00:31:57.310 00:32:00.180 Uttam Kumaran: One… one question I had on this

267 00:32:00.290 00:32:10.969 Uttam Kumaran: Ashwini, and I’m actually glad we got this far, because one of my proposals was gonna see if we can try to use some of Snowflake’s, inbuilt AI

268 00:32:11.310 00:32:16.229 Uttam Kumaran: capability to do, sort of, like, a next-level fuzzy match.

269 00:32:16.370 00:32:20.760 Uttam Kumaran: Yeah. If, like, is there any way for us to, like, provide

270 00:32:21.080 00:32:29.249 Uttam Kumaran: I don’t know, I think I’ll let you maybe think about the architecture, but I’m almost like, let… can we let AI try to make a more non-deterministic match?

271 00:32:29.660 00:32:32.270 Uttam Kumaran: Like, let’s say we were giving it, like.

272 00:32:32.540 00:32:35.970 Uttam Kumaran: All the details, and then, like, look through this table, and, like.

273 00:32:37.260 00:32:47.129 Uttam Kumaran: give us the number one match, and that’s just another, like… maybe that’s the last, but, like, maybe as we get smarter, it becomes… moves up the chain. That’s the only thing I think that’s different

274 00:32:47.430 00:32:54.419 Uttam Kumaran: in identity stitching than, like, I feel like was available a few years ago, that I think would be nice to add as part of this chain.

275 00:32:55.860 00:32:57.030 Ashwini Sharma: Yeah, I mean, I…

276 00:32:57.340 00:32:58.520 Katherine Bayless: Sorry, go ahead.

277 00:32:58.970 00:33:06.020 Ashwini Sharma: So, sorry, I was just asking, like, are we thinking of Cortex AI, or, like, something like Claude Code?

278 00:33:06.610 00:33:12.820 Uttam Kumaran: I think in Snowflake, you’ll have access to a few different models. I guess at this point, I’m…

279 00:33:13.520 00:33:18.349 Uttam Kumaran: I’m thinking, like, what… I think you could call inline in SQL,

280 00:33:18.590 00:33:32.259 Uttam Kumaran: you could call an LLM function, and so you could… you could think about a function that brings in some of this data, and then maybe runs another query. I’m not exactly sure the mechanics, but I’m sort of thinking it was, like, one more layer.

281 00:33:32.420 00:33:37.050 Uttam Kumaran: For it to go search across a dataset to find a match.
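The "LLM as the last tier" idea being proposed — run the deterministic tiers first, and only when they all miss, ask a model for its best guess across the candidate set — could be sketched like this. In Snowflake this would presumably be an inline Cortex call from SQL; here the model call is stubbed out so only the control flow is shown, and the prompt wording is an assumption:

```python
# Hypothetical sketch of an LLM fallback tier. The model call is a stub;
# a real version would invoke a Snowflake Cortex function instead.

def llm_best_guess(prompt):
    # Stand-in for the real model call; a real run would send `prompt`
    # to the LLM and parse its response.
    return "Acme Corp"

def match_with_llm_fallback(name, candidates):
    """Deterministic tier first; fall through to the model's best guess."""
    for candidate in candidates:
        if candidate.lower() == name.lower():   # deterministic tier
            return candidate, "exact"
    prompt = (f"Which of these companies is '{name}'? "
              f"Options: {', '.join(candidates)}. Answer with one option.")
    guess = llm_best_guess(prompt)
    # Only accept the guess if it names a real candidate.
    return (guess, "llm") if guess in candidates else (None, None)

result, tier = match_with_llm_fallback("ACME Corporation",
                                       ["Acme Corp", "Globex"])
```

Keeping the LLM as the final tier, as suggested, means the non-deterministic step only ever touches the residue the deterministic tiers could not resolve.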

282 00:33:39.760 00:33:41.410 Ashwini Sharma: Sure, I can explore that, yeah.

283 00:33:41.410 00:33:42.000 Uttam Kumaran: Okay.

284 00:33:43.840 00:33:55.800 Katherine Bayless: Yeah, and all I was gonna say was it’s worth a shot, because I’ve definitely, like, done a little bit of that, like, you know, kind of saying, like, okay, go through with the waterfall logic, and then also just, like, I don’t know, take your best guess!

285 00:33:55.800 00:34:15.859 Katherine Bayless: And it tends to do pretty well. I mean, like, I think because we work in the tech industry, it helps, right? Like, there’s probably a little bit more likelihood that some of these companies exist in the training data set to a certain extent, right? And so, yeah, it does tend to do pretty well. As a final step, I think it makes sense to at least see if it’s good.

286 00:34:18.350 00:34:24.059 Kyle Wandel: Man, these capabilities are just getting out of hand. Looking at it now, it’s like, jeez.

287 00:34:26.510 00:34:39.260 Uttam Kumaran: Yeah, I mean, I really had… could have used… I mean, I hope this works, I hope it’s actually helpful, but, like, it’s one of the first things when Snowflake released, I was like, oh, wow, like, fuzzy matching, maybe it could get, like, a little bit smarter, so…

288 00:34:40.460 00:34:49.389 Kyle Wandel: Yeah, I think it’s the… the way Snowflake has it, I think it’s… I think it’s the AI_EXTRACT function, but, it’s funny. Happy hunting.

289 00:34:50.060 00:34:51.120 Ashwini Sharma: the show.

290 00:34:52.820 00:34:59.659 Uttam Kumaran: Well, like, at least we know we have this structure, so I didn’t want to, like, derail until I was like, okay, we have this structure, so…

291 00:34:59.960 00:35:00.360 Katherine Bayless: Yeah.

292 00:35:00.360 00:35:00.800 Uttam Kumaran: Cool.

293 00:35:00.800 00:35:19.960 Katherine Bayless: And this is awesome, I mean, I think this is… this is a huge first step. I think figuring out exactly how and where we want to persist the canonical matches, and then what it looks like on an ongoing basis, like, every time a company comes in somewhere in the data that doesn’t, have an immediate match to the,

294 00:35:19.960 00:35:23.370 Katherine Bayless: Canonical list, like, what that workflow looks like.

295 00:35:38.300 00:35:41.320 Kyle Wandel: Yeah, and even this sentiment, I was looking at that, too.

296 00:35:41.570 00:35:42.530 Katherine Bayless: Hmm, yeah.

297 00:35:42.860 00:35:45.600 Kyle Wandel: PARSE_DOCUMENT… Great.

298 00:35:45.810 00:35:58.429 Katherine Bayless: There is something kind of inherently funny to me, although I obviously… I understand why, but, like, AI redact is like, oh, just ask the giant super-spy LLM to hide your PII, that… that makes sense.

299 00:35:58.430 00:35:58.820 Kyle Wandel: Makes sense.

300 00:35:58.820 00:36:01.810 Katherine Bayless: But also, yeah, it does.

301 00:36:02.240 00:36:02.910 Kyle Wandel: Bye.

302 00:36:02.910 00:36:03.410 Katherine Bayless: Yeah.

303 00:36:03.410 00:36:05.939 Uttam Kumaran: I guess it is kinda nice, but…

304 00:36:05.940 00:36:07.050 Katherine Bayless: It’s like…

305 00:36:07.050 00:36:09.809 Uttam Kumaran: But also, maybe you should just also do it manually.

306 00:36:10.070 00:36:10.960 Katherine Bayless: Right?

307 00:36:14.190 00:36:15.610 Kyle Wandel: What would be happening.

308 00:36:15.610 00:36:18.550 Uttam Kumaran: Yeah, AI Extract could be good,

309 00:36:20.400 00:36:22.739 Uttam Kumaran: I was thinking, like, AI Complete.

310 00:36:24.280 00:36:27.549 Uttam Kumaran: This, like, could be another one, where it’s like, yeah, you give it, like, a…

311 00:36:27.550 00:36:28.130 Kyle Wandel: dope.

312 00:36:28.130 00:36:32.799 Uttam Kumaran: like, for example, you could concatenate something and pass it in. I just don’t know how the,

313 00:36:33.650 00:36:38.389 Uttam Kumaran: This is actually a lot… they expanded these functions since the last time I looked, so I was like…

314 00:36:39.730 00:36:42.730 Uttam Kumaran: when Cortex first came out, all you could do was, like.

315 00:36:42.970 00:36:46.880 Uttam Kumaran: like, one or two things, like, nothing. I think they’ve really expanded it, so…

316 00:36:47.460 00:36:58.519 Katherine Bayless: Yeah, I remember talking to a friend, like, before I took this job, whose company does, like, they do, like, I mean, I think they do awful stuff, like remarketing and stuff like that, right? Like, they’re, like, behind the web

317 00:36:58.520 00:37:17.289 Katherine Bayless: the RedPlum that you get in the mail, despite never having asked for that. But they were… they’re a big Snowflake shop, and he was talking about how clunky it was to, like, get things out of Cortex, and he’s like, I mean, if you put the work in, like, I mean, it’s okay. But he was talking about, like, just the, like, level of lift, and I’m like, yeah, so much has changed so fast.

318 00:37:17.670 00:37:18.330 Uttam Kumaran: Yeah.

319 00:37:31.730 00:37:39.789 Katherine Bayless: So I think for the identity stitching stuff, I’ll get the Power Automate key over to you guys. We can investigate that angle. I think…

320 00:37:42.830 00:38:00.669 Katherine Bayless: I think, I mean, we could also kind of sort of, like, test it as a pipeline if we wanted to. I mean, once you’re sort of ready, right? Like, if we’ve got these sets of matches that are done, get those parked into remembers, and then, you know, take another dataset that has companies in it, and just, like, see how well it does, and then, yeah.

321 00:38:01.760 00:38:16.810 Katherine Bayless: I think the mess will be pretty cleanable, to be honest. Like, I think it’s the individuals that I have more, like, okay, I don’t know what we’re gonna do. Although I did talk to the Okta folks at the, because they had a little booth at the thing yesterday, and…

322 00:38:16.980 00:38:24.120 Katherine Bayless: one of their suggestions, which I thought was honestly pretty good, was like, why don’t you just turn on the social login? And I was like, oh!

323 00:38:24.200 00:38:43.719 Katherine Bayless: good idea. Like, so people could do the, like, sign in with Google, sign in with Apple, sign in with, you know, Facebook, whatever. Apparently, they support quite a number of platforms, in fact. Sign in with LinkedIn, which I was like, oh, maybe that would actually help with validating credentials, potentially? Probably not, sadly. But yeah, so, like, we could actually give people the ability to authenticate

324 00:38:43.770 00:39:02.080 Katherine Bayless: through to our system using an existing login, instead of forcing them to create one for us, which, in retrospect, as somebody who often does the sign in with Google option, I can’t believe I didn’t think about. But yeah, so anyway, that is all just to say, the Okta customer side is a little messier, but I think these companies, like, I think this first pass

325 00:39:02.080 00:39:05.990 Katherine Bayless: And then, like, a little tending of the garden will get most of it in good shape.

326 00:39:07.300 00:39:12.189 Uttam Kumaran: Okay, great. Yeah, I would love for us to try testing the Power Automate stuff out, for sure.

327 00:39:13.670 00:39:32.939 Katherine Bayless: I can’t remember off the top of my head if we have a staging key and a production key, but I’m pretty sure we do. We do have a staging environment with remembers. I don’t think that they keep it necessarily, like, up-to-date data-wise, but it would have enough of the stuff in there for us to be able to use it, because it’s at least all the same structure as the production side, even if the data’s different.

328 00:39:34.110 00:39:34.770 Uttam Kumaran: Okay.

329 00:39:36.660 00:39:38.369 Katherine Bayless: Yeah. This is cool.

330 00:39:41.280 00:39:45.479 Kyle Wandel: Any updates to the Snowflake rollout plan that you want to talk about?

331 00:39:46.490 00:39:51.969 Katherine Bayless: Mmm… I think let’s table that, maybe, for Monday stuff. I,

332 00:39:52.310 00:40:03.990 Katherine Bayless: I haven’t had a chance to catch up with you yet, Kyle, but I think there’s a little bit of anxiety, apparently, at the leadership level around people having access to the data.

333 00:40:03.990 00:40:05.050 Kyle Wandel: I figured so.

334 00:40:05.400 00:40:13.700 Katherine Bayless: I have a meeting with Christina at 1 to kind of come up with a little bit of our, like, emotion management strategy, so I’m still…

335 00:40:13.820 00:40:31.240 Katherine Bayless: cautiously optimistic we’ll proceed as planned, but there might be nuance we’ll have to bring in around, like, some of the roles, or who we’re rolling it out to, or stuff like that. So I’ll know more later today, so if we circle back on that on Monday, then we’ll have kind of the more final answer.

336 00:40:31.570 00:40:37.340 Kyle Wandel: Yeah, and maybe what we could do is, like, maybe one way to think about it is offering, like, a schedule of release based on who

337 00:40:37.450 00:40:52.919 Kyle Wandel: I don’t know, maybe that’s one way to do it, but, like, obviously, like, between September and October, like, only certain people should probably have access to it, quite frankly, but then post-show, I can see them opening up. I understand wanting to open it up to everybody.

338 00:40:53.160 00:40:55.340 Kyle Wandel: I just get… not nervous, but…

339 00:40:56.280 00:40:59.729 Kyle Wandel: I don’t know. If we can give them lists and not counts, that’d be good. I don’t know.

340 00:40:59.900 00:41:00.520 Kyle Wandel: So…

341 00:41:00.730 00:41:01.660 Katherine Bayless: Yeah.

342 00:41:01.790 00:41:07.889 Katherine Bayless: Yeah, yeah, I know, I mean, I was kind of disheartened a little bit, because I was like, I mean…

343 00:41:08.370 00:41:16.859 Katherine Bayless: really, to me, it speaks to just people are afraid that our staff are not smart enough to behave like adults with data. So, you know.

344 00:41:17.370 00:41:20.519 Kyle Wandel: Yep, it definitely is a lack of trust, for sure, so…

345 00:41:20.520 00:41:36.439 Katherine Bayless: Yeah, and I… and truthfully, you know, it’s like, I don’t want our team to just get stuck in, like, permission and provisioning, like, you know, doom loops, right? Like, we know that Power BI is a pain point because they tried to govern access to it, like, too tightly and then too loosely, and so, like, I think…

346 00:41:36.440 00:41:51.900 Katherine Bayless: yes, when we get to the part where we are bringing in genuinely more sensitive data, there’s a question, or, you know, a conversation to be had around access, but, like, the CES stuff and the Remember stuff are technically already available to everybody, we’re just making it easier to get to them, so…

347 00:41:51.900 00:42:15.949 Kyle Wandel: And maybe we can, like, try to, like… I mean, you’re probably much better at this than I am, but maybe it’s a cool way to, like, show, like, if we actually have data, like, this is what would be happening if we have data. So, like, if sales are down, well, now most people in the organization understand that, so now they can kick into high gear in terms of helping out if need be. Or if things are going really well, well, guess what? That’s kind of free word-of-mouth marketing, basically. So, I don’t know

348 00:42:15.950 00:42:20.380 Kyle Wandel: So, I mean, I think that’s maybe the way to preface it, but I mean, obviously you’re much better than I am, so…

349 00:42:20.380 00:42:33.529 Katherine Bayless: No, no, I mean, those are totally talking points in my arsenal too, right? Like, this is a way to empower your staff to do their jobs, right? Give them the information they need, like, rising tide will lift all boats, even if it feels scary at first, but… yeah, so…

350 00:42:33.720 00:42:34.660 Katherine Bayless: So, yeah.

351 00:42:34.950 00:42:39.070 Katherine Bayless: I will fight the good fight. I will persevere.

352 00:42:43.990 00:42:48.450 Katherine Bayless: Anything else on our collective brains today?

353 00:42:54.370 00:42:55.100 Katherine Bayless: Okay.

354 00:42:56.510 00:42:57.110 Uttam Kumaran: Okay?

355 00:42:57.420 00:42:57.940 Katherine Bayless: Feeling good.

356 00:42:57.940 00:42:58.480 Uttam Kumaran: Cool.

357 00:42:59.110 00:43:00.000 Uttam Kumaran: Awesome.

358 00:43:01.000 00:43:03.329 Katherine Bayless: Yeah, this is good. I am,

359 00:43:03.620 00:43:11.429 Katherine Bayless: I’m very excited about the identity stitching stuff. I think, like, the… rolling that out to folks, is gonna be really, like…

360 00:43:11.430 00:43:17.460 Uttam Kumaran: Yeah, I also think so, too. Yeah, it’s like, I’m… I’m pumped we also have all the data, and like, it’s…

361 00:43:17.930 00:43:19.150 Uttam Kumaran: I feel like it’s…

362 00:43:19.500 00:43:26.860 Uttam Kumaran: I don’t know if it would have happened without, like, us working through each piece, you know, like we did, so it’s… I think it’s also gonna be really great, so people…

363 00:43:27.100 00:43:32.159 Uttam Kumaran: are not guessing, or at least they have… it’s off of their plate to do the matching, you know?

364 00:43:32.160 00:43:39.380 Katherine Bayless: Yeah, yeah, yeah, exactly, exactly. And there’s a lot of that manually going on right now, so… yeah.

365 00:43:40.350 00:43:40.970 Uttam Kumaran: Cool.

366 00:43:41.110 00:43:44.799 Katherine Bayless: Yeah. Okay. Awesome. Well, in that case,

367 00:43:44.970 00:43:48.069 Katherine Bayless: We can carry on with our Fridays,

368 00:43:48.310 00:43:51.560 Katherine Bayless: I still have a lot more calls for a Friday, it’s weird.

369 00:43:51.560 00:43:53.330 Uttam Kumaran: Good luck.

370 00:43:55.250 00:43:55.640 Uttam Kumaran: Thanks.

371 00:43:55.980 00:43:57.730 Uttam Kumaran: Okay, thank you. Talk to you soon.

372 00:43:58.110 00:43:58.760 Katherine Bayless: Okay?