Uttam Kumaran: I don’t sh… Awaish Kumar: Brilliant. Uttam Kumaran: How’s the day going? Awaish Kumar: We’re all good. How about you? Shivani Amar: Hi! Sorry about that, I ended up being in huddles. Ghalib Suleiman: No problem. I’m Ghalib, Polytomic CEO, co-founder, pleasure. Shivani Amar: Nice to meet you, I’m Shivani, on the BizOps team here at Element. I know we’re waiting for some folks on the tech side, and while we are… are you free at, say, 3:30 p.m. Eastern? Cool, let’s just… I’m just gonna put that in the calendar now, and I’ll just add you all in. Andy Weist: Hey, folks. Ghalib Suleiman: Hi Andy, I’m Ghalib, Polytomic CEO. Andy Weist: Nice to meet you. Jason Wu: Y’all, how are you? Ghalib Suleiman: Hi, Jason, good, not too bad. Uttam, I’ll let you guide me as to anything I should cover. Uttam Kumaran: Sure. As mentioned, piggybacking off our last conversation, team, I wanted to introduce you to Ghalib, CEO of Polytomic. Ghalib and I have been working together for maybe a little over 2 years now, and we’re big fans; we do a lot of business with them. As I mentioned in our last call, we’ve really started to move a lot of our recommendations away from Fivetran, who I would say seemed the biggest brand in this space, and to lean on Polytomic a lot heavier, given just how great the product is, the cost advantages, and the support from the team. In particular, Ghalib, it would be great for you to highlight some of the enterprise-sized customers that you work with, but also your emphasis on support and net-new integration building; those are some of the most important things for the team. Maybe, Jason, you could also highlight any questions that we want answered. I also explained to Ghalib before the call Element’s posture on selecting new vendors and how you guys handle the procurement process. I feel confident that the Polytomic team could work within that, but maybe you want to highlight that. 
And maybe, Ghalib, I can pass it to you just to give a little overview about the product, your company, your customers, and Jason, I can hand it to you to cover any topics you want to. Andy and Steve, feel free also to chime in on any piece. Ghalib Suleiman: Sounds good. From my side: I’m Ghalib, Polytomic’s CEO and co-founder. I’ve been in data for, I don’t know, God, over 15 years now, and a few years ago we started this company. We’re a platform that moves data, and most commonly, people will move data to a data warehouse: your Snowflakes, your Databrickses, your Google BigQueries, and so on. Or they’ll go from these warehouses into other systems. Customers range from companies on the tiny side, to hyper-growth startups, to big, massive enterprises. I don’t know, you may know WebMD. WebMD has a massive business-side arm that moves a lot of data to and from hospital and medical partners. Or companies like the NFL; the NFL and every team in the NFL uses us to move a lot of data in and out of warehouses. Some pretty large-scale ones like Okta, I don’t know if you guys use Okta internally, a public company that does SSO stuff. So generally, we do scale, but the real big one that Uttam mentioned is that we’ve invested a lot in tooling to build integrations on request. No one ever gets told, oh, wait till the Q3 roadmap, and maybe we’ll consider this. Generally, if something has an API, we can get an integration built robustly quite quickly, and we’ve invested a lot in this ability to respond quickly. Besides that, is there anything more you’d like me to cover, Uttam? Uttam Kumaran: I think that’s it. Ghalib Suleiman: Any potential integrations to cover here? Uttam Kumaran: I wonder, Ghalib, maybe if you want to just start with a brief demo, we can just walk through the product, that would be good also. So maybe let’s start with a demo. 
Some people on the call have some experience or familiarity with the landscape, but we haven’t yet shown what a tool like this looks like, so it’d be helpful just to show the team what it’s like to set up a connector. That’ll just set the stage really well. Perfect. And then, team, if you have any other questions, feel free. But a demo would be a really good show-and-tell. Ghalib Suleiman: What warehouse will be used here, Uttam, sorry? Uttam Kumaran: We haven’t yet decided on a warehouse… but any of them. Shivani Amar: You can pick, I don’t know. For the warehouse, the two that we’re considering are Snowflake and BigQuery. Ghalib Suleiman: Then I’ll pick BigQuery and do a demo on that. Shivani Amar: That sounds good, it’s fine. Ghalib Suleiman: The usage is much the same anyway. Shivani Amar: And otherwise, since we’re D2C, the types of systems that we’re trying to pull data from, obviously, are Shopify and Amazon Seller, and then we have retail data, and supply chain data that will come eventually from our ERP. That’s just to give you a sense of the broader ecosystem of things we would want data pulled from, and then eventually data put back into the ERP, or flowing into the ERP from other systems, just to give you a flavor. Ghalib Suleiman: Hmm. What ERP do you guys use? Shivani Amar: We’re in the midst of transitioning; we’re gonna kick off a transition to NetSuite from QuickBooks. Ghalib Suleiman: That’s a very common one, and we support both. I’ll use NetSuite as an example into BigQuery, but just bear in mind it’s the same way to move anything into BigQuery or Snowflake. Let me share my screen. Can you see my screen here? Here we’re on our demo cluster. There’s really nothing set up here, zero syncs have run, but under connections, we are connected to all sorts of example systems, and we’ve set up these connections beforehand. For you in particular, we have a BigQuery connection here. 
We also have… since I was gonna use NetSuite as an example, we have a NetSuite connection here with credentials, as one would expect. When moving data into warehouses, we’ll stick with our Bulk Syncs tab here. This is a tab in the product dedicated to moving stuff into warehouses, rather performantly, at scale. If I click this button, it’s really quite simple, this will take a minute. Pick your source system, maybe we’ll pick NetSuite here; pick your destination, I’ll pick BigQuery here as an example. And for the data people on the call, we’ll generate a schema output name for you to put tables in, but you can always rename this to whatever you want. Then we’ll list the tables on offer from the source system. NetSuite has many collections on offer, but you can select everything by clicking the top left, or you can restrict things to particular tables. Maybe I only want the customer table, for example; we can do that. Within each table, you can exclude any fields you don’t want as well. And again, for the data people here, each field results in a column that will manifest in the warehouse, each collection name results in a corresponding table name, and we’ll take care of the corresponding types, and so on. Does that make sense? Otherwise, you hit continue. Your last step here is to set your schedule. Now, this could be manual, where someone needs to click a button every time you want to update. Continuous, where the guarantee is this guy will run every 5 minutes, and no later than that. And then you’ve got more structured options: hourly, daily, weekly, or you can even specify a custom schedule of your choosing, run on weekends, or whatever it is. Does that make sense? I’ll set this to run, say, hourly, for example. You hit continue. That’s really it. There’s a summary of your settings, all good and fine. You can hit save in the top right. And then what we’ll do is create the corresponding schema on the warehouse, and you have a sync created now. 
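[Editor’s note: the bulk-sync setup Ghalib walks through above (source, destination, selected tables and fields, schedule) can be pictured as one small configuration object. This is only an illustrative sketch; the function and field names below are hypothetical, not Polytomic’s actual API.]

```python
# Hypothetical sketch of the bulk-sync settings configured in the demo UI:
# a source system, a destination warehouse, selected tables/fields, and a
# schedule. None of these names come from Polytomic's real API.

def build_bulk_sync(source, destination, tables, schedule="hourly"):
    """Return a bulk-sync definition like the one configured in the demo."""
    allowed = {"manual", "continuous", "hourly", "daily", "weekly"}
    if schedule not in allowed:
        raise ValueError(f"unknown schedule: {schedule}")
    return {
        "source": source,
        "destination": destination,
        # Each selected table becomes a warehouse table; each field a column.
        "tables": {name: list(fields) for name, fields in tables.items()},
        "schedule": schedule,
    }

sync = build_bulk_sync(
    source="netsuite",
    destination="bigquery",
    tables={"customer": ["id", "email", "first_name", "last_name"]},
    schedule="hourly",
)
```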
From here, you can run a test, which would sync 5 random records just for testing’s sake, or just enable the sync. Instead of waiting for the next one to kick off, just click start here. And then we do have a full history, or audit log, if you will; you can see progress on your current set of tables for the current run, and then anything that’s completed gets shunted off into this completed table here, with appropriate logs. And then, under error handling, you can toss in any number of email addresses, or Slack channels if you do use Slack, into the subscriber list, and we’ll notify you if anything fails. But this really covers it: you really begin with creating a connection. If I were to add a connection to, let’s pick Snowflake, for example, it’s what you’d expect. Enter your credentials, upload your key pair, and you’re off to the races. And once you do this, you’re sticking with this tab to set up syncs into the warehouse. You’re always creating some sync, picking the collections you want to grab from the source system, and then you’re off. Any questions? Andy Weist: What does the interface look like if we need to create custom transformations of the data between the two systems? Ghalib Suleiman: For pushing? If you’re pushing into a system like NetSuite, for example? Andy Weist: Yes. For transforming data in between. Ghalib Suleiman: We do support transformations, and I’ll cover this now. This falls under this bit, models and model syncs. Now, when pushing to SaaS applications, let’s say ERPs, for example, we do begin with the concept of a data model. Think of a data model as a view that lives with us. For example, let’s click Add Model, and let’s say we want to surface some billing data. You may have the models manifest in your warehouse, or you may want to do a custom SQL query; I’ll cover both, but we’ll pick this SQL warehouse here, for example. Let’s call this model users. 
Now, you can pick data if it’s already set up in tables. We’ll go to this table here, the users table, and we’ll present you with all the columns behind it. Every column has a name and an example value from your own data. On the left is your actionable column here for data models. This is a yes/no field, a Boolean field, that decides what fields are surfaced in the model. In other words, you’re authorizing fields for syncing to non-databases and non-warehouses: ERPs. You’re going, these fields are allowed to be made available. Maybe today they go to NetSuite, maybe tomorrow it’s Salesforce or something else, but you’re authorizing them to be synced. Hope that makes sense. In this case here, I’ll select everything and we’ll hit save. So here’s a data model, right? It’s a mini catalog; I haven’t synced anything to an ERP yet, but it’s a catalog available for syncing. If you want to transform data, you can build these models through custom SQL as well. I can go to this SQL warehouse here, let’s call this, I don’t know, billing info, and here’s where you can switch from table select mode to SQL query mode. And if I were to hit that, we don’t store data; whatever you write here runs on your warehouse. Some of our customers will have the complete works of Shakespeare in SQL form, but for the sake of time, we’ll just do a dumb select star. Just realize, again, you can write whatever you want: complex joins, aggregations, and so on. Hit continue; the custom query produces fields. Choose the ones you want to greenlight for syncing to SaaS tools and ERPs. We’ll select everything here, hit save. Here we have a couple of data models. So what can you do with these data models? This is where we go into model syncs. Model syncs are for syncing into non-databases and non-warehouses, where we get to map specific fields. For example, we’ll create a sync, pick our destination to be NetSuite, for example, and pick our target to be, say, the customer object again. 
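[Editor’s note: the yes/no authorization flag Ghalib describes, where each warehouse column is either surfaced in the model’s catalog or held back, can be sketched like this. The column data and structure below are made up for illustration.]

```python
# Illustrative sketch (not Polytomic's API): a data model is a set of
# warehouse columns with a per-field yes/no flag that authorizes the field
# for syncing out to SaaS tools and ERPs.

columns = [
    {"name": "id",    "example": 42,              "allowed": True},
    {"name": "email", "example": "a@example.com", "allowed": True},
    {"name": "ssn",   "example": "xxx-xx-1234",   "allowed": False},  # never leaves the warehouse
]

def model_fields(cols):
    """Only flagged fields are surfaced in the model's catalog."""
    return [c["name"] for c in cols if c["allowed"]]

print(model_fields(columns))  # ['id', 'email']
```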
And you do get a bit more subtlety here: you get to pick a sync mode, right? Either we are updating or creating records in NetSuite; or we’re only creating ones that don’t exist yet; or, on every run, we’re only updating ones matching your source systems. Does that make sense? We’ll do both for the sake of this demo. The first thing to set when moving data into any software application is picking an identity mapping. On the left-hand side are these models we’ve constructed in the previous step, right? Surfaced from our warehouse. On the right-hand side, in this case, are the NetSuite customer object fields, all the ones listed there. This identity mapping is really some unique field on both sides. Think of it as a common primary key, if you will. Does that make sense? And you can pick whatever you want, but for the sake of this demo, I’ll pick email from our authorized users table and map that to email in NetSuite. From here, it’s really a field mapping exercise: I can grab first name, which maps to first name in NetSuite; I can grab last name; I could go on, you get the point here. Ultimately, it’s coming from these data models we’ve authorized. And then you do get some point-and-click filtering. For example, I don’t know, maybe you only want to sync records where email, say, does not end with, for example, gmail.com. Whatever it is, you do get a full list of filters for text, aka strings, other ones for numbers, and other ones for dates as well. Hope that makes sense. Cool. Otherwise, you hit continue, and then, symmetrically with those bulk syncs into warehouses, you do get the usual schedule options. I’ll have this run again, say, hourly, 30 minutes past the hour. Again, verify your summary of your settings. If all is well, hit save. And here we have a sync going the other way, with your mappings currently present, going into your software ERP, in this case NetSuite. 
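[Editor’s note: the model-sync steps above, an identity mapping (a shared unique key, email in the demo), per-field mappings, and a point-and-click filter like “email does not end with gmail.com”, can be sketched as follows. All names and the sample rows are hypothetical.]

```python
# Hypothetical sketch of a model sync: identity key, field mappings, and
# the demo's example filter. Not Polytomic's actual data structures.

mapping = {"first_name": "firstName", "last_name": "lastName"}  # model field -> NetSuite field
identity_key = "email"

def rows_to_push(rows):
    """Apply the filter, then build records keyed on the identity mapping."""
    out = []
    for row in rows:
        if row[identity_key].endswith("gmail.com"):
            continue  # filtered out, per the demo's example filter
        rec = {identity_key: row[identity_key]}
        rec.update({dest: row[src] for src, dest in mapping.items()})
        out.append(rec)
    return out

rows = [
    {"email": "pat@acme.com",  "first_name": "Pat", "last_name": "Lee"},
    {"email": "sam@gmail.com", "first_name": "Sam", "last_name": "Wu"},
]
pushed = rows_to_push(rows)  # only the non-gmail record survives
```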
Again, you can run a test sync, which would sync 5 random records, or just enable this guy. That’s really it, right? Similarly, you’ve got a full history view, and you can throw in email addresses or Slack channels for error notifications. But the general concept when going the other way is that you construct these data models, and this becomes your catalog of fields to go into software platforms, aka non-databases and non-warehouses. If you’re going into the warehouse, we get to control the schema, so when going in that direction, you’d use these bulk syncs, as I mentioned, and just pick what you want, and then we’ll generate all the schemas. These model syncs just give you rather fine-grained control, and you have to do this since we certainly don’t want to be generating fields in NetSuite for you; imagine your NetSuite admin creates fields, and then someone can create mappings to them. Hope that makes sense. Andy Weist: So, for one real-life example, we have bundled products in Shopify that really map to, say, 4 dynamic products that we want line-item information about. What’s the functionality there, where we really need to inspect the data rows coming out of Shopify and transform them, multiplexing them into multiple data rows within our data warehouse? Is that simple to do, or is that creating custom models? Ghalib Suleiman: This is where Uttam comes in, and I suspect, Uttam, you probably would use dbt rather than custom SQL in Polytomic, right? Uttam Kumaran: But in that situation… Ghalib Suleiman: That’s where it comes in. Uttam Kumaran: We would land the data into the warehouse. Andy, it depends if that’s getting used for reporting purposes, or if that’s getting used for operational purposes; maybe start there. If it’s used for reporting, then we would land it into the warehouse, and then that exact thing, splitting it and blowing it up into four rows, is totally possible, all within SQL. 
If it’s used for operational purposes, then it depends. My next question is, what is the SLA? For example, if that needs to happen and then you need to send it to another system, and you’re fine with a couple-hour delay, we can leverage Polytomic to drop that in and do that. Additionally, if we want a shorter SLA, you could still use Polytomic, but we would probably end up putting the transformations directly in here, in that SQL-based environment. For example, if you need to take that Shopify information, blow it up, and send it somewhere else, we could end up leveraging some of the model query things there. So it depends. Does that make sense? Andy Weist: Because that is a real use case we have custom software for now, and I’m just trying to assess… I know that’s not the target use case for this, but I’m just trying to see whether this can replace some of our existing stuff. Ghalib Suleiman: The big one is a question of latency. Go ahead. The question on latency, Andy, is how soon does this process need to happen? Andy Weist: Real time. Ghalib Suleiman: If it’s real time, then I’d probably say stick with your custom software, given the fastest you’ll get with us is 5 to 10 minutes. Because the transformation itself will take, I don’t know, a few seconds; the sync runs every 5 minutes; and let’s say a couple of minutes to write the data and for NetSuite to process it, since NetSuite doesn’t process stuff instantly. So I’d say somewhere between 5 and 10 minutes is the best that could happen with us. Andy Weist: It’s a longer discussion, but that 5-10 minutes isn’t… real time isn’t real time anyway, right? There’s always latency. Ghalib Suleiman: Oh, I see. Andy Weist: Another discussion for the future. Ghalib Suleiman: At a high level, the model is consistently: there’s a warehouse, there’s stuff coming in, the warehouse lets you transform it how you want, and then we can move it to wherever you want. 
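[Editor’s note: Andy’s bundle use case, one Shopify “bundle” row multiplexed into one row per component product, is the kind of transform Uttam says is “totally possible, all within SQL” once the data lands in the warehouse. Here is a runnable toy version of that split; in practice it would be SQL/dbt in the warehouse, and the bundle definitions below are invented for illustration.]

```python
# Sketch of exploding a bundle order row into one row per component product.
# The bundle contents are hypothetical; non-bundle SKUs pass through unchanged.

bundles = {"starter-kit": ["soap", "lotion", "towel", "sponge"]}

def explode(order_rows):
    out = []
    for row in order_rows:
        for sku in bundles.get(row["sku"], [row["sku"]]):
            out.append({"order_id": row["order_id"], "sku": sku, "qty": row["qty"]})
    return out

orders = [{"order_id": 1, "sku": "starter-kit", "qty": 2}]
line_items = explode(orders)  # one bundle row becomes four product rows
```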
Andy Weist: In your example here, the source data… you had an example of a Postgres database, and you were writing your extraction model in SQL. Is that specific to the source model, because it was a Postgres database? Or would you be writing extraction SQL against something like an HTTP API? Sorry, not a database. Ghalib Suleiman: We can build these models and pull fields from anything; we are a flexible system. But if we go to BigQuery, for example, the idea of SQL queryability only shows up if the source is a SQL source. Andy Weist: That’s what my question was. The built model is dependent on the source model schema. Ghalib Suleiman: You can build a model on anything, where we can grab fields from the schema, but if you’re writing a SQL query in the model, the source has to support SQL querying, yes. Andy Weist: OK. That being said, we do have some connections that will lead to systems that are not common, and not currently supported. What is Polytomic’s preference between us having you build connectors versus self-serve built connectors? I briefly looked through the documentation; it seems we can build things ourselves, but what’s… Ghalib Suleiman: Interesting. Andy Weist: …the standard operating procedure? Ghalib Suleiman: Generally, there’s a reason we do prefer… it’s a nicer experience for you if we build them, because we get to build things pretty quickly, a matter of days, and you get a native connector that is very tuned to the particular API in question. We do have a generic API connector you can use, but you just have to get in there and specify the pagination method of the API, what’s required, and so forth. In a pinch, one can do that. What customers typically do is use the generic stuff for situations where we cannot build a connector. For example, think of one large enterprise customer of ours. They have an internal API; the US team serves the European team through some internal API. 
We are not going to be building a native connector there; this is an internal server running an internal API. But for anything that’s public, we do strongly encourage people to just send the request over. And there’s a reason why, under connections, when you’re creating a connection, you’ll see on the bottom here, you can request… sorry, let me move my zoom windows… you can make a request here, which ultimately just sends a notification to us, and you are guaranteed a response within 24 hours on feasibility or any further questions, and so on. We’ve, again, invested a lot in being able to do this, and we strongly do push people: look, it’s just nicer for you if we build you a native connector. A generic one is available in a pinch, but what happens in practice, like I said, is people use that for internal APIs that we have no access to. Uttam Kumaran: Andy, to your point, there is a benefit to centralizing orchestration in one place; that’s probably a little bit of what you might be getting at: can we start to use Polytomic for broader orchestration? There’s one piece there where we can totally build on top of the Polytomic APIs and then use it to orchestrate. Where the SLAs matter is if you’re using it to do the handoff and we do need, say, sub-10-second in-memory transformation of rows; that’s something Awaish and I can totally give you a couple of other options for that may be better. We can still maybe consider having the orchestration in Polytomic, but I would just need to understand the use case better. For anything that is batch between moving systems, we should try to use this tool, for sure. And at least on the reporting side, we will centralize everything here. Similarly, I don’t want to have 10 systems with crons moving things around; we will try our best to centralize. I hear you in that there are some use cases that rhyme with this. 
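[Editor’s note: the “generic API connector” Ghalib mentioned earlier essentially amounts to telling the system how the API paginates, then looping until the data is exhausted. A runnable toy version, with a fake in-memory “API” standing in for a real HTTP call; all names here are hypothetical, not Polytomic’s configuration.]

```python
# Sketch of cursor-based pagination, the kind of detail the generic
# connector asks you to specify. fetch_page stands in for an HTTP call.

def fetch_all(fetch_page, page_size=2):
    """Pull every record from a cursor-paginated API."""
    records, cursor = [], None
    while True:
        batch, cursor = fetch_page(cursor, page_size)
        records.extend(batch)
        if cursor is None:
            return records

# A fake in-memory "API" so the sketch is runnable end to end.
DATA = list(range(5))

def fake_page(cursor, size):
    start = cursor or 0
    end = start + size
    nxt = end if end < len(DATA) else None  # None signals the last page
    return DATA[start:end], nxt
```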
It’s for us to figure out whether the operational SLAs are possible, and whether we can do that within the system. Ghalib Suleiman: There’s a veritable zoo of workflow orchestrators, as one calls them, and they’re very valid platforms; all I’m doing is underlining what Uttam said. We are data-warehouse focused, which may satisfy all your use cases, or may satisfy 80% of them. But this move-some-piece-of-data-here, then kick off an invoicing process for these 20 customers there, then do this, then do that, the if-this-then-that thing with boxes and arrows and decision points, is not what we do. We’re in the business of moving data and transforming it, and there is a place for those workflow tools. Andy Weist: One more question, sorry, I’m hijacking a lot of this, but what is the platform’s policy on failures and retries? Is it at-least-once, and how are retries triggered and managed by the end user, i.e. us, in the long term? Ghalib Suleiman: They are baked in… are we talking when moving stuff into the warehouse, or when pushing to an ERP, or both? Andy Weist: Let’s focus on moving stuff into the warehouse. Ghalib Suleiman: In that case, the retries happen automatically. Whatever sync schedule you pick, every time we run, we will retry. When moving stuff into warehouses, it is considered a critical error if something keeps showing up. There’s no situation where we should be giving up on retries; there’s just something going wrong. At the worst case, the vendor maybe has a bug in their API that should be reported and that they should fix. Because when people are moving stuff into the warehouse, it tends to be used for reporting; there’s no one out there who says, I’ll tolerate my finance dashboard being at zero because we’ve given up on retries. So we will always execute a retry automatically every time we run. Andy Weist: And is that detailed… Ghalib Suleiman: It’ll send us an error, and we’ll notify you. 
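[Editor’s note: the retry policy Ghalib describes for warehouse syncs, every scheduled run retries automatically, the system never gives up, and subscribers get notified on each failure, can be sketched as a small loop. Purely illustrative; none of this is Polytomic internals.]

```python
# Sketch of schedule-driven retries: each tick attempts the sync; failures
# notify subscribers (email/Slack in the real product) but never stop
# future scheduled runs from retrying.

def run_schedule(sync_once, runs, notify):
    """Attempt the sync on each scheduled tick; notify on failures."""
    results = []
    for tick in range(runs):
        try:
            sync_once(tick)
            results.append("ok")
        except RuntimeError as err:
            notify(f"run {tick} failed: {err}")  # alert the subscriber list
            results.append("failed")             # but never give up retrying
    return results

alerts = []

def flaky(tick):
    """A stand-in sync that fails on its first run, then recovers."""
    if tick == 0:
        raise RuntimeError("vendor API hiccup")

history = run_schedule(flaky, runs=3, notify=alerts.append)
```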
Andy Weist: Got it. So is deduplication of data mostly reliant on setting up appropriate primary keys between the two systems, then? Ghalib Suleiman: We take care of that for you. When pulling stuff into the warehouse, this is why building a native integration is always nicer, because we do this on a per-integration basis. We find the primary keys, and we’ll designate them in our internal system so that we generate them appropriately and do deduplication appropriately as well. Once in a blue moon… again, being in this world, you do run into all sorts of issues. There are systems where there is no primary key in the data. So be it; we mark that. Our internals have the ability for that to be marked on a per-integration and per-table basis as well. So there is some element of human curation, human editing, that happens that you don’t see, given vendor API documentation and so on. But yes, we do take care of that for you, where you really should not have to worry about what the primary key is, does it exist, does it not. And there are situations, you touched on duplicates, you’ve lived in this world as well, where a vendor says, this is our primary key, and then, lo and behold, there’s duplicated data: this primary key is duplicated over and over again. We have automatic deduplication in those situations, and we do verify. We have gotten into situations with one or two major vendors where there have been duplicates, but the row has been different, despite sharing the same primary key. This is a terrible, terrible bug, and the vendor gets a bug report going, hey, this is a critical bug here. By and large, we do get duplicates surprisingly often, but they tend to be same-row duplicates that our system detects. 
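[Editor’s note: the behavior described here, collapsing rows that share a primary key and identical contents, flagging rows that share a key but differ, and, from the exchange that follows, checking whether a pair of columns can serve as a compound key, can be sketched as below. Illustrative only, not Polytomic’s implementation.]

```python
# Sketch of deduplication: benign same-row duplicates are dropped silently;
# same-key-different-row conflicts are collected for a vendor bug report.

def dedupe(rows, key):
    seen, out, conflicts = {}, [], []
    for row in rows:
        k = row[key]
        if k not in seen:
            seen[k] = row
            out.append(row)
        elif seen[k] == row:
            continue  # benign same-row duplicate, silently dropped
        else:
            conflicts.append(k)  # same key, different data: report upstream
    return out, conflicts

def is_unique(rows, cols):
    """Do these columns, taken together, uniquely identify every row?"""
    keys = [tuple(r[c] for c in cols) for r in rows]
    return len(keys) == len(set(keys))

rows = [
    {"id": 1, "name": "a"},
    {"id": 1, "name": "a"},  # same-row duplicate
    {"id": 2, "name": "b"},
    {"id": 2, "name": "B"},  # same key, different row: critical
]
clean, bad = dedupe(rows, "id")
```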
Uttam Kumaran: And maybe just to highlight that: this is what my job used to be when I started my career, figuring this out, dealing with deduplication, creating synthetic primary keys, or figuring out how to handle this data in flight. I don’t have to think about this anymore, because we leverage an ETL tool here. And I feel very confident, even in the event of a failure: the way that Polytomic is architected, they roll back; they have it structured in a way where there’s not, oh, we retried it, and now we are dumping the same data 100 times. It’s a lot more sophisticated. When we do set up something that runs every, say, 3 hours, we’ll immediately get an alert in Slack if something fails. Typically, if there’s a failure, it’s either because there’s a vendor API issue, or maybe there’s something in our schema, or something that we’ve set up incorrectly. But even things like, oh, a new column has come in through the API and we need to add it, and the safe handling of that process, are something that I don’t really think about. We can configure that directly in Polytomic. Ghalib Suleiman: And there are situations where there’s no primary key declared by documentation, but through our analysis of the data coming through… sometimes we’ll work with a customer in this very situation, or we’ll just take on the work if the customer’s fine with that. We’ll realize, oh, hang on, there’s a compound primary key we can define: it’s clear these two columns, when paired together, are unique, so let’s just do that. And again, our system is flexible internally on our side; we get to make these annotations, and then things just flow through. Andy Weist: I’m thinking of cases where maybe the NetSuite API doesn’t respond with a 200 or a 204, and we don’t want to create duplicates. Ghalib Suleiman: Oh, NetSuite, NetSuite. 
You realize we, behind the scenes, have integrations with 3 completely separate NetSuite APIs. Regular NetSuite has a SOAP API and a REST API, and we have both, and then you may or may not end up with the NetSuite SuiteAnalytics add-on. This add-on makes exports from NetSuite way faster, and depending on your data volumes, it may be something to chat with NetSuite about. That’s something I would recommend if your data volumes are on the larger side. But again… Andy Weist: We should talk more about that, too. There’s another interface, called [unclear], that we may end up interfacing with as well. Ghalib Suleiman: I see, I see. Andy Weist: I haven’t done any work with that so far, but we’ll address that when we get to that bridge. There’s, again, a million things we’ve seen under the sun here; with NetSuite, certainly, we get these errors once in a while, but the retries… Ghalib Suleiman: Take care of it. We’ve got countless NetSuite deployments at this point. Other questions? Jason Wu: Ghalib, typically, what does the onboarding look like for a new customer? How does that normally get set up? We’re in a situation where we’re using Brainforge as well. Ghalib Suleiman: Typically, we are available. When working with Uttam and the Brainforge team, we do follow their lead. This really ranges, and you dictate where on the range things lie, right? We’ve done things where we have one or more Zoom calls to tell people, hey, look, here are your accounts, I will walk you through setting up your first test sync. Click here, click there. Click Test Sync, let’s take a look, make sure things are successful. Now you can turn it on by clicking here, and so on. The joke is there’s no limit to abusing this offering as far as support goes. We’ve had people go, hey, look, we need multiple rounds, or multiple people need separate Zoom calls. Happy to do that as well. I don’t know, do you guys use Slack internally, or not? Jason Wu: We do. 
Ghalib Suleiman: Then a shared Slack channel is also on offer, and this is not a shared channel with a whole bunch of other customers; it’s just us, Uttam, and you guys. A lot of people do take advantage of that one in particular. This way you get, I don’t know, sub-5-minute response times. We get to throw in people from our team as well. At times, other folks are in there too, and everyone’s in the same place. So that’s also on offer, which we’re very happy to do, and we do prefer to push people to say yes to that offer, at least for peace of mind. Every once in a while, we do get people who go, hey, look, we’re working with Brainforge, you leave us alone, we’ll work with them, and we’ll go through them if we need you for anything. That’s a fine dynamic as well, but we’ll offer the whole lot. Certainly the Slack channel is something we do push for, just to make your lives easier, and you get to steer things from there, per your demands. Uttam Kumaran: And then, Ghalib, can you also talk about what the first month would look like? For a lot of folks, what we do is just try to get a couple of things synced up, so the team can see end-to-end what it looks like for Polytomic to move data. We can also talk about the estimation process for spend on your side. Ghalib Suleiman: On pricing, you mean? Typically, we charge per number of rows moved in a month, and that’s your pricing model. One could commit annually, but you don’t have to. You get a discount if you go, hey, I’m committing to 5 million rows synced per month; you get a 20% discount if you do that and pay annually, or you could just simply pay monthly. Maybe you don’t know, you’re not sure yet. Most typically, people will start monthly and pick some really critical stuff initially. They’ll have their highest-priority connectors, start with those, make sure those are solid before considering the rest, and then start moving on down the list. Typically, that’s what tends to happen. 
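[Editor’s note: the pricing comparison Ghalib outlines, pay month to month, or commit annually for a 20% discount, is simple arithmetic. The per-row rate below is entirely made up for illustration; only the 20% annual discount figure comes from the conversation.]

```python
# Rough sketch of the monthly-vs-annual comparison. HYPOTHETICAL_RATE is
# not a real quote; only the 20% annual-commit discount is from the call.

HYPOTHETICAL_RATE = 0.0001  # dollars per row moved (illustrative)

def monthly_cost(rows_per_month, rate=HYPOTHETICAL_RATE):
    return rows_per_month * rate

def annual_cost(rows_per_month, rate=HYPOTHETICAL_RATE, discount=0.20):
    """Annual commitment: 12 months of usage, minus the commit discount."""
    return monthly_cost(rows_per_month, rate) * 12 * (1 - discount)

no_commit = monthly_cost(5_000_000) * 12  # pay as you go for a year
committed = annual_cost(5_000_000)        # annual commit with 20% off
```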
People will pick some subset. Maybe one or two of them end up being requests for us that we don’t support yet. Fine, we build those, just like we do all integrations, and you get going with your small, higher-priority subset. Make sure that’s good, and then we can move on. But at any moment, even if one starts monthly, one can go, you know what, this is going to stick around, and we can always switch you to an annual plan, pay up front, and get that discount. That’s fine. But you can, again, do the monthly no-commitment thing and see how things go; that’s typically how people start. Any other questions? Jason Wu: Steve, anything? Steve Sizer: The retries one was the main one. It sounds like you’ve got special API rate limits with certain vendors, is that accurate? Ghalib Suleiman: They are specific to every integration, because every integration has different limits, yes. Uttam Kumaran: Because of the business that you’re in, are you able to… Steve Sizer: Speak to these vendors and get greater limits, obviously because of the amount of data that you’re moving? Ghalib Suleiman: Sorry, say that again? Steve Sizer: Say, for instance, everybody’s got certain rate limits; Shopify’s got a rate limit, and we are moving large amounts of data through you. Have you got special rate limits with certain vendors? Ghalib Suleiman: This is a most interesting question, only because I don’t think I’ve ever been asked this, but it’s very vendor-specific. With Shopify, we have a partner rate limit that every integration partner gets. For others… I don’t think you guys use Outreach, but in B2B SaaS that’s a popular one, and Outreach has us in a special bucket; we just happened to be close to that team. This is very specific to the vendor in question and the humans who happen to be working there; it’s very personal, relationship-driven. 
Steve Sizer: It's a sensitive topic for the engineering teams who need to ultimately flick the switch at these vendors.
Ghalib Suleiman: What tends to be a foolproof method, when a vendor is being uncooperative with us (which is often; generally they're just busy, not evil or whatever, their engineering teams are busy shipping product and so on), is if you have a contact at that vendor. We do have a script for this; we've been through it about a billion times now. We can give you a template email that you can send to your representative, CCing us, asking for a larger rate limit. We've had that work. We had one major customer that needed us to get a larger limit for LinkedIn data; they were a big spender with LinkedIn, they called their LinkedIn sales rep, and it was sorted out in about 10 days. That tends to be the most effective path, and again, we have an email template with talking points that we provide to everyone who needs to get into that situation. Unfortunately, it's a very unstraightforward answer, and I hope you can forgive me for that one.
Uttam Kumaran: Ghalib, let's talk about... we had another customer on NetSuite, and they wanted near-real-time syncs, right? And you guys enabled that through ODBC, which I don't think was a super obvious path. Maybe you could talk about what the back-end part of that was.
Ghalib Suleiman: Oh, that's...
Uttam Kumaran: SuiteAnalytics, right?
Ghalib Suleiman: With NetSuite, there's no secret sauce or whatever; in that situation it was the SuiteAnalytics package. Because we know these vendors in and out (NetSuite has three different APIs), we got to tell the customer, look, if you upgrade your package, contact your sales rep and mention these words.
Ghalib Suleiman: You will have to pay more, but if you mention these words, they'll give you something to sign, and then we'll be able to export faster. So sometimes there are options. But for a Shopify, and I've talked to someone in their senior technical leadership on this topic: if you have a representative there, that's your best path, because engineering is running a bit of a delicate operation where they cannot increase the rate limits for everyone, and they do it case by case. So again, apologies, I don't have a straightforward answer here, but there is precedent for a way through. Sometimes the vendors will just tell you to bugger off, by the way. That also happens. They'll go, everyone's stuck with this rate limit, including our internal teams, and tough luck to everyone involved. Other questions?
Uttam Kumaran: Any other questions?
Andy Weist: One last question. What cloud are you deployed to, or are you multi-cloud for redundancy?
Ghalib Suleiman: We're AWS. We're redundant within AWS, and we're not deployed to any other clouds besides AWS. We do have a Docker image (I don't know if this would be the case here) for people who want to deploy to their own cloud. It does come with an enterprise platform fee, but we do have a self-contained deployment that people can spin up if they want to.
Andy Weist: Would you use that for sandboxing, too? Local sandboxing, say. I don't know that it would be more convenient than what your platform already offers for testing, but just as a thought.
Ghalib Suleiman: As far as customer sandboxing goes, for people who are just testing things, it happens on our cloud. We do have the notion of a workspace, and some people will create a sandbox workspace. But typically they'll just have a sandbox connection, because not many systems offer sandboxes. NetSuite does, Salesforce does, but by and large very few do.
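On the earlier NetSuite point: SuiteAnalytics-style access is plain ODBC, so "exporting faster" comes down to connecting with the vendor's ODBC driver and running SQL instead of paging a REST API. A hedged sketch of assembling such a connection string; the driver name, host, port, and credentials below are placeholders, not real NetSuite values (the vendor's ODBC documentation has the actual fields):

```python
def odbc_connection_string(driver: str, host: str, port: int,
                           user: str, password: str) -> str:
    """Assemble a generic key=value ODBC connection string."""
    parts = {
        "DRIVER": "{" + driver + "}",
        "HOST": host,
        "PORT": str(port),
        "UID": user,
        "PWD": password,
    }
    return ";".join(f"{key}={value}" for key, value in parts.items())

conn_str = odbc_connection_string("Example ODBC Driver", "connect.example.com",
                                  1708, "integration_user", "not-a-real-password")

# With the third-party pyodbc package installed, a read would then look like:
#   import pyodbc
#   cursor = pyodbc.connect(conn_str).cursor()
#   cursor.execute("SELECT * FROM transactions")  # table name is illustrative
```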
Ghalib Suleiman: In NetSuite's case, they'll have one NetSuite connection and one NetSuite sandbox connection and run tests from the sandbox one, which probably doesn't warrant a separate workspace, but we're happy to provision those if you want them.
Uttam Kumaran: Part of the reason, Andy, I was initially interested in Polytomic is that we do these types of integrations very often, and I was looking for a partner...
Andy Weist: ...that was more programmatic.
Uttam Kumaran: I don't think we ever took advantage of that, but much of the platform is customizable. I was mainly biasing towards that in case we had to build stuff, but we haven't had to use it; the team has built almost every new connector we've requested within a few weeks, so it's not been a problem.
Andy Weist: No more questions from me.
Ghalib Suleiman: Other questions? Jason, anything? Or Shivani?
Jason Wu: Nothing on our side.
Uttam Kumaran: I'm happy to summarize, maybe with Ghalib over email or Slack, what setting up looks like. Typically...
Ghalib Suleiman: You provision us...
Uttam Kumaran: ...an instance, we go in there and connect the P0 sources, and then drive towards an estimate. We haven't yet arrived at a decision on the data warehouse; we're driving towards that this week. Of course, that will matter, but I'm sure we can start to understand what integration requirements we have for some of those P0 sources. Shivani and Jason, how do you guys feel? Is it worth driving towards getting a trial instance set up? Do you want to take some time, or we can talk about it on Thursday? What do you think is best?
Jason Wu: Let's take that offline and talk about it. Same goal, though: let's make sure we get to a decision on this in the next day or two.
Ghalib Suleiman: Cool. That's all I have, then. We'll be on standby in the meantime if you guys need anything.
Shivani Amar: Nice meeting you, thank you.
Steve Sizer: Yep, cheers, guys.
Thanks. Uttam Kumaran: Thanks, everybody.