Haskell Weekly


Our Tech Stack

Listen on Apple Podcasts
Listen on Google Podcasts

You can also follow our feed. Listen to more episodes in the archives.

Stack, HLint, and Brittany, oh my! Cameron Gera and Taylor Fausak go on a deep dive into the ACI Learning tech stack.

Episode 50 was published on 2021-08-16.



>> Hello and welcome to the Haskell Weekly podcast. This is a show about Haskell, a purely functional programming language. I'm your host Taylor Fausak. I'm the director of software engineering at ACI Learning. With me today is Cameron Gera, one of the engineers on my team. Thanks for joining me today, Cam.

>> Glad to be back. It's the second week in a row; we're starting to get back into the swing of things with the Haskell Weekly podcast. So thank you guys for listening today. Uh, we're going to do something a little different today due to a comment we received from a listener. They wanted to hear what software engineering looks like at ACI Learning: what are our processes, how many services do we have, and what do those services do? So we're going to take a deep dive into that today. Our hope is to only have one episode of this, but if we seem to be running over, uh, just going to give you guys a heads up that we may have two. But, uh, yeah. So Taylor, why don't you get us started with a little high-level process, infrastructure, that kind of thing, so we can drill down into the rest.

>> Yeah, as you mentioned, this was prompted by a comment, and I assume the person was more interested in kind of the Haskell side of things. So that's what we're going to focus on, but to give context, um, we develop software using the acid, uh, excuse me, agile methodology.

>> Um,

>> Acid. Yeah. Uh, well, everyone has a different definition of agile, so ours can be acid. Uh, but we use agile, which just means that we're constantly getting feedback, making improvements, and iterating. The tool that we use to manage our workflow at a high level is Clubhouse (not the video chat application, but the one that is soon going to be called Shortcut instead). We use that more or less as a Kanban board to keep track of all the stories or tasks that we want to accomplish, and to track them as they move through our workflow, starting from ready for development all the way through to production, in our users' hands. Um, from the code side of things, like many software shops we manage our code in GitHub, and most of the code goes through the normal GitHub PR process: somebody pushes up a branch and opens a PR for it, at least one person on the team reviews that PR, and then we hit the big green button on GitHub and it gets merged into master. At that point it kicks off a continuous integration build (or sorry, the build actually happens before that). We use Semaphore to manage our continuous integration. Once it's merged into the main branch, uh, Semaphore will build it and produce a build artifact, which in our case is a bunch of Docker container images, and it will push those over to AWS. We use AWS Fargate for most of our services, and that will launch the new versions of all those Docker images and get them out into our users' hands. Uh, we use a bunch of AWS services, not really worth enumerating all of them right now, but as I mentioned, we use Fargate and we have several Haskell services running in production. Our current best guess count is that we have seven of them. Um, and we may say their names later on, so just for context, their names are Urza, Ion, Nucleus, Jeff, Shelob, Quantum, and Buffer Processor. Some of them are descriptive, some of them are a little more imaginative, and some of them are just people's names. So, um, yeah, those are the processes we use, the infrastructure we deploy to, and the names and number of our Haskell services. But I'm sure the listeners are much more interested in how we write that Haskell. So, Cam, do you want to dive into that?

>> Uh, yeah. So with these multiple services, they all have multiple functions. Um, and through time we've ended up with a set of APIs that are all in Haskell, and then we have some other services that manage, uh, you know, scheduling and managing a job queue; um, updating some third-party services on a continuous basis; uh, allowing us to integrate with the organization's teams, uh, something that can happen on the backend. That's kinda what Shelob is. And then, um, something to aggregate and handle metrics, which is the Buffer Processor. So that's kind of what those services do. Um, and Urza, Ion, and Nucleus are the APIs. Nucleus was the first one, so this was when we were all learning Haskell; we were all kind of figuring it out. Like, Taylor had been a pro, you know, had come in to help guide us; we'd previously started with Elm. So with Nucleus we took some of the easier-to-digest, uh, choices, using technologies like Happstack and Orville as an ORM for Postgres. Um, and with Happstack being our web server, it was kind of a little more loosey goosey, um, in the abilities and what it can do. And so that was where we started, and we said, okay, let's figure out what we can do better. And that's where Ion came in. And then from there... we have one question coming from the audience.

>> Yeah. I just wanted to jump in and say that, uh, Orville isn't one of the more well-known Haskell ORM libraries. It was developed by Flipstone, which is a contracting firm that was working with ITProTV back in the day when I started several years ago. And, uh, you know, it's a great library, very happy to use it, but it's similar to Happstack in that, um, most of the checks happen at runtime rather than at compile time. So if y'all want to look it up: github.com/flipstone/orville.

>> Yep. And uh, if you guys want, we can also post links to some of this stuff we're talking about. We're not going to post them all, 'cause that would be a lot, uh, because we're going to talk about a lot of stuff today. So, um, yeah, that's kind of the lay of the land with the services. With, you know, Ion we made better choices, and then with Urza we made even better choices. So, um, I think we'll jump into that now: what are the libraries that we're using now, and, um, kind of where have we come from?

>> Sure. And starting with where we started makes sense. Uh, as I mentioned, for the database abstraction we started with Orville, which is a wrapper around, uh, postgresql-simple, if I remember correctly. And that worked great; it has a lot of things that I really like. But one of the downsides was that, as I mentioned, most of the validation happens at runtime. So for instance, if you mistype a column name, or you get the column name correct but say that it's the wrong type (so you pulled a string, but you thought it was an int), um, those checks are only going to blow up for you at runtime rather than compile time. Which, uh, you know, hopefully you would catch in testing or catch at some point, but it would be nice if you could catch them at compile time. So that motivated us to look for a different library. And there are many database abstraction libraries available in Haskell. Um, the one that we decided to go with was persistent, along with esqueleto, which is kind of like an add-on library for doing more complicated SQL queries. So we sometimes use that one when we need to do a join or something like that. But what persistent got us was the ability to have, um, strongly typed queries. So we would know that for the field we're querying, we got the name correct, and we would know that the type we're comparing it against is also correct, or when we pull it out of the database, that part is also correct. Those are huge wins for us. Um, and it means that stuff you used to have to catch either with a test or in code review can now be caught by the compiler, which is great. Um, and that's, you know, just one part, one little library, and there are many other libraries that can do that, but yeah.
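To make the compile-time-safety point concrete, here is a minimal sketch of what a persistent entity and a typed query can look like. The Person entity and the adults query are hypothetical examples, not code from the ACI services, and the exact extension list varies with the persistent version.

```haskell
{-# LANGUAGE DataKinds, DerivingStrategies, FlexibleInstances, GADTs,
             GeneralizedNewtypeDeriving, MultiParamTypeClasses,
             OverloadedStrings, QuasiQuotes, StandaloneDeriving,
             TemplateHaskell, TypeFamilies, UndecidableInstances #-}

-- Minimal sketch of persistent's compile-time checking; the entity
-- and query here are illustrative, not the actual ACI schema.
import Database.Persist
import Database.Persist.Sql (SqlPersistM)
import Database.Persist.TH (mkPersist, persistLowerCase, sqlSettings)

mkPersist sqlSettings [persistLowerCase|
Person
  name String
  age  Int
|]

-- The field names and types below are checked by the compiler:
-- writing PersonAge >=. "18" (a String) would be a type error,
-- not a runtime surprise.
adults :: SqlPersistM [Entity Person]
adults = selectList [PersonAge >=. 18] [Asc PersonName]
```

Mistyping `PersonName` or comparing `PersonAge` against the wrong type fails at compile time, which is exactly the class of bug that slipped past runtime-checked queries.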

>> Yeah. And you know, the move from Orville, with its runtime issues, to persistent was a little bit of a jump, due to, you know, it being more type-level programming, um, and a little bit more, uh, abstraction. The Orville library is pretty verbose in creating tables and, you know, describing what these fields look like and that kind of stuff, whereas persistent kind of says, okay, what do you want? I'll generate it. But that leads me to my next point, of the web servers we were using. You know, I mentioned Happstack earlier, and Ion and Nucleus were built on Happstack, and it was great. It started us off; we got moving with it. It, uh, you know, allowed us to move a majority of our legacy JavaScript API into Haskell. So it was a great step, but we started to see shortcomings with it. Um, we started to kind of get our wires crossed with what routes there were and where they were, and what the types were. Like, if you wanted to know what something was going to return to you, you had to kind of search for it by finding that file. And you don't get documentation out of the box. So it created a lot of little tensions and paper cuts for us that eventually helped us choose to go with servant. Um, so servant is a type-level web server, um, or type-level programming more or less, where you say, okay, my API is just going to have this type, and this is what's going to be returned. It's kind of easy to read, easy to comprehend. And the handler function is, you know, digestible, kind of, uh, easy to grok, rather than having to jump through all these hoops to figure out what's happening. Um, and that was, you know, a big step for us, because we weren't really sure what servant would look like. Um, but we took that step with a smaller side project, and we saw, okay, this isn't too bad.
We kind of got the team on board. And, you know, we ended up getting some really great benefits out of servant, one of them being the Swagger, um, documentation that it can generate. And that has really helped with the product that we, as an engineering team, deliver to, you know, the front-end team, who need to understand and know what our API is doing. So, um, you know, I'm a big fan of servant. It was definitely a shift, but I'm really glad we ended up where we are. Uh, and I'm looking forward to, uh, getting everything into servant and not having Happstack any longer.

>> Yeah, I'm looking forward to that as well. And it's worth pointing out that both persistent and servant are, uh, new libraries that we are moving to. So we haven't migrated everything over yet, but we're getting there, and we have bought in on both of these libraries. We tried out some of the alternatives, and we've done little spikes for these to make sure that they'll work for our case. We're doing the tedious work of actually moving every model and every endpoint over into these new things, and it's going well, but we're just not done yet. Um, and I wanted to mention one of the upsides of Happstack, which is something that we don't really, or didn't really, take advantage of. It's really cleverly designed, and everything happens inside this kind of server monad. So if you want to do routing, that's in the server monad; if you want to get something out of the query string or a header or the body, that's all in the monad. If you want to return something, or, uh, throw an exception if something is not found, it's all in the same monad. It's actually really clever how that's architected, and it lets you do some neat tricks to implement things. But for us, like you mentioned, it got really challenging to figure out: what does the route look like for this handler? Or what is the body supposed to look like? Um, which feel like they should be simple questions to answer, and they're kind of hard with Happstack because of that architecture.

>> Right. And especially, you know, we just kind of hit that point, yeah, of too many routes to quickly parse and digest what we needed to do and where you needed to go. Um, yeah. So we've talked about those two libraries quite a bit as far as what's new, uh, and I'd like to talk a little bit more. Um, we use hspec for testing. So, you know, I know there are a lot of different testing options out there; for us, hspec has worked the best. Uh, we kind of just started with it and went. Um, another thing we have created for ITProTV, ACI, is the prolude. Um, so that's our own custom prelude, and we're big fans of it. All those rote things we use all the time from these smaller base packages are, you know, nice and easy to use. Um, yeah. It makes things a little easier.
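The custom-prelude idea might look something like this minimal sketch; the module contents are illustrative, not the actual prolude.

```haskell
-- A tiny sketch of the custom-prelude idea: re-export what you
-- import everywhere and replace partial functions with total ones.
-- Names here are illustrative, not the actual prolude.
module Prolude
  ( module Data.Maybe
  , safeHead
  ) where

import Data.Maybe (catMaybes, fromMaybe, mapMaybe)

-- Total replacement for Prelude's partial 'head'.
safeHead :: [a] -> Maybe a
safeHead [] = Nothing
safeHead (x : _) = Just x
```

Application modules then import this one module instead of repeating the same handful of imports everywhere.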

>> Yeah. And I wanted to go back to hspec really quick. Uh, one of the reasons that we use hspec is that, uh, not only is the manner of writing the test cases really convenient (where, you know, it should do this thing, and then blah blah blah should be that; it's a nice, uh, DSL for writing tests), but the hspec-discover, kind of, um, extra program that it runs to discover all the tests is really nice. It lets us avoid writing all that boilerplate of, like: yeah, we wrote the test file, uh, but you actually have to rig it up into the test suite, and you can forget to do that, or if you do remember, it's still just tedious. So having that done automatically is super nice. Um, and Cam, as you mentioned, the prolude is our custom prelude. We resisted this for a while, and part of the motivation was that onboarding new people is probably going to be easier if you don't have a custom prelude, because everything is the same as quote-unquote normal. But we discovered that we were very, very often doing the same things over and over again. So we'd always import the same libraries the same way and use lots of the same functions from them. And we thought, okay, well, let's take a data-driven approach to this and analyze our code base to figure out what things we use the most, and let's push them into a custom prelude so that we don't have to import those things all the time. And this worked great for us. It was just an awesome way to develop a custom prelude. Um, and clearly there are some things that are convenient to have that you don't use all the time, so this doesn't get everything, but it gets a lot of things. Shout out to Sarah, one of the engineers on our team, for doing that whole process and developing this prolude. It's been a huge benefit to our team.
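For reference, the hspec DSL described above looks roughly like this; with hspec-discover, a file named like `FooSpec.hs` gets wired into the test suite automatically. This is a generic example, not an actual ACI test.

```haskell
-- A small, self-contained hspec example.
import Test.Hspec

main :: IO ()
main = hspec $
  describe "reverse" $ do
    it "reverses a list" $
      reverse [1, 2, 3 :: Int] `shouldBe` [3, 2, 1]
    it "is its own inverse" $
      reverse (reverse "hello") `shouldBe` "hello"
```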

>> Agree, agree, agree. Uh, yeah. So, uh, you had mentioned something earlier, Taylor, that we use AWS for infrastructure. Um, and the AWS library for Haskell that really has done the best and is the most well explored would be amazonka. Which, you know, they made choices that have maybe made it a little harder sometimes for new, you know, Haskell developers to figure out. But once you start to understand and see the conduits, and see how all of the information plays out, and all the lenses, amazonka turns out to be pretty nice. Um, 'cause it is so vast in the amount of services it supports

>> and covers. Oh yeah. It's a great library. Uh, and I'm pretty sure it's built from Amazon's own description of their API and their services and everything. So it's got everything; it's comprehensive. Um, and as you mentioned, it does have some complexity: it uses conduits and it uses lenses, both of which are more advanced concepts. But you notice we were talking about servant and persistent earlier, which also use advanced concepts, with type-level programming and Template Haskell quasi-quotes. Um, and that's been kind of the story of the development process, or the engineering process, for us. Uh, we started with really simple stuff, the types of things that we felt comfortable we could implement ourselves if we needed to. Obviously we didn't write our own libraries for everything, but, uh, we've been reaching out to more advanced libraries and techniques to push things into compile-time errors, at the expense of maybe they're a little harder to understand, and we feel less confident that we could implement that type of thing ourselves. Like, you know, I think given enough time we could probably write our own servant library, and there are great resources for doing exactly that, but it's not something everyone on the team could comfortably do.

>> Yeah. And that's something, you know, like you said, we tried to keep simple in the beginning, because, I mean, myself included, we were learning Haskell. Like, we didn't really have, you know, an understanding of the functional paradigm and the things going on. That's what we were wrestling with mostly then, and the more advanced things in Haskell were way beyond our reach at that point. And so I think as a team we've done a good job communicating with each other, saying: okay, hey, here's a limitation of this library, but here's another one that could work. You know, it's a little harder, there's a little more, uh, underneath the covers, but it's going to give you a little bit more safety and benefits, um, long-term. Um, so it's like a short-term complexity cost but a long-term benefit. And I think that's kind of the line we toe as an engineering team, because we do know we're gonna grow; we're gonna have new engineers come on board. And, you know, there are not a lot of Haskell developers out there looking for jobs, and so when we are looking for one, we want to make sure it's appealing. One way or the other, we like to try to, you know, be a middle ground for, um, just how we write Haskell and how we communicate and all those things. So, um, yeah, those are the reasons we've made some of these choices and how we kind of balance that.

>> Yeah. And I think a great example of that is not even switching libraries, but staying with the same library the whole time. Like every, you know, web programmer, we deal with JSON, and we use the aeson library to encode and decode JSON, which is kind of the bog-standard, everybody-uses-it library. But the manner in which we use it has changed over the years. Two to four years back, when we were writing more what you might call simple Haskell, we wrote all of the instances by hand. So we would define a data type, and then we would define the FromJSON instance and the ToJSON instance. More recently, we have moved away from doing that, and instead we use generic deriving to do those for us. So we don't have to write that code anymore; it gets written automatically, and we can be sure that the implementation of those functions matches the shape of the data type. Which is something that anyone who's written these instances knows: it's pretty easy to accidentally, you know, misname one of the keys or get something flipped around. Um, but another one of the motivations for us actually ties in with the different change we made switching to servant. One of the benefits of servant is that we get API documentation for free, and part of that is that the types in your API need to have schema instances. We could write those by hand, but then we have to keep not only the data type and the JSON instance in sync; both of those also have to be in sync with the schema instance. So that's a lot of code that all has to change, and you either have to catch it in code review or write tests for it or something. Um, the easy way out, and the way that we took, is to use generic deriving, and that way your schema instance matches your JSON instance, 'cause they both use the same mechanism for generating that code. And that's been really nice: a huge, uh, decrease in the amount of code that we write and a huge increase in the consistency of it.
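The generic-deriving approach described above looks roughly like this sketch; the Person type is hypothetical, and schema instances (via swagger2 or openapi3) follow the same Generic-based pattern.

```haskell
{-# LANGUAGE DeriveGeneric #-}

-- Sketch of deriving JSON instances generically instead of by hand;
-- the Person type is illustrative, not from the ACI codebase.
import Data.Aeson (FromJSON, ToJSON, decode, encode)
import GHC.Generics (Generic)

data Person = Person
  { name :: String
  , age  :: Int
  } deriving (Eq, Generic, Show)

-- No hand-written field lists to keep in sync with the data type:
-- the instances are generated from the record's shape.
instance ToJSON Person
instance FromJSON Person
```

Renaming a field changes the JSON (and any generically derived schema) automatically, so encoding, decoding, and documentation cannot drift apart.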

>> Yeah, it's been nice. Um, we definitely have had some tensions here and there with it relating to build times and things. Um, and Taylor knows; he's been working on some stuff to see what we can do about that. But, you know, for the most part the benefits are there. And, you know, yes, maybe it's a little bit more hand-wavy, but you have that comfort that it's all the same; it's all doing the same stuff behind the scenes. And when you're dealing with really any code base, you want to make sure that at least there's consistency. That's another thing within our team: we're trying to continue to work towards consistency in naming, you know, imports, um, you know, our style guide, you know? So there are some things that, um, you know, you want to make sure are consistent, and generic deriving gives you that. And I'm really grateful for it, because it's saved me a ton of boilerplate time too. Um, because we were starting to hit more hiccups with our JSON instances and all those things: oh, we encode it one way but decode it another, and then that's problematic and you're really creating more work for yourself. So, um, you know, yeah. Anyways, uh, kind of in the middle here of, um, you know, all of our services: we've really been talking about APIs at this point, uh, and we have four other services that kind of do four different things. And so another library that we lean into is Witch, which we talked about last week. So if you missed last week's podcast, go check it out. Uh, that was actually written by Taylor. It really allows us to switch between, uh, types a lot easier and a lot more effectively. Um, and it's a little more easy on the eyes as well. Um, so yeah,

>> Yeah. Uh, and that touches on what you had mentioned earlier of consistency. So Witch, as we talked about last episode, gives a consistent interface to switch between, or convert between, types. Um, and we've been talking a lot about libraries, but that consistency goes to other things as well. For instance, the formatting of our code base. This is contentious, not just in Haskell but in every programming language: how should things be formatted? Should it be up to the individual developer? Should you try to match the style of the file that you're editing? Uh, should there be kind of a house style that everyone follows, or should a tool enforce the style? We've chosen to have a tool enforce the style. So we use Brittany to format all of our source code, and, uh, this took some, uh, you know, discussion among the team to land on a configuration that we were comfortable with. And then we had to have this big-bang PR: everything gets formatted with Brittany, and we switched everything over. But I think the end result has been really nice, and we actually test it in our CI environment as well. So we know that when code gets merged into the main branch, it is all formatted the same way. And it may not be anyone's preferred format, but everyone is comfortable with it, and you don't have to think about formatting anymore; you just run it through Brittany and you're done. So that's been really nice. You know, for me, it makes it a lot easier to review code, because everything has the same visual style, and I know what expressions should look like, what types should look like. So it's a lot easier to skim over.
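The Witch-style conversion interface mentioned above can be imitated with a small type class. This is a simplified sketch of the idea, not the real library's API (Witch also covers failable conversions), and the UserId type is hypothetical.

```haskell
{-# LANGUAGE FlexibleInstances, MultiParamTypeClasses #-}

-- A simplified imitation of a consistent conversion interface,
-- in the spirit of Witch; not the real library's API.
class From source target where
  from :: source -> target

newtype UserId = UserId Int
  deriving (Eq, Show)

instance From UserId Int where
  from (UserId n) = n

instance From Int UserId where
  from = UserId
```

One consistent name, `from`, covers every conversion, instead of a zoo of ad hoc `toX`/`fromY` helpers scattered across the codebase.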

>> Right, it doesn't disorient you. Um, yeah. Another tool that we use, um, for our code base, for kind of creating some sort of rules around function choices or line width (well, I guess Brittany really handles line width), but, uh, you know, restricting functions or restricting certain, um, language extensions: for those kinds of things we use hlint. Um, or "Clint," if you read it that way. That allows us to have, you know, our own configuration to say: yes, we're okay with this, but we're not okay with this, and if you see something like this, change it out for this. Um, which, I know a lot of this is subjective, but it does give you that consistency across the code base. So you're not, in one file, using this function, which is really the same as what's being done in another file, but written differently. Kind of creating that consistency with hlint has really helped, um, our development process as well.

>> Yeah. And this also helps with onboarding, because there are some things that you can rule out. Uh, for instance, since we have our own custom prelude, we can exclude functions from the prelude that we don't like (you know, like head, which is partial), and instead we want to either pattern match or use something like, um, a safe head, or one of those various functions. Um, or for functions that, uh, are in our prelude but are, you know, questionable, maybe we can have an hlint rule that just says: this isn't preferred; here's another way to do it. But if you know what you're doing, you can disable the hlint rule there and be on your merry way. So it's a good way of teaching more junior developers, or people we're onboarding, about, uh, the style that we like to write in. And by and large we use the same community hlint rules as everyone else, so it's the community style.
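Rules like the ones described above live in an `.hlint.yaml` file. This fragment is illustrative, not the actual ACI configuration.

```yaml
# Illustrative .hlint.yaml rules, not the actual ACI configuration.

# Restrict a partial function so any use of it is flagged.
- functions:
  - name: Prelude.head
    within: []  # allowed nowhere

# Rewrite suggestion: prefer the strict fold.
- warn: {lhs: foldl, rhs: "Data.List.foldl'"}
```

A developer who genuinely needs an exception can disable a rule locally with an `{- HLINT ignore -}` annotation, which keeps the default path safe while leaving an escape hatch.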

>> Yeah. You know, I know we, um, we have a rule in there for the, uh, unutterable IO,

>> accursedUnutterablePerformIO, that one, and unsafePerformIO, yeah. That was,

>> Can't use those. But, uh, you know, maybe next week I'll ask Taylor to let us. And, uh, yeah, so, I mean, we're just kind of starting on the tools, and we'll, you know, get quick in this next little bit, because we're going to talk about our day-to-day development tools, our editors, and kind of what our local environments look like. Um, so we use Stack for the majority of our services; one of them uses cabal, um, to build and manage, um, packages and all that stuff. So that's kind of, you know... I know there are people who are on one side versus the other, but we chose to use Stack, and it's worked well for us. But we're not opposed to cabal, so we have a cabal package. Uh, and then we have ghcid, um, for development, as well as HLS or Purple Yolk. So those three are, uh, used in different ways amongst the team. I'm usually a ghcid kind of guy; I run it. Um, I have used Purple Yolk before, but HLS has always been giving me issues, so I kind of shy away from that.

>> Yeah. And Purple Yolk is not super well-known; it's something that I wrote. It's just a VS Code extension that basically works the same as ghcid, where it fires up ghci in the background, and when you save a file it reloads ghci and then shows you the warnings and errors in your editor. And I wrote it because HLS is a fantastic piece of software, and it's amazing, and it's way more powerful than Purple Yolk, but it's also, um, the... I dunno, rickety maybe is the best word I can use to describe it, where when it works it is amazing, but it doesn't always work. And that was frustrating to me. I prefer, uh, more stupid tools that are more reliable. So that's why I built Purple Yolk. But we also use ghcid, because it's kind of the quintessential stupid tool that just works. So, um, yeah, we have a lot of options there for our quick feedback loop in development.

>> Yeah, I like ghcid 'cause it's KISS: keep it simple, stupid. Or keep it stupid simple; I guess either one works. Yeah. And like you said, VS Code is, you know, generally the editor that everybody uses. Um, we do have an Emacs guy. We love him to death; he's awesome. And he's also the Nix guy as well. So, you know, he's trying to make it work, and it's totally, totally admirable. So, um, he's our Emacs-Nix user. And then for local development, we use Docker and Docker Compose to lift up our services and create local networks so they can communicate with each other and all that stuff. Um, that way any machine can run it; it's not machine dependent. It's all kind of built into the Docker images, and those are what get run. So that is kind of the rundown on our tooling. I'm sorry, I kind of just whizzed by it. But if you guys have any more comments or questions, or you want us to dive deeper, obviously, you know, you know where to find us. Just comment, and we will take it into account and look at, uh, maybe expounding upon it.
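The Docker Compose setup described above might look something like this sketch; the service names and images are hypothetical, not the actual ACI configuration.

```yaml
# Illustrative docker-compose.yml: two services on a shared default
# network, so "nucleus" can reach the database at hostname "db".
services:
  nucleus:
    build: ./nucleus
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: local-only
```

With a file like this, `docker compose up` brings up the whole local environment on any machine, which is what makes the setup machine independent.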

>> Yeah. And I just wanted to make a quick mention about Docker. Um, we use it both for local development and for our production deployments, which has been really nice, because at any point you can grab the Docker image that's actually running in production and run it locally and see, you know, how exactly it behaves. Um, but also for local development, it's been really nice to reduce (not entirely eliminate, but almost entirely eliminate) the works-on-my-machine syndrome, where one developer has something that works great, and then they push it up and the other developer pulls it down and it doesn't work. Um, we have very, very few of those problems, and it's mostly due to Docker. Um, not to say that it's the only solution to that problem, but it has more or less solved that problem for us.

>> Yup, for sure. Cool. So we talked about tooling. I did want to jump in real quick to code layout: how do we structure our code, and what are some of the choices we made along the way to be where we are today? Um, so Taylor, do you want to start us off?

>> Sure. Uh, the way that we like to lay out code today is to more or less have one module per type. And it's a little tedious, because there's a certain amount of overhead involved with making a new module. But the benefit is that when you want to use that type, you can import that module, and you can import it qualified, which we normally do. And then you can use really short identifiers in that module. So you don't have to make names that are globally unique, or even unique among a bunch of stuff; they just have to be unique within that module, which is usually pretty easy to do. Um, and we actually arrived at this because, in our list of services up at the top, we had Nucleus, and, uh, we used to have another service called Metrics, and then they got pushed together into one service called Ion. Um, and we had a catch-all module that was the glue between those two, all the stuff that was common. And that meant that it had a lot of stuff in it, and we had to make the names unique within that module, and that was painful. Um, and that's part of the reason... I guess we didn't say this, but our main code base is called Smurf. And the reason we called it that is that we ended up with a lot of repeated names, where you'd have something like person dot personName of a person, and we're like: okay, this is getting ridiculous now; we need to do something about it. Um, so that's why we're trying to move toward, and are moving toward, one module per type. Um, yeah, that's for our types and stuff. But Cam, do you want to talk about how our API is set up?
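The one-module-per-type layout described above can be sketched like this (two files shown together; module and field names are hypothetical).

```haskell
-- File 1: Smurf/Person.hs
module Smurf.Person where

data Person = Person
  { name :: String  -- short names: no "person" prefix needed here
  , age  :: Int
  }

-- File 2: Smurf/Greeting.hs
-- module Smurf.Greeting where
--
-- import qualified Smurf.Person as Person
--
-- greet :: Person.Person -> String
-- greet person = "Hello, " <> Person.name person
```

The qualified import means the short field name `name` reads as `Person.name` at every use site, avoiding both name clashes and the Smurf-style `personNameOfPerson` repetition.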

>> Uh, yeah. So, um, like Taylor said, we have one module per type. And we have separated our types, our queries, and our API, uh, handlers into separate files, in a different structure. That way, uh, you know, we're not trying to import something from one API into another API, or anything along those lines. And we also pull out, uh, you know, more common actions into separate files as well. So, you know, say you want to create a course, or, well, a person: we'd have the create-person action, and then we'd have a run for it, and that would give us a new person. And so, um, some people would say that's not the way making people works, but, you know, we're not going to get into that today. Um, it just, uh, came to my head. But then we also have, uh, each handler in its own file. So with our servant handlers, you know, the route and the handler are in there, so it's a little easier to comprehend what's going on. And then in our Nucleus and Ion, uh, handlers, it's just kind of the handler plus its supporting functionality. And, you know, I have a dog who likes to bark in the middle of a podcast. So, um, we'll see if we can edit that out. If not, that was my dog, Ruth.

>> Yep. Hi Ruth. Um, but yeah, one thing I wanted to go over there again was that you mentioned actions in separate files for stuff like creating users. Uh, this is a pattern known as a command object in other languages. And in fact, at a previous job, I worked in Ruby, and I worked with a coworker of mine, Aaron, to develop a library called ActiveInteraction. So I pushed for this in our code base here because I was really familiar with the concept. And it's surprisingly powerful: let's take this thing that we want to do that's complicated, pull it out into its own file, and give it well-defined inputs and outputs. Which, granted, you get for free in Haskell, that's the type system. But pull it out over there so that if you want to call it from the API, or you want to call it from a script, or you want to call it from a job, they can all just use that one thing, and you don't have to worry, like, oh crap, when you make a user in the API it does it this way, but when you call the script it does it this other way. And then those things invariably get out of sync with each other.
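A minimal sketch of the action (command object) pattern in Haskell might look like the following. All of the names here are hypothetical, and the `run` function is pure for the sake of the sketch; in a real code base it would live in `IO` (or an application monad) and talk to the database:

```haskell
-- Hypothetical "action" module: one action per file, with explicit
-- inputs and outputs, so the API, scripts, and jobs share one code path.

-- The action's input: everything needed to create a person.
data CreatePerson = CreatePerson
  { newName :: String
  , newAge  :: Int
  } deriving (Eq, Show)

-- The action's output. (In the real layout this type would live in
-- its own Person module and be imported qualified.)
data Person = Person
  { personId :: Int
  , name     :: String
  , age      :: Int
  } deriving (Eq, Show)

-- run does the actual work. Taking the generated id as an argument
-- stands in for the database assigning one.
run :: Int -> CreatePerson -> Person
run nextId input = Person
  { personId = nextId
  , name     = newName input
  , age      = newAge input
  }
```

Because the inputs and outputs are ordinary types, an API handler, a one-off script, and a scheduled job can all call the same `run` and cannot drift apart.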

>> Oh, yeah. Yeah. We had that issue, especially when we were starting to move, um, kind of legacy endpoints from ion and nucleus into Urza. We would have two definitions of these things in the meantime, and then change one and not the other and have issues. So, uh, moving toward that action file kind of isolated, you know, more or less siloed, what was happening, rather than trying to do anything, you know, the wild wild west. Um, and you did kind of touch on the scripts. Um, the business has various needs: reports and metrics to return, or, you know, things to merge, or this or that. So we have a whole directory of one-off scripts that we can run whenever a person needs it, or we'll create a new one if we don't have it. Uh, and those also live in Smurf. It's just a thing we can run on our local machines, connect to the database, and we're good to go.

>> Yeah. And a quick note there about kind of the life cycle: often we'll get a request for a new report type and we'll write a script for it, because that's the easiest thing for us. And then if we get another request for a report of that same type, we'll start to think, okay, maybe we need to put this into a job, something that can run, you know, every week, every month, something like that. Or we need to expose it to our internal, you know, staff users, so that they can just click a button and get that report rather than having to ask us for it. So often things that start off as scripts will kind of graduate into becoming an action that then is exposed through a job or a web UI.

>> Yup. Yup. And you know, that's kinda what quantum handles; it's our job manager, using odd-jobs, like we said, right. And so, you know, even this week we had a script that was running for about 30 minutes; like, that was the runtime for it. And so we kind of evaluated it. When things start to go like that, we say, okay, what can we do differently? And we take a look at the code and we kind of dissect it and see what's going on. Well, we saw we had something repetitive: we were making database calls within a loop. Rather than just trying to get all the information at once, we were trying to do it for each company we had, and we've got a lot of companies. So that was, you know, pegging the CPU usage on our database and, um, taking forever to run. So, um, we pulled that out, fixed that up, got it down to about 42 seconds, and turned it into a job, because it is a monthly report that is asked for a lot. And 30 minutes a month can be expensive, especially because if you're trying to do something else and you forget to ever send it to the people who need it, there are a lot of things there that can kind of create these hiccups. So that is kind of what even bred the idea of quantum and our job queue: this kind of repetitive, rote thing that was always happening that doesn't really need engineering, and we can kind of push it off and automate parts of our job, which has been a huge, huge win.
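The shape of that fix is the classic N+1 query problem: one query per company inside a loop, instead of one query up front. Here's a hedged, self-contained sketch of the idea, with an in-memory list standing in for the single bulk database query; the names and numbers are invented for illustration:

```haskell
import qualified Data.Map.Strict as Map

type CompanyId = Int

-- Stand-in for "one query that fetches all usage rows at once",
-- instead of issuing one query per company inside a loop.
fetchAllUsage :: [(CompanyId, Int)]
fetchAllUsage = [(1, 10), (2, 20), (1, 5), (3, 7)]

-- Build an in-memory index once; fromListWith (+) sums the rows
-- per company as it builds the map.
usageByCompany :: Map.Map CompanyId Int
usageByCompany = Map.fromListWith (+) fetchAllUsage

-- Each "per-company" step in the report loop is now a cheap map
-- lookup instead of a round trip to the database.
reportFor :: CompanyId -> Int
reportFor cid = Map.findWithDefault 0 cid usageByCompany
```

Swapping N round trips for one bulk fetch plus an in-memory join is the kind of change that can turn a 30-minute script into a 42-second job.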

>> Yeah, absolutely. Well, Cam, as you mentioned earlier, we're always happy to get, uh, questions or comments from our listeners. It sounds like we've been talking for a long time about a lot of different things, but really we've just been scratching the surface here. If there's any part of this that you want to hear more about, or if there's something that we didn't mention that you're curious about, please let us know. We're on Twitter; it's probably the easiest way to get ahold of us. Just go to Twitter.com/HaskellWeekly. Um, but I'm pretty sure those were all the things that I had to cover, at least for today. Cam, was there anything else that you wanted to go over?

>> I mean, like you said, we're just scratching the surface. So I think this is a good start, and, uh, we'd love to hear your feedback, and we can definitely expound upon things as you'd like. So just really appreciate you guys joining us.

>> Yeah, absolutely. So, thanks for listening to the Haskell weekly podcast. I have been your host Taylor Fausak. And with me today was Cameron Gera. If you want to find out more about us, like I said, we're on Twitter or you can go to our website, which is HaskellWeekly.News.

>> And we're brought to you by our employer, ITProTV, an ACI Learning company. They would like to offer you 30% off the lifetime of your subscription by using the promo code HaskellWeekly30 at checkout. All you gotta do is go to ITPro.TV, and you'll see how you can sign up there. Um, and that HaskellWeekly30 promo code will get you 30% off. So I think that about does it for us. Thanks again for joining us on the Haskell Weekly podcast, and we'll see you next week.