Podcast
Chat Wisely
You can also follow our feed. Listen to more episodes in the archives.
Special guests Michael Litchard and Brian Hurt talk with us about their new social networking site Chat Wisely. We hear about their experience using Haskell not only on the backend but also on the frontend through GHCJS.
Episode 40 was published on 2021-03-17.
Links
Transcript
[Automatically transcribed. Not edited yet. Contains many errors.]
>> Hello
>> and welcome to the Haskell Weekly podcast. This is a show about Haskell, a purely functional programming language. I'm your host, Cameron Gera, an engineer here at ITProTV. With me today is Taylor Fausak, one of the engineers on my team. Thanks for joining me today, Taylor.
>> Yeah, I'm happy to be here, Cam. Also excited because we have not one but two special guests with us today. We've got Michael Litchard and Brian Hurt, both from Chat Wisely. Thanks for joining us today, Michael and Brian. Glad to have you here.
>> thanks for having us.
>> So, Michael, you are the CEO of Chat Wisely. Could you get us acquainted? What is Chat Wisely? And I guess, first, who are you? Tell us about yourself.
>> Oh, sure. I am a software engineer who met Brian when we were working at a different company that was also using Haskell. That's where we got the idea for Chat Wisely, or rather where I became enamored with Brian's idea for Chat Wisely, which is a member-supported mini-blog social network that's currently in open beta.
>> Okay. A mini-blog social network. Is that something like Twitter or Mastodon, that type of thing?
>> Oh, yeah. Well, it's something like Twitter, except one of the main differences is how we handle the post chain. A particular problem we see with Twitter is being able to follow and manage extended conversations. So the idea we came up with was to be able to open up a larger post hidden behind a smaller post that looks like a Twitter tweet.
>> And the other big thing, sorry, the other big thing, and probably even more importantly, is how we're actually going to fund it. Our plan is, rather than depending upon advertising revenue, we're just going to charge a small amount, a buck a month, basically nothing. But that's enough that it changes all of the incentives that the company has. You tell me how the company makes money, and I will tell you how the company will behave, right? And so this has been something I've been sort of muttering in my cups about for a couple of years now: that the problem with Facebook and Twitter and all of the spying and the trolls and everything has been that their business model is wrong. Fix the business model and the behavior of the company fixes itself. And so I met Michael, and I was muttering in my cups one day, and he was like, sounds like a great idea. Let's do
>> it. And that sounds like a great idea to me as well. So, Brian, could you give us a brief introduction to the technical side of things as well?
>> Okay, I'm Brian Hurt. I'm the CTO of Chat Wisely because I won the coin flip. And yeah, Chat Wisely is built in Haskell. We're using Haskell on the back end: a lot of Servant, some Yesod, with more Yesod coming. Postgres is the database, GHCJS and Reflex are the front end, and we are loving Haskell.
>> I should hope so.
>> Yeah. Would you guys kind of expand upon your experience with GHCJS and why you chose it?
>> So, you know, the question is always what's the right tool for the job, and engineering is always about trade-offs. What do you gain? What do you lose? There is no such thing as a perfect solution. And so if you're going to do a more static website, I would probably recommend Yesod as the website generator. I really like it for static server-side rendering. But if you need the dynamism on the front end, a complex, dynamically changing web app, then GHCJS and Reflex are good solutions.
>> And I mean, there are solid business reasons to choose GHCJS and Reflex, as opposed to, say, Node.js, when you're two guys and your labor budget is zero, and you need to iterate quickly, and you're building something that you don't know how to build. We started not knowing how to build a social network. It was the type system of Haskell that allowed us to get stuff out quickly and figure out where the happy path was, cheaply. We don't spend a lot of time on unit tests, but rather on getting the product out. And this is where Haskell really shines from a business perspective.
>> Yeah, I would go further: if you're not using Haskell on the back end, at least, you are almost certainly messing up. Haskell on the back end is such a strong proposition. The advantage of doing Haskell on the front end with GHCJS, the biggest advantage, is that you can share code between the front end and the back end. And so what happens, and I've seen this at other places I've worked, is that if you're using Haskell on both front and back end, you take a feature and you drive it home soup to nuts, right? You do the back end, you do the front end, you do the database, you do everything, and then you go, here, it's done. In other environments I've worked in, when you have different languages on the front end versus the back end, it works out that there will be front-end people and there will be back-end people. And what will happen is the back-end people will implement stuff for this feature, and the front-end people will go, well, the back end isn't here yet, we can't start developing until the back end is there, and then they'll forget about it. And I'm not kidding, I actually have experience of, five years later, management going, whatever happened to this feature that we did work on, like, five years ago? The back-end people are like, it's there, it works, and the front-end people are going, what feature? And I've seen it go the other way too. This is just how humans work: once you start segregating into two groups, it becomes the Sharks and the Jets, whether you wanted it to or not. And so just being able to go, okay, you are in charge of this feature, soup to nuts, go do it, means it gets done, and you don't have all of this stuff hanging fire and handoff issues and so on. And there's a huge advantage: one of the biggest pieces of code that gets shared between the front end and the back end is the types of the data you're sending back and forth. So it's very nice to just go, we have a common schema directory with everything that's shared between the front end and the back end, and the schema is just all of those data types that both the front end and the back end have. And you go, okay, here's the type, here's the ToJSON, here's the FromJSON, and you do the obvious QuickCheck test: if we encode it to JSON and decode it, we should get the same thing back out again, right? And now you don't have a communications problem of the back end sending one thing and the front end expecting something else.
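As an illustration of the shared-schema idea Brian describes, here is a minimal sketch of what one of those common data types and its round-trip test might look like. The module name, the Post type, and its fields are hypothetical, not Chat Wisely's actual schema; the sketch just shows the ToJSON/FromJSON pairing and the obvious QuickCheck property.

```haskell
{-# LANGUAGE DeriveGeneric #-}

-- A hypothetical shared schema module, compiled by both GHC (back end)
-- and GHCJS (front end).
module Schema.Post where

import Data.Aeson (FromJSON, ToJSON, decode, encode)
import Data.Text (Text)
import GHC.Generics (Generic)
import Test.QuickCheck (Arbitrary (..), quickCheck)
import Test.QuickCheck.Instances.Text ()

data Post = Post
  { postAuthor :: Text
  , postBody   :: Text
  } deriving (Eq, Show, Generic)

instance ToJSON Post
instance FromJSON Post

instance Arbitrary Post where
  arbitrary = Post <$> arbitrary <*> arbitrary

-- The "obvious QuickCheck test": encoding then decoding gives back the
-- original value, so front end and back end agree on the wire format.
prop_roundTrip :: Post -> Bool
prop_roundTrip post = decode (encode post) == Just post

main :: IO ()
main = quickCheck prop_roundTrip
```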
>> Exactly right. So we are currently in that situation you were describing, where Cam and I both primarily work on the back end and we are using Haskell, so we're doing something right there, and our API is in Servant, so we have a well-typed description of it. And on the front end, we're using Elm rather than GHCJS, but the principle is the same: we're using a strongly typed language, so we have to decode it. And yeah, we have walked into that situation a couple of times where the back-end team has developed something and then the front-end team doesn't know that it's finished, they think they're still waiting on it, or vice versa. And it would be nice to be able to go end to end with one language. But I am curious, is there a particular reason you all chose GHCJS versus the many languages in this space, like Elm or PureScript or Haste or Fay or whatever?
>> A lot of it was just straight-up familiarity. Both of us had worked with GHCJS and Reflex before. I will admit to a little bit of PureScript envy, but there are trade-offs here, right? It's always, what do you gain? What do you lose? I had
>> some experience with PureScript, and my only problem with PureScript, besides the fact that you cannot share data types, is that it's just like Haskell until it isn't, and getting tripped up over that, there's just too much friction. And what you get in exchange didn't seem worth it. And not knowing how to build what we were going to build, I found that Reflex would give us the flexibility to change course if we needed to. And we have needed to on a number of occasions.
>> Or the re-flexibility, you might say.
>> Yeah. I mean, for me, the idea of shared data types between the front and the back is just awesome, because we've had that kind of issue where, with generically derived instances, we changed the name of a record accessor and, oh, the front end is broken, or the mobile app was broken for two months and we had no idea. Those kinds of things. Obviously there are test cases for that kind of stuff, and we have learned from our mistakes. But it's that thing where if we could just have the same data types on the front and back end, we wouldn't have to worry about encoding and decoding changes breaking production sites.
>> Yeah, and I'm curious for y'all. Maybe Chat Wisely hasn't been around long enough to have run into this problem, but maybe you're familiar with it. Even with code sharing, do you still worry about whether there's an older version of the client out there that expects the old schema? We updated it, and it's new on the back end and new on the front end, but they haven't refreshed the page yet.
>> Brian came up with a great idea very early on to deal with this.
>> Let's hear it.
>> Okay, so the basic idea is you need to have versioned schemas. One of the many, many nice things about Servant is you sit there and you go, this endpoint has this function, here is its handler, so you can have multiple different versions: a v1 schema, a v2 schema, a v3 schema. And if that endpoint hasn't changed, you just use the same function in all three places. So our plan is that you just support multiple versions of the schema at the same time. These endpoints are different: here is the old endpoint that supports the old schema, and here's the new endpoint that supports the new schema. And then you can still have old clients using the old API, but you can still move forward.
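A minimal sketch of what that versioning pattern could look like in Servant; the User types, routes, and handlers here are hypothetical, not Chat Wisely's actual API. The point is that the unchanged handler is reused under both version prefixes while only the changed endpoint gets a new handler.

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE TypeOperators #-}

module VersionedApi where

import Data.Aeson (ToJSON)
import Data.Text (Text)
import GHC.Generics (Generic)
import Network.Wai (Application)
import Servant

-- Hypothetical payloads: v2 adds a field, v1 stays as it was.
newtype UserV1 = UserV1 { userName :: Text } deriving (Generic)
data UserV2 = UserV2 { fullName :: Text, bio :: Text } deriving (Generic)
instance ToJSON UserV1
instance ToJSON UserV2

-- Both schema versions are served at the same time.
type Api =
       "v1" :> ("user" :> Get '[JSON] UserV1 :<|> "ping" :> Get '[JSON] Text)
  :<|> "v2" :> ("user" :> Get '[JSON] UserV2 :<|> "ping" :> Get '[JSON] Text)

server :: Server Api
server =
       (getUserV1 :<|> ping)   -- old clients keep working against v1
  :<|> (getUserV2 :<|> ping)   -- the unchanged "ping" handler is shared as-is
  where
    getUserV1 = pure (UserV1 "brian")
    getUserV2 = pure (UserV2 "Brian Hurt" "CTO, won the coin flip")
    ping = pure "pong"

app :: Application
app = serve (Proxy :: Proxy Api) server
```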
>> Yeah, that's very slick.
>> And we're doing something similar to that on the database, where you have sort of the same problem: you want to be changing the database schema. And don't get me started on schemaless, we only have an hour here, just walk quietly past and let's move on. You want to change the database schema, but you've got some servers that are using the old schema and some servers that are using the new schema. And so the solution is, we're using Liquibase, and there's a specific table that Liquibase updates to say, these are the schema changes that are in place. And the nice thing about Postgres is that schema changes are atomic, they're transactional. So either you're on the old schema or you're on the new schema, and you don't have to worry about what if I'm halfway in between. You can just go, has this change been applied yet? So stage one, you write your server so that it supports both the old schema and the new schema, and when it hits the database, it starts a transaction, goes, okay, which schema do I have, and does the right thing. Then you do a blue-green deployment to update everything, so now everything supports both the old schema and the new schema. You do the database transaction to implement the new schema, and once that commits, all of the web servers switch over to using the new schema. And then, when it's convenient, you delete the support for the old schema, allowing you to simplify your code. The next time you do a blue-green deployment, you don't have to do it immediately, but as you bring up new servers anyway, you bring them up so that they only support the new schema.
>> I love it. Seamless.
>> Yep. And so this allows you to be updating the database even while you're live and serving traffic.
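To make that stage-one pattern concrete, here is a small sketch of the kind of check a handler might do inside a transaction. It assumes postgresql-simple and Liquibase's changelog table; the changeset id, table names, and queries are illustrative guesses, not Chat Wisely's actual migration code.

```haskell
{-# LANGUAGE OverloadedStrings #-}

module SchemaCheck where

import Database.PostgreSQL.Simple
  (Connection, Only (..), query, query_, withTransaction)

-- Has a particular Liquibase changeset been applied yet? Liquibase records
-- applied changesets in its databasechangelog table. The changeset id here
-- is hypothetical.
changeApplied :: Connection -> IO Bool
changeApplied conn = do
  [Only n] <- query conn
    "SELECT count(*) FROM databasechangelog WHERE id = ?"
    (Only ("add-post-visibility" :: String))
  pure (n > (0 :: Int))

-- Stage one: the server supports both schemas and decides inside a
-- transaction, so it sees either the old schema or the new one, never
-- something halfway in between.
fetchPostBodies :: Connection -> IO [String]
fetchPostBodies conn = withTransaction conn $ do
  newSchema <- changeApplied conn
  rows <- if newSchema
    then query_ conn "SELECT body FROM posts WHERE visibility = 'public'"
    else query_ conn "SELECT body FROM posts"
  pure (map fromOnly rows)
```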
>> Mhm. I'm curious, it sounds like you're doing either something very similar or the same on the front end. Is there a cutoff point for you on the front end, like, we will support clients that are up to a week old or a month old or whatever?
>> We haven't made that decision yet. There are a couple of different solutions to that, ranging from nice to obnoxious.
>> I've seen some single-page apps where they'll have a little pop-up that says, hey, there's been an update, please refresh. And I assume at a certain point they get a little more strongly worded than
>> that. You can start doing stunts like, okay, everybody using the old schema now has an additional one-second delay added to all of their REST calls. Oh, you have a performance problem? Have you tried updating? Oh, it's all fast now. Okay, problem solved.
>> Yeah,
>> that's like the iPhone world where they slow down the old phone so you get a new phone.
>> the planned obsolescence approach to client upgrades.
>> Yeah. Not saying we're going to do that. Actually, we haven't gotten to the point where we need to do this stuff, but my pieces are in position, and I'm ready. I don't know how we're going to solve that problem yet. Gonna
>> put them in checkmate.
>> Huh, I like it. So, GHCJS obviously works great for the web. Do you have a story for, have you thought about, native clients? And are you going to use Haskell there as well?
>> Okay, the short answer: yes, we have a story, and yes, we're going to use Haskell there, at least at the beginning. So one of the other nice things about Reflex, well, GHCJS in general, is you can flip over and compile to native iOS and native Android, just using a built-in browser, using the native WebView stuff.
>> Okay, so, like a WebView or what?
>> Exactly. And so you can just take your Reflex code and compile it to iOS native or Android native and go, bang, here's your web app. That gets you started. That isn't a long-term solution, right? If Chat Wisely takes off and we start having millions of users and money to actually spend, we will almost certainly evolve true native front ends written in Swift or Kotlin or what have you. By that time, possibly GHC, right? It might be worthwhile to just spend the time to do the bindings to the native calls, to the native libraries, and just write the code in GHC.
>> Exactly. I was going to say that with the recent changes to Apple silicon, using ARM as their platform of choice, I think GHC 9.2 is going to have support for that. And I know it's not the exact same, but that should perhaps ease the transition into iOS native and maybe Android native.
>> Yeah, but we can get a long way just going, here is the web-native implementation. Oh, yeah. I mean, Slack did that for years. Yeah, exactly.
>> Slack got away with that for many years, which is what I needed to be assuaged. I was a little nervous about this, but then we saw the Slack business case, and they did it for years. I think we'll be fine. You
>> know, if Slack can get away with it, why not? So yeah, I'm curious. GHCJS seems like it has been going gangbusters, and you've got plans for future improvements. Is there anything about it that you haven't liked or that hasn't been great?
>> I will say there are some performance problems, which is what's causing a little bit of the PureScript envy there. Although, from what you guys are saying, I would take the fewer problems and the slower code. I am a firm believer that I would rather have slow but correct code than fast but wrong code. Yeah, I think it's
>> a little easier to speed things up than it is to make them more correct, usually, painting with some broad strokes here. But for the performance problems, are you talking about bundle size or actual runtime performance?
>> Bundle size is a problem, and actual runtime too. So the problem, as I understand it, and I'm not an expert, so don't take this with so much a grain of salt as a five-pound block of salt, is that the garbage collector in Haskell will actually force thunks. The way it's been explained to me, if you have a thunk that produces a tuple, and then another thunk that just calls fst on the tuple, the garbage collector will notice that the tuple thunk has been forced and will automatically force the fst thunk, and that will oftentimes free up a lot of memory. This is a huge performance improvement in Haskell. The story I got told was that recompiling GHC with this trick in the garbage collector takes, like, 10 minutes or 20 minutes or whatever. Without this trick, they let it run for 24 hours and it hadn't completed yet before they killed it. It's that sort of performance difference.
>> Couple orders of magnitude.
>> And so again, large chunk of salt here, but this means two things. One thing is the JavaScript size is much larger than it needs to be, because you're converting to JavaScript from the wrong point. You end up converting down at, I think, the C-- level rather than at the Core level. I think it would be much smaller and actually much nicer JavaScript if we could go straight from Core to JavaScript. And the other thing is that now you have to have a garbage collector in your garbage-collected language, right? Which is just never a win. You
>> got a runtime on top of your runtime.
>> Yeah. I mean, at the moment, I think it's good enough, but I actually have the dream of one day having enough money to actually hire people to fix this. And, this is me dreaming the shining-city-in-the-distance sort of dream, I actually think Haskell compiling to JavaScript could be faster than native JavaScript frameworks like React. Here's my reasoning, and this isn't true at the moment, but it could be. With React, how things work is, any time anything changes, you re-render the entire page. But you re-render it into the virtual DOM, not real DOM objects, because constructing real DOM objects is expensive. And then you diff this tree, the virtual DOM tree, against the DOM that actually exists, and then you apply whatever changes actually changed, right? But this means you're doing an awful lot of work. You're generating an awful lot of objects, and you're having to do a large diff to go, okay, this button changed from red to green. What happens in Reflex and GHCJS is you just hit the DOM directly. You don't recreate the DOM, you just go, this DOM node now has style color green instead of style color red, bang. And this is a much faster update. Even if your performance on average is slower, you're doing so much less work that it could still be a win, right? I want to reiterate, this is not how things actually exist at the moment. So to everybody going, I tested it and it works, I'm going, yeah, this is how
>> things could be. This is
>> how it could be. I think Elm is perhaps proof positive that that approach can work. As far as I understand it, Elm can perform better than, say, React because it has immutable data structures. It doesn't have laziness; however, their HTML generator can be written in a lazy way. They have a strict one and they have a lazy one, and they also have a keyed version, which I think React has as well. But yeah, it can have that same performance characteristic where it doesn't need to realize the entire virtual DOM in order to notice that this one node changed. It can do much less work, and because of that and many other reasons, I think it can be faster. Well, that's exciting. So I see you've taken the downside of GHCJS and turned it into a potential future upside.
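For readers who haven't seen Reflex, here is a minimal sketch of the direct-DOM-update style Brian describes, where a style attribute is driven by a Dynamic and only that attribute changes when an event fires. It assumes reflex-dom; the widget itself is a made-up example, not Chat Wisely code.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import qualified Data.Map as Map
import Reflex.Dom

main :: IO ()
main = mainWidget $ do
  -- A button whose clicks drive the color of the element below.
  clickEvt <- button "Toggle color"
  isGreen  <- toggle False clickEvt
  -- elDynAttr patches the existing DOM node's attributes whenever the
  -- Dynamic changes; nothing is re-rendered or diffed wholesale.
  let attrs = fmap
        (\green -> Map.singleton "style"
          (if green then "color: green" else "color: red"))
        isGreen
  elDynAttr "div" attrs $ text "This text flips between red and green."
```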
>> Yeah, but again, welcome to engineering. It's, what do you gain? What do you lose? There are no perfect solutions, right? And the upsides, well, I also really like the, I'm having a senior moment. Reactive-banana, Michael, the, uh? Oh, yeah, yeah, yeah.
>> It's called reactive-banana. That was my first introduction to functional reactive programming.
>> Functional reactive programming was the phrase I just couldn't remember. Yeah, first the knees go, then the memory goes, and then I forget what goes after that. But yeah, I really like functional reactive programming, just as a programming style. Maybe it's just how my brain is wired, but I like to just wire everything up.
>> Yeah. And for listeners who may not be familiar with FRP, my understanding of it is that it is more event-driven than, like, state-driven. But I haven't actually used FRP in anger. So maybe, Brian, could you explain what it means to
>> you? Okay, so the core of FRP is actually fairly simple. You've got three major parts. You've got what I think of as the FRP pure stuff, you've got the DOM-generating stuff, and then you have the interfacing functions. At the FRP level, you have events. An event is something that can be firing at any given point in time, and it can be carrying a value when it fires. A classic example is a key press, right? And it's carrying, as its value, what key got pressed, or that a mouse was clicked. Then you have behaviors. Behaviors are things that have a value at all points in time. At any point in time I can go, behavior, what's your value? Especially when some event fires: when this event fires, get the value out of that behavior and go do something with it. Then you have dynamics, which are the combination of the two: a behavior and an event that fires whenever the behavior changes. So that's sort of the pure stuff. You can fmap behaviors. You can, sorry, you can't filter behaviors; you can fmap everything, everything is a functor. Dynamics are monads. And you can do filters on events: this event fired, is it one I care about? No, don't carry on firing. Then you have standard monadic DOM generation, where there's an el function which creates a DOM element, and you can give it the tag, the attributes, and the monad that generates its children, and I'm generated as some child of some parent's function. So this code ends up looking an awful lot like, if you've ever done blaze-html or Lucid, it ends up looking very similar to that. The functions are all named differently, because of course they are, but it's very much that pattern. And then you have the interface functions. So there's a function that can take an element that you generated and go, give me an event that fires when the user clicks on this element,
>> right, walking back into the
>> or, create this element, and here is an event or dynamic that holds the attributes that this element should have, right? And when the dynamic updates and changes, the attributes get reset. Or, here is a widget hold: I just want to swap out which monad I'm sticking in to generate the DOM in this place, to replace this whole DOM subtree with some other DOM subtree coming out of a dynamic or an event. Right?
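As a small concrete companion to that description, here is a sketch in reflex-dom showing an event (button clicks), a dynamic derived from it, and the monadic el-style DOM generation. The widget is hypothetical, just enough to show the pieces together.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Reflex.Dom

main :: IO ()
main = mainWidget $
  -- el is the monadic DOM generation: a tag, then a monad that builds
  -- the element's children.
  el "div" $ do
    el "h1" $ text "FRP in three pieces"
    -- An Event: fires each time the user clicks, carrying () as its value.
    clickEvt <- button "Click me"
    -- A Dynamic built from that Event: it always has a current value and
    -- updates whenever the Event fires. (current countDyn would give the
    -- underlying Behavior.)
    countDyn <- foldDyn (\_ n -> n + 1) (0 :: Int) clickEvt
    el "p" $ do
      text "Clicks so far: "
      display countDyn
```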
>> So, to me, this all sounds eerily similar to the architecture Elm has tried to do. I know we kind of talked about that earlier, so it's relating a lot to my experience with Elm, and I'm like, okay, yeah, this makes sense. And I do enjoy that style
>> very much. Elm, I think, used to be based on FRP, but sometime around, I want to say, Elm 0.16, they are no longer FRP-based, even though the interface you use is very similar. So just a historical oddity there.
>> Yeah, I think they may still qualify as FRP in my book. I haven't actually done a lot of Elm, but if they are FRP, I mean that in a good sense. I like that.
>> Yeah.
>> So, switching gears just a little bit. You all mentioned that the back end is kind of a mix of Servant and Yesod, is that correct? So do you do any server-side rendering, like a pre-baked version of the single-page app to send out on the first load that then gets updated later? Or how does that work?
>> Okay, we don't, yet. Again, I'm a great believer in using the right tool for the job, and for stuff that doesn't need a Reflex front end, we are using Yesod. And one of the nice things about Haskell is Haskell works well with Haskell. You write a little bit of Web Application Interface code, and by a little bit I mean, like, five lines, ten lines of it, and you go, okay, the Yesod stuff goes here and the Reflex stuff goes there, and everything is just happy. It all just works beautifully. And so you can sit there and go, okay, there are some web pages, like, give us your financial information, where we want it to be server-side rendered, you know,
>> with as little JavaScript on there as possible.
>> As little JavaScript as possible, yeah. And all of the stuff to make sure that people aren't hijacking the session and all of that, which Yesod has baked in. So use Yesod for what Yesod is really good at. And by the way, if that's pretty much your entire web page, I do recommend Yesod. If you are just a static-page-generating site, Yesod is very, very nice; I like its widget idea. But for the Reflex pages, we haven't implemented this yet, and we are going to in the near future; it's simply a complete deficit of round tuits. We haven't gotten a round tuit yet. Reflex has a static rendering where you basically run the Reflex code, and this is the nice thing about being able to share code between front end and back end, right? We can just suck the page-rendering code from the client into the server side and go, when you're hitting a Reflex page, we just pre-render the page on the server side using exactly the same code and spit that out. And then Reflex has some magic it does to, instead of generating the DOM, reuse the existing DOM.
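To give a sense of what those "five or ten lines" of WAI glue might look like, here is a sketch that routes requests between two WAI applications by path prefix. The path prefix and app names are hypothetical; a Yesod site can be turned into a WAI Application (for example with Yesod's toWaiApp) and a Servant API with serve, but this is not Chat Wisely's actual routing code.

```haskell
{-# LANGUAGE OverloadedStrings #-}

module Glue where

import Network.Wai (Application, pathInfo)
import Network.Wai.Handler.Warp (run)

-- Route between two WAI applications: anything under /app goes to the
-- Reflex-backed single-page app, everything else to the Yesod pages.
-- The "/app" prefix is a made-up convention for this sketch.
glue :: Application -> Application -> Application
glue yesodApp reflexApp request respond =
  case pathInfo request of
    ("app" : _) -> reflexApp request respond
    _           -> yesodApp request respond

serveBoth :: Application -> Application -> IO ()
serveBoth yesodApp reflexApp = run 8080 (glue yesodApp reflexApp)
```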
>> Kind of bootstrap it. Yeah. Okay, so this is possible in theory, it's just not done yet.
>> We haven't done it at Chat Wisely yet. I've done it in other places. Just a round tuit deficit.
>> that's very exciting.
>> So hopefully that will solve several of our bigger performance problems.
>> So we've been talking tech for a while now, but when we kicked this off, we were talking a little about the users of the site and how it's funded, and I want to come back to that. So, Michael, could you maybe talk us through a little more about the funding you mentioned? It's, I think, right now in an open beta, and you will charge users some fee, sounds like a dollar a month. How does this compare to other sites with a similar model? Ones I'm familiar with are, like, MetaFilter or Slashdot or Fark or Something Awful, I think, where this is just a minimum bar to clear in order to participate: you have to throw a dollar at us, or $5 or whatever it is. So what impact do you expect that to have on the community?
>> Well, I think the biggest impact it's going to have is that it's our first troll filter. We don't think most bad actors are going to want to pay for the privilege. And the feedback I've been getting when I talk to people and make the pitch that it's going to be a dollar a month, the reaction I get is, well, that's nothing. Which is what we want to communicate: to you, it's nothing, but in the aggregate, to us, it's a sustainable business. Whether you're charging a dollar a month or not, every social network needs to reach a point of stability, having a strongly connected enough and large enough network in order to be considered stable. So in that scenario, Chat Wisely having reached stability, that dollar a month is very much worth it. And so the dollar a month is not just our first filter, but also allows us to deliver features that we've noticed would be popular, have been popular, but don't work in social networks that rely on ads. I'll give an example. A few years back, in 2014, there was a social network called Yik Yak. Yik Yak was geolocation-based messaging, which blew up on college campuses; college kids loved Yik Yak. The problem was that everyone was anonymous and people could come and go anonymously, which meant it turned into a harassment machine, and they shut down as fast as they blew up. So the market is there, just for that one aspect as an example. Where we come in, the reason why we think we could make that scenario work, in a post-COVID world of course, is that having paid us, even though we provide mechanisms to protect your privacy to the outside world if that's what you want to do, you can go ahead and be anonymous to the outside world, but we know who you are. And so that is a huge disincentive to try and use Chat Wisely as a harassment machine. So I think that's going to be a huge impact: we're going to be able to offer features that people want but that cannot work in an ad-based social network.
>> Yeah, that's fascinating. The kind of localized, or I guess hyperlocal would be the buzzword version of that. I can see that it is an appealing feature, and Yik Yak, I suppose, is proof positive of that. And it would be cool to see that in a platform that isn't ad-driven.
>> And can I jump in here for a minute? Also, I mean, there are features that make perfect sense for us but don't for Facebook. One big one is a lot more control over what you see. If you wrote a client for Facebook or for Twitter that showed all the content but didn't show the ads, they would have to shut you down quickly, right? Just to keep their own lights on, just to pay their own salaries, they would have to shut you down quickly. We have no such problem. So long as you're paying us the buck a month, we're good with however you want to access the site. All
>> right? So third-party clients are fine, as long as the users are actually paying you for the service. Yep.
>> And so, allow me to circle back on the tech side of things. This is one of the nice things about Servant: we are planning, we're not doing it currently, but at some point we will be going, here's a library to access our API with JavaScript, with Ruby, with Python, with Java, with whatever, right? And let a thousand clients bloom, may the best one win. Hopefully we're the best one; we'll be working hard to try to make sure we're the best one. But it's going to be a level playing field. We're not going to have access to super secret APIs that you don't, right?
>> That's something that we have been able to take advantage of here at ITProTV. Like I mentioned, our back-end team and front-end team are separate, and we have an internal Swagger API. We haven't yet gotten to the point where we generate a client from that, but we do have the OpenAPI Swagger specification that you can browse and make requests through and all that, which turns out to be really handy, because when we finish a feature, we can just send a link to that documentation to the front-end team and say, hey, here's the endpoint, it's got everything you need in there.
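One way a Swagger specification can come straight out of a Servant API type is with the servant-swagger package; here is a hedged sketch of that idea. The toy API and module names are made up, and neither show's team necessarily generates their spec this way.

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE TypeOperators #-}

module SwaggerSpec where

import Data.Aeson (encode)
import qualified Data.ByteString.Lazy.Char8 as BL
import Data.Proxy (Proxy (..))
import Data.Text (Text)
import Servant.API
import Servant.Swagger (toSwagger)

-- A toy Servant API; a real application would use its full API type here.
type Api = "ping" :> Get '[JSON] Text

-- Derive a Swagger (OpenAPI 2) document from the API type and print it as
-- JSON, which could then be served to or browsed by a front-end team.
main :: IO ()
main = BL.putStrLn (encode (toSwagger (Proxy :: Proxy Api)))
```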
>> Um, yeah. Previous.
>> I just wanted to talk about how we organize. This is another thing that Twitter and Facebook don't really want you to do, because what they want to do is deliver you an ad, and so your timeline belongs to them, not you. With us, our incentives, we're incentivized to want people to own their timeline. And so what we've done is give you ways to organize your timeline, so that you can divide up the timeline based on who you're talking to and what you're talking about. So, for example, maybe you've got relatives that you want to engage with in terms of food recipes or sporting events and things like that that are family-safe, but you don't want to talk about politics so much. Other social networks are pretty binary: either you're engaging these people or you aren't. With us, we give much finer control. Maybe I want to talk to my aunt about her cookie recipes, but not so much about her opinion on national politics. And we have mechanisms to help you do that.
>> That sounds really powerful. Like a hopefully better version of Google's Circles, which were person-based rather than kind of topic-based, from the dark days of Google Plus.
>> Yeah, well, I appreciate you guys coming onto the podcast. I want to let our audience know, and let you guys give them an update on what's next, and where to find your platform and how to access it and how to be a part of
>> it. Yeah, chatwisely.com. Yeah,
>> yeah, yeah. You can come to chatwisely.com. So what we're doing right now is trying to demonstrate to potential investors that our assertion is valid, that people are willing to pay. But because we're in beta, we don't have our payment mechanisms set up yet, so in lieu of that, we've set up a Patreon, which has astounded me and given me lots of warm fuzzies. We haven't had it up very long, and the success that we've received with it is making me feel really good. We're up to a little over 100 a month on it. The higher we can get that number, the more powerful our assertion will be that people are willing to pay for a social network,
>> for sure. So we'll leave a link both to chatwisely.com and to the Patreon in the show notes for this episode. Great, so thanks for that. And is there anything we didn't cover that y'all wanted to mention about Chat Wisely?
>> Well, we need beta testers. We've had some come on, the blog was very helpful for that, and we had some people point out bugs and problems and generate lots of tickets for us, which is what we need.
>> And so, yeah, we need beta testers. Please try to break it, try to break the site. If things are confusing, let us know. We're not user interface or user experience people, and so we need a conversation with beta testers about their reaction to how we have things designed, which will help us iterate and get a better user experience.
>> Awesome. Yeah, thank you, Brian and Michael. It's been a lot of fun hearing about Chat Wisely. I'm gonna throw it over to our friend Taylor.
>> Yeah. Thank you so much for listening to the Haskell Weekly podcast. And again, thank you, Michael, thank you, Brian, for being guests today. It's been great to talk with you. I've been your host, Taylor Fausak, and with me today was Cameron Gera. If you want to find out more about Haskell Weekly, you can go to our website, which is haskellweekly.news. If you enjoyed listening to this podcast, please rate and review us wherever you found us. If you have any feedback for us, please hit us up on Twitter at HaskellWeekly. And, yeah, anything else, Cam?
>> Yeah, Haskell Weekly is brought to you by ITProTV, an ACI Learning company and our employer. They would like to offer you 30% off the lifetime of your subscription, so if you have any interest in the IT world, we have the content for you, and you can use promo code HaskellWeekly30 at checkout for that discount. So I think that about does it for us. Thank you for joining us on the Haskell Weekly podcast, and we'll see you guys next week. Alright, peace!