Remote Ruby

Turbo Morph & ActiveRecord Encryption with Jorge Manrubia

November 10, 2023 Jason Charnes, Chris Oliver, Andrew Mason Episode 247

In this episode, Jason and Chris welcome guest Jorge Manrubia, a lead programmer at 37signals in Spain known for his contributions to Ruby on Rails. Today, Jorge shares insights into his background, role at 37signals, and contributions to open source projects. He discusses his experiences, including the importance of learning from rejection and the value of experience in job interviews. The conversation dives into Jorge's work on Active Record Encryption and Console1984, and Jorge touches on the development of Turbo, with a particular focus on enhancing user interface fidelity in calendar applications using morphing. They also discuss the challenges of using Turbo Streams for complex updates and the benefits of using libraries like morphdom or Idiomorph for simplifying the update process. Jorge also gives us a glimpse into the upcoming release of Turbo 8, so press download to find out more!

Honeybadger
Honeybadger is an application health monitoring tool built by developers for developers.

Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

Speaker 1:

This is Remote Ruby. Have you any remote ideas to the meaning of the word? Good morning, Christopher. Good morning. It's a Thursday morning, which is a rare recording time for us.

Speaker 2:

Rare. This is one of six we're recording this week. When this comes out, it could either be November of 2023, or it could be Christmas of 2024. I don't know. We are Andrew-less this morning, but we have, I think, a lot of fun conversation to get into today. We're joined by Jorge Manrubia, and we are going to talk about Turbo, we're going to talk about Rails, we're going to talk about Ruby, who knows what else. We'll get into it, but first I would like to ask if you would mind maybe just giving a brief introduction into where you work, how you started working there, things like that. I think it'd be interesting to hear.

Speaker 3:

Well, first of all, thank you so much for having me on the podcast. I'm a big fan and a regular listener, so I'm thrilled to be here. So yeah, my name is Jorge Manrubia. I work as a lead programmer at 37signals, and I live in Valencia, Spain. In terms of open source, I'm the main author behind Active Record Encryption, and I also recently presented a new feature in Turbo that will arrive in Turbo 8, which is using morphing for smoother page updates. And what else? I like to write a lot, so I'm writing many technical articles on the 37signals dev site and also on my own personal site. So yeah, that's me.

Speaker 2:

I meant to say in the introduction that if you're listening and you haven't read some of the technical articles that Jorge has written, you should, because they are insightful, but they're also fascinating looks at how you and 37signals approach code, and I think people really love that type of content. I love reading it. So thanks for writing that stuff.

Speaker 3:

Thanks so much. I appreciate it. The reception for those articles has been a super nice surprise for me. A bunch of people have reached out; they have written me plenty of emails with questions or showing their appreciation, and yeah, it's been a wonderful thing. I wasn't expecting so many interesting interactions with folks around the world because of those articles. So thanks so much for your kind words. I appreciate that.

Speaker 2:

So you work at 37signals. I would be kind of curious to hear, maybe, how you got into Ruby and how that led to working for the company that extracted Rails.

Speaker 3:

The thing is that both 37signals and Rails were on my radar, I think, since Rails started, especially since version 2. Back then I was into enterprise Java. I was working for the IT department of the social security office in Madrid, Spain, believe it or not, so it was all about enterprise Java there. I don't know if you're familiar with the good old times of enterprise Java. Anyway, Rails was launched as an answer to a lot of the madness going on in the Java world in those days, and it got my attention immediately. Then, around 2010 or so, I started my own company, my own product company, using Rails, which was called Sendan. Sendan was a getting-things-done tool. It never truly gained enough traction to make a business out of it, but it served me to learn Rails for good, because so far it had been a side thing. When you build something real that you need to operate with real customers, that's when you really learn the technology. After that, I went to work for other companies.

Speaker 3:

The thing is that 37signals was always on my radar, especially when I became experienced enough with Rails that I started to feel I had a chance of getting to work there. I tried many times. I tried to apply, I don't know, maybe five different times, both when there were openings and when there were no openings. I actually wrote a blog post back in the day, which I think is called How I Got Hired by 37signals, or by Basecamp, or something like that.

Speaker 3:

It was not like, oh my God, I want to do this, I'm going to do that, in a linear way. It was more like trying and failing. I remember once I wrote a super-long cover letter. It was, I don't know, multiple pages, six, seven, eight pages. Then I remember seeing a post from David Heinemeier Hansson asking people to please be concise in their cover letters. So it was like, oh okay, so they don't want a lot of text. I tried to adjust what I was doing, and eventually I got lucky in 2019, I think, when there was an opening for the security, infrastructure and performance team. There was a series of interviews and I finally got hired. There are all sorts of stories, but having to try several times is not rare among 37signals employees.

Speaker 2:

I appreciate you sharing that. I love the honesty of, I tried this multiple times. Too often in my career, I get one rejection and I'm like, okay, well, I'll never work there, can never apply again. I also love that you mentioned sometimes you applied when there were openings and sometimes when there weren't, because I think that's really valuable advice for people looking for jobs. For my job at Podia, I emailed the CEO and was like, hey, are you hiring? And he's like, nope, but I'll let you know the next time we are. And then it was a month later. So yeah, there's a lot of opportunities. I think it's really cool that you shared that.

Speaker 3:

Sure, I mean, I've been rejected several times in my career at different companies, and if you take that in a constructive way, there are valuable lessons there. Going through selection processes is something you can get experience at, so that by the fourth time you are interviewing for a big or important company and you are nervous, having done it several times, things get better in terms of how you behave. At least for most people, because there are people who are so amazing that they just go for it and make it on the first attempt, but for most people, I think that failing and learning is a much more realistic approach. Right.

Speaker 3:

So that was definitely my case, yeah.

Speaker 2:

Yeah, and I think about that a lot. Once I got rejected from my dream company at the time, and they were like, you'd be a good cultural fit, but you don't have enough experience. And I was like, what do you mean I don't have enough experience? I didn't get it. And then I've worked at Podia for five years now and I'm like, oh, I get it. I had no experience. Sometimes the timing is just not right. Yeah, there's a lot to learn from that. So, two things we can probably spend hours on. Chris mentioned it before we started recording, and you mentioned it at the beginning: Active Record Encryption. As someone who is now a heavy user of Active Record Encryption, I would love to dig into that.

Speaker 1:

So just to set some context, we have used encryption in Active Record before, with attr_encrypted and Lockbox and other things. But when your solution to encryption came out in Rails, it was really unique, because it was like, hey, we know that you're going to apply this retroactively and you've got data in your database you probably should be encrypting. Maybe it's OAuth API tokens or something that you're just storing in plain text, but they're kind of like passwords, so maybe you should encrypt those. I thought it was really cool to see that baked in as part of the solution, which was awesome. So I was curious, what were your requirements when you started working on it, and what did you learn along the way that led you to that as the solution there?
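
For context, a minimal sketch of what that retroactive setup looks like in the shipped API. The model and column names here are hypothetical; the `encrypts` macro and the `support_unencrypted_data` flag are the real pieces:

```ruby
# config/application.rb
# Keys come from `bin/rails db:encryption:init`, which writes primary_key,
# deterministic_key and key_derivation_salt into your credentials.
# With this flag on, Rails keeps reading old plaintext values while any
# new writes are encrypted, so you can migrate gradually.
config.active_record.encryption.support_unencrypted_data = true

# app/models/user.rb (hypothetical model)
class User < ApplicationRecord
  encrypts :oauth_token                        # non-deterministic by default
  encrypts :email_address, deterministic: true # deterministic allows exact-match queries
end
```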

Speaker 3:

So I joined 37signals at the end of 2019, and two weeks after joining, I went to a meetup in the Chicago office; they were celebrating a company meetup there. That was the first time I heard about the encryption thing, because David Heinemeier Hansson, in the opening talk that he and Jason normally deliver, mentioned something like, my wife asked me, are your employees going to be able to check my email? It was a very simple question like that, and it was an example, obviously, but he was really interested in raising the bar of security when it comes to accessing data. This was in the context of launching HEY, the email service by 37signals. Email can contain health information, information about your relationships with other people, political affiliation, whatever you want; the most sensitive topics you can think of can be in email. So email is very sensitive, okay. So that was the first time, and I was joining the infrastructure team at 37signals.

Speaker 3:

So I just got assigned to the project, which for me was quite a huge challenge. It was a pretty big project, really. There were no experts in encryption in the company, and certainly I was not an expert in encryption at all. So what I did first was to study all the gems that were out there doing the same, and there were a few, to see how they did things, and I think I learned good things from most of them, even if I didn't quite find the gem that we wanted to use. Then I went to work on other projects in the company; I didn't start on this right away, even if it was on my radar. First I created a prototype, we discussed the prototype internally, and we learned what we wanted to build thanks to that prototype. Then I rebuilt another version of the library, because at first I was using tables, like additional records, to store the keys. In the current Active Record Encryption we store the keys with the encrypted payload, but in the initial version I was storing additional keys in the database, which was a point of friction in the design and brought a bunch of complications at several levels.

Speaker 3:

So anyway, the thing is that when we were ready to launch encryption, we were already using HEY internally for our own email. So we started using HEY without the emails actually being encrypted, just as an internal thing, so we really had that constraint of, we need to encrypt this information and we need to keep things working. That was the first big technical challenge that I remember. Then there were others, but that was the seed for making the library flexible enough internally to support that. Then there was something interesting, because we hired someone before going public.

Speaker 3:

We hired a cryptography expert; a security company brought in a woman who was an expert in cryptography, an actual expert, and when she reviewed the implementation, there was something that got her attention as a major flaw. I was fixing the initialization vector in combination with some AES-256 encryption mode, which was making the encryption almost useless. She was agitated, because she knew about encryption and it was a big flaw. For me it was okay; the fix was easy.
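
To illustrate the general class of flaw being described, not 37signals' actual code: with AES-256-GCM, reusing a fixed initialization vector across messages undermines the cipher's guarantees, which is why a fresh IV is generated per encryption.

```ruby
require "openssl"

# A fresh, random IV per message is essential with AES-256-GCM; hardcoding
# or reusing one lets attackers correlate ciphertexts and recover data.
def encrypt(key, plaintext)
  cipher = OpenSSL::Cipher.new("aes-256-gcm").encrypt
  cipher.key = key
  iv = cipher.random_iv # never fix or reuse this value
  ciphertext = cipher.update(plaintext) + cipher.final
  { iv: iv, auth_tag: cipher.auth_tag, ciphertext: ciphertext }
end

key = OpenSSL::Cipher.new("aes-256-gcm").random_key
p encrypt(key, "sensitive data")
```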

Speaker 3:

But the problem was that we already had data encrypted with the previous encryption scheme. So again, there was a big technical challenge, but out of that I could leverage the flexibility we had and make the mechanism even more general. So right now, I think it's the only Rails encryption solution out there that supports multiple encryption schemes, so you can actually change the different properties of the schemes you're using over time and the system keeps working: you can query all the data and things like that, and things are going to work. All those fancy features came out of pure necessity, out of finding ourselves in a situation where we needed to fix that for good. So yeah, that's kind of the interesting story there.
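
That multiple-schemes support is exposed in the shipped API through the `previous:` option, per attribute or globally; a brief sketch with a hypothetical model:

```ruby
# Reads tolerate data written with the old scheme; new writes use the
# current one. Model and attribute are hypothetical.
class Article < ApplicationRecord
  encrypts :title, deterministic: true, previous: { deterministic: false }
end

# Schemes can also be listed globally, e.g. after changing key providers:
# config.active_record.encryption.previous = [ { key_provider: OldKeyProvider.new } ]
```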

Speaker 2:

I love hearing that, because to me it was like, oh wow, they just really thought about every possible edge case up front. I also think it's really cool that you brought in an expert to review it, and then you open sourced it. How much more could you ask for as a consumer of the library? This has been tested by someone who is an expert in that field. That's amazing.

Speaker 3:

Yeah, that was a call that Jeremy Daer, from Rails Core and 37signals, made, that call of bringing in an expert. And I think it was an excellent call, because I was living in, how do you call it, how much do you ignore and you live a happy life? I was happily ignoring a lot about cryptography, and I tried to educate myself, but cryptography is this body of knowledge which is huge, and you can dedicate your whole career just to that.

Speaker 2:

That's amazing. Yeah, my first foray into encrypting data, I can't remember the gem I used, but I had a lot of boilerplate. It wasn't Lockbox, but I had separate files with keys and all this stuff, and I was so scared that I was going to mess something up, lose access to my data forever, and shut down the app. And so when I found Lockbox, and I think everything Andrew Kane does is gold, I was like, oh, perfect, this works. And then when it got baked into Rails, I was like, yeah, one less dependency, and it works just incredibly well. I love that I don't have to do anything special to the column. I love that I just say, encrypts this field, and then I don't think about it anymore. It's one less thing I worry about. It's wonderful. So, yeah, thank you for your work on that.

Speaker 3:

Sure, I appreciate that. Thanks.

Speaker 1:

Yeah, the single-column thing is really nice, because it's the same as with file uploads. In the past it was like, hey, you want to do one file upload? Then it's four fields. You want to do two or three on the same model? Then have fun, here's eight or twelve different columns you've got to add. Having that all in a single column where it can be serialized makes sense. Why wouldn't we just do that, have it all contained in one place, and make it easier to manage? It's just nice. Recently I was doing the SHA-256 migration, partially for Action Text embeds, but we had encrypted stuff as well, and going from SHA-1, the rotations for that are super easy now, and that's a very nice thing.
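
If I remember right, the rotation mechanism under the hood here is ActiveSupport's message rotation; roughly, and with placeholder secrets, it looks like this:

```ruby
require "active_support"

old_secret = "old-secret" # placeholders; real apps derive these via the key generator
new_secret = "new-secret"

# Sign new messages with SHA-256 while still verifying old SHA-1 payloads.
verifier = ActiveSupport::MessageVerifier.new(new_secret, digest: "SHA256")
verifier.rotate(old_secret, digest: "SHA1")

token = verifier.generate("hello") # signed with SHA-256 from now on
p verifier.verify(token)           # old SHA-1 tokens verify too, via the rotation
```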

Speaker 1:

I feel like Jason did in the past doing encryption stuff: if I screw this up and we encrypt stuff and something breaks, I might lose all of this data, which would be horrible. And I just don't really feel that fear anymore either. Like you're saying, crypto experts can spend their entire career on it. It's very math heavy. If you've ever had to debug something that goes wrong with OpenSSL or whatever, good luck. You'll end up with a crazy stack trace.

Speaker 1:

I dug into strace and was looking at all the internal stuff that happens there, trying to debug some random thing, and yeah, it's quite a lot of stuff to learn. It's crazy. Along the same timeline, you had talked about using HEY internally, so you started encrypting stuff. But also, to David's wife's question about, hey, can you read our emails, one of the things that I remember you initially released was the Console1984 gem, which I love. Really the value is, anybody that's in production looking at the application in the console probably should be telling us why they're there, and we should keep track of what they're doing. And I was just curious, was that around the same time? Oh, we need to not just encrypt the data but also do some audit logging and stuff like that around who's doing what.
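
For anyone who hasn't used it, this is roughly the setup (the option name is quoted from the console1984 README, from memory): once enabled, opening a production console prompts you for a justification, every command is recorded, and encrypted attributes stay hidden unless you explicitly type `decrypt!`, which is itself audited.

```ruby
# Gemfile
gem "console1984"

# config/application.rb - environments where the console is protected
config.console1984.protected_environments = %w[ production ]
```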

Speaker 3:

Yeah, yeah, definitely. So right after the encryption was in place, I started working on Console1984. Well, first it was Console1984, but at first we were just registering the console logs and trying to prevent access to encrypted data. I got a first version of Console1984 done very quickly, maybe three or four days. It wasn't incredibly nice, and it was very easy to circumvent, but it was kind of working. It was essentially offering a decrypt command so that you could access the encrypted data, and you were in a protected mode by default, so you wouldn't see decrypted data by default. That was working.

Speaker 3:

It was connected with our logging pipeline. At first we were collecting the logs internally; we use Kibana internally, so we were collecting the console logs in our logging pipeline and exploring them through Kibana. Let's say the auditing system was a custom dashboard in Kibana for the first version we built. So far this was an internal gem. When we were ready to release the whole thing, I created Audits1984, which is a very basic admin interface for auditing these logs, and instead of emitting those logs through standard output or through whatever logging system you have configured, we started to store them in the database. So we built this tool for auditing those logs. And then I worked on several improvements to make the system harder to circumvent, because essentially the problem with a Ruby console is that, by design, it's a system that lets you execute arbitrary Ruby code, and Ruby is incredibly flexible, right?

Speaker 3:

So it's hard to cover all the ways you can try to skip the controls we set in place. But still, I'm pretty happy. I mean, there are no known issues that I know of about how to do that, but I'm sure someone expert enough can; I wouldn't be surprised if someone discovered new ways of doing that. But I always say that it's a good baseline. Obviously, if you have a very malicious actor with access to your production Rails console, and that person knows what they're doing, maybe it's not bulletproof, but it's way better than not having anything in place, right?

Speaker 1:

So, yeah, if they've got production Rails console access, they've probably got privileges to do almost anything they want. But it's one of those things that's really nice to be like, well, we need to do customer support, and we want to make sure that people who are doing that are not going around doing malicious things or whatever. I think, what was it, Vercel had an employee where a customer's domain was too similar to theirs, and so they had done stuff that they had access to that they shouldn't have, or whatever. And this is a great tool for that.

Speaker 1:

Where it's like, hey, the production console is a necessary evil. We don't want you to have to use it ever, really, but we can provide a way to make it a lot more monitored or audited and keep it safer. And I think that's a really nice thing, because as soon as it came out, I was like, oh yeah, why wouldn't we use this by default for every application we run? We might as well. When I was working by myself, I was like, I don't need to audit myself, but having other people that you work with, yeah, it makes perfect sense. It felt like something that I would want to include out of the box on everything.

Speaker 3:

Once you get used to working with encrypted data, it feels weird not doing it. After a while it starts feeling like, why am I seeing this piece of customer information? Even if it's not sensitive, a customer's name or to-do, for example, it's not my business. Even if it's not sensitive data, I shouldn't be seeing it.

Speaker 2:

Let me ask you one quick question. Are you currently using one service for uptime monitoring, another service for error tracking, another service for status pages, and yet another service to monitor cron jobs and microservices? Paying for all of those separately may be costing you a lot of money. If you want to simplify your stack and lower those bills, it's time to check out Honeybadger. Honeybadger combines all of those services into one easy-to-use platform. It's everything you need to keep production healthy and your customers happy. Best of all, Honeybadger is free for small teams, and setup takes as little as five minutes. Get started today at honeybadger.io. That's www.honeybadger.io.

Speaker 3:

And also I wanted to mention that another gem we released, which was also out of pure necessity, is mass_encryption, which is a library for encrypting data in mass using Active Record Encryption. That came out of having to encrypt the whole Basecamp database, billions of records there. Initially we only supported encrypting records one by one, and with this library it's going to use jobs, and the jobs are going to use upsert SQL statements, so that you're going to update a bunch of records, a thousand records at a time or something like that, so it's way faster. So if someone wants to encrypt a large database, that's definitely the gem to keep in mind. Yeah.
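
A hand-rolled sketch of the batching idea Jorge describes, not mass_encryption's actual API: chain jobs that walk the table in primary-key order and bulk-write each batch with an upsert, assuming, as he describes, that the upsert serializes values through the encrypted attribute types.

```ruby
# Hypothetical job illustrating the approach; see the mass_encryption gem
# for the real implementation.
class EncryptInBatchesJob < ApplicationJob
  BATCH_SIZE = 1000

  def perform(model_name, start_id = 0)
    model = model_name.constantize
    batch = model.where("id > ?", start_id).order(:id).limit(BATCH_SIZE).to_a
    return if batch.empty?

    # One SQL statement per batch instead of one UPDATE per record.
    model.upsert_all(batch.map(&:attributes))

    # Chain the next batch rather than looping in one long-running job.
    self.class.perform_later(model_name, batch.last.id)
  end
end

# EncryptInBatchesJob.perform_later("Message")
```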

Speaker 1:

I don't think I even knew that existed. That's super handy. Yeah, a lot of old applications will have lots of things that you probably want to go encrypt, probably millions of records, and opening the Rails console or running a rake task that does it one by one probably takes a ridiculous amount of time. So this sounds really awesome.

Speaker 3:

I don't remember the numbers, but for Basecamp, where we did it, it's like we would never have finished if we did one record at a time.

Speaker 1:

Yeah. So you were working on that. At what point did you start working on the Turbo stuff that you announced at Rails World?

Speaker 3:

Oh yeah, well, that was this year. So far I had been doing infrastructure work at 37signals, so mostly internal projects and also supporting infrastructure features for existing applications. Last year I asked the company if I could do some product work if possible, because in my career I've done product work in other places and I was kind of missing that: working with designers, with user-facing features, on the front end and such. I've had a lot of front-end battles in my career, and I was kind of missing those. And I was given this wonderful opportunity of working on the new product the company was talking about, which was the calendar, the HEY Calendar. So that's how everything started this year.

Speaker 3:

I was on sabbatical in January, and I started on the calendar in February. There was an HTML prototype showing several screens for the calendar; Scott, the principal designer working on the product, had been exploring ideas with an HTML prototype in Rails. So what I did was start to animate some screens and render some screens for the calendar, and I was using the approach I would recommend anyone use when you are starting a Rails application, which is relying on regular Turbo Drive behavior: using Rails as it's meant to be used and relying on regular full-page body replacements, which is what you get for free. Essentially, I was doing, I'm going to update or create a new record, I'm going to redirect back, render the full screen, and I'm going to be done. And I was moving pretty quickly using that approach. But the thing is that the UI fidelity we wanted wasn't there, obviously, because in a calendar, well, I think you're familiar with calendar applications, right, Chris? I've seen that you have some experience there, right?

Speaker 1:

Yeah. For anybody that doesn't know, I tried to teach myself how to write a Ruby gem, made a little calendar gem, and published it. It didn't work, or wasn't complete, and then people filed issues, so I was like, well, I guess I'll start maintaining this. And now I think it's the most popular calendar gem in Rails, and I've really never had a need to use it myself.

Speaker 3:

That's a pretty cool story. So yeah, in a calendar in general, the responsiveness bar is high; you want it to be responsive. It's not like a regular application, in the sense that you have this graphical canvas you want to act on, and you want to see things happening in a responsive way. The UI fidelity bar is high in a calendar. So with the default Turbo Drive full-page body replacement, it was nice, but it wasn't good enough for what we wanted. For a while I was thinking about this problem, because our initial idea was to gain fidelity using regular Turbo, meaning Turbo Frames and Turbo Streams, which are the tools that Turbo offers for partial updates that feel really great.

Speaker 3:

The problem there is that the rendering in a calendar is very complex, way more complex than, for example, in HEY. In HEY, when you are listing your emails, you are just placing one email under another, so you're going to render a list of rectangles. That's kind of easy to do. In a calendar, things are way harder. You need to place things on the proper day, for example. Imagine you are rendering a regular calendar grid: you need to place things on the right day. An event, for example, can span a bunch of days, or maybe one day, or maybe multiple months, and how one event is rendered can affect other events. And that's just talking about events; this new product is going to have a bunch of other things happening, which are going to be kind of novel features.

Speaker 3:

So the rendering was really complex, and I spent a lot of time doing the rendering. Then I went to the server side, and I think one of the first things I did was creating new events. We are going to create a new event in the calendar: I'm going to create a new event, then I'm going to redirect back to the screen after submitting the form. And that redirect back is the Rails way of reusing all the work you put into the initial rendering of the screen. It's a great answer: okay, I'm going to reuse all that. So that felt amazing to me. From a programming point of view, I was like, oh my God, this is what I want: to just not have to care about which cells I need to update to reflect this new event, which could be maybe one cell, or maybe the whole screen and beyond. So that felt amazing from the programming point of view. In terms of responsiveness, it wasn't feeling as great, and I was trying to think how to do that with Turbo Streams.
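
In vanilla Rails terms, the flow he is describing looks roughly like this (controller, model, and paths are hypothetical):

```ruby
class EventsController < ApplicationController
  def create
    Event.create!(event_params)
    # Redirect back and re-render the whole screen, reusing the exact same
    # view code as the initial render; no per-cell update logic needed.
    redirect_to calendar_path
  end

  private

  def event_params
    params.require(:event).permit(:title, :starts_at, :ends_at)
  end
end
```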

Speaker 3:

Again, all the complexity that the calendar presented in the initial rendering translated directly into the partial updates. Imagine updating an event: you can use a regular replace, prepend, delete, whatever regular Turbo Stream action, but to do that you need a lot of Ruby code to account for all the effects that edit operation is going to have on the screen. I had to do that for every element, events and the rest of the elements we were rendering in the calendar, for every perspective, because we have different timeframes over the same data, for every different view in the calendar. It was an explosion of very complex partial updates. It was feeling like a burden in my mind, a burden that I kept thinking about: how are we going to do this? Because I wasn't seeing an appealing route. Anyway, I started to look for alternatives. Phoenix LiveView, from the Elixir world, is a solution that has always fascinated me, because of how different it is from how regular web programming works, how original it is, and how responsive it feels in the demos. So I decided to see, okay, let's see how this works internally.
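
To make the friction concrete, here is a sketch of the kind of hand-written Turbo Stream response he means; every partial, target, and placement rule is hypothetical, and each one restates logic the initial render already contains:

```erb
<%# app/views/events/create.turbo_stream.erb (hypothetical) %>
<%= turbo_stream.prepend "flash", partial: "shared/flash" %>
<% @event.days.each do |day| %>
  <%= turbo_stream.append dom_id(day, :events),
        partial: "events/event", locals: { event: @event, day: day } %>
<% end %>
<%# ...plus replace/remove actions for every other affected cell,
    repeated for every calendar view and timeframe %>
```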

Speaker 3:

Internally, Phoenix LiveView's programming model is quite a departure from how regular web programming works. It's based on a persistent WebSocket connection that is stateful: it keeps the state of the connection where you are interacting with the screen. It uses that state to calculate the difference between the newly rendered content and what you have on your screen. It can do that because it's stateful, and it encodes the difference as a very efficient payload and transmits that payload over the WebSocket. The other end applies that payload to reflect the change in the DOM.

Speaker 3:

Okay, so to do that it was using a library called morphdom, which is how morphdom entered my radar; to be honest, it was from analyzing how Phoenix LiveView did things at the end of the pipeline. I wasn't as interested in the WebSocket, persistent-connection programming model from Phoenix LiveView, because it's an approach that has trade-offs for sure, and in general we love to embrace how the web works: HTTP is a stateless protocol, and browsers are wonderful pieces of software optimized to render HTML and perform HTTP requests. We want to leverage that as much as possible. So the Phoenix LiveView approach wasn't something we could translate directly, and we weren't interested in translating it directly to what we were doing.

Speaker 3:

But the morphdom bit of it was very interesting, because morphdom was a standalone JavaScript library.

Speaker 3:

It was doing the morphing over the actual DOM directly, which I think back in the day was a novel thing, because, as you know, React first popularized this diffing idea via virtual DOMs. They were using a technique where they would create a virtual DOM in memory of the page and of the new content, calculate the difference in memory, and apply the difference calculated in memory over the actual DOM, and React proved that that was way more performant than the previous approaches.

Speaker 3:

Eventually browsers caught up in terms of DOM manipulation, but the idea of calculating the difference between the current state and the new state you want to reach was there, and morphdom was an implementation of that idea using the actual DOM. With morphdom you could take a DOM tree and say, okay, I want to render this new tree of elements; calculate the difference and apply it over the actual DOM. It was a simple, small library. I grabbed the library, I created a patched version of Turbo using that library in our calendar product, and I was blown away. I had to make some adjustments, but I was blown away by the improvements in precisely that scenario, the scenario where you make a change on a page, the server redirects back, and you see the change reflected. In that scenario the improvement was very noticeable, and I was super excited when I saw that.

Speaker 1:

I was just going to say, you were talking about rendering a new event in a calendar. When you think about a calendar, if you were to do that with Turbo Streams, and I've definitely built some more complex things in Turbo Streams, it's the same friction you're describing. First, we probably have a flash message you want to insert at the top of the page, so that's got to be one Turbo Stream action that goes to one specific div. And then, when it's a new event, you have to know what kind of event it is. Is it a single-day event? Is it going to be inserted into all 30 days of the calendar, or seven of those days? Is it going to wrap around or not?

Speaker 1:

If it's one of those multiple-day things, it probably needs to come before the current day's items, if you're prioritizing how they get inserted. Basically, you're duplicating the logic. I think a lot of single-page apps tend to do the same thing, where you've got some logic server side to render, and I think that's why they're doing server-side rendering of React and stuff now, to use the same logic on both the client and the server. But doing that with Turbo Streams, it's like, we have to render the page out normally and have these sections anyway when you load it for the first time, so the server has to know how to do that, and it's templates.

Speaker 1:

But then, if we want to do that dynamically when you create one, we've got to recreate all that logic in Turbo Streams, and it feels like, why are we doing this? The morphing really solves that problem, because you just have the same process. You say, hey, render the page just like you would normally. If there's a flash message, it just gets included in your layout like it normally would; you don't do anything special. And then you effectively get what all those Turbo Streams would have done to update the page, but using the exact same process, because you have the diffing with morphdom or Idiomorph or any of those libraries.
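
For reference, opting into this behavior, as it eventually shipped in Turbo 8 and turbo-rails 2.0 (at recording time the API was still in flux), is a pair of meta tags emitted by a layout helper:

```erb
<%# app/views/layouts/application.html.erb %>
<head>
  <%# Page refreshes morph the existing DOM instead of replacing <body>,
      and scroll position is preserved across the refresh. %>
  <%= turbo_refreshes_with method: :morph, scroll: :preserve %>
  ...
</head>
```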

Speaker 3:

Yeah, you explained that wonderfully well; that's exactly it. With Turbo Streams you need to manually create the partial updates, accounting for all the logic and conditions that are needed there, and with morphing you can reuse the initial rendering completely and delegate the calculation of the partial updates to an algorithm. At a high level, I see it like that: you are delegating that to a piece of software. Of course, it's not like there are no trade-offs to pay. With morphing, first of all, the response from the server is going to be more expensive to generate, because you are re-rendering everything instead of just the little bit of HTML you want to update, and you need to transmit that payload, which is going to be bigger too. So there is some additional network latency. It's not a miracle that's impossible not to use; you need to pay some trade-offs. But the trade-offs, I think, are fantastic and worth it in most cases, because the responsiveness you get is wonderful.

Speaker 1:

I'd love to hear some of those trade-offs and what you noticed building it in an actual application, where you're not just adding morphdom as a feature; you're actually using it and seeing the exact issues, like, we lose focus state on these form elements, or scroll position is wonky in these cases, or whatever. Because when you're building a library like this, I think it's those actual use-case applications where you find the rough edges; that's really where the polish comes from.

Speaker 2:

Yeah, and along those lines too, as someone who has tried to use morphdom directly, even some of the surprising things that aren't necessarily input state, but like, this didn't update; why didn't this update? I think that's interesting too.

Speaker 3:

Well, the first thing I'd like to mention is that there was something we didn't like much about morphdom, which is that, in the way it works, it's really picky about IDs, about DOM IDs. It relies on DOM IDs in order to match elements, and when morphdom fails to match elements, what you get is not an error: essentially, trees that you don't expect get replaced. So instead of getting a smooth update, you get, imagine, the first main container in your application replaced because there is a flash notice without an ID that didn't find a match, or whatever. Those kinds of errors were a blocker for me, because we were looking for a seamless thing, and that was completely not seamless. There is this library from the htmx project, Idiomorph, which is essentially like morphdom; it's newer, so it was probably heavily inspired by morphdom, but it uses a more relaxed algorithm for matching nodes, and in our experience it's completely seamless. It doesn't have that ID-matching requirement in order to work, and we have tested it with realistic, complex screens, and it works really well. So that was the first thing I wanted to point out. Then there were the two main problems we saw in our tests, two problems that we are going to offer a direct answer for from the first version.

Speaker 3:

The first is, what happens when you have loaded new content into your page and you refresh it? For example, if you have paginated down in the calendar, or in the card table, the kanban feature from Basecamp, when you have paginated down and loaded new cards, and you morph the page, you update the page: what happens with those new pages of content? Because if you are morphing the body of the page, when you reload the page, those new pages are not going to be in the new body, and they are going to disappear. So that was an edge case that was absolutely crucial to find a solution for, because it was kind of invalidating the whole thing. I mean, it's quite common to have pagination or some kind of dynamic content you want to load in a page after the initial load. So that's a common constraint.

Speaker 3:

So what we did there was support Turbo Frames, leveraging Turbo Frames. When we are doing a page refresh in the new feature that we are going to upstream to Turbo 8, you can flag Turbo Frames with an attribute; I think it's called refresh equals reload. When you mark a Turbo Frame with that attribute and a page refresh happens, it's going to reload that Turbo Frame. So if you make sure that you load the new content in your screen with Turbo Frames and you do that, it's going to work well with page refreshes. That's actually what we are currently using in the calendar for paginating information, and it works well and solves the problem. The other case is that sometimes you want to preserve screen state when a page refresh happens. For example, if you have opened a menu and you are acting on some element and a page refresh happens, because you have submitted something or whatever, you might like to keep that menu opened. So there is an attribute.
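
In the Turbo 8 that was eventually released, this frame attribute ended up spelled refresh="morph"; a sketch of the pagination case, with a hypothetical frame id and path:

```erb
<%# A lazily loaded frame that reloads itself when a page refresh morphs
    the document, so paginated content isn't silently dropped. %>
<turbo-frame id="cards_page_2" src="/cards?page=2" loading="lazy" refresh="morph">
</turbo-frame>
```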

Speaker 3:

Well, we are actually reusing data-turbo-permanent; there was an existing Turbo attribute that we are reusing for this in the version we are upstreaming to Turbo 8. So we are solving those two problems. There is also, in Idiomorph, for example, a setting so that when you are morphing the page, if a form control has focus, so if document.activeElement is that control, it's not going to touch it, to keep focus, for example, to keep that untouched. This is pretty much working in our use cases, but I'm sure that when people start using it in the wild, more things will pop up and more scenarios to handle. But with these simple things, I mean, we have tested the system in very realistic scenarios and it's working very well for us. So I'm optimistic it's going to work well for most folks out there, but there will be things to refine, for sure.
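
The menu example would look roughly like this; note that data-turbo-permanent needs an id so the element can be matched across renders (the id here is hypothetical):

```erb
<%# Elements marked data-turbo-permanent keep their client-side DOM state,
    an open menu for instance, across a morphing page refresh. %>
<nav id="account_menu" data-turbo-permanent>
  ...
</nav>
```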

Speaker 1:

Do you run into any issues with, say, a password manager extension that might inject some stuff on the page, or Loom, which I know adds buttons to different websites? Does that end up problematic with the morphing? Or is it one of those where it's like, the current page has these elements, we're adding the new changes, but that's not in there? Does it keep that stuff, or does it remove it and then maybe the Chrome extension just re-adds it?

Speaker 3:

We use 1Password internally and we haven't run into issues with it, but that's not to say it might not happen. That's a good point. I hadn't thought about extensions, to be honest.

Speaker 1:

I would have to assume that they implement sort of the Stimulus style of monitoring the page, to work with single-page apps and anything else that might do the exact same process of not doing a full page load and then applying their changes. They've got to be monitoring the page and then doing something dynamic, I would assume. So probably, if you do run into issues, it's more likely an issue with their implementation, not so much the Idiomorph diffing or anything, I would guess.

Speaker 3:

I would expect that also. You can replace a form with just a regular Ajax request in a page.

Speaker 3:

So I remember an issue we found in our tests. There is a scenario where, when you're morphing a page, Stimulus controllers are not going to reload by default, because you are not touching the DOM unless it changes. That's a good thing; if you think of the equivalent Turbo Streams updates, it's the same. So we made Stimulus controllers not reconnect if nothing has changed in the page. But we found some cases where some Stimulus controllers, when connected, were modifying the element under the hood, maybe adding some attribute or something like that. So one heuristic we are using is that when you are morphing a DOM node, meaning the DOM node has changed and you are updating it with Idiomorph, and it has a Stimulus controller, we reconnect the Stimulus controller, so that you make sure it gets disconnected and connected again. In the use cases we found in our code base, that solved the issue. So that was a heuristic that was working well for us.

Speaker 3:

Maybe we might need to introduce more fine-grained controls there, to say, okay, you want to reconnect, you don't want to reconnect. But we are really trying to find a very seamless happy path, and to fight hard before deviating from that, because we want to keep things simple. My dream, my vision for this, is that we are going to enhance the default Turbo Drive behavior without making it a programmer concern. What I would love is that you update to Turbo 8 and your screens start to behave better when you submit forms, without you having to do anything. Probably reality is going to bring some adjustments to that vision, but that's still an important vision for us, to make this seamless.

Speaker 1:

Yeah, I can imagine you have a set of tabs, the user changes to the second tab, but then it refreshes. The server side doesn't know you're on tab number two, so it might re-render with the attribute selecting the default tab or something, and then you'd have to reconcile that in some way when you refresh the page. But those hopefully become seamless, and then you've eliminated the whole spaghetti, I guess you could call it, of the Turbo Streams trying to interact with 12 different things on a create event or something. It would just be the good old redirect to the show page or whatever, and the rest of it's taken care of for you.

Speaker 3:

Yeah, and this is meant to work so that the URL reflects the page you're seeing. So if there are many screen-state changes that are not reflected in the initial load and you're keeping that state in the page, depending on those, you're probably going to use the data-turbo-permanent tag in creative ways so that state gets respected when morphing. But yeah, that's the idea.

Speaker 1:

Yeah, awesome. Well, unfortunately we're out of time; we could probably talk for another couple of hours, but this is super exciting. Is there anything you want to share before we wrap up here, or drop a release date for Turbo 8 or anything?

Speaker 3:

No, we don't have a release date. I mean, we are still working on the pull requests. The pull requests are open in the repo, so I invite anyone to go try it.

Speaker 3:

Yeah, you can go try it and give feedback. We have already collected some good feedback on the approach. The other side of this is broadcasting changes, which we didn't discuss now, but there is another exciting simplification that this is going to bring to Turbo. I wrote an article on dev.37signals.com, so you can check the article there describing the whole approach if you are interested. We don't have a release date; we are working on it. It should be a matter of weeks to get this merged, a few weeks, not a lot of weeks, and then Turbo 8 should happen shortly after that. But no release date, sorry about that. And thanks so much for inviting me. I wanted to point out, if someone wants to reach out, I have a webpage, which is jorgemanrubia.com, and you can reach me by email; I love email discussions. jorge at hey.com is my email, if someone wants to reach out. It's been my pleasure talking to you folks.
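
For the curious, the broadcasting simplification he alludes to later shipped in turbo-rails 2.0 as a single model macro; the model here is hypothetical. Instead of hand-crafted broadcast partials, the record broadcasts a refresh signal and subscribed pages re-fetch and morph themselves.

```ruby
class Event < ApplicationRecord
  # Broadcasts a "refresh" stream action on create/update/destroy.
  broadcasts_refreshes
end
```

Pages that should live-update subscribe with the existing `turbo_stream_from` view helper.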

Speaker 2:

Yeah, it was fantastic having you, and maybe when Turbo 8 drops or something, you can come back on and we can catch up and do this again.

Speaker 1:

Sure, yeah, it'd be cool to see what we talked about today and then what's evolved in the final version of it. So yeah, nice, looking forward to it. Well, thanks for joining us. This was a blast, and we will talk to you soon. Thank you so much, folks. Thank you.

Remote Work and Ruby With Jorge
Challenges and Innovations in Encryption
Encryption and Auditing in Rails Applications
Calendar UI Fidelity and Alternative Approaches
Benefits and Trade-Offs of Morphdom
Exploring Turbo Frames and Upcoming Changes
Opportunity for Future Discussion