Turbolinks and the Prague Café Effect

Turbolinks

Turbolinks is a new Ruby library, enabled in Rails 4 by default, that is designed to speed up your web applications.

It does this by binding a JavaScript handler to all link clicks. Instead of allowing the browser to load the new page, it fetches the page in the background via AJAX, parses out the body, and injects it into the document you’re currently viewing.
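Conceptually, the click handler looks something like this (a greatly simplified sketch, not the library’s actual source; real Turbolinks also deals with caching, assets, and plenty of edge cases):

```javascript
// A greatly simplified sketch of the idea -- not Turbolinks' actual source.
document.addEventListener('click', function(event) {
  var link = event.target;
  if (link.tagName !== 'A') return;

  event.preventDefault(); // stop the normal full-page load

  var xhr = new XMLHttpRequest();
  xhr.open('GET', link.href);
  xhr.onload = function() {
    // Parse the response into a detached document...
    var doc = document.implementation.createHTMLDocument('');
    doc.documentElement.innerHTML = xhr.responseText;

    // ...then swap in its body and title. Scripts and stylesheets in the
    // current <head> are left alone, so they are not re-downloaded or
    // re-parsed.
    document.body = document.adoptNode(doc.body);
    document.title = doc.title;
    history.pushState({}, doc.title, link.href);
  };
  xhr.send();
});
```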

The main advantage of Turbolinks is that your static assets such as JavaScript or CSS will not be downloaded or parsed every time a link is clicked. This can result in a significant client-side speed improvement for your end users.

Turbolinks was extracted from work that 37signals did on the latest release of their Basecamp product and has given them some serious speed improvements.

I happen to live in one of the largest cities in North America. Basecamp is hosted out of Chicago, which, at 880 km away, is actually quite close to me! The Internet is fast, and we can send those packets over that distance in a jiffy.

When I click a link on Basecamp, things load super quickly! Good job, 37signals and Turbolinks!

…But what if I didn’t live in North America?

The Prague Café Effect

A few developers traveled to Prague a few months ago for a conference. I’m told they had a fantastic time: the city was beautiful, the food was delicious and the people were lovely. The one major downside was they had to go to a local café to access the Internet.

Accessing a North American site was a wholly different experience. Web applications that used to be fast suddenly took forever to respond. All those packets had 7x as far to travel each way!

Large companies like Amazon have a solution to this problem: they set up data centres all over the world. Their users are directed to the closest server for faster speeds.

The issue with this approach is that setting up data centres all over the world is too expensive for most. Additionally, the development work involved in partitioning your data, or keeping it in sync over large distances, is hard. Chances are, unless you are a Google or Amazon, you probably aren’t doing this.

Fortunately, there is an accessible way to use geographically disparate data centres that is easy and cheap: The trusty Content Delivery Network (CDN).

Setting up a CDN is ridiculously easy these days. There are dozens of cheap and reliable ones. I can’t recommend them enough. If you care about the performance of your web application, you should be using a CDN.

The major downside of CDNs is that they only blaze on static content. Your North American dynamic content won’t arrive any faster at that internet café in Prague.

Back to Basecamp

The smart guys at 37signals use a CDN for Basecamp. Good stuff. But what happens when you click around? For example, here is my list of projects in Basecamp:

On the left column in the screenshot, directly below New Project, they have a few icons you can click to filter your project list. When I click one of the icons, I get a view like this:

This view must be very useful if you have many projects. However, what do you think happened between my browser and the server when I clicked the icon?

My web browser already contained all the data I needed to display the list (just a title and a link). However, due to the way Basecamp is set up, it actually makes a full HTTP request to the server, which returns HTML, which is then shoved into the DOM.

This is a major flaw. Turbolinks encourages you to make requests to the server when you want to change the UI, even when the end user already has all the data they need!

But isn’t Turbolinks supposed to be fast? Yes, but Turbolinks is biased towards those who live close to your servers.

Leveraging your CDN

There is a way you can leverage the power of the CDN to improve the performance of your web application, regardless of where your users live: by using a JavaScript MVC framework. There are many to choose from, but for the last year I’ve been using Ember, so I’ll speak about that one.

In a typical Ember application, you perform all your rendering on the client side, using Handlebars templates. Your templates are bound to the objects you have in the browser’s memory, so if an object changes, the template automatically re-renders. If the projects list in Basecamp were coded this way, clicking the new filter would just tell Ember to re-render the page using a different view, and nothing would be exchanged with the server.
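A controller for such a projects list might look roughly like this (an Ember 1.x-era sketch; the property names are assumptions, not Basecamp’s actual code):

```javascript
// A hypothetical Ember controller; `filter` and `starred` are made up.
App.ProjectsController = Ember.ArrayController.extend({
  filter: 'all',

  // The Handlebars template iterates over this computed property. When
  // `filter` changes, it recomputes and the list re-renders from objects
  // already in memory -- no request is made to the server.
  filteredProjects: function() {
    var projects = this.get('content');
    if (this.get('filter') === 'starred') {
      return projects.filter(function(project) {
        return project.get('starred');
      });
    }
    return projects;
  }.property('filter', 'content.@each.starred')
});
```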

The default behavior in an Ember app is to only download your stylesheets and scripts once, so you get the benefit of the Turbolinks-style “one-time parse” too.

Of course, you can’t eliminate the conversation with the server entirely: JSON data would need to be sent down the wire initially to populate the projects list. However, that payload is generally a lot smaller, as JSON contains much less extraneous data than the HTML your server would send back. And you can do a lot more with it, for example filtering to only “starred” projects rather than making a whole new request.
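For illustration, a hypothetical payload and filter might look like this (the names and fields are made up):

```javascript
// A hypothetical initial payload: just the data, no markup.
var projects = [
  { id: 1, name: "Redesign",  starred: true  },
  { id: 2, name: "Marketing", starred: false }
];

// Filtering to "starred" projects is then a pure in-memory operation;
// nothing crosses the ocean.
var starred = projects.filter(function(p) { return p.starred; });
```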

I’ve found that the more comfortable I get with this approach, the more I end up shoving into the CDN for fast delivery to end users. All my application’s templates live in the CDN, as does all the UI logic.

I deliberately left out a part of the Prague Café story.

The developers there had access to an Ember app I’ve been working on. They claimed it was one of the fastest sites they accessed. Our CDN delivered almost everything to them, so they only had to reach across the ocean when they absolutely needed to.

If you haven’t had a chance to investigate a JavaScript MVC framework, I highly recommend you skip Turbolinks altogether and do so.


Imported from: http://eviltrout.com/2013/01/06/turbolinks-and-the-prague-effect.html

Comments

Adopting CoffeeScript by default never bothered me. It's an opinionated default that I could turn off by commenting out one line, and while I like CoffeeScript, from an application design perspective the end result is largely the same whether you use it or not.

Adopting Turbolinks by default doesn't sit well with me, because it fundamentally changes the way that developers are supposed to conceptualize their applications, and there are gotchas that counteract the magic performance gains many will expect.

It certainly adds yet another technology people have to be familiar with to use Rails in an out-of-the-box way, and that's a shame.

"All my application’s templates live in the CDN, as does all the UI logic." Does this mean you separately package your .js files for your Ember app? One for templates, one for views, controllers, etc?

I actually end up packaging it all together in one big bundle. The rails asset pipeline makes this easy.
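For illustration, a typical Sprockets manifest that bundles everything looks something like this (a sketch; the file layout is an assumption):

```javascript
// app/assets/javascripts/application.js -- Sprockets directives live in
// comments; require_tree pulls in every file under the given directory.
//= require jquery
//= require handlebars
//= require ember
//= require_tree ./templates
//= require_tree ./controllers
//= require_tree ./views
```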

But during development I have everything expanded into dozens (hundreds?) of CoffeeScript files.

The cool thing is it compresses very well. The entire client side app ends up being 220k compressed, which is smaller than many sites that don't do client side MVC!

You're comparing two completely different architectures. The rationale behind Turbolinks isn't just speed (I'd argue it's not _at all_). It's being able to stick with Rails' tools and development model. As DHH made clear in this interview -> http://www.youtube.com/watch?v...

"... due to the way Basecamp is set up, it actually makes a full HTTP request to the server, which returns HTML, which is then shoved in the DOM."

So that's a sync problem, not a Turbolinks problem. By using HTTP caching correctly in your app (if Basecamp doesn't, email them and discuss that), you'll get around that problem. And still retain a mostly server-side architecture, which, again, is the point of using Turbolinks.

http://en.wikipedia.org/wiki/S...

Didn't Twitter decide to move away from all that client-side JavaScript handling because it became too slow?

"The main advantage of Turbolinks is that your static assets such as
Javascript or CSS will not be downloaded or parsed every time a link is
clicked."

You forgot the other main advantage of Turbolinks, which is allowing the server to render the HTML, and not having to rewrite all your existing UI code to use the latest trendy JavaScript template whizbang library.

"So that's a sync problem, not a Turbolinks problem. Using HTTP caching correctly in your app (if Basecamp doesn't, email them and discuss that), you'll get around that problem."

Even if you're using and relying on HTTP caching correctly, a full request is a full request. If the server, or some box between you and the server, responds with a 304, it doesn't change the fact that a request was made and the browser has to diligently wait for a response. If the request results in a page getting pulled from cache, that page is still fully parsed and all associated assets are requested yet again (even if they too get a 304 response and ultimately come out of the cache).

The requests are made and the browser still has to wait for a response.

It sounds like Turbolinks is indeed intended to alleviate this somewhat, thereby providing a performance bump. It's all about speed. The idea that you'd argue otherwise seems odd. I do agree with you that it's an attempt to allow Rails, as a server-side framework, to stay relevant in an increasingly JS-heavy, client-driven environment, but only because this style of app provides such a snappy, speedy user experience.

I think the post is geared towards people deciding what to use when they're starting new projects. So "rewriting existing UI code" is not an issue, since there is no existing UI code.

The goal of Turbolinks seems to be to give existing apps that render HTML a bit of a speed increase, to put them *on par* with the JS frameworks of the day. If you're developing a UI-heavy app, Turbolinks does not replace a JS MVC framework, but for apps that aren't as complicated, it will offer a significant speed increase. That is, if you're not already using PJAX. :)
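For context, minimal jquery-pjax wiring looks roughly like this (the container selector here is an assumption, not from the comment):

```javascript
// Intercept link clicks, fetch the page via AJAX, and replace only #main,
// rather than swapping the whole body as Turbolinks does.
$(document).pjax('a', '#main');
```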

Yes, loading an entire MVC framework to display 140 chars was stupid.

Not at all what the author is talking about.

so basically Ruby on Rails has discovered AJAX? in 2013?

Was that really the reason? As far as I remember, server-side rendering granted them more caching techniques, which is IMO one of the still-remaining elephants in the room concerning JS MVC frameworks. In Twitter's case, every feed page refresh was pinging the tweets API and rendering, which delayed what they called "time to first tweet".

The article is a bit biased. You focus on some (not all) of the weaknesses of Turbolinks, and on the other hand on some of the strengths of MVC frameworks, and I don't really understand why you would compare them in the same post, since it's apples and oranges.

I personally dislike Turbolinks because of what I think is a bad implementation of something that is assumed to fit every use case (the pain I have to endure to create non-turbo'ed links is immense...), not to mention that, if I divide my assets per context, I might actually need to load them more often than not. I also think Turbolinks missed the point of rendering full HTML pages asynchronously.

Take Facebook, for instance. They are probably the best-known case of an app rendering full pages on the client side... or are they full pages? They actually do it because of their footer, the chat app. If they weren't doing that, they'd have to restart the chat app on every browser request. So it shouldn't only be about speed. Or take SoundCloud (which might be using Turbolinks, since they're on Rails, but I might be wrong). Their new layout is now fully dynamic, using everything from full client-side page rendering to client-side MVC. What was the main gain of rendering full pages on the client side for them? The old SoundCloud would stop playing music every time you changed pages. The new one doesn't suffer from the same flaw: it uses the HTML5 audio tag, and since pages are not browser-loaded, the music keeps playing. Wonderful.

So, I personally think the idea is valid, but the implementation is weak. Its addition by default in Rails is just another case of "Rails is what 37signals decides".

I also like client-side MVC frameworks, but they do bring some disadvantages. The first is ease of refactoring in old projects (granted, it's a new paradigm; it shouldn't be easy). Another is adding a new templating language on top of the one you're already using to render things on the server side. I have a Rails app which uses Slim for server-side rendering and Mustache for client-side. Worse, if you actually need to render the same thing on the server side and on the client side (crawlers don't care about JS MVC), you'll need to maintain two templates, written in two different templating languages, that have to render the same HTML, which is a nightmare to maintain. Lastly, rendering on the client side will not let you use caching techniques that only exist on the server side, like fragment caching. If I load a page with an MVC-rendered list in it today, and I open the same page tomorrow, the whole cycle (make AJAX request + get JSON + render template + insert into DOM) happens each time. If I render it on the server side, I can always fragment-cache the list.

So, wrapping it up: I like both paradigms, both have their advantages and disadvantages, and I'd focus more on the implementation. Like, Turbolinks sucks, totally with you on that. Now I'm missing a good article comparing Backbone with Ember, which would be cool because I don't know the latter that much.

I'm just reading about Turbolinks (and Ember) for the first time, so I'm curious when you say "the pain I have to endure to create non-turbo'ed links is immense". Why is it hard? It seems like all you have to do is add "data-no-turbolink" to a div, or even to the body to turn them off altogether (or simply not include the gem). Are there other situations that make this difficult?

Besides that, I really appreciate your comment – a good explanation!

Hi,

Of course, I wasn't talking about the case of just not including the gem. I'd just argue that including it by default in Rails will make new Rails developers eventually use it, abuse it, and unlearn everything (kind of like what RJS did back in Rails 1/2).

Well, I don't like this type of solution because it inserts two types of event listeners on the client side. The "standard" anchor tag gets turbolinked by default; that is, there is an event listener on the full page to handle this special click event. On the other hand, you have another type of event listener for the anchor tags which should not be turbolinked (the ones targeted by the attribute you mentioned). So, two heavy listeners on the client side, with performance varying from browser to browser. If turbolinking links were the exception instead of the rule, you would only need one of these listeners. Just saying. Besides that, I wonder what the data-no-turbolink handler does, whether it affects subsequent click event handling on specific anchor tags you might target, and, if I want to handle click events on turbolinked anchor tags, how I should handle event bubbling. I don't see any of that specified in the documentation; I guess I'll have to learn by doing. These are some of the issues I have.

There's also something I mentioned: although it is deemed faster than usual, the turbolinked request still builds the full page, of which you just parse the body. If Turbolinks just wants the body, why doesn't the server just render the body then? Besides, I don't like the paradigm of swapping one body tag for another. What if I just want to replace the main content, à la Facebook? I have stuff I don't want to see removed (navigation bar, chat); I just want the main content replaced. Can I do that with Turbolinks? I guess not.

The beauty with JS MVC — especially when using template strings instead of DOM generation — is that by using Node you can actually do this on the server side. So on the first page request, the server generates the whole ready-made DOM and hands it back to you: only event binding remains to be done on the front-end. Future requests then take the form of JSON and benefit from all the advantages mentioned.

Of course, implementing the above is difficult — I only know of one company doing it at the moment (Airbnb, using Backbone and Node) — but the point is that JS universality technically makes this a non-issue.
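In outline, the shared-rendering idea looks something like this (a minimal sketch; the template and data are hypothetical, not Airbnb's actual code):

```javascript
// The same Handlebars template renders on the server via Node for the
// first request, and in the browser thereafter.
var Handlebars = require('handlebars');

var source   = '<ul>{{#each projects}}<li>{{name}}</li>{{/each}}</ul>';
var template = Handlebars.compile(source);

// On the server: render to a string and ship ready-made HTML.
var html = template({ projects: [{ name: 'Redesign' }, { name: 'Launch' }] });

// In the browser: the same compiled template re-renders from JSON on
// subsequent requests; only event binding remains after the first load.
```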

I am not as technically skilled, but purely in terms of UX: while I love the idea behind JS client apps, I have yet to see one that works fast, light, and responsive. Most work wonderfully well in a single-tab environment, but once you go beyond that, client-side JS apps tend to suck, especially if you are on an older OS and an older computer. And Basecamp is one of the few that I felt is still super fast even when I am in Singapore or Taiwan. Yes, that is about as far from the US as you can get.

And while I would agree Discourse is quite fast, it is simply not "Basecamp" speed.

Nobody's 'enforcing' this. As far as I know, only one major company is doing this (write-up: http://nerds.airbnb.com/weve-l...). The beauty is in keeping it absolutely DRY. A lot of web crawlers do speak JS, but obviously it's good to make your content available without a functional JS dependency. People have been doing this for yonks; the only problem is having to maintain persistent view logic on the front and back end. Write-once-run-everywhere is still pretty cool.

'enforcement'? I don't understand. What's enforcing what? When I say 'beauty', I'm just talking about the relative development efficiency of only having to write view logic once, rather than once for the back-end and once for the front.