Ever since AJAX became commonplace on the web, we have been looking for feasible ways to make the content crawlable by spiders. Recently, you may have noticed a handful of websites with strange-looking URLs, something like this: example.com/#!some/unique/identifier
Why would anyone do such a thing? The answer should be obvious: Google. In a guide released on Google Code, they suggest creating “pretty URLs” that allow Google to find your “web snapshots.” Among the big players using this URL scheme are Twitter and Facebook, which inevitably means it has started to spread as a sure-fire, absolutely perfect way of getting AJAX-ed content crawled.
Well, like most truths, this isn’t completely true. As a matter of fact, there is hardly ever a time you want to use this scheme. As Mike Davies awesomely outlined, there are many disadvantages to using it, and most of the advantages are myths (I’ll wait here ’til you go read that).
For the lazy, to summarize: you are in essence making a client-side front controller. The “#” in the URL breaks it into a resource (example.com) and a fragment (!some/unique/identifier) – the fragment is used as the unique identifier for what content to AJAX in, but every request routes through the same resource. It’s as if your website only has one page, which goes against all of our traditional SEO rules. Additionally, this AJAX scheme is not a standard, it’s a Google facility – meaning you’ve now ousted every other spider – kudos!
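To make that split concrete, here is a minimal illustration (the URL is hypothetical) of how a hash-bang URL divides into the resource the server sees and the fragment your client-side code handles – the browser never sends the fragment in the HTTP request:

```javascript
// Splitting a hash-bang URL into the two parts described above.
var url = 'http://example.com/#!some/unique/identifier';
var parts = url.split('#');

var resource = parts[0]; // "http://example.com/" — the only part the server ever sees
var fragment = parts[1]; // "!some/unique/identifier" — handled entirely client-side
```

Every distinct fragment still maps to the same server-side resource, which is exactly why the site behaves like a single page to anything that doesn’t execute JavaScript.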
What about private web apps?
Mike Davies’ article was awesome, and you should really take to heart his guidance about not using hash-bangs on public sites. However, I want to take a tangent on a topic that Davies brought up – using hash-bangs in web applications. In his article he writes:
“Engineers will mutter something about preserving state within an Ajax application. And frankly, that’s a ridiculous reason for breaking URLs like that.”
Given his context – publicly searchable content – I’ll give him that. But in the land of private SaaS web applications, where the crawl-ability of your content isn’t a requirement or concern, let’s re-evaluate:
- Using hash-bang URLs prevents non-Google crawlers from consuming AJAX content – but a login system prevents them all from reading it! Crawl-ability is not a concern.
- Using #! preserves the back/forward buttons – for web apps, our main concern is UX, and hashes are a native browser navigation feature, as opposed to manually binding to the forward/back buttons.
- Using hashes allows you to develop an AJAX navigation framework around window.location.hash.
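As a sketch of that last point, here is a minimal hash-based router. All names here (parseHash, routes, dispatch) are illustrative, not from any particular library; the sketch assumes the fragment after “#!” is treated as a route path:

```javascript
// Pure helper: turn a hash like "#!/inbox" into a route path like "/inbox".
function parseHash(hash) {
  if (hash.indexOf('#!') === 0) {
    return hash.slice(2) || '/';
  }
  return '/';
}

// Hypothetical route table mapping paths to content loaders —
// in a real app these would kick off AJAX requests and swap in content.
var routes = {
  '/': function () { return 'home'; },
  '/inbox': function () { return 'inbox'; }
};

// Look up and run the handler for the current hash.
function dispatch(hash) {
  var path = parseHash(hash);
  var handler = routes[path];
  return handler ? handler() : 'not-found';
}

// In a browser, wire dispatch to the native hashchange event, so the
// back/forward buttons re-dispatch automatically (guarded so the sketch
// also runs outside a browser).
if (typeof window !== 'undefined') {
  window.addEventListener('hashchange', function () {
    dispatch(window.location.hash);
  });
}
```

Because changing location.hash creates a history entry natively, back and forward navigation fires hashchange for free – no extra state bookkeeping required.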
To wrap up, Mike Davies definitely hit the nail on the head. In regards to using hash-bang URLs for public content sites … don’t. But for private web applications, the land of make-it-work, hash-bangs can provide some much needed functionality to accommodate AJAX navigation. I’d love to open the floor up to some other pros/cons or solutions about using hash-bangs in SaaS web applications.