The Ajax crawling specification was originally intended for JS apps that use the hash fragment in the URL, which was a popular technique for creating permalinks when the spec was initially developed. However, we can still use the same spec, with a few tweaks, for modern JS apps using HTML5's pushState to modify the browser's URL and history.
First, add the following meta tag to every page that needs to be crawled:
<meta name="fragment" content="!">
This instructs Google's crawler to use the Ajax crawling specification with your site. When it sees this tag it'll fetch your page again, this time with an _escaped_fragment_ query parameter appended (so http://example.com/about is re-fetched as http://example.com/about?_escaped_fragment_=). We can detect this parameter and serve up spider-safe content.
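Outside of a web framework, the detection itself is nothing more than a query-string lookup. A minimal plain-Ruby sketch (the helper name is my own, not part of the spec):

```ruby
require 'cgi'

# Hypothetical helper: true when the request URL carries the
# _escaped_fragment_ parameter that Google's crawler appends,
# e.g. /about?_escaped_fragment_=
def spider_request?(query_string)
  CGI.parse(query_string.to_s).key?('_escaped_fragment_')
end
```

Here `spider_request?('_escaped_fragment_=')` is true, while `spider_request?('utm_source=feed')` is false.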
You can see an example of this on Monocle, on both the index page and the post page. If the query parameter is present (even with no value set), I serve up raw HTML instead of the JS app.
The code to do so is pretty straightforward. I'm using Sinatra, but the example below should give you a good indication of how to implement this in your framework of choice. I have two routes which are conditional on _escaped_fragment_ being present as a parameter.
# Custom route condition: only matches when the
# _escaped_fragment_ parameter is present
set :spider do |enabled|
  condition do
    params.has_key?('_escaped_fragment_')
  end
end

get '/', :spider => true do
  @posts = Post.published.popular.limit(30)
  erb :spider_list
end

get '/posts/:slug', :spider => true do
  @post = Post.first!(slug: params[:slug])
  erb :spider_page
end
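If you're unfamiliar with Sinatra's route conditions: a condition is just a predicate evaluated before a route matches, and the first route whose conditions all pass wins. A simplified plain-Ruby model of that behaviour (my own sketch, not Sinatra's actual dispatch code):

```ruby
# Hypothetical, simplified model of Sinatra's route conditions: a route
# only matches when every attached condition returns true for the request.
Route = Struct.new(:path, :conditions, :handler)

def dispatch(routes, path, params)
  route = routes.find do |r|
    r.path == path && r.conditions.all? { |c| c.call(params) }
  end
  route ? route.handler.call(params) : :not_found
end

spider = ->(params) { params.key?('_escaped_fragment_') }

routes = [
  Route.new('/', [spider], ->(_) { :spider_list }),  # crawler-safe HTML
  Route.new('/', [],       ->(_) { :js_app })        # normal JS application
]
```

Because the spider route is listed first, crawler requests get the static page, while ordinary requests fail its condition and fall through to the JS app.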
Make sure that you provide at least a title, meta description, header and text content on each page. Also make sure the meta description matches what you want to be displayed on the search results page.
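As a sketch of what such a spider page might contain, here's a hypothetical spider_list-style template rendered with plain ERB (the markup and variable names are my own illustration, not Monocle's actual template):

```ruby
require 'erb'

# Hypothetical spider template: just a title, meta description, header
# and text content -- everything a crawler needs, and no JavaScript.
SPIDER_LIST = <<~TEMPLATE
  <html>
    <head>
      <title><%= title %></title>
      <meta name="description" content="<%= description %>">
    </head>
    <body>
      <h1><%= title %></h1>
      <% posts.each do |post| %>
        <article>
          <h2><a href="/posts/<%= post[:slug] %>"><%= post[:name] %></a></h2>
          <p><%= post[:summary] %></p>
        </article>
      <% end %>
    </body>
  </html>
TEMPLATE

def render_spider_list(title:, description:, posts:)
  ERB.new(SPIDER_LIST).result_with_hash(
    title: title, description: description, posts: posts
  )
end
```

The meta description here is what you'd want surfaced in the search results snippet, per the advice above.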
And that's all you should need to do. You now have all the user-experience benefits of a JS web application without any of the SEO drawbacks.
One other technique that was pointed out to me is rendering HTML content straight into a <noscript> tag embedded in the page. I prefer the Ajax crawling spec approach though, as it means you're not forced to run SQL queries or render templates unnecessarily for non-bots.