
AJAX websites tend to feel much faster because they update only parts of a web page instead of reloading the whole page. AJAX lets a web page update asynchronously by retrieving data from the server behind the scenes.
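As a minimal sketch of the idea (the /data endpoint and the content element here are placeholders, not part of the example site built below):
// Fetch data from the server in the background and update a single
// element, without a full page reload. "/data" is a hypothetical endpoint.
var xhr = new XMLHttpRequest();
xhr.open("GET", "/data", true);
xhr.onload = function(){
    if(xhr.status == 200)
    {
        document.getElementById("content").innerHTML = xhr.responseText;
    }
};
xhr.send();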
But the problem is that crawlers are unable to see any content that is created dynamically i.e., fetched using AJAX. Some search engines such as Google, Bing etc execute JavaScript but still it doesn’t make an AJAX website search friendly. In this tutorial I will tell you how to create a search friendly AJAX website. This tutorial is independent of any frontend framework i.e., AngularJS, Ember etc.
Ways to Create an AJAX Website
There are basically two ways to create an AJAX-based website:
- Using a # fragment
- Using the HTML5 History API
Let’s see how to make websites created using any of these techniques SEO friendly.
Creating an AJAX Website Using a ‘#’ Fragment
Here is an example of an AJAX website that uses a hash fragment:
<html>
<head>
<title>Sample AJAX Website</title>
</head>
<body>
<div id="content"></div>
<a href="#page=1&full=true">Previous</a> <a href="#page=2&full=true">Next</a> <a href="#">Home</a>
<script type="text/javascript">
// Read the hash fragment and render the requested page.
function handle_ajax()
{
    var query_string = window.location.hash;
    // Strip the leading "#".
    query_string = query_string.substr(1);
    query_string = query_to_object(query_string);
    var page = query_string.page;
    var full = query_string.full; // carried in the URL but not used here
    document.getElementById("content").innerHTML = "Page " + page;
}
// Render the right content on the initial page load.
window.addEventListener("load", function(){
    if(window.location.hash)
    {
        handle_ajax();
    }
    else
    {
        document.getElementById("content").innerHTML = "Home";
    }
}, false);
// Re-render whenever the hash changes, i.e., a link is clicked or the
// user navigates with the back/forward buttons.
window.addEventListener("hashchange", function(){
    if(window.location.hash)
    {
        handle_ajax();
    }
    else
    {
        document.getElementById("content").innerHTML = "Home";
    }
}, false);
// Convert a query string like "page=1&full=true" into an object.
function query_to_object(str)
{
    return (str || document.location.search).replace(/(^\?)/,'').split("&").map(function(n){return n = n.split("="),this[n[0]] = n[1],this}.bind({}))[0];
}
</script>
</body>
</html>
Now, this site is not search engine friendly. # fragments are meant for linking to content within a page, but here we are using them to load content dynamically. From a search engine's perspective, all of the # links point to the same page.
To make the site search friendly, we need to provide an alternative static page for each of the # links, which search engines can crawl and index. In a nutshell, we need to tell search engines that the # links actually point to different pages, not to anchors within the same page.
Change the above code to the following to make the site search friendly:
<html>
<head>
<title>Example AJAX Website</title>
<?php
    // Add the fragment meta tag only to the normal version of the page.
    // It tells crawlers to request this page again with an empty
    // ?_escaped_fragment_= parameter. The static snapshots served for
    // those requests must not contain the tag themselves.
    if(!isset($_GET['_escaped_fragment_']))
    {
        echo '<meta name="fragment" content="!">';
    }
?>
</head>
<body>
<div id="content">
<?php
    // Serve a static snapshot of the requested page to crawlers.
    if(isset($_GET['_escaped_fragment_']))
    {
        $query = $_GET['_escaped_fragment_'];
        if($query == "")
        {
            echo "Home";
        }
        else
        {
            // $query looks like "page=1&full=true".
            parse_str($query, $query_array);
            if($query_array["page"] == 1)
            {
                echo "Page 1";
            }
            else if($query_array["page"] == 2)
            {
                echo "Page 2";
            }
        }
    }
?>
</div>
<a href="#!page=1&full=true">Previous</a> <a href="#!page=2&full=true">Next</a> <a href="#">Home</a>
<script type="text/javascript">
<?php
    // The static snapshots don't need the AJAX code.
    if(!isset($_GET['_escaped_fragment_']))
    {
?>
// Read the hash fragment and render the requested page.
function handle_ajax()
{
    var query_string = window.location.hash;
    // Strip the leading "#!".
    query_string = query_string.substr(2);
    query_string = query_to_object(query_string);
    var page = query_string.page;
    document.getElementById("content").innerHTML = "Page " + page;
}
window.addEventListener("load", function(){
    if(window.location.hash)
    {
        handle_ajax();
    }
    else
    {
        document.getElementById("content").innerHTML = "Home";
    }
}, false);
window.addEventListener("hashchange", function(){
    if(window.location.hash)
    {
        handle_ajax();
    }
    else
    {
        document.getElementById("content").innerHTML = "Home";
    }
}, false);
// Convert a query string like "page=1&full=true" into an object.
function query_to_object(str)
{
    return (str || document.location.search).replace(/(^\?)/,'').split("&").map(function(n){return n = n.split("="),this[n[0]] = n[1],this}.bind({}))[0];
}
<?php
    }
?>
</script>
</body>
</html>
Here is how the above code works:
- We replaced every # with #!. When search engines see #! in the href attribute of an anchor tag, they fetch the link with the fragment converted into a ?_escaped_fragment_= query parameter. For example, if our domain name is qnimate.com and we have a hyperlink to #!page=1&full=true, then search engines fetch the hyperlinked page as http://qnimate.com/?_escaped_fragment_=page=1%26full=true (see the sketch after this list). On the server side we need to handle these "ugly" URL requests and serve an alternative static HTML version of the page for search engines to index.
- We may not want to point to some AJAX pages with a #! fragment at all. In that case we can add <meta name="fragment" content="!"> to the <head> of the page so that search engines still fetch the ugly version of it. For example, we want users to visit our home page as qnimate.com, not qnimate.com/#!page=home, but our home page content is rendered by AJAX, so we add this meta tag to ask search engines to fetch a static version of it. Search engines will then request the alternative version of our home page as simply http://qnimate.com/?_escaped_fragment_=.
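To make that URL mapping concrete, here is a small illustrative sketch (the function name is mine, and the escaping is simplified to the characters that matter here rather than the spec's full rules):
// Illustration only: derive the "ugly" URL a crawler requests
// from a pretty "#!" URL. The function name is hypothetical.
function to_escaped_fragment_url(pretty_url)
{
    var parts = pretty_url.split("#!");
    if(parts.length < 2)
    {
        return pretty_url; // no #! fragment, nothing to rewrite
    }
    // Simplified escaping: special characters in the fragment are
    // percent-encoded, which is why "&" becomes "%26" below.
    var fragment = parts[1].replace(/%/g, "%25").replace(/&/g, "%26").replace(/#/g, "%23");
    return parts[0] + "?_escaped_fragment_=" + fragment;
}
// to_escaped_fragment_url("http://qnimate.com/#!page=1&full=true")
// returns "http://qnimate.com/?_escaped_fragment_=page=1%26full=true"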
This solution was introduced by Google, and other search engines have been following it since. Learn more about “#!”.
Creating an AJAX Website Using the HTML5 History API
Using the HTML5 History API is the modern way of creating AJAX websites. The History API lets you change the URL in the browser's address bar without reloading the page, and every time the URL changes a new history entry is created. From the user's perspective the URLs also look prettier, as there are no dirty # or #! fragments.
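Before the full example, here is a minimal sketch of the two core pieces of the API, pushState() and the popstate event (the state object and URL below are arbitrary examples):
// Add a new history entry and change the address bar to "/page/1"
// without reloading the page. The URL must be same-origin.
window.history.pushState({page: 1}, "", "/page/1");

// Fires when the user navigates back or forward through entries
// created with pushState().
window.addEventListener("popstate", function(e){
    // e.state is the object passed to pushState(), or null for the
    // entry created by the initial page load.
    console.log(e.state);
}, false);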
Here is the code of an example site that uses the History API:
<html>
<head>
<title>Example AJAX Website</title>
</head>
<body>
<div id="content">
<?php
    // Server-side rendering: crawlers and users without JavaScript
    // get the full content of every URL.
    if(isset($_GET["page"]))
    {
        echo "Page " . $_GET["page"];
    }
    else
    {
        echo "Home";
    }
?>
</div>
<a href="?page=1&full=true" onclick="handle_ajax(event)">Previous</a> <a href="?page=2&full=true" onclick="handle_ajax(event)">Next</a> <a href="http://localhost/" onclick="handle_ajax_home(event)">Home</a>
<script type="text/javascript">
function handle_ajax_home(e)
{
    // Only hijack the click if the History API is supported;
    // otherwise let the browser follow the link normally.
    if(window.history && window.history.pushState)
    {
        document.getElementById("content").innerHTML = "Home";
        window.history.pushState("", "Example AJAX Website", "http://localhost/");
        e.preventDefault();
    }
}
function handle_ajax(e)
{
    if(window.history && window.history.pushState)
    {
        var query_string = e.target.getAttribute("href");
        // Strip the leading "?".
        query_string = query_string.substr(1);
        query_string = query_to_object(query_string);
        var page = query_string.page;
        document.getElementById("content").innerHTML = "Page " + page;
        // Update the address bar and add a history entry,
        // without reloading the page.
        window.history.pushState("", "Example AJAX Website", window.location.href.split('?')[0] + "?page=" + page + "&full=true");
        e.preventDefault();
    }
}
// Re-render when the user navigates with the back/forward buttons.
window.addEventListener("popstate", function(){
    var page = getParameterByName("page");
    if(page == "")
    {
        document.getElementById("content").innerHTML = "Home";
    }
    else
    {
        document.getElementById("content").innerHTML = "Page " + page;
    }
}, false);
// Convert a query string like "page=1&full=true" into an object.
function query_to_object(str)
{
    return (str || document.location.search).replace(/(^\?)/,'').split("&").map(function(n){return n = n.split("="),this[n[0]] = n[1],this}.bind({}))[0];
}
// Read a single query parameter from the current URL.
function getParameterByName(name)
{
    name = name.replace(/[\[]/, "\\[").replace(/[\]]/, "\\]");
    var regex = new RegExp("[\\?&]" + name + "=([^&#]*)"),
        results = regex.exec(location.search);
    return results === null ? "" : decodeURIComponent(results[1].replace(/\+/g, " "));
}
</script>
</body>
</html>
Here, if crawlers don't support JavaScript and the History API, they simply fall back to normal page navigation, because every URL is also rendered on the server. We don't need to take any extra steps to make the site search engine friendly. Just make sure that users whose browsers don't support JavaScript can also use your site.
Conclusion
We saw two ways to create an AJAX website that is search engine friendly. The second method is my favorite, as you don't have to take any extra steps to make the site search friendly and it is compatible with all search engines; some search engines are not advanced enough to understand #! fragments.