- First, it doesn’t solve the duplicate pages problem that a great many dynamic sites have. Even the Google Store suffers from this (which I blogged about previously, but here’s a more recent example of a Google Store product page being duplicated 5 times in Google’s index). The Google Sitemaps protocol does not provide a way for webmasters to convey which pages are duplicates of other pages. A site that gets crawled incorrectly by Googlebot, due to superfluous or non-essential parameters/flags being included in the URLs of links on its pages, will continue to get crawled incorrectly. An “Official Google Sitemaps Team Member” states that the sitemap XML file will merely augment their crawl, not replace existing pages in the index:
This program is a complement to, not a replacement of, the regular crawl. The benefit of Sitemaps is two-fold:
- For links we already know about through our regular spidering, we plan to use the metadata you supply (e.g., lastmod date, changefreq, etc.) to improve how we crawl your site.
- For the links we don’t know about, we plan to use the additional links you supply to increase our crawl coverage.

The high-level Google engineer who goes by GoogleGuy in the online forums explains Google Sitemaps this way:
Imagine if you have pages A, B, and C on your site. We find pages A and B through our normal web crawl of your links. Then you build a sitemap and list the pages B and C. Now there’s a chance (but not a promise) that we’ll crawl page C. We won’t drop page A just because you didn’t list it in your sitemap. And just because you listed a page that we didn’t know about doesn’t guarantee that we’ll crawl it. But if for some reason we didn’t see any links to C, or maybe we knew about page C but the url was rejected for having too many parameters or some other reason, now there’s a chance that we’ll crawl that page C.

So, the way I read GoogleGuy’s explanation, if pages A and C are essentially duplicates of each other, with A containing an additional superfluous parameter in its URL (like sortby=default or lang=english), then BOTH could end up in Google’s index. Thus, Google Sitemaps won’t reduce the amount of duplication in Google’s index; in fact, I believe it will increase it. Duplicate pages, on their own, may not sound like as much of a problem for webmasters as for Google itself, which has to dedicate additional resources to maintaining all this redundant content in its index. However, duplication does have serious implications for webmasters, because it results in PageRank dilution: multiple versions of a page split up the “votes” (links) and the PageRank score that a single version of the page would otherwise aggregate.
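For reference, here is what a minimal sitemap file under the current protocol looks like (the URL is a placeholder). Notice that every field describes a single URL in isolation; there is no element for declaring that one URL is a duplicate (or the canonical version) of another:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.google.com/schemas/sitemap/0.84">
  <url>
    <!-- Placeholder URL; each <url> entry stands entirely on its own. -->
    <loc>http://www.example.com/product?prodid=123&amp;sortby=default</loc>
    <lastmod>2005-06-01</lastmod>
    <changefreq>weekly</changefreq>
    <priority>0.8</priority>
  </url>
  <!-- Nothing here can say "this is the same page as /product?prodid=123". -->
</urlset>
```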
- This brings me to the second, related problem with Google Sitemaps: it doesn’t do anything to alleviate the phenomenon of PageRank dilution. PageRank dilution results in lower PageRank, which in turn results in lower rankings. For example, consider that the above-mentioned Google Store product page (the “Black is Back T-Shirt”) is in Google’s index 5 times instead of just once. So each of those 5 variations earns only a fraction of the total potential PageRank score it could have earned if all the links pointed to a single “Black is Back T-Shirt” page.

Google Sitemaps needs to provide a way to convey, or to sync up with, the site’s hierarchical internal linking structure, so that it’s clear which pages should get how much of a share of the PageRank flowing into the site’s home page. Since the primary holder of PageRank score is the home page (that is, after all, the page that most everyone links to), it’s up to the site’s internal hierarchical linking structure to pass the PageRank of the home page along to the rest of the site. As such, a page that is 2 clicks away from the home page will get a much larger share of PageRank passed on to it from the home page than a page that is 5 clicks away. For Google Sitemaps to address these problems, the protocol would need to let webmasters specify things like:
- which parameter in a dynamic URL is the “key field”
- which parameter is the product ID and which is the category ID (specifically for online catalogs)
- which parameters are superfluous or don’t significantly vary the content displayed
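Hypothetically, such hints might take the form of extra elements on a sitemap entry. To be clear, this is purely my own sketch: the `keyfield`, `categoryfield`, and `ignoreparams` elements below do not exist in the real protocol; they only illustrate what the protocol could convey.

```xml
<!-- Hypothetical extension; NOT part of the actual Sitemaps protocol. -->
<url>
  <loc>http://www.example.com/product?prodid=123&amp;catid=7&amp;sortby=default</loc>
  <!-- the parameter that uniquely identifies the page -->
  <keyfield>prodid</keyfield>
  <!-- the category ID parameter, for online catalogs -->
  <categoryfield>catid</categoryfield>
  <!-- parameters that don't significantly vary the content -->
  <ignoreparams>sortby,lang,sessionid</ignoreparams>
</url>
```

With hints like these, a crawler could collapse every URL variant down to its key field before indexing, rather than treating each parameter combination as a distinct page.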
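The dilution effect described above can be made concrete with a toy simulation. This is a standard textbook PageRank power iteration, my own simplification and in no way Google’s production algorithm: five referring pages either all link to one canonical product URL, or each link to a different duplicate URL of the same page.

```python
# Toy PageRank power iteration (textbook simplification, not
# Google's actual algorithm) illustrating PageRank dilution.
DAMPING = 0.85

def pagerank(links, iterations=50):
    """links maps each page to the list of pages it links to."""
    n = len(links)
    rank = {page: 1.0 / n for page in links}
    for _ in range(iterations):
        new = {page: (1 - DAMPING) / n for page in links}
        for src, outs in links.items():
            for dst in outs:
                # Each page splits its rank evenly among its outlinks.
                new[dst] += DAMPING * rank[src] / len(outs)
        rank = new
    return rank

# Case 1: five referring pages all link to one canonical product URL.
consolidated_graph = {f"ref{i}": ["product"] for i in range(5)}
consolidated_graph["product"] = []
consolidated = pagerank(consolidated_graph)

# Case 2: the same five links are split across five duplicate URLs.
diluted_graph = {f"ref{i}": [f"product?v={i}"] for i in range(5)}
diluted_graph.update({f"product?v={i}": [] for i in range(5)})
diluted = pagerank(diluted_graph)

print(consolidated["product"])   # the single URL collects all five votes
print(diluted["product?v=0"])    # each duplicate collects only one vote
```

The single consolidated URL ends up with several times the score of any one duplicate, which is exactly the "split votes" problem.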
Q: URLs on my site have session IDs in them. Do I need to remove them?
A: Yes. Including session IDs in URLs may result in incomplete and redundant crawling of your site.

Remember, getting indexed only gets you to the party; it doesn’t mean you’re going to be popular at the party. Google Sitemaps may help you get more pages indexed, but if those pages all have a PageRank score of 0, then what was the point? It’ll be like sitting along the wall the whole time with no one asking you to dance! GravityStream, our SEO proxy technology (the concept of SEO proxies is explained in my article in Catalog Age last October), deals with PageRank dilution by distilling the URLs in links down to their lowest common denominator and replacing them on the proxy. We’ve found that, even as Googlebot gets more aggressive at spidering dynamic sites with complex URLs and starts indexing one of our clients’ sites more fully, our proxy still has a major leg up on the native site it’s proxying. Until Google extends Google Sitemaps to deal with PageRank dilution, I’d expect a GravityStream proxy to still trump a native site, even one using Google Sitemaps. That means that currently, despite Google Sitemaps, GravityStream still plays an important role for online retailers. Nonetheless, it’s my sincere hope that Google takes my feedback on board and reworks their protocol!
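The general idea of "distilling" a URL can be sketched as follows. This is my own illustration, not GravityStream’s actual code, and the list of superfluous parameter names is an assumption; in practice it would have to be derived per-site. The point is simply that stripping session IDs and non-essential parameters collapses every variant to one canonical form:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Parameter names assumed superfluous for this illustration only;
# a real system would configure these for each site.
SUPERFLUOUS = {"sessionid", "sid", "sortby", "lang"}

def canonicalize(url):
    """Collapse URL variants to one canonical form by dropping
    superfluous query parameters and sorting the remainder."""
    scheme, netloc, path, query, _fragment = urlsplit(url)
    params = [(k, v) for k, v in parse_qsl(query)
              if k.lower() not in SUPERFLUOUS]
    params.sort()
    return urlunsplit((scheme, netloc.lower(), path, urlencode(params), ""))

a = canonicalize("http://www.example.com/shop?prodid=123&sortby=default&sessionid=abc")
b = canonicalize("http://www.example.com/shop?sessionid=xyz&prodid=123")
print(a == b)  # True: both variants distill to the same URL
```

Two differently-decorated links to the same product now point at a single URL, so all the inbound "votes" accrue to one page instead of being split.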