This article was originally published on Search Engine Land.

Whenever you make changes to a website, one of the most important considerations is how to use "redirects" to alert the search engines to your changes and avoid a negative impact on your search rankings. Whether you're moving pages around, switching CMS platforms, or just trying to avoid duplicate content and PageRank dilution, you'll want to employ redirects so as not to squander any link juice (PageRank) your site has acquired. There are multiple ways of redirecting, and it's important to get it right if you want the SEO benefit without the risk of falling outside search engine guidelines (as is the case with "conditional redirects").

Programmers and sysadmins who are not SEO-savvy will likely default to using a "temporary redirect," also known as a "302 redirect." Unfortunately, such a redirect does not transfer link juice from the redirected URL to the destination URL. It isn't that the programmers are intentionally negligent; it's simply a case of "not knowing what they don't know." Just gently inform them that what they really need is a "permanent redirect," or "301 redirect." If they ask why, tell them "Because the SEO consultant said so."
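
To make the distinction concrete, here is a minimal sketch using Apache's mod_alias Redirect directive (the paths and URLs are hypothetical):

# Temporary redirect: engines keep the old URL indexed and pass along no link juice
Redirect 302 /old-page.html http://www.example.com/new-page.html

# Permanent redirect: engines index the new URL and transfer the link juice to it
Redirect 301 /old-page.html http://www.example.com/new-page.html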

What are some of the use cases for a 301 redirect? I mentioned a few in my opening paragraph, but let's examine several scenarios in greater detail. Generally speaking, if any of your URLs are going to change, you'll want to employ 301 redirects: if you are changing domain names (tiredoldbrand.com to newbrand.com), for example, or if you are migrating to a new content management system (CMS), causing the URLs of your pages to all change. You'll want to do it even if you are "retiring" certain pages to an archive URL (e.g., the current year's Holiday Gift Guide once the holiday buying season is over, although I'd make the case that you should maintain such a page at a date-free URL forever, let the link juice accumulate at that URL for use in future years' editions, and not redirect at all).

Then there are situations you will want to mitigate with 301s, where multiple URLs respond with the same content, thus creating multiple copies of the same page in the search engine's index. Duplicate content is bad enough, but the bigger issue is "PageRank dilution," where the votes (links) are spread across the various versions instead of all aggregating to the one single, definitive, "canonical" URL. This can happen when tracking codes are appended to a URL (e.g., "?source=SMXad"). A current example of duplicate copies of pages with tracking-code-appended URLs getting indexed in Google can be found here; ironically, it's Google's own site (yes, it happens to the best of us, even Google!). It can also happen when essential parameters are not always ordered in a consistent manner (e.g., "?subsection=5&section=2" versus "?section=2&subsection=5"); when parameters are used as flags but the setting does not substantively change the content (e.g., "?photos=ON" versus "?photos=OFF"); or when multiple domains or subdomains respond to the request with the same content but no redirect (e.g., "jcpenney.com/jcp/default.aspx" and "www1.jcpenney.com/jcp/default.asp" and "jcp.com/jcp/default.asp" and "jcpenny.com/jcp/default.asp" versus "www.jcpenney.com/jcp/default.asp"). In all the above cases, 301 redirects pointing to the canonical URL would save the day.

Usually a single redirect "rule" can be written to match against a large number of URLs. This is referred to as "pattern matching," and it allows you to use wildcards (such as the asterisk character), capture a portion of the requested URL in memory, and reuse it later in the redirect. This is possible whether you are running Apache or Microsoft IIS Server as your web server. Consider some of the above-mentioned examples, and how to handle each of them using Apache's mod_rewrite module (which comes bundled with Apache):

# Changing domain names
RewriteCond %{HTTP_HOST} tiredoldbrand\.com$ [NC]
RewriteRule ^(.*)$ http://www.newbrand.com/$1 [R=301,QSA,L]

# Removing tracking parameter (but the tracked URL still registers in the analytics). Assumes no other parameters.
RewriteCond %{QUERY_STRING} ^source=
# The trailing "?" strips the query string from the destination; without it,
# the source= parameter would be re-appended and the redirect would loop.
RewriteRule ^(.*)$ $1? [R=301,L]

# Reordering parameters
RewriteCond %{QUERY_STRING} ^subsection=([0-9]+)&section=([0-9]+)$
RewriteRule ^(.*)$ $1?section=%2&subsection=%1 [R=301,L]

# Redirecting the non-www subdomain to www
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,QSA,L]
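
A CMS migration can often be handled the same way when the old and new URL structures map predictably; a hypothetical sketch, assuming the old CMS served /news/archive/123.html and the new one serves /articles/123:

# Migrating to a new CMS with a predictable URL mapping
RewriteRule ^news/archive/([0-9]+)\.html$ /articles/$1 [R=301,L]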

The above examples change somewhat if you are on Microsoft IIS Server. For example, for those of you using the ISAPI_Rewrite plugin for IIS Server, use "RP" in place of "R=301."
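
For reference, here's a hedged sketch of how the domain-change rule might look in ISAPI_Rewrite 2's httpd.ini format (syntax varies by version, so check your documentation; ISAPI_Rewrite 3 accepts Apache-style rules, including R=301, directly):

[ISAPI_Rewrite]

# Changing domain names, ISAPI_Rewrite 2 style: RP marks the redirect permanent
RewriteCond Host: ^tiredoldbrand\.com$
RewriteRule (.*) http\://www\.newbrand\.com$1 [I,RP,L]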

Sometimes it's not possible to pattern match, and a lookup table is required. This can be accomplished easily by creating a text file and referencing it using the "RewriteMap" directive. You could even reference a script that does some fancy search-and-replace work, rather than a text file, like so:

# Search-and-replace on the query string part of the URL, using my own Perl script
RewriteMap scriptmap prg:/usr/local/bin/searchandreplacescript
RewriteCond %{QUERY_STRING} ^(.+)$
RewriteRule ^(.+)$ $1?${scriptmap:%1} [R=301,L]
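
And for the simpler lookup-table case, a sketch along these lines would do the job (the file path and its entries are hypothetical; note that RewriteMap must be declared in the server or virtual-host config, not in an .htaccess file):

# Lookup table mapping old paths to new ones, one "old-path new-path" pair per line
RewriteMap legacymap txt:/etc/apache2/legacy-urls.txt
# Redirect only when the requested path has an entry in the table
RewriteCond ${legacymap:$1} !=""
RewriteRule ^/?(.+)$ ${legacymap:$1} [R=301,L]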

Intrigued by all this geeky stuff and want more about pattern matching and rewrite rules? Then you might want to check out my PowerPoint deck from my presentation on "Unraveling URLs" at SMX West.

There's another type of redirect that bears mentioning: the "conditional redirect." And it comes with a warning: it could get you in big trouble with Google. Matt Cutts, the head of Google's webspam team, advised during his keynote at SMX Advanced that folks not employ conditional redirects, due to the risk of a Google penalty or ban. For those unfamiliar with the term, it refers to serving a 301 redirect selectively to search engine spiders like Googlebot. Obviously, when you start serving up different content to humans than you do to spiders (and yes, this includes differing redirects), you get into dangerous territory with the search engines. You might have the purest of "white hat" intentions, but the risk remains. Consider the above-mentioned case of two URLs with substantively the same content, one containing "photos=OFF" and the other containing "photos=ON," where both receive some number of links. You could make a compelling argument that, to eliminate duplicate content filtering and PageRank dilution, both versions should collapse into one, but only for Googlebot; after all, if you redirect ALL requests, then low-bandwidth users could not toggle the loading of thumbnail product images off and on. However, this is a false choice. You don't actually need a redirect in the first place, let alone a conditional one. You could add rel=nofollow to all links pointing to the photos=OFF URLs, so no link juice is "spent" on that version. Then make photos=ON implied, so that photos load by default when the parameter is not specified, and remove photos=ON from the URLs of any and all internal links.
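
If external links already point at the photos=ON version, you could also collapse that version into the parameter-free canonical URL with an ordinary, unconditional rule; a minimal sketch, assuming photos=ON is now the implied default:

# photos=ON is the default now, so that version collapses to the canonical URL
RewriteCond %{QUERY_STRING} ^photos=ON$
RewriteRule ^(.*)$ $1? [R=301,L]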

Or consider the case of needing to retain a tracking parameter in the URL throughout the user session. By employing a conditional redirect, spiders requesting the tracking URL could be redirected to the canonical URL (namely, the URL minus the tracking parameter), maintaining the integrity of your tracking system for visitors while not needlessly tracking spiders or creating numerous copies of pages for the engines to index. But again, I bet you could find another way through this without having to resort to conditional redirects. For instance, as soon as the request comes in for the tracked URL, you could store the tracking parameter in a cookie at the same time as you perform an unconditional redirect to the canonical URL.
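
Here's a minimal mod_rewrite sketch of that cookie approach, assuming a tracking parameter named "source" and a site at example.com (the CO flag sets the cookie as part of the redirect):

# Capture the tracking code into a cookie, then 301 everyone (humans and
# spiders alike) to the canonical, parameter-free URL
RewriteCond %{QUERY_STRING} ^source=([^&]+)$
RewriteRule ^(.*)$ $1? [CO=source:%1:.example.com,R=301,L]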

No question, conditional redirects can solve difficult problems for legitimate businesses. Indeed, one of the world's largest retailers is using conditional redirects, as I discovered the day before I presented at SMX Advanced. The retailer is conditionally redirecting bots requesting affiliate URLs to the canonical product URL, so as to capture the link juice from its affiliates. Seems like a smart thing for them to do, but the approach is fraught with risk.

Instead, the retailer could choose to redirect ALL visitors, humans and spiders alike, to the canonical URL without the affiliate ID. Some affiliates might get in a huff once they noticed that their links were now passing PageRank to the merchant. But you know what? It's within the affiliate's power not to pass PageRank; they can simply "nofollow" the links. Thus, it becomes merely a public relations exercise for the merchant to manage its affiliates through the switchover to unconditional 301s and to remind them of their right to nofollow the links.

Another scenario I learned of just recently comes from a leading media property that conditionally redirects two types of URLs. The first type carries tracking parameters to differentiate clickthroughs on links that lead to the same content. Their use of conditional redirects allows them to collapse duplicates and aggregate PageRank. But guess what? If the redirect were unconditional, the clicks could still be differentiated, because the clicktracked URL would still register in their log files.

The same would hold true if the tracking parameter were for differentiating marketing programs or traffic sources. When a request comes in for a URL containing "?source=blog," for instance, it's not necessary to send human visitors to a different destination than spiders. Even if the traffic source has to be carried through the user's session and then included as a hidden field on the site's Contact Us inquiry form, that can be accomplished by storing the source information in a cookie or session variable. No conditional redirect required.

The other type of URL being conditionally redirected by this company leads to co-branded partner sites. And you guessed it: the content is substantially duplicate. Here it isn't a simple matter of switching to an unconditional redirect, because the partner's revenue share on the ad revenue is calculated by keeping the visitor on a separate subdomain. By unconditionally redirecting to the canonical URL on the main subdomain, each session would go from multiple pageviews to a single one. Partner revenue would drop significantly; the partners would not be happy campers. Cookies could be employed to track the entire session (not a minor undertaking for them to switch to), and even then, a non-negligible number of visitors have Norton Internet Security or a similar utility installed that zeroes out the referrer line from web page requests, thus obscuring click revenue that would be rightfully due to the partner. A bit of a predicament; for this one I haven't figured out a workaround.

One other scenario where a case could be made for conditional redirects is when the site's URLs are being rewritten to be spider-friendly. This can be a major initiative that I've seen take many months, even years, to roll out across a large, complex website. One retailer of outdoor gear spent over two years and over 1,000 man-hours implementing URL rewrites and still isn't finished. It's not always feasible to implement rewrites on all URLs, or to replace every occurrence of a spider-unfriendly URL. The larger the site, and the less flexible the CMS, the bigger the headache. In such a situation, conditional redirects could help collapse duplicates and channel PageRank to the spider-friendly version of each URL until all occurrences of the spider-unfriendly URLs are replaced across the site. Alternatively, duplicates could be eliminated with a robots.txt disallow or a meta robots noindex of all spider-unfriendly URLs, but this wouldn't be feasible while the URL rewriting is still in progress, and it wouldn't allow PageRank to be directed to the rewritten version of the URL.
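
For the URLs whose rewrites are already in place, an unconditional pairing like the following collapses duplicates without conditioning on the user-agent; a hedged sketch, assuming product pages at product.php?id=123 are being rewritten to /products/123:

# 301 direct client requests for the old URL to the spider-friendly version.
# Matching THE_REQUEST (the raw request line) avoids an infinite loop with
# the internal rewrite below.
RewriteCond %{THE_REQUEST} \s/product\.php\?id=([0-9]+)\s
RewriteRule ^product\.php$ /products/%1? [R=301,L]

# Quietly serve the friendly URL from the underlying script (no redirect)
RewriteRule ^products/([0-9]+)$ product.php?id=$1 [L]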

There's one workaround I will leave you with that negates the use of redirects altogether, including conditional ones. It's useful specifically for tracking, and involves appending tracking information to URLs in such a way that tracked URLs are automatically collapsed by the engines. Curiously, I don't ever hear this method being discussed. It makes use of the # (hash, or pound character), which is normally used to direct visitors to an anchored part of a web page. Simply append a # to your URL, followed by the tracking code or ID. For example: www.example.com/widgets.php#partner42. Search engines will ignore the # and everything after it; thus, PageRank is aggregated and duplicates are avoided. (One caveat: browsers don't send the fragment to the server, so capturing the tracking code has to happen on the client side, in your analytics tag for instance, rather than in your server logs.)

Hopefully this has challenged you to think critically about redirects (temporary, permanent, and conditional) and their implications for SEO. Opt for permanent (301) over temporary (302) if you want the link juice to transfer. Conditional redirects should be avoided, especially if your risk tolerance for penalization is low. If you take a good hard look at your "need" for conditional redirects, I think you may find you don't really need them at all.