I recently shared my insights about Google rich snippets, Schema.org, and microdata at SMX Advanced on the panel session “From Microdata & Schema To Rich Snippets: Markup For The Advanced SEO”, along with Marcus Tober, founder of Searchmetrics, and Julia Gause, Director of SEM at Scripps Network. The session was all about structured data markup, a topic that is fast evolving in the search-marketing world.
Now that you’ve got some background, let me give you the 411 on my portion of the session…
Rich snippets boost click-through on your Google listings by using eye-catching attributes like author headshots, video thumbnails, and star ratings to make your product and category pages pop in the SERPs. Another really cool thing that happens with authorship snippets: additional listings from the author show up if the searcher hits the back button after visiting an author’s article.
As cool as it is though, there are misfires with Google authorship. For example, my co-author Eric Enge was being credited by Google as the author of my Search Engine Land articles, despite the fact that the page had rel=author tags pointing to my bio page on SEL and to my Google+ profile. Consequently, searches like “seo myths” that produced my SEL articles displayed Eric’s mugshot next to my article listing instead of my own. Grrr.
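For context, the authorship plumbing amounts to a couple of link tags: one on the article pointing at the author's bio page, and one on the bio page pointing at the author's Google+ profile. Here's a rough sketch of that two-hop setup (the URLs and profile ID are placeholders, not the actual pages involved):

```html
<!-- On the article page: link to the author's on-site bio page -->
<a rel="author" href="http://www.example.com/author/stephan-spencer">Stephan Spencer</a>

<!-- On the bio page: link out to the author's Google+ profile -->
<a rel="me" href="https://plus.google.com/PROFILE_ID">My Google+ profile</a>
```

If either hop is broken or ambiguous, Google can attribute the article to the wrong author, as happened in my case.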
One of the best features of rich snippets is the ability to embed a video thumbnail into the SERPs. In order to get a nice thumbnail, you have to add one of the supported formats to the on-page markup: Schema.org VideoObject (the recommended format), Facebook Share, or RDFa markup. In addition to simply adding the markup to your videos, make sure you validate the code using Google’s Structured Data Testing Tool after implementing, and submit a Video XML Sitemap to ensure the indexation of your videos. Don’t neglect to also add markup to your page for star ratings, breadcrumbs, product/offer, and events if applicable (more on this below), because Google may display a different kind of rich snippet besides the video thumbnail depending on the search query. That’s right: Google displays different rich snippets for the exact same page, depending on the nature of the search query (star ratings for one query, a video thumbnail for another, an author headshot for yet another, etc.).
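To illustrate, here's a minimal sketch of Schema.org VideoObject microdata for a hypothetical video page (the title, URLs, and values are all made up for the example; consult Google's documentation for the properties they currently require):

```html
<!-- Hypothetical video page marked up with Schema.org VideoObject microdata -->
<div itemscope itemtype="http://schema.org/VideoObject">
  <h2 itemprop="name">Widget Setup Guide</h2>
  <meta itemprop="thumbnailUrl" content="http://www.example.com/thumbs/widget-setup.jpg" />
  <meta itemprop="duration" content="PT2M30S" />
  <meta itemprop="uploadDate" content="2013-06-11" />
  <p itemprop="description">How to set up the widget in under three minutes.</p>
</div>
```

The `meta` elements let you supply machine-readable values (like the ISO 8601 duration) without displaying them to visitors.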
Ratings & Reviews
Ratings and reviews improve visibility, trust and CTR for your search listings; however, they are not always shown in the SERPs. It depends on the trust signals your site is sending to Google: sites with low trust won’t be allowed to benefit. A clever use of reviews microdata is to use aggregate ratings on category-level pages:
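For illustration, here's roughly what aggregate rating microdata on a category page might look like (the name and numbers are invented for the example):

```html
<!-- Category page summarizing reviews across the products in the category -->
<div itemscope itemtype="http://schema.org/Product">
  <h1 itemprop="name">Acme Widgets</h1>
  <div itemprop="aggregateRating" itemscope itemtype="http://schema.org/AggregateRating">
    Rated <span itemprop="ratingValue">4.3</span>/5
    based on <span itemprop="reviewCount">87</span> customer reviews
  </div>
</div>
```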
Product/offer markup gives searchers a lot more product-related data. At this point, the only data from this schema that get reflected in the SERP are price and stock availability, but prices in a SERP display can definitely improve CTR, so don’t neglect this.
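A minimal sketch of product/offer microdata exposing price and availability (values are hypothetical):

```html
<div itemscope itemtype="http://schema.org/Product">
  <span itemprop="name">Acme Widget</span>
  <div itemprop="offers" itemscope itemtype="http://schema.org/Offer">
    <meta itemprop="priceCurrency" content="USD" />
    $<span itemprop="price">19.99</span>
    <link itemprop="availability" href="http://schema.org/InStock" />In stock
  </div>
</div>
```

Note the `link` element: availability is expressed as a schema.org URL rather than visible text.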
Breadcrumb markup adds breadcrumb-style navigation links to your search listing. This helps your listing stand out and adds more links to it in the SERP, which can boost CTR.
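Here's a rough sketch of breadcrumb markup using the data-vocabulary.org vocabulary that Google's documentation has illustrated (the URLs and category names are placeholders; check Google's current docs for the exact shape they support):

```html
<!-- Each crumb is its own Breadcrumb item, in order from broad to narrow -->
<div itemscope itemtype="http://data-vocabulary.org/Breadcrumb">
  <a href="http://www.example.com/widgets" itemprop="url">
    <span itemprop="title">Widgets</span>
  </a> ›
</div>
<div itemscope itemtype="http://data-vocabulary.org/Breadcrumb">
  <a href="http://www.example.com/widgets/blue" itemprop="url">
    <span itemprop="title">Blue Widgets</span>
  </a>
</div>
```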
Some tools worth checking out:
Google Data Highlighter – The data highlighter can be found in Webmaster Tools under “Optimization” and is used to create semantic markup for Google only. It adds no code to the page and your competition can’t see your markup, which makes it good for non-techie clients and sites. However, it is hard to scale for large sites, though you can build “page sets”. The Google Data Highlighter can be used for things like articles, events, local businesses, movies, products, restaurants, software apps, and TV episodes.
Google Structured Data Markup Helper – The markup helper is found in Webmaster Tools under “Resources”. This tool marks up the same data types as the Data Highlighter, but outputs actual HTML code. It can work for web pages or emails. It can be useful for small sites, or for building example code for developers, as it operates on a page-by-page basis.
Open Graph Markup – Open Graph markup is for marking up your data for Facebook. It offers a cool new feature based around location markup: when a location with OG markup is “Liked”, Facebook will automatically create a Facebook page for that location. This allows for easy build-out of location pages on Facebook.
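A hedged sketch of what OG tags for a business location might look like (the business, URLs, and coordinates are invented, and the location-related property names have shifted across versions of the Open Graph protocol, so treat this as illustrative only):

```html
<meta property="og:title" content="Acme Coffee - Downtown" />
<meta property="og:type" content="business.business" />
<meta property="og:url" content="http://www.example.com/locations/downtown" />
<meta property="og:image" content="http://www.example.com/img/downtown.jpg" />
<meta property="og:latitude" content="43.0731" />
<meta property="og:longitude" content="-89.4012" />
```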
So…which type of semantic markup should you use? Schema.org or RDFa?
Schema.org was created by the engines to be search friendly and tends to be easier to understand. RDFa, on the other hand, conforms to W3C standards. The newer RDFa Lite basically mirrors Schema.org. Personally I think RDFa Lite is the most compelling choice, but then I’m a big proponent of standards. Take a look at this article, and then decide for yourself.
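To see how closely the two mirror each other, here's the same simple fact marked up both ways:

```html
<!-- Schema.org vocabulary expressed as microdata -->
<div itemscope itemtype="http://schema.org/Person">
  <span itemprop="name">Jane Doe</span>
</div>

<!-- The same thing in RDFa Lite: vocab/typeof/property
     instead of itemscope/itemtype/itemprop -->
<div vocab="http://schema.org/" typeof="Person">
  <span property="name">Jane Doe</span>
</div>
```

Same vocabulary, nearly one-to-one attribute mapping, which is why the choice comes down largely to your feelings about W3C standards.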
So back to this whole evolving landscape of search thing… I think that as time goes on, more and more data types will be supported in Schema and more semantic data will be integrated into the SERPs. Engines will continue to display more of that data directly into search listings. We will continue to do more and more of the engines’ work for them. They will “reward” us content creators by serving to the searcher instant answers (aka “Knowledge Graph”), sidestepping our search listings.
For details on how to claim authorship of your content with Google and more information on the topics I covered, view my full slide deck on SlideShare:
It’s exciting to think about what the future holds with the merged company. Netconcepts’ GravityStream technology combined with Covario’s Organic Search Insight promises an end-to-end SEO solution like never before seen. Together we’ll enable the SEO practitioner to scale SEO across large dynamic websites by automating aspects of keyword research, on-page analysis, link building, web content management, and more.
Our mantra at Netconcepts has for a long time been “data-driven decision-making”. Turns out that’s been Covario’s mantra too!
Another bonus… with the merger we also significantly increase our market reach. We’ve really established ourselves in the retail vertical, not as much in other verticals. Covario, on the other hand, services Fortune 500 global brand advertisers across a number of verticals — including media, high tech, consumer electronics, and CPG (consumer packaged goods). So now, those are all our verticals, and our clients, too!
Exciting times ahead!
One way to bait for links is a blog contest. If you do the contest right, even the most un-sexy of products (like stationery) can become sexy, creating a buzz that can drive a torrent of search traffic to your virtual doorstep. Consider, for example, the contest we (Netconcepts) dreamed up for the overnight printer of stationery and business cards OvernightPrints.com that I mentioned a few posts back (“Hiring a Link Builder“). The contest: design a business card for Internet celebrity and Technorati Top 100 blogger Jeremy Schoemaker, and you could win business cards for life.
Here’s the winner, which is an awesome business card IMO:
Let’s take a closer look at what made this blog contest a successful link building strategy:
- Come up with an impressive prize (or at least one that sounds impressive). In the above, the prize was a lifetime supply of business cards. A “lifetime supply” of anything sounds impressive. You can use the fine print to put some limits on it — like OvernightPrints.com did by capping it at 1000 business cards per year for a maximum of 20 years. That adds up to, well, peanuts.
- Get a partner with some name recognition who’s willing to promote your contest. If you’re a blogger, try to land a partner organization whose brand recognition you can piggyback on. If you’re a brand, get a well-known blogger to partner with you. Jeremy Schoemaker was great; he has a massive following. Ride on the coattails of that partner’s brand by enlisting their help in spreading the word about the contest. They need to be willing to hawk your contest on their blog and in social media. Jeremy, for example, posted multiple blog posts (with good keyword-rich links), a YouTube video, and some tweets on Twitter. (Thanks Shoe!)
- Promote the heck out of the contest yourself too. Don’t just rely on your partners to do it for you. With the above contest, we reached out to a bunch of design sites. And they took the bait. They loved the contest and promoted it to their community and linked to our contest page. What a great thing to add to your resume if you’re a designer, that you came up with the winning design of the business card for a famous blogger — out of over 400 entries no less!
- Make sure the contest entry page lives on your site, not on your partner’s. You want the link juice flowing directly to the site you are looking to promote in the search engines. As you might guess, the contest entry page was on OvernightPrints.com, not on Shoemoney.com or anywhere else.
- Keep it simple. There are numerous ways to run (or ruin) a blog contest. If you want it to be a success, create a contest that is easy for users to participate in. People online are lazy and impatient, even if they aren’t like that in the real world (something about being in front of the computer triggers it!). So, the more effort a contest requires, the lower the participation level. OvernightPrints.com kept it simple: “Design ShoeMoney’s business card” and win a lifetime supply of business cards.
- Make it relevant to your business and to your targeted search term. It wouldn’t have made any sense for OvernightPrints.com to run a contest where you write a letter to the President and win a trip to Washington DC. For Overnight Prints, their money term is “business cards”. Being on page 1 in Google for that term is worth big bucks to Overnight Prints. This contest moved them onto page 1, and in fact, onto the top half of page 1.
- Involve the community. Jeremy narrowed it down to 7 finalists and then asked his readers to help him decide. The participation factor is huge. It makes the blog’s readers much more invested in the outcome.
A good contest has synergy: it’s a win-win for all parties (blogger, brand, contestants, readers), and having the right partners means the whole is greater than the sum of the parts (i.e., everyone does much better than if they had embarked on it individually). Yes, this contest was a huge success for everybody involved. Of course, OvernightPrints was the biggest winner of them all: they got relevant exposure, buzz, links, rankings and traffic. Use the above seven-step formula and hopefully you will have similar success yourself.
Bummer that I missed PubCon this week, but I have just been traveling way too much lately. Speaking of traveling, I was in Indianapolis last week visiting the offices of Compendium Blogware. I got a demo of their hosted blog platform, including a look under the hood, and it’s pretty slick. There were features and functionality I had never seen before in blog software. One of the key differentiators, and the reason for the company’s name, is the “compending” capability their solution offers.
A compended blog is a collection of posts pulled from other blogs, all from within the same company. A company can have many employees blogging: customer service reps, salespeople, product developers, etc. If the company is a manufacturer, then dealers, distributors and retailers can join in on the fun too.
The appeal for companies who want to encourage employee blogging is that it’s dead simple to use, which is critical if you want wide adoption across the company. Here’s how it works: say that Bob from a Ford dealership blogs about the new Ford Mustang after he takes it for his first test drive. There are compended blogs for Mustangs, for sports cars, for pickups, etc. Without Bob having to think about it, his blog post gets compended automatically (using sophisticated content analysis algorithms) to the “Mustangs” and the “Sports Cars” blogs, but not the “Pickups” blog.
Blog posts that have been compended still maintain a canonical URL on the main blog, and that one canonical URL (of the permalink post page) is referenced consistently across all compended blogs on permalink post pages via a canonical link element (i.e. canonical tag). That eliminates duplicate copies of the permalink post page. The content of the post is nonetheless included on the compended blogs — in a fashion not dissimilar to post content being included on category pages, tag pages and date-based archives on WordPress blogs.
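The mechanics are just one tag in the `<head>` of each compended copy of the post, pointing back at the one canonical permalink. A sketch with a made-up URL:

```html
<!-- On every compended copy of the post, reference the single canonical permalink -->
<link rel="canonical" href="http://blogs.example.com/mustangs/2010-mustang-test-drive/" />
```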
When considering duplicate content as it relates to SEO, bear in mind it’s not a penalty but a filter, and that filter works at query time to favor the most relevant and authoritative result for the query entered. Given that, a particular compended blog will be the most appropriate result for the query; e.g., the query “2010 mustang sports car” would be most relevant to the Sports Cars blog. Note also that the compended blogs live in subdirectories, not subdomains. The typical company will have a handful or perhaps dozens of compended blogs; large enterprises may have hundreds. It wouldn’t be unusual for a new post published on a WordPress blog, filed in a couple of categories and a dozen tags, to be duplicated more times (16, to be exact, counting the date-based archives and home page) than a post on a typical Compendium network.
I just got off the panel on The Future of Search at Search Engine Strategies San Jose. There was a bit of discussion about social media and whether SEO will still be relevant if users are spending their online time inside of social networks like Facebook and YouTube. The consensus from the panel was that SEO will still be alive and well, and that social networks offer just another venue within which searchers can conduct their queries. That makes the large social networks like Facebook into search engines. YouTube is now the #2 search engine, after all (there are more search queries on YouTube than on Yahoo). Yesterday’s Mashable article, “The New Search War: Google vs Facebook”, highlights the threat that Facebook poses to Google. It’s an interesting read. The point in all this: optimizing for better visibility in search engines, whether Google or Facebook, isn’t going away.
Not only does social media provide another venue for searching, it serves as an invaluable tool to the SEO practitioner, specifically for link baiting. It’s link building on steroids. The value lies specifically in the social news and social bookmarking sites (Digg, StumbleUpon, del.icio.us, etc.). If you make it to the front page of Digg, the visibility you get in front of the “linkerati” (e.g. bloggers and journalists) is invaluable. I describe a process for seeding link bait into social media in my Search Engine Land article The Social Media Underground. A word of caution: be respectful of the social community. Don’t submit junk, don’t spam your friends with vote requests, and don’t bait-and-switch.
A few months back at the eMetrics Summit, I was interviewed by Web Marketing Today about this whole process, and some other things. Here is the video:
You’ve read my speculations on the Future of Search. Now wouldn’t you like to watch me speculate on the future? I know you would! So here’s the WebProNews interview of me from last month at SES San Jose, where I recap some of what I spoke about in my Future of Search session…
I spoke on The Future of Search on Tuesday at Search Engine Strategies San Jose. It was a fun panel. Not only did I get to pontificate (which I love to do), but I also got to be really ‘out there’ — not my usual presentation style, which is pretty geeky and focused on the really practical, actionable stuff. I was talking about such things as the Law of Accelerating Returns, quantum computing, the LUI (Linguistic User Interface), AI (as in “autonomous intelligence” rather than the more innocuous “artificial intelligence”), and the Singularity (I’m a big fan of Ray Kurzweil, by the way). If I had had time, I would have launched into a discussion of one of my personal favorite future technologies: utility fog.
Here are some of my favorite websites on such topics:
- Singularity University
- Acceleration Studies Foundation
- Singularity Institute for Artificial Intelligence
- Institute for the Future
What does all this have to do with search? A lot. Search is going to be nothing like what we envision it today. That’s because we aren’t thinking in exponential terms. We as humans tend to extrapolate forward linearly, because our brains think linearly. When you look forward towards the horizon, do you think about how we’re on a giant curved sphere, or do you think of it as a long straight (flat) drive ahead until you reach your destination? Right, my point exactly. So when we think of all the progress Google (or technology in general) has made in the last 5 years, it’s only natural to think about the next 5 years as an extrapolation of the past 5. But the Law of Accelerating Returns says that technology is evolving at a faster and faster clip, i.e. on an exponential curve.
How can the Law of Accelerating Returns hold true? Because it takes into account such certainties as Moore’s Law and Metcalfe’s Law, and the unwavering predictability of these laws means that a very specific subset of things can be forecast with great accuracy.
So the folks who think that in 10 years teenagers are still going to be exercising their thumbs all day TXTing are mistaken. The LUI will be how we interact with computing devices — interfacing with our computers by conversing with a simulated personality, rather than clacking away at keyboards and keypads. It’s so much more efficient, considering how many words per minute we speak versus type. The advent of the LUI will be as much of a paradigm shift in computing as the shift from DOS prompts to the Windows GUI (graphical user interface). Welcome to a world of ubiquitous computing where we will be wandering around, “computing” with our voice, rather than tethered to a desktop computer, screen, keyboard and mouse. Makes you think a bit differently about “mobile search”, doesn’t it?
Marry that vision with one of swarms of Utility Fog that can make utterly lifelike representations of other people, creatures, and objects in an environment not that unlike Gene Roddenberry’s vision of the Star Trek holodeck. This could allow computers to take a tangible form when you are interfacing with them (via voice or movement). And it could allow you to interact with all five senses when “video conferencing” with far-away loved ones; it would be as if the person were really there beside you! Talk about “total immersion”! Oh, and utility fog will allow us to hover and fly around too.
Continuing advances in AI will bring us, in the 2020s, to a decade where the intelligence of a computer will exceed the intelligence of a human being. Computers will be able to compose symphonies, paint masterpieces, fall in love, etc.
Somehow I don’t think Google will, for too much longer, be basing its importance, authority and trust algorithms on the link graph. I think they will develop an artificial intelligence “expert system” that can use its own judgment in determining whether a web page or website is spammy.
Fun times ahead!
As a blogger, I can’t tell you how many spammy link requests from “link builders” I get on an ongoing basis. It’s just way too many. You certainly don’t want to hire the kind of link building firm that sends out such spam link requests using cheap third-world labor. These same sorts of firms are responsible for a big percentage of the useless keyword-rich-link-containing comments posted all over the blogosphere.
On one end of the link builder spectrum you have the solo operator guru, somebody of the caliber of Eric Ward; on the other end, the link building sweatshops out of India that will spam the hell out of the blogosphere and of webmasters’ email inboxes on your behalf. You’ll want to hire a consultant or firm closer to the former than the latter (obviously), at least if you wish to be working with a top-notch link building outsource partner, one that’s really going to “do the business” for you and not spam in the process.
A good SEO firm should also be a good link building firm but this is not always the case. In fact, it is not often the case. On-page SEO and technical tweaks like rewrites and redirects are a very different animal from the outside-the-box thinking and unbridled creativity required for link building, and link baiting in particular.
A great example of such creativity is the business cards for life contest that we at Netconcepts dreamed up for our client Overnight Prints that involved the Internet celebrity and Technorati Top 100 blogger Jeremy Schoemaker. The contest was to design Jeremy’s business card, with Jeremy serving as the ultimate judge. (Here’s the winning entry, btw. It’s one sweet business card.)
Another great example (not sure which consultancy is behind this one) is A&E’s “Hammer Pants” flash mob stunt:
What are some things to look for when hiring a link building consultant or agency? Here are a few:
- examples of creative, out-of-the-box thinking (as already explained above)
- demonstrable success with link bait being well-received by social news and social bookmarking communities
- the tools necessary to do the job well (e.g. LinkScape, Raven, BuzzStream, SQUID, Enquisite, Internet Marketing Ninjas, SEO Book tools like the Hub Finder, etc.)
- happy customer references
- a good reputation in the industry (as judged by mentions on SEO blogs, forums etc.)
- ideally, evidence of thought leadership (e.g. conference speaking, magazine articles, quotes in mainstream media, a great blog…)
Plus I asked the link mensch himself, Eric Ward, to chime in with some more pointers, which he did. According to Eric, you should ensure that you…
- are given a rationale as to why they want to pursue any given target
- have final approval on every target site they contact
- have final approval before you agree to link back or pay for a link
- are provided a stated deliverable and you have agreed to it
- get an expert to review the contract, like me or Stephan. (Paying a few hundred bucks for a deliverable review might be the wisest money you can spend.)
- are given at least monthly reports of progress
In my most recent Search Engine Land article I wrote about tracking ROI and cost-justifying link building initiatives. It may be helpful to incorporate such metrics into an engagement.
Remember that this is much more like hiring a PR firm than an ad agency. PR firms (and link builders) can influence — but not control — outcomes. Ad agencies can control the number of brand impressions just by simply spending more dollars.
I have to say, I am impressed with Wolfram Alpha. I think it’s a game changer. It provides a powerful new way of interacting with the large repositories of data available on the Web. For instance, instead of googling for “number of google employees” (incidentally, it isn’t until the 4th result down that you get the answer), then googling for “number of yahoo employees”, then doing the math to compute the ratio, you would simply input into Wolfram Alpha “google/yahoo employees“. (The answer is 1.487:1, if you’re curious.)
Welcome to the brave new world of computational engines.
What’s a computational engine, you ask? The best definition I can think of is: an online data mining and analysis tool.
What can a computational engine do? A lot. It can segment the population by gender (“u.s. male population, u.s. female population“). It can tell you what that ratio is (“u.s. male population / u.s. female population“). It can graph the growth of the U.S. population over the last several decades (“population u.s.“). And it can calculate population density in the U.S. (“population density u.s.“).
It’s a simple matter to do head-to-head comparisons and generate comparative charts. Just separate the terms with commas. For example, type in “google, yahoo” and you’ll get a bunch of charts and graphs comparing the two companies’ financials, stock performance and price history.
And wow can you drill down into the data easily. For example, start with the query “google.com” and you’ll see all sorts of pertinent facts about the site and the company. To see a report of all the subdomains of google.com, click on the “Subdomains” link. From there you can click on “More subdomains” to get a more exhaustive list:
I just wish I could have typed “subdomains of google.com” or “google.com subdomains” to get to the answer. Neither of those queries works.
Wolfram Alpha can even tell you how long you’ll live. I queried “life expectancy age 38 male u.s.” and it returned 77.54 years. Then I queried my birthdate and learned that was 38.45 years ago. Then “77.54 – 38.45 years” returned not only 39.09 years, but also 14,268 days, which feels a lot longer to me! Finally, “39.09 years from now” gives the time and date of my demise: 5:31:06 pm CDT on Thursday, June 25, 2048. I’m loading that into my iPhone’s calendar with an alarm 10 minutes beforehand, so at least I won’t get caught off guard.
I also tried “(77.54 – 38.34) years from now” but Wolfram Alpha choked on that one. However “now + (77.54-38.34) years” did work.
If you’re curious which countries have the longest life expectancy (or shortest), type in “life expectancy”. Here’s the answer:
Perhaps I can buy myself a bit of extra time by moving to Macau? Exactly how much time is anybody’s guess. Oh wait, Wolfram Alpha can answer this too!
Not only is the output interesting, the presentation of it is really slick, with great-looking charts and graphs. Note that the charts are rendered as images, not as text. If you want to copy and paste the data within the chart, simply click on it and a “Copyable plaintext” popup box will display.
I find the overly critical comparisons with Google unfair. Remember, Wolfram Alpha is a computational engine, not a search engine. Comparing Wolfram Alpha to Google is like comparing a cell phone to a TV remote. Sure a cell phone and TV remote may both be about the same size and they both have buttons, but the functions they perform are vastly different.
And it’s very early days. We need to cut them some slack. Yes it is frustrating to get so many “Wolfram Alpha isn’t sure what to do with your input” messages, but when Google debuted in 1997 it was pretty rough too, right?
Watch Stephen Wolfram’s screencast demonstration before trying to use the engine. Otherwise it’ll frustrate you when you get so many failed queries.
One piece of feedback I would offer to the engineers at Wolfram Alpha is to provide segmentation options to users. In other words, suggest the various ways the requested data can be sliced and diced. For example, the following queries all work properly:
- life expectancy male
- life expectancy age 38
- life expectancy u.s.
- life expectancy u.s. age 38 male
- ldl 100 nonsmoker age 38 male u.s.
but these queries do not:
- life expectancy nonsmoker
- life expectancy ldl 150
- life expectancy wisconsin
- life expectancy wisconsin age 38 ldl 150 male nonsmoker
even though Wolfram Alpha is properly interpreting the syntax of the query and its components (“life expectancy”, “male”, “age 38”, “u.s.”, “nonsmoker”, “wisconsin”, and “ldl 100”). I kept running into trouble when I attempted to further refine the life expectancies from U.S. residents to Wisconsin residents, from males to non-smoking males with slightly high cholesterol levels. Teasing out subgroups within a population could be facilitated by an intuitive visual interface for viewing and selecting from the available segmentation properties. Or by better error messages, like: “Life expectancy data is not available segmented by state, only by country. Please try a broader query, like life expectancy u.s.”
Using Google engineers’ terminology will help you look like a search industry insider. For example, talk about “signals” rather than SEO “factors”. Describe weak, undifferentiated content as “thin” (as in a “thin affiliate”). Work “canonicalization” into a sentence at least once every 5 minutes. Share your enthusiasm for “shingles” (yeah, NOT the disease). Speak in TLAs (three letter acronyms) like QDD (query deserves diversity) and QDF (query deserves freshness). And so on.
- Chameleon = internal Google codename for the algo that does mid-page suggestions (e.g., search for “labor” and get, in the middle of the SERPs, “See results for labor and delivery”)
- Spellmeleon = internal Google codename for the algo that preempts the first natural result with 2 results from what Google believes is the correct spelling of your query (e.g., search for “ipodd” and get “Did you mean: ipod” with the top 2 results shown)
- Google Squared = a not yet launched Google Labs project that returns search results in a structured format (i.e. as a spreadsheet). Search for “small dogs” and get a matrix with breeds, descriptions, sizes, weights, origins, etc.
- Rich snippets = search listings with additional info in the snippet, such as star rating and number of reviews. Google gets this extra data from hReview and hCard microformats – simply put, semantic, agreed-upon markup in your HTML pages. Kinda reminiscent of Yahoo’s SearchMonkey. More about it here. (Incidentally, Dries – of Drupal fame – has an interesting take on what this could mean for SEO.)
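For illustration, here's roughly what hReview-aggregate microformat markup looks like; the class names carry the semantics, and the business name and numbers here are invented for the example:

```html
<!-- hReview-aggregate microformat: plain HTML with semantic class names -->
<div class="hreview-aggregate">
  <span class="item"><span class="fn">Acme Widget</span></span>
  <span class="rating">
    <span class="average">4.5</span> out of <span class="best">5</span>
  </span>
  (<span class="count">32</span> reviews)
</div>
```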