
How to Zero In On The Consumer Purchase Funnel

October 24th, 2016 | Categories: SEO

Search engine optimization is tricky, and consumers scattered across different stages of the buying process make it harder. Fortunately, we can craft content that appeals to searchers at each stage of the customer buying cycle. By digging into the consumer journey and considering searcher intent within the funnel, we can tailor our content accordingly.

Consumer Purchase Funnel

Awareness

In this stage of the customer buying funnel, searchers have identified a need they would like fulfilled, but they are not yet aware of your company or its products and services. The goal here is to create top-of-funnel content that targets specific pain points through PPC and SEO. The content needs to be informational and highlight a problem or pain point.

For example, let’s say you run an e-commerce site and you sell hardware typically used in home repair. You could create:

  • Detailed informative blog posts
  • Case Studies
  • eGuides & eBooks

For example, you could create editorial content like ‘How a Renovation Can Increase a Home’s Value’. It is important to conduct some exploratory research first. This can be accomplished by visiting forums and listening to what your community is saying. EpicBeat by Epictions and BuzzSumo are robust tools for targeting the right funnel channel. These tools give unique insight into the types of content your target market is consuming. At this stage, you can learn which types of informational content are performing well in your niche.

This type of content is best suited for the blog section of your site. You can also use this as an opportunity to create link-worthy content, or clickbait content if you are so inclined.

Familiarity

During this stage, you will still be creating top-of-funnel content. Searchers are aware they have a pain point but do not know how to satisfy it. This content is still largely informational in nature: think blog posts, case studies, infographics, etc. The goal here is to show searchers how to solve their particular problem and create an additional touchpoint with your brand. In the end, you want to inspire the searcher to take action and make them aware of your brand.

You can uncover insights in the familiarity stage by:

  • Forums – a treasure trove of data
  • Keyword Research – Keyword planner tool
  • Social Media
  • Google Alerts
  • Specialized Content Research Tools

Let’s say you run an e-commerce company that sells home-improvement hardware and you are building content around the customer buying cycle. You could build out some content on “Hardware needed for a bathroom renovation”. In our fictitious example, you address a specific problem and a group of people who are experiencing it.

Our next step would be to motivate them to take action. At this point, you have a few options. You can ask them to subscribe, leave a comment, or send an inquiry.

Consideration

At this point, searchers are further down the funnel. They are familiar with your brand, know they need a solution to their problem, and roughly know what they are looking for. In this part of the funnel, you want to optimize middle-of-the-funnel content. This is the stage people typically optimize for while forgetting the rest of the funnel. Don’t be one of those people: optimize for each stage of the customer buying process for search engine optimization bliss.

Typically, this type of content takes the form of product- or service-related category pages.

You can uncover insights in this stage by:

  • Keyword Research – Keyword Planner Tool
  • Google Search Console
  • Google Analytics – Site Search Analytics

Continuing with our previous example, let’s say you want to target this stage of the funnel. Depending upon the industries you cater to, you could produce content for each category page. You could target:

  • ‘Decorative Kitchen Hardware’
  • ‘Cabinetry Hardware’
  • ‘Ambient Lighting Solutions’

In these category pages, you can inspire confidence in your services and explain how you can help solve the searcher’s problem. This is also a great time to get searchers to take action. Consider building out micro-conversions at this point, such as “download our buyer’s guide”, “sign up to receive special offers”, or “register for our workshop”.

Purchase

In this stage, you will target searchers at the bottom of the funnel. Consumers are aware of your product or service and are looking for confidence-building material. This is a good time for user-generated content, testimonial pages, and consumer reviews.


Customer reviews and testimonials have an immense impact on organic traffic and conversion rates. According to a survey by BrightLocal, “88% of consumers trust online reviews as much as personal recommendations”. Essentially, reviews let you spread word of mouth on your product-level pages and generate content around it.

Of course, you also want to motivate searchers to perform an action at this stage of the funnel; here, that action is a purchase. The buy button or ‘shop’ option needs to be prominent.


Loyalty

In this stage of the funnel, we are targeting bottom-of-the-funnel searchers. The goal of this stage is customer retention. The tools in your arsenal to achieve this goal are email subscription pages, sales-related pages, and any community forums you can implement on your site. Wayfair is a good example of a company implementing a sales page stuffed full of clearance items in a user-friendly way.


You can also build a community forum page where customers can ask product-related questions and gather advice for specific problems and issues.

The customer buying cycle is a framework for visualizing the searcher’s journey with your brand over time. By tailoring content to each stage of that journey, you can reach customers wherever they are in your funnel and foster growth and sales.


Here’s How to Use Google Analytics to Improve Your Search Results Appearance

July 25th, 2016 | Categories: SEO

It’s not every day that the SEO world catches a break from Google, but they’ve recently connected two of their greatest resources for us. You may have noticed a new tool in the Acquisition section of your Google Analytics reporting: the Search Console Beta. This report closes the gap between Search Console and Google Analytics, providing a new level of insight into what’s happening with your SEO efforts. Now, in addition to seeing the typical behavior and conversion metrics next to your landing pages, you can see the information that has been siloed in Search Console for years:

  • Impressions
  • Clicks
  • CTR
  • Average Position
  • Sessions

Now, how do you leverage this data to improve your visibility and organic traffic? The simplest thing you can do is identify pages that aren’t garnering the number of clicks they should be. In the video below, I use advanced filters to show only pages with an average position on the first page of Google (by selecting less than 11) that also have a click-through rate of less than 1.5%. Of course, you can set these thresholds to whatever you like, but this is a good starting point. In my opinion, any URL ranking on the first page of search results should have a pretty good CTR.

Once you’ve filtered out the better-performing URLs, you’re left with a list of URLs that need some improvements. Export this list and make updates to your title tags and meta descriptions. It’s helpful to run a few queries and see what other sites are showing up on the search results pages along with your pages. There’s clearly something better about their results, so take a close look and see how you can out-write them with stronger title tags and meta descriptions.

Don’t forget to come back in a few weeks, or however long it takes to collect significant data, and conduct a before-and-after analysis on those URLs.

*Take note that the first thing I do is set the date range to include 90 days’ worth of data, but not the two most recent days. Search Console data lags a couple of days and only back-logs 90 days’ worth.
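If you’d rather work on the exported data than inside the GA interface, the same filter is a few lines of pandas. This is a rough sketch only: the file name and column headers are assumptions, so match them to your actual export.

import pandas as pd

# Hypothetical export of the GA Search Console landing pages report;
# adjust the file name and column names to match your own export.
df = pd.read_csv("search_console_landing_pages.csv")

# If CTR exports as a string like "1.2%", convert it to a number first.
if df["CTR"].dtype == object:
    df["CTR"] = df["CTR"].str.rstrip("%").astype(float)

# First-page rankings (average position under 11) with CTR below 1.5%.
underperformers = df[(df["Average Position"] < 11) & (df["CTR"] < 1.5)]

# Highest visibility, lowest click-through first.
print(underperformers.sort_values("Impressions", ascending=False))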


Google’s Latest Algorithm Update: Bacon

April 1st, 2016 | Categories: SEO

The sizzling algorithm update you probably haven’t heard about: Bacon.

It’s no secret that Google releases dozens, if not hundreds, of updates to its ranking algorithm every year. Of course, some of these updates have a larger impact on search results than others; you may be familiar with the Panda or Penguin updates, which targeted content and link quality. Both have seen a series of ongoing refreshes over the last few years as Google continually tweaks them to better de-spam its search results pages.

Most of the time, Google’s updates are focused on demoting websites that are using black-hat SEO tactics, have low content quality, or have a bunch of junky links pointing to the site. Historically speaking, these are all the things “spammers” have been doing for years to try and game the system.


So what’s the deal with the Bacon update? Well, as with all things, Bacon makes it better! Simply putting the word ‘Bacon’ in your on-page optimization efforts is a sure-fire way to improve the rankings of your site. Recent studies have shown that bacon-optimized pages are 17 times more likely to rank on the first page, experience a 243% increase in click-thru rate, and cut bounce rates in half! How do you go about making updates to optimize your site for Bacon? Follow these simple steps:

  • Add Bacon to the beginning of your page titles
  • Work Bacon-related content into your keyword-rich topic pages
  • Include the word Bacon in your page headings
  • Configure URLs to include Bacon
  • Anchor-text links loaded with Bacon
  • Add Bacon images on every page of your site

With these bacon-optimized updates, you’ll be cooking in no time!

*if you made it this far and haven’t looked at the nearest calendar yet, April Fools! This is, of course, total nonsense and you should in no way, shape, or form make any of the changes mentioned above (unless you actually run a bacon-related website)


How to Identify Stolen Content and Take Action!

February 9th, 2016 | Categories: SEO

Imagine that you and your staff have spent countless hours creating engaging content for your website, only to discover that much of it has been stolen and repurposed by others – without your consent.

The appearance of duplicate content could adversely affect your website’s search rankings, making it more difficult for prospective customers and your community to find you. And as we all know, good content rules. So, why let others break them (the rules, that is)?

At Beacon, we’ve seen what unethical practices such as copy scraping can do. Having personally experienced the theft of our content fairly recently, I thought I’d share the steps I took to alert Google to this offense and protect our company from the negative fallout that can follow.

Here are six easy steps for getting back at the thieves who steal copy.

Step 1 – Verify that your suspicions are correct.

Perform a quick Google search to determine where your copy is showing up across the internet. You can randomly select copy from a webpage (copy and paste a few sentences into a Google search box) to run a query. The search results will indicate whether your copy appears on any site other than your own.

For example, here are the results from my search.

[Image: Scraped Content search results]

The search results will provide you with a list of webpages where that content appears (including your own, of course). As you can see in this example, there is another website using content I wrote without my consent (see the red arrow above).

Step 2 – Investigate the extent of the theft

[Images: Stolen Content / Scraped Content]

When investigating the extent of the plagiarism, check to see if your content has been copied verbatim. Also, you’ll want to check whether this is an isolated event or if the website in question has copied multiple pieces of content. In our example above, you will notice multiple instances of stolen content. It’s time to take action.

Step 3 – Reach out to the website’s administrator

Reach out to the webmaster of the website that stole the copy. If the webmaster’s email contact isn’t readily displayed, check the about or policy sections of their website. The webmaster’s address is often hidden within these pages.

Once you’ve found an email address, notify the webmaster that you are aware of the offending activity and request that the stolen content be removed within a defined period of time. A week to ten days is more than enough.

Should the webmaster voluntarily remove the stolen content, your job is done. Have a latte. However, most nefarious webmasters will ignore such warnings and hide behind a perceived veil of anonymity.

Now, the fun begins.

Step 4 – Contact the hosting provider

It’s time to perform a WHOIS lookup. This online tool provides you with the webmaster’s identity and, more importantly, their website hosting provider. Armed with this new information, I reached out to the hosting provider, let them know that a website they host had blatantly infringed on my copyrights, and respectfully requested that they take down the website in question.
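If you’d rather script the lookup than use a web tool, here’s a minimal Python sketch. It assumes a system whois client is installed, and the domain is a placeholder.

import subprocess

# Placeholder domain: substitute the offending site.
domain = "example.com"

# Shell out to the system whois client and scan the raw record for
# lines that identify the registrar, host, or abuse contact.
record = subprocess.run(["whois", domain], capture_output=True, text=True).stdout
for line in record.splitlines():
    if any(key in line.lower() for key in ("registrar", "abuse", "email")):
        print(line.strip())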

Step 5 – File a DMCA request

If the hosting provider fails to respond, then it’s time to file a DMCA takedown request. Only take this step once you have exhausted the other options. Also, keep in mind that you need the authority to act on behalf of your organization before filing this request.

You have the option of drafting your own DMCA takedown request or downloading this DMCA Take Down Notice Template to customize and send to the offending website owner. After you have sent the DMCA notice, give the website a week to ten days to respond. If you don’t hear back within the time you designate in your notice, it’s time to elevate the complaint to Google and get some sort of resolution.

Step 6 – Request Google remove the stolen content

Log into Google Search Console: https://www.google.com/webmasters/tools/dmca-notice. This will take you to the copyright removal section within Google (see below). Simply follow the instructions and be sure to describe the nature of the work being copied and include URLs where the copyrighted work can be viewed. Also, include the link to the infringing material.

[Image: Scraping Site]

The DMCA request tends to work pretty quickly, so keep an eye on how many of the offending site’s pages are currently indexed and compare that count over the next few days or weeks. You can double-check by running another search query containing a snippet of your stolen copy. If your attempt at protecting your content was successful, you will see that Google has removed the infringing pages from its search engine once it completes its investigation.

Monitoring tip: If you would like to check the progress of your request, perform a site: search of the offending site and make a note of the number of pages Google has indexed (see below). Compare this number to future searches, and you may find that Google now indexes fewer of the website’s pages than before your request. This is a sign that Google may be taking action.

[Image: stolen content before/after]

You’ll know you’ve reached a final resolution when you run a search query and see the following highlighted message displayed:

[Image: stolen content example]

Good luck and happy hunting!


RankBrain in 2016

January 18th, 2016 | Categories: SEO


Google has long used word frequency, word distance, and world knowledge based on co-occurrences to connect the many relations between words and serve up answers to search queries. But now, thanks to recent breakthroughs in language translation and image recognition, Google has turned to powerful neural networks that can connect the best answers to the millions of search queries Google receives daily.

RankBrain is the name of the Google neural network that scans content and is able to understand the relationships between words and phrases on the web.

Why is it better than the previous methods? In a nutshell, RankBrain is a better method because it is a self-improving deep learning system. Training itself on pages within the Google index, RankBrain learns the relationships between search queries and the content contained within that index.

How does it do this? Neural networks are very good at reading comprehension based on examples and at detecting patterns in those examples. Google’s vast database of website documents provides the necessary large-scale training sets. During training, Google converts key phrases or words into mathematical entities called vectors, which act as signals. RankBrain then runs an evaluation similar to a cloze test. A cloze test is a reading comprehension activity where words are omitted from a passage and then filled back in. With a cloze test, there may be many possible answers, but ongoing training on a vast data set allows for a better understanding of the linguistic relationships between these entities. Let’s look at an example:

The movie broke all (entity1) over the weekend.

Hollywood’s biggest stars were out on the red carpet at the (entity2) premiere.

After deciphering the intricate patterns of the vectors, RankBrain can deliver an answer to a query such as “Which movie had the biggest opening at the box office?” by using vector signals from entities that point to the search result entity receiving the most attention. It does this without any specific coding, rules, or semantic markup. Even for queries that are vague in nature, the neural network is able to outperform humans.
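To make the words-as-vectors idea concrete, here’s a toy Python sketch. This is emphatically not Google’s implementation, just an illustration of how co-occurrence counts can turn words into comparable vectors.

import numpy as np

# Tiny corpus standing in for the web; each sentence is a context.
corpus = [
    "the movie broke all records over the weekend",
    "hollywood stars were out at the movie premiere",
    "the film broke box office records this weekend",
]

# Count how often each pair of words shares a sentence.
vocab = sorted({w for line in corpus for w in line.split()})
index = {w: i for i, w in enumerate(vocab)}
vectors = np.zeros((len(vocab), len(vocab)))
for line in corpus:
    words = line.split()
    for w in words:
        for c in words:
            if w != c:
                vectors[index[w], index[c]] += 1

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Words that appear in similar contexts end up with similar vectors.
print(cosine(vectors[index["movie"]], vectors[index["film"]]))
print(cosine(vectors[index["movie"]], vectors[index["premiere"]]))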

With RankBrain, meaning is inferred from use. As RankBrain’s training and comprehension improves, it can focus on the best content that it believes will help answer a search query. As a result, RankBrain can understand search queries never seen before. In 2016, be prepared to provide the contextual clues that RankBrain is looking for.


How-To: Robots.txt Disaster Prevention

December 1st, 2015 | Categories: SEO

It’s any SEO’s worst nightmare. The production robots.txt file got overwritten with the test version. Now all your pages have been ‘disallowed’ and are dropping out of the index. And to make it worse, you didn’t notice it immediately, because how would you?

Wouldn’t it be great if you could prevent robots.txt file disasters? Or at least know as soon as something goes awry? Keep reading, you’re about to do just that.

The ‘How’

Our team recently began using Slack. Even if you don’t need a new team communication tool, it is worth having for this purpose. One of Slack’s greatest features is ‘Integrations’. Luckily for us SEOs, there is an RSS integration.

5 Simple Steps for Quick Robots.txt Disaster Remediation:

  1. Take the URL for your robots file and drop it into Page2Rss.
  2. Configure the Slack RSS integration.
  3. Add the Feed URL (from Page2RSS) for your robots file.
  4. Select the channel to which notifications will post. (I suggest having a channel for each website/client; read more on why below.)
  5. Relax and stop worrying about an accidental ‘disallow all’.
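If you’d rather not depend on a third-party service like Page2Rss, a DIY watcher is a few lines of Python. This is a minimal sketch assuming a Slack incoming-webhook URL; both URLs below are placeholders.

import hashlib
import time
import urllib.request

ROBOTS_URL = "https://www.example.com/robots.txt"  # placeholder site
WEBHOOK = "https://hooks.slack.com/services/XXX"   # placeholder webhook

last_hash = None
while True:
    # Fetch the live file and fingerprint its contents.
    body = urllib.request.urlopen(ROBOTS_URL).read()
    digest = hashlib.sha256(body).hexdigest()
    if last_hash and digest != last_hash:
        # Contents changed: post an alert to the Slack channel.
        payload = b'{"text": "robots.txt changed - go check it!"}'
        alert = urllib.request.Request(
            WEBHOOK, data=payload,
            headers={"Content-Type": "application/json"})
        urllib.request.urlopen(alert)
    last_hash = digest
    time.sleep(3600)  # hourly beats Page2Rss's once-a-day check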

The Benefits of this Method

4 Benefits of Using Page2Rss & Slack to Watch Your Robots File:

  1. You can add your teammates to channels, so key people can know when changes occur! One person sets up the feed once, many people get notified.
  2. Page2Rss will check the page at least once a day. This means you’ll never go more than 24 hours with a defunct robots file.
  3. No one on your team has to check clients’ robots files for errors.
  4. You’ll know what day your dev team updated the file. This enables you to add accurate annotations in Google Analytics.


The ‘Why’

OK, cool, but why is this necessary? Because you care about your sites’ reputation with search engines, that’s why!

Mistakes happen with websites all the time, and search engines know that. They’re not in the business of destroying websites, but they are in the business of providing the best results to their customers. So if you neglect to fix things like this quickly, you’re risking demotion.

I’ve seen sites go weeks with a bad robots file, and it is not pretty. Once search engines have removed your site from the index, it is quite difficult to get back in. It can sometimes take weeks to regain the indexation you had prior. Don’t undo the hard work put into search engine optimization because your file was overwritten. Do yourself a favor and set up this automated monitoring.

I’ve armed you with this info, now there is no excuse for getting de-indexed due to robots.txt issues. If this has happened to someone you know, please share this post with them!


Google and Its Book Scanning Initiative – Trick or Treat?

October 29th, 2015 | Categories: SEO

This Halloween, Google has toilet-papered your entire yard, and the US Second Circuit Court of Appeals just rang the doorbell, left a flaming bag of you-know-what on your doorstep, and ran like a bat outta Hell. Who are you?

You’re an author with a career’s worth of product, mostly published offline through traditional literary mediums. You have every right to feel that you’ve been wronged. I know I do.

While I don’t advocate for the trampling of anyone’s rights in favor of another (one of my pet peeves), the 2nd Circuit Court decision has some upside. Think Frankenstein and fire.  Let me explain…

A Quick Overview

As you probably know, the objective of Google’s book scanning initiative is to scan every book available and make the contents available online for educational purposes. The initiative (the way I understand it) does not make copyrighted materials available online for free to those who wouldn’t otherwise have them. Rather, the project is meant to aid libraries in copying their current catalogs for use by library patrons who would otherwise have access to the already-paid-for hard copy versions.

The Authors Guild has taken great exception to the book scanning project, as one might expect. Citing existing copyright law (17 U.S.C. § 107), the Authors Guild has argued that Google’s book scanning initiative deprives writers of revenue from their work.

This court battle started way back in 2005.

The 10-year ordeal appears to be over. The US Second Circuit Court of Appeals ruled in favor of Google and its “fair use” defense. The “fair use” defense (admittedly greatly simplified here in the interest of expediency) argues that since the content is being used for educational purposes, it serves a greater good. Additionally, it does not “excessively damage the market” for the current copyright holder.

If you’re not a creator (or even if you are), you’re probably wondering what this means for your website or agency. Will the fire Google started be used for good or evil? Will users see a benefit or will my SEO efforts become a horror show?

The answer is yes, yes, yes and maybe. Let’s talk about the bad first.

More Panda Updates

There is no doubt that while Google may be providing a service through this massive book scanning effort, they’ll get their sweat equity back when they use this data to fine-tune their algorithm in their pursuit to rid the internet of duplicate content. While this means a better user experience for most (yay!), it could mean sleepless nights for SEOs and website operators who have used nefarious means to add “new” content to their blogs or websites.

Imagine your agency gets a new client. That’s a good thing. What you don’t know is that this client has in the past employed an SEO firm that had resorted to using re-purposed content from rare books. The next Panda update comes and your client gets slammed. Guess who gets blamed? FIRE, BAD.

However, there’s a great deal of good news too. Consider this:

More Books for the Disabled

In a related ruling, the appeals court decided in favor of the HathiTrust Digital Library and its application of “fair use”. A non-profit project, the HathiTrust Digital Library consists of a consortium of university libraries with a mission to provide digital books for the disabled. FRIEND, GOOD.

Better Experience for The End User

Less fluff and more real content will result from future algorithm changes. That’s great news for users and all of us who do things the right way. FRIEND, GOOD.

More Work for Content Creators

This is a big maybe, but in theory this could work to a writer’s advantage. As the algorithm detects re-purposed copy, something of value has to replace the fluff copy that was previously used. FRIEND, GOOD.

In Conclusion

Like the blind man in the original Frankenstein movie, I probably won’t convince any traditional writer that his or her rights are not being subjugated in favor of commerce and the rights of another. And in the end, you can’t ignore the fact that the “monster” enjoyed a big, fat cigar with a friend. It ain’t all bad.

And on a side note, I hear the new Panda algorithm can scrape poop from shoes.


How to Properly Handle Pagination for SEO [Free Tool Included]

October 18th, 2015 | Categories: SEO

Let’s start out by defining what I mean by ‘pagination’. This mostly applies to ecommerce sites where there are multiple pages of products in a given category, but it can occasionally be seen on lead-gen sites as well. Here’s an example of what this might look like:

  • http://www.freakerusa.com/collections/all
  • http://www.freakerusa.com/collections/all?page=1
  • http://www.freakerusa.com/collections/all?page=2
  • http://www.freakerusa.com/collections/all?page=3
  • http://www.freakerusa.com/collections/all?page=4

(pages 3 & 4 don’t actually exist on this site, but it helps illustrate my example a little bit more)

In this case, you’ve got four pages, all with the same metadata. It’s likely that search engines will index all of the pages listed above and count the pages with parameters as duplicates of the original first page. You’ve also got a duplicate hazard with /collections/all and /collections/all?page=1. If you’re concerned with search engine optimization and your organic visibility, you’re going to want to keep reading.

Proper Pagination for SEO

So, how do you go about solving this problem? Fortunately, all the major search engines recognize and obey rel= tags: rel=canonical, rel=prev, and rel=next. The canonical tag says “hey, we know this page has the same stuff as this other page, so index our preferred version”. The ‘prev’ and ‘next’ tags say “we know these pages are paginated and have duplicate meta elements, so here’s the page that comes next, and here’s the one that precedes it”. There are HTML tags that go along with each of these that you’ll need to have your dev team add to the <head> section of the pages.

Rather than show you what these tags are and how to generate them for each page, I’ve built an Excel spreadsheet that will generate all the necessary tags (for paginated categories up to 20 pages deep). All you need to do is add your base URL at the top and hit enter. By ‘base URL’ I mean this: “http://www.freakerusa.com/collections/all?page=”, basically the paginated URL without the actual number of the page.
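If you’re curious what the spreadsheet is doing under the hood, here’s a rough Python equivalent. It’s a sketch built on one assumption from above: page 1 canonicalizes to the clean, parameter-free URL, and page 2’s ‘prev’ points there as well.

# Print the <head> tags each paginated page should carry.
BASE = "http://www.freakerusa.com/collections/all?page="
CLEAN = "http://www.freakerusa.com/collections/all"
PAGES = 4  # total pages in the category

for n in range(1, PAGES + 1):
    print(f"<!-- tags for page {n} -->")
    if n == 1:
        # ?page=1 duplicates the clean URL, so canonicalize to it.
        print(f'<link rel="canonical" href="{CLEAN}" />')
    else:
        prev = CLEAN if n == 2 else f"{BASE}{n - 1}"
        print(f'<link rel="prev" href="{prev}" />')
    if n < PAGES:
        print(f'<link rel="next" href="{BASE}{n + 1}" />')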



Avoiding Duplicate Content – Get the Most From Your SEO Efforts

October 14th, 2015 | Categories: SEO

The Panda algorithm is just another example of Google’s effort to identify “thin” content and enhance the user experience. To clarify, the actual quality of the copy is secondary; the objective is to add content that is of value to the user. Quality of copy and value of content can mean two very different things. So, for example, a page’s word count theoretically isn’t that important, as it does not correlate with value or thinness.

What specifically constitutes duplicate content, then?

Yes, thin content would include republished material or very similar pages. But that just scrapes (pun intended) the surface. In general terms, anything that muddies which page should rank, or makes it difficult for Google to determine which page to index, may be construed as a duplicate content issue. These could include (but are not limited to):


  • Printer friendly versions of pages
  • Same URL for mobile site
  • www. and non-www. pages (no canonical tags)
  • Identical product descriptions for similar products
  • Guest posts

How can I solve my duplicate content issues?

Canonical tags can help solve many duplicate content issues. Proper use of rel=canonical tags can ensure that Google passes any link or content authority to the preferred URL. Your preferred URL will show up in the Google search results.

There is a clear, preferred method for eliminating mobile URL issues: move to a responsive site. While you may feel that budget constraints make this a less desirable option, responsive design enhances the user experience, which is what the Google algorithm is all about. The SEO benefits of responsive design make this an investment that will pay off immediately and well into the future.

Expanding your product descriptions can be a laborious task, particularly considering the sheer volume of products any one website may offer. You can bolster product description content in any number of ways: as well as expanding the description verbiage, you can include specifications or details, add “related purchases”, or add testimonials from previous users. For items that require assembly, how-to videos are a great alternative.

If your site accepts guest posts, search online before posting any new guest content to ensure that the content does not reside elsewhere.

Creating New Content: Does Size Matter?

I’ve heard it said that Google determines the quality of its search results using the time-to-long-click method. In other words, a significant factor in determining the value of a search result is the amount of time a user spends on a website after leaving the Google search page. Additional emphasis is placed on the user’s next move: if the user does not go back to Google to perform another search, the presumption is that the question was answered adequately. It doesn’t matter how long the user spent on the page that was served up as the result.

If this is accurate, the length of copy is not important. If the content was lengthy but did not meet the user’s expectation, they would presumably return to re-search the topic. If the resulting article was short but to the point and adequately answered the user’s query, they would not likely return to perform another search. Assuming the time-to-long-click method is used, size does not matter so much as the actual value of the material to the user.

That being said, sometimes less isn’t more. In my personal experience, longer articles seem to rank better. This may simply be because a longer article shares more information, increasing the likelihood that the user finds what they’re looking for. Even if that isn’t consistent with Google’s stated policy, why not err on the side of caution and include not only valuable information but as much of it as possible?


Should I Use Canonicals or 301 Redirects?

October 8th, 2015 | Categories: SEO

Should you 301 redirect that page to another, or should you use a rel=canonical tag? There are tons of reasons why you might have some redundancy on your site. If it’s an eCommerce site, you’re probably displaying product listing pages a few different ways (sorted by price, color, rating, etc.), or you might have navigation pages that are similar to your SEO landing pages. Whatever the case may be, chances are pretty good you have some form of duplication on your site that needs addressing. This topic has been debated for years, but the real answer lies in one simple question:

Should people be able to access both pages in question?


If the answer to this question is Yes, you want to use rel=canonical. Doing so will point search engines towards your preferred page but won’t prevent people from being able to access, read, and interact with both pages. Here are some places you might see the rel=canonical tag in action:

  • www & non-www versions of URLs
  • parameters that change how a product listing page is sorted
  • navigation pages that point to an equivalent SEO landing page (it doesn’t always make sense to put content on a nav page)

If the answer to your question is No, you should remove that page and 301 redirect it. Page removal is much more common among eCommerce sites, where products are discontinued but you can’t just remove the page (what if someone is linking to it?!?). Occasionally, you’ll see cases where this needs to be done for SEO landing pages. In large SEO projects, where there are hundreds or thousands of keywords, content can get duplicated easily. Keeping a perfect account of every single SEO landing page that’s been written is basically impossible, so you might end up with two different pages with URLs like this: /blue-widgets and /widgets-that-are-blue. Obviously, even if the content isn’t identical, you can’t keep both of those pages around. Figure out which one has the most authority, links, and traffic; keep that one, and redirect the other one to it.
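To make the ‘No’ branch concrete, here’s a minimal sketch of that cleanup in Python/Flask. The framework and URLs are just for illustration; on a production site you’d more likely configure the redirect in your web server or CMS.

from flask import Flask, redirect

app = Flask(__name__)

# The weaker duplicate: permanently redirect it to the survivor.
@app.route("/widgets-that-are-blue")
def old_page():
    return redirect("/blue-widgets", code=301)

# The page with the most authority, links, and traffic survives.
@app.route("/blue-widgets")
def blue_widgets():
    return "<h1>Blue Widgets</h1>"

Run it locally and request /widgets-that-are-blue; the response is a 301 that points browsers and crawlers at /blue-widgets.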

Next time you come to this fork in the road, remember to ask yourself whether or not there is value in people being able to see both versions.
