by GUEST BLOGGER on JANUARY 9, 2013

This is a guest post by Stanley Harpers, a freelance tech writer.

Duplicate content was one of the biggest SEO killers of the past few years, especially where Google traffic was concerned.

Most sites that have lost traffic recently have done so for many reasons, duplicate content among them.

Dealing with duplicate content is therefore paramount for effective SEO from 2013 onward.

Before the Panda and Penguin updates of 2011 and 2012, duplicate content was simply filtered out of search results. Abuse continued nonetheless.

To stem this, Google stepped in, and the repercussions for duplicating content can now be far worse than before.

This article looks at duplicate content and how it affects SEO/traffic in general.

Checking duplicate content

There are several ways to check for duplicate content on the web.

The easiest and most straightforward is to search Google for snippets of your content wrapped in double quotes. The results will surface pages carrying the same or similar text.
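For example, if one of your pages contains the sentence below (a made-up line for illustration), you would paste it into Google with the quotes included:

    "our wallets are hand-stitched from full-grain leather"

Google then returns every indexed page carrying that exact phrase, your own included, so any extra results are potential duplicates.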

This method is entirely manual and hence laborious.

Another way is to use a dedicated duplicate content checker.

These checkers compare your pages against large databases of general web content. Many exist on the market today; Plagspotter is among the more effective.

The third way to detect duplicate content on your site is through Google Webmaster Tools. The HTML Improvements report there flags duplicate title tags and meta descriptions across your pages.

Causes of duplicate content

Duplicate content has many causes, chief among them simple laziness on the part of the website owner.

In some cases, site owners chasing more keywords from the Google Keyword Tool end up duplicating content across pages. In others, duplicate content results from technical problems on the website itself.

Having the same or similar content live at multiple URLs is a leading cause of duplicate content on a website.

In some cases, it happens simply because a website spreads similar content across different pages.

The most effective way to deal with this kind of duplicate content is to consolidate similar content in one place, preferably accessible via a single URL.
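As a minimal sketch, assuming an Apache server and hypothetical page paths, a 301 redirect in your .htaccess file sends both visitors and search engines from a duplicate URL to the preferred one:

    # Permanently point the duplicate page at the preferred URL
    Redirect 301 /red-widgets-copy/ http://www.example.com/red-widgets/

A 301 tells Google the move is permanent, so ranking signals consolidate on the surviving URL.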

Alternatively, you can add a rel="canonical" tag to pages with similar content, pointing them to a single preferred, well-performing page.
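A minimal sketch, assuming http://www.example.com/red-widgets/ is the preferred page: each near-duplicate page would carry this tag in its <head> section:

    <link rel="canonical" href="http://www.example.com/red-widgets/" />

Unlike a redirect, the duplicate pages stay accessible to visitors; the tag simply tells search engines which version to index and rank.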

Poor categorization

Another leading cause of duplicate content is poor categorization of products on commercial sites.

When you lump products together roughly by target market, the chances of duplicate content issues across those pages increase.

This problem is more common than it ought to be.

The way to deal with this kind of duplicate content is to re-categorize the content on the website.

The best way to accomplish this is to build categories around the products themselves, so each category page has a distinct purpose. For instance, product URLs might move from loose market-based buckets such as /gifts-for-men/ to product-based paths such as /wallets/leather-bifold/.

Similar page sections

One of the leading causes of duplicate content appears on commercial sites where different products sit on separate pages but share essentially the same information.

For instance, if products differ only in color, it is hard to make their pages unique when the descriptions and specifications are otherwise identical.

You can deal with this duplicate content by consolidating the variants onto a single product page, with drop-downs for options such as color and size.
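As an illustrative sketch (the option values here are made up), a single product page could offer every variant through standard HTML drop-downs instead of separate near-identical pages:

    <!-- One product page covering every variant -->
    <select name="color">
      <option value="red">Red</option>
      <option value="blue">Blue</option>
    </select>
    <select name="size">
      <option value="s">Small</option>
      <option value="m">Medium</option>
      <option value="l">Large</option>
    </select>

One page, one URL, one set of copy, so there is nothing for Google to treat as duplicated.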

In general, duplicate content is a huge problem facing webmasters and online business owners. The sad truth is that, sometimes, it comes down to greed or sheer laziness.

Creating spammy pages for your site’s short-term benefit leads to duplicate content issues that can be costly.

Laziness can stop you from making the right technical choices for your website.

This is why so many otherwise well-meaning websites end up punished by Google for duplicate content.

Vigilance and constant work are required to manage duplicate content on websites, especially commercial ones.