A Comprehensive Guide to Managing Duplicate Content
Google defines duplicate content as content that appears in more than one place on the internet. Identifying duplicate content is more complex than it sounds: the content doesn't always need to be identical, since substantially similar content can be considered duplicate too.
Search engine crawlers identify duplicate content by analyzing a webpage's source code. Substantive blocks of duplicate content can exist both within and across domains. Though duplicate content is often assumed to be malicious, most of it is not. A few examples of non-malicious duplicate content are:
- Products on an eCommerce portal shown on different web pages or interlinked with multiple distinct URLs
- Discussion forum pages that exist in both a regular desktop version and a simplified mobile version
- Web page versions created only for printers
However, sometimes marketers deliberately copy content that's ranking at the top and publish it as their own, often mixing in some of their own material to manipulate search engines. This is malpractice; it leads to poor user experience and can result in a Google penalty.
Google seeks distinct, fresh information and has zero tolerance for deceptive practices. According to a Google Search Central document, if your site has a "printer" and a "regular" version of a webpage and you haven't blocked one with a noindex tag, the search engine filters them and chooses one to show.
Google penalizes websites that deliberately use duplicate content to manipulate search engines by downgrading their ranking. In more severe cases, it may remove the site entirely from the Google index. In that case, the site will no longer show on search results.
How to Fix Duplicate Content?
To fix instances of duplicate content, you can take the following steps:
1. Use 301s: You can set up 301 (permanent) redirects in your site's .htaccess file (on Apache servers) to send visitors from duplicate pages to the preferred page. The redirect also tells Google's crawlers which version is canonical, so the duplicate page drops out of the index.
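As a sketch of what this can look like on an Apache server (the paths and domain below are hypothetical, not from any real site):

```apache
# Redirect a single duplicate page to its main version
Redirect 301 /old-product-page.html https://www.example.com/product-page/

# Or redirect an entire duplicate directory (e.g., printer-only pages)
RewriteEngine On
RewriteRule ^print/(.*)$ https://www.example.com/$1 [R=301,L]
```

The `R=301` flag makes the redirect permanent, which signals to search engines that ranking credit should transfer to the target URL.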
2. Manage internal linking: Sometimes, website builders and developers erroneously or deliberately create several versions of the same page to build internal links. However, Google recommends that your internal links should point to a web page’s main version. Therefore, you need to streamline internal linking and keep it consistent.
Internal links help users to navigate between relevant pages. Internal links that point to irrelevant or duplicate pages confuse search engines and may affect rankings and traffic on the main page. Efficiently managing them becomes even more critical when you have a big website with hundreds of pages.
3. Use top-level domains: To make Google use a web page or a document’s most appropriate version, you should use top-level domains, especially if you manage websites with country-specific content.
The domain Amazon.de is the German version of Amazon's eCommerce portal. It uses .de as a country-code top-level domain to signal a region- and language-specific website.
On the flip side, a domain like http://de.xyz.com is a subdomain of the main site xyz.com and doesn't clearly signal a country-specific version. Similarly, http://www.xyz.com/de is a subdirectory; it doesn't clearly indicate a country-specific website either.
Using an hreflang tag in the website code (i.e., rel="alternate" hreflang="x") is the best way to tell Google which language and region a page targets, so it can show the page on SERPs when users search in that particular region or language.
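For illustration, hreflang annotations might appear in a page's head like this, reusing the hypothetical xyz.com domain from above:

```html
<!-- Hypothetical example: annotate German and English versions of a page -->
<link rel="alternate" hreflang="de" href="https://xyz.com/de/" />
<link rel="alternate" hreflang="en" href="https://xyz.com/en/" />
<!-- x-default marks the fallback page for users who match no listed language -->
<link rel="alternate" hreflang="x-default" href="https://xyz.com/" />
```

Each language version should carry the full set of annotations, including one pointing to itself, so the versions reference each other reciprocally.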
4. Be cautious while syndicating content: Google shows the most appropriate web page version as per the search query during content syndication. But it may or may not be the page you want.
You can add a canonical tag to the syndicated page to suggest to Google which page to index. Here is an example of what a canonical tag looks like; insert the target URL inside the quotes:
<link rel="canonical" href="http://xyz.com/wordpress/seo-plugin/" />
Google wants each site that syndicates your content to link back to the original article. And if anyone asks your permission to syndicate your content, you can require them to add a noindex tag to their copy. That prevents Google from indexing the third-party version, so even when other sites syndicate your content, Google's SERPs will show your original.
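As a sketch, the noindex directive would sit in the head of the syndicated copy on the partner's site; pointing a canonical tag back at your original is an alternative. The URL here is hypothetical:

```html
<!-- On the partner's syndicated copy: keep this page out of Google's index -->
<meta name="robots" content="noindex" />

<!-- Alternative: point a canonical tag at the original article instead -->
<link rel="canonical" href="https://xyz.com/original-article/" />
```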
5. Avoid content repetition: When you need to include lengthy copyright text, terms and conditions, product features, and benefits on every page, add a summary and then link it to the page where the user can find detailed information.
6. Manage URL parameters: Developers use URL parameters for a better user experience, but multiple parameter combinations can create numerous URL variations of the same content, which can be an SEO nightmare. Google's URL Parameters tool in Search Console used to let you tell Googlebot how to treat each parameter, but Google retired it in 2022; today, canonical tags and consistent internal linking are the recommended ways to handle parameterized URLs.
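For example, a canonical tag can consolidate parameterized variations of a page onto one clean URL (the URLs below are hypothetical):

```html
<!-- All of these variations serve the same content:
     https://xyz.com/shoes?color=red
     https://xyz.com/shoes?color=red&sort=price
     https://xyz.com/shoes?sessionid=12345
     Each variation includes the same canonical tag in its <head>: -->
<link rel="canonical" href="https://xyz.com/shoes/" />
```

With this tag in place, crawlers can still reach every variation, but ranking signals consolidate on the one preferred URL.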
7. Understand your content management system: It’s crucial to know how your website content is displayed. Different website templates for blogs, forums, and related networks often show the same content in various places. For example, a newly published blog may appear on your home page, in the archive, or some other page. But Google won’t consider it as duplicate content.
8. Avoid similar content: If you have many pages with similar content, you can:
A. Expand each page, so it appears unique
B. Consolidate all the pages into one page
For instance, if you have a fitness website with separate pages for similar workouts, you can either merge them or add fresh content to each page to create several unique pages.
Should You Block Crawler Access to Certain Web Pages?
Google advises against using robots.txt files or other code to block crawler access to web pages with duplicate content. It can lead to indexing issues: if Google can't crawl a page, it can't tell that it's a duplicate and may treat it as a separate page.
Instead of blocking the bots, Google recommends marking duplicate pages with the rel="canonical" tag. You can also use 301 redirects.
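To make the contrast concrete, here is a hypothetical robots.txt sketch of the discouraged approach, with the recommended alternative noted in comments:

```text
# Discouraged: blocking duplicate pages in robots.txt.
# Google can't crawl them, so it can't consolidate ranking signals.
User-agent: *
Disallow: /print/

# Recommended instead: leave the pages crawlable and mark each one
# in its HTML with a canonical tag pointing at the main version, e.g.
# <link rel="canonical" href="https://xyz.com/main-page/" />
```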
If numerous duplicate pages are causing Googlebot to crawl your site excessively, you can also reduce the Googlebot crawl rate; this eases the crawl load, though it doesn't fix the duplication itself.
When Does Duplicate Content Not Hurt Your Website?
Duplicate content hurts a website's ranking if search engines perceive that it's used to manipulate search results. However, if you have unintentionally created several versions of the same page when structuring your website, you need not worry. Google usually does an excellent job of filtering out the best version of a web page to display on its SERPs (search engine results pages).
What if Someone Else Copies Your Content?
Though it's rare, if you find a site copying your content without your permission, you can:
- Contact the webmaster and request its removal.
- File a removal request under the Digital Millennium Copyright Act (DMCA). Google removes content that infringes copyright laws from its search results.
Duplicate content can be a major issue for your website if used to manipulate search engine results. In most other cases, it won’t harm your site’s rankings or indexing. However, it can significantly diminish the user experience. Therefore, you must work proactively to ensure that your website is free of duplicate content.