
How NOT To Do Keyword Research

Fair warning: this is going to be a bit of a rant.

First, a bit of background. I have conversations every week about SEO – if not every day.

They tend to be about implementation and range from talking about designing and building out a keyword strategy to more tactical topics like how to scale link building.

With those conversations inevitably comes talk of keywords, and more specifically keyword research. If you’re looking for information on the tools I like to use, check out this post on keyword tools.

The fact of the matter is that it all goes downhill when I get asked questions like this:

Which is pretty much a red flag for me, because as I see it, 99% of people have no idea how to find the keywords they should actually be targeting, let alone any process for prioritizing their term list into something actionable.

So when I dive a bit deeper and inquire about how they came up with these keywords, more often than not I get a response like this:

If you just read the above snippet of conversation and thought “yeah, what’s wrong with that?”

I’m Going To Tell You

For starters, the conversation was about SEO (not PPC) so the keyword planner provides no useful information when it comes to competition for organic rankings – the competition scale is for advertisers on AdWords.

A keyword can have almost no competition from advertisers (which just means no one is willing to pay for the clicks) but can be extremely competitive in terms of organic search.

This tends to happen for informational, non-commercialized terms, where there’s no buying or shopping intent – but loads of bloggers and journalists have written on the topic.

For example, if we look at an informational keyword like “how to grow roses” in keyword planner:

you can see it’s reporting the “Competition” as Low, and that’s because there aren’t advertisers clamoring to spend money to buy these clicks…

Whereas if we use a tool that actually correlates its difficulty score with organic ranking competition, like Ahrefs’ Keyword Explorer – we see a much different picture:

you might see 10 and think “oh, well that’s still easy” – and to be honest, it should be – but you’d still be wrong.

Instead you need to look at the SERP and gauge who you’re actually competing with.

Here are the URLs you’d need to overtake:

To put this in perspective, pay specific attention to the domain rating (DR) of this SERP.

The average DR (or relative measure of authority and ability to rank) for this keyword’s search results is 64.1.

What that means is that unless your website’s baseline link profile and associated trust is equivalent to that of almanac.com… it’s probably going to be pretty difficult to crack this SERP, let alone rank in a position that would bring any form of meaningful traffic, i.e. in the top 5.

More So Still

Simply writing blog posts and creating pages where you shoehorn your target keywords into the page titles, URLs, and header tags isn’t going to do much of anything.

Every blue moon you might get lucky and rank for a term that has zero competition, but that’s likely because it either a) has no meaningful intent or b) has no traffic.

Instead, use your keyword research data to create a map of all the content you have versus what you need, map your keywords to your content, build them into your site’s architecture and internal link structure – and then go promote those pages.

For those of you who don’t understand my subtlety – promote means build links.

How To Not Failboat Your Keywords

As I mentioned above – you need to take the existing SERP into consideration, more specifically – the URLs and domains that already have the rankings you’re after.

To do this you need to capture the baseline ranking heuristics that Google is likely using to calculate and score the relevancy of those pages.

These include (among other attributes):

  • Keyword usage in meta attributes
  • Word count
  • Page level / position in the site’s document architecture
  • Internal links to the page
  • External links to the domain and page
  • The domain’s trust score
  • Anchor text variation
  • The age of the domain

and a litany of other factors, but the above list is a good starting point, and then from here you need to go into data mining mode.
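
To make “data mining mode” a bit more concrete, here’s a minimal sketch of how you might structure a single ranking result as a record. The field names are my own shorthand (in Python) for the attributes listed above – not a fixed schema – so rename or extend them to match whatever you actually collect.

```python
# A minimal sketch of one ranking result as a record (Python 3.9+).
# Field names are shorthand for the attributes listed above, not a fixed schema.
from dataclasses import dataclass, field


@dataclass
class RankingResult:
    url: str
    position: int                      # organic ranking position on page 1
    keyword_in_title: bool             # keyword usage in the page title
    keyword_in_meta_description: bool  # keyword usage in the meta description
    keyword_in_headers: bool           # keyword usage in header tags
    word_count: int
    page_depth: int                    # position in the site's document architecture
    internal_links: int                # internal links to the page
    external_links_to_page: int        # external links to the page
    external_links_to_domain: int      # external links to the domain
    domain_trust_score: float          # e.g. Majestic Trust Flow or Ahrefs DR
    domain_age_years: float
    anchor_text_variants: list[str] = field(default_factory=list)
```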

Get All The Data You Need To Analyze

This means scraping Google (quick code sketch below) to get:

  • The URL and ranking position for all of page 1
  • The ranking page title and meta description
  • Any SERP features or additional UI elements affecting ranking positions and SERP CTR
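
Here’s a minimal sketch of that SERP scrape, assuming the requests and BeautifulSoup libraries. Google aggressively blocks automated queries, so in practice most people swap this raw fetch for a SERP API or a headless browser – and the div.g / h3 selectors are assumptions about Google’s current markup, which changes without warning. SERP features would need their own selectors on top of this.

```python
# Minimal SERP scrape sketch - assumes requests + beautifulsoup4 are installed.
import requests
from bs4 import BeautifulSoup


def fetch_serp(keyword: str) -> list[dict]:
    """Return position, URL, and title for the page-1 organic results."""
    resp = requests.get(
        "https://www.google.com/search",
        params={"q": keyword, "num": 10},
        headers={"User-Agent": "Mozilla/5.0"},
        timeout=10,
    )
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")

    results = []
    # "div.g" is an assumption about Google's current result markup.
    for position, result in enumerate(soup.select("div.g"), start=1):
        link = result.select_one("a[href]")
        title = result.select_one("h3")
        if link and title:
            results.append({
                "position": position,
                "url": link["href"],
                "title": title.get_text(strip=True),
            })
    return results
```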

Then you need to scrape each of the ranking URLs (sketch below) for:

  • Keyword usage in meta attributes:
    • Page title
    • Meta description
    • Header tags
    • On-page content
  • Word count
  • Page creation date or last updated date
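
And a similar sketch for the on-page side, again assuming requests + BeautifulSoup. The word count here is just a rough count of visible text tokens, and plenty of sites don’t expose a reliable published/updated date in their markup, so treat those fields as best-effort.

```python
# On-page scrape sketch for a single ranking URL.
import requests
from bs4 import BeautifulSoup


def scrape_page(url: str) -> dict:
    resp = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}, timeout=10)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")

    meta_desc = soup.find("meta", attrs={"name": "description"})
    modified = soup.find("meta", attrs={"property": "article:modified_time"})

    return {
        "url": url,
        "title": soup.title.get_text(strip=True) if soup.title else "",
        "meta_description": (meta_desc.get("content") or "").strip() if meta_desc else "",
        "h1": [h.get_text(strip=True) for h in soup.find_all("h1")],
        "h2": [h.get_text(strip=True) for h in soup.find_all("h2")],
        "word_count": len(soup.get_text(" ", strip=True).split()),
        "last_updated": modified.get("content") if modified else None,
    }
```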

Lastly, you’ll want to grab some third-party data for each of the URLs (sketch below):

  • Domain age
  • Links to domain
  • Links to page
  • Trust flow
  • Citation flow
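
Most of the link-metric tools will hand you this as a CSV export (or an API if you pay for access). One low-effort way to fold it into the scraped data is a pandas join – the file and column names below are assumptions about your own export, not a fixed format.

```python
# Join exported link metrics onto the scraped SERP/page data with pandas.
import pandas as pd

serp = pd.read_csv("serp_pages.csv")      # url, position, title, word_count, ...
links = pd.read_csv("links_export.csv")   # url, links_to_page, links_to_domain,
                                          # trust_flow, citation_flow, domain_age

merged = serp.merge(links, on="url", how="left")
merged.to_csv("keyword_serp_data.csv", index=False)
```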

Gauge The SERP’s Rank Potential

Welcome to the final 10%.

90% of people won’t get to this point, and even fewer will take all the data you’ve just collected and analyze it to answer the question that actually matters: based on all the ranking heuristics for this keyword’s SERP, can you (and more importantly, YOUR WEBSITE) actually rank here – and if so, what’s it going to take?

To do this – it’s best to have all of this data loaded into a spreadsheet, and then to throw in some conditional formatting.

Here’s an example spreadsheet you can use – just remember to click File > Make a Copy.

This allows you to look for rows with the most green to identify potential opportunities.
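
If you’d rather score rows than eyeball the formatting, the same idea translates to pandas: flag each metric that sits below a threshold you can realistically beat, then count the flags per row. The column names and thresholds below are assumptions – tune them to your own site’s baseline.

```python
# Count how many "green" (beatable) metrics each ranking URL has.
import pandas as pd

df = pd.read_csv("keyword_serp_data.csv")

checks = {
    "weak_domain": df["domain_rating"] <= 40,    # lower authority than your site
    "few_page_links": df["links_to_page"] <= 10,
    "thin_content": df["word_count"] <= 1200,
}
df["green_count"] = pd.DataFrame(checks).sum(axis=1)

# Rows with the most green are the first opportunities to review.
print(df.sort_values("green_count", ascending=False).head(10))
```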

More specifically, what you’re looking for in those opportunities is the specific rank potential for each keyword.

What Is Rank Potential?

It’s where you (and your website) can realistically rank for a given keyword.

It doesn’t mean “create this page” and *POOF* you magically rank on page 1.

It means, based on your website in its current form – with all the above data considered, what needs to be DONE to rank in the position highlighted in the row with all the green boxes in it.

If you follow the process in this post for each and every keyword in your target universe, what you will ultimately be left with will look a lot like this:

From there you can filter down, sorting specific columns in ascending order to find opportunities that are a good fit for you and your website based on your strong points.

For example, you can sort and find opportunities (see the sketch below the list) that:

  • Require the least links
  • Require the least content
  • Don’t require you to create new pages
  • Can be attained simply by creating a new page targeting the keyword

and so on.
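
As a rough pandas illustration of those sorts, assuming you’ve exported the filled-in template as a CSV with columns along the lines of links_needed, content_gap_words, and needs_new_page (my names, not the template’s):

```python
import pandas as pd

opps = pd.read_csv("rank_potential.csv")  # the filled-in template, exported as CSV

# Opportunities that require the fewest new links
print(opps.sort_values("links_needed").head(10))

# Opportunities that require the least additional content
print(opps.sort_values("content_gap_words").head(10))

# Opportunities that don't require creating a new page
print(opps[~opps["needs_new_page"]].head(10))
```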

Best of All

If you scraped ALL this data, you now have a comprehensive snapshot of your SEO market, including all your competitors, their keyword footprints, and your current keyword marketshare.

This would allow you to quickly spin up pivot tables to visualize your competitors’ market share by groups of rankings, based on attributes like ranking position, intent, volume, etc.
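
Here’s a rough sketch of that pivot, assuming the combined sheet has one row per keyword/ranking-URL pair with domain, position, and volume columns (again, my assumed names for your export):

```python
# Competitor market share by ranking-position bucket, weighted by search volume.
import pandas as pd

df = pd.read_csv("keyword_serp_data.csv")

# Bucket positions so the pivot reads as "who owns the top of page 1".
df["position_bucket"] = pd.cut(df["position"], bins=[0, 3, 10], labels=["1-3", "4-10"])

share = pd.pivot_table(
    df,
    index="domain",              # one row per competitor
    columns="position_bucket",
    values="volume",             # monthly search volume of the keyword
    aggfunc="sum",
    fill_value=0,
)
print(share.sort_values("1-3", ascending=False).head(15))
```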

Ready to Do Keyword Research the RIGHT Way?

I’m here to help – feel free to drop any questions in the comments.

If you want to really get started down the right path, use the Google Sheets template I linked to above and start filling in your data – then drop comments here for help as you hit roadblocks.

Nick Eubanks | June 29, 2017