Keyword research is one of the tasks we SEOs do most often to understand what our target audiences are searching for and how those searches are made.

To make this research easier, there are plenty of tools with different databases that can give us an estimate of the search volume of the keywords we are interested in, and even some keyword suggestions that can be useful to reach our buyer persona.

Many of these tools are well known in our community, such as Semrush, Ahrefs, Ubersuggest and the Keyword Planner, or you can even use Google Search Console. In fact, if you are interested in using Google Search Console, I recommend this article by Jean-Christophe Chouinard, where he explains how to get all your keywords from Google Search Console using Python and the Google Search Console API.

However, another good strategy to get ideas about what users are looking up on Google is to have a look at the “Related Searches”, which appear at the bottom of the results page, or at the suggestions shown through the autocomplete feature while the query is being typed into the search box.

In the end, through this feature Google is showing us what other users are searching for, so from an SEO point of view we can get very valuable insights to enrich our keyword research.

In this article we’ll pay special attention to the “Related Searches” feature, and I will show how the Python script I have created to get these searches works.

How does the script work? Step by step

1.- Where the data is scraped from

Google provides a URL which can be used to request the related searches. This URL looks like:


The keyword for which we want to get the “related searches” is entered through the “q” parameter, which appears at the end of the expression. If we want to enter a query made up of several words, such as “cola cao”, we need to replace the spaces with “+” symbols, so the final URL would look like:


This will return an XML file with the related searches for that keyword. In this case the request is made for Spanish, but the language can be changed by setting the “hl” parameter to the language we are interested in.
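The exact URL appears as a screenshot in the original post and is not reproduced here, so the endpoint below is an assumption on my part (Google’s well-known XML suggest service). A minimal sketch using only the standard library, with urllib’s `quote_plus` in place of manual “+” replacement and `ElementTree` in place of Beautiful Soup:

```python
import xml.etree.ElementTree as ET
from urllib.parse import quote_plus

# Assumed endpoint for the XML suggestions (the original post shows the URL
# as an image); "hl" sets the language, "q" carries the query.
SUGGEST_URL = "http://suggestqueries.google.com/complete/search?output=toolbar&hl={hl}&q={q}"

def build_suggest_url(keyword, hl="es"):
    # quote_plus() replaces the spaces with "+" symbols, as described above.
    return SUGGEST_URL.format(hl=hl, q=quote_plus(keyword))

def parse_suggestions(xml_text):
    # Each related search comes back as a <suggestion data="..."/> element.
    root = ET.fromstring(xml_text)
    return [node.attrib["data"] for node in root.iter("suggestion")]
```

For example, `build_suggest_url("cola cao")` produces a URL ending in `q=cola+cao`, and `parse_suggestions()` pulls every `data` attribute out of the XML reply.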

2.- The technique we use to scrape as many keywords as possible

The pattern we follow is very simple and straightforward. If we type “cola cao a” on Google (carrying on with the “cola cao” example), we will see all the suggestions that start with “a”, such as “cola cao azucar” or “cola cao anuncio”.

So if we iterate and do the same for each letter of the alphabet, we will get the most extensive possible list of related searches for the initial keyword “cola cao”.
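The letter-by-letter expansion described above can be sketched in a couple of lines (`expand_seed` is an illustrative name, not taken from the original script):

```python
import string

def expand_seed(seed):
    # "cola cao" -> ["cola cao a", "cola cao b", ..., "cola cao z"]
    return [f"{seed} {letter}" for letter in string.ascii_lowercase]
```

Calling `expand_seed("cola cao")` yields the 26 queries the script will request, one per letter of the alphabet.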

In short, the script will request the XML file for “cola cao a”, “cola cao b”, “cola cao c”, “cola cao d” and so on, scrape the results from each XML file and return an output in the form of an Excel file with all the extracted keywords.
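Putting the pieces together, the overall flow can be sketched as below. This is not the original script: the endpoint, the function names and the injectable `fetch` argument are assumptions for illustration, and it writes a CSV file with the standard library where the original produces an Excel file with XlsxWriter.

```python
import csv
import string
import time
import urllib.request
import xml.etree.ElementTree as ET
from urllib.parse import quote_plus

# Assumed endpoint; the original post shows the real URL as a screenshot.
SUGGEST_URL = "http://suggestqueries.google.com/complete/search?output=toolbar&hl=es&q={}"

def fetch_suggestions(query):
    # One request per query; returns the related searches from the XML reply.
    with urllib.request.urlopen(SUGGEST_URL.format(quote_plus(query))) as resp:
        root = ET.fromstring(resp.read())
    return [node.attrib["data"] for node in root.iter("suggestion")]

def scrape_keyword(seed, fetch=fetch_suggestions, delay=0):
    # Request "cola cao a", "cola cao b", ... and collect every suggestion once.
    keywords = []
    for letter in string.ascii_lowercase:
        query = f"{seed} {letter}"
        print(f"Requesting: {query}")  # progress notice, as the script prints
        for suggestion in fetch(query):
            if suggestion not in keywords:
                keywords.append(suggestion)
        time.sleep(delay)  # the original pauses 5-10 s between URLs
    return keywords

def save_keywords(keywords, path):
    # CSV stand-in for the Excel output the original builds with XlsxWriter.
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["keyword"])
        writer.writerows([k] for k in keywords)
```

Passing `fetch` as an argument makes the loop testable without hitting Google, and `delay` keeps the polite pause configurable.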

3.- Using the Python script

Once we have understood how this script works, it is time to make use of it. The script can be found at the following URL:

When we execute this script in our terminal, it will ask us for the keyword we want to enter:

Once the keyword is entered, the script will start scraping the data from the different XML files it requests. At the same time, it will print each query on the terminal to keep us informed of its progress.

Finally, once the data has been scraped from all the XML files, the script will return an Excel file with the 260 keyword suggestions it has obtained (26 letters × 10 suggestions per XML file).

4.- More information about the script

To run the script you will probably need to install some Python libraries, such as Beautiful Soup, Requests and XlsxWriter. In addition, the script uses Python version 3.6.8.

If you know how to program, you can also add proxies to the script to scrape more XML files faster without being banned by Google, as the current version uses the sleep function to pause between 5 and 10 seconds for each URL it scrapes, to avoid Google banning the IP.
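For illustration, the random 5-10 second pause and a proxy-aware opener can be sketched with the standard library (the proxy address below is a hypothetical placeholder, and the original script uses Requests rather than urllib):

```python
import random
import time
import urllib.request

def polite_pause(low=5, high=10):
    # Random pause between requests, as the script does to avoid an IP ban.
    delay = random.uniform(low, high)
    time.sleep(delay)
    return delay

def opener_with_proxy(proxy_url):
    # Route requests through a proxy so more URLs can be scraped, faster,
    # without Google banning a single IP. proxy_url is a placeholder value.
    handler = urllib.request.ProxyHandler({"http": proxy_url, "https": proxy_url})
    return urllib.request.build_opener(handler)
```

With a pool of proxy addresses you could rotate `opener_with_proxy()` per request and shrink (or drop) the pause.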

There is an extended version of the code included as a comment, which requests the XML files for each of the related searches obtained initially, in order to get the most comprehensive list of related searches possible.
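That two-level idea can be sketched as follows, with the network call injected as a `fetch` function so the expansion logic stands on its own (the names are illustrative, not taken from the script):

```python
def expand_two_levels(seed, fetch):
    # First pass: related searches for the seed keyword itself; second pass:
    # related searches for each first-level result, deduplicated along the way.
    first_level = fetch(seed)
    keywords = list(first_level)
    for suggestion in first_level:
        for deeper in fetch(suggestion):
            if deeper not in keywords:
                keywords.append(deeper)
    return keywords
```

Each extra level multiplies the number of requests, which is exactly why the extended version benefits from proxies or longer pauses.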