In today’s post I am going to show you some easy and useful SEO tricks using plain Python that can make your day-to-day work more efficient. To make use of these tricks you will not even need to install any special module; we are just going to use features that come with Python by default, such as for loops, len() and split().

What sort of tasks can we do with plain Python?

  • Count characters and words in a text.
  • Find word occurrences and calculate keyword density in a text.
  • Build text spinners.

Does this sound interesting? Then let’s get started!

1.- Counting characters and words

Counting the characters and words in a text can be a very interesting exercise to check how many words the best-performing competitors have on their pages. Of course, we all know that Google’s algorithm does not weight text length as much as it used to, but it is still an interesting exercise to get a reference for the quantity and sort of content that your competitors are using on their pages.

This is something that can be done very easily in Python with len() and .split(). Let’s take a text fragment from one of my previous posts to demonstrate how it works:

text = "PageSpeed Insights API is a very powerful tool as it can give us lots of data to enhance the speed performance in a bulk way for many pages and we can even store this data in a database to analyze the speed evolution over the time as we make changes to improve the pages speed. The only thing that we need for getting the most out of PageSpeed Insights is being aware of all the data we can extract and being able to manipulate JSON files."

# Total characters, including spaces
number_characters = len(text)
# Splitting on spaces gives the list of words
number_words = len(text.split(" "))

If we print the variables number_characters and number_words we will get: 439 characters and 86 words. Easy, right?
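In code, that check is just two print calls:

print(number_characters)  # 439
print(number_words)       # 86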

2.- Word occurrences and keyword density

Now that we know how to calculate the number of characters and words in a text, let’s go a bit further and check the word occurrences and keyword density. To do this, we will create a dictionary where we add each word from the text; if a word is already in the dictionary, we increase its number of occurrences by one.

At the end of this process, we will sort the dictionary in descending order so that the most used words in the text appear first and the least used words at the end.

count_words = dict()
words = text.split(" ")

# Count how many times each word appears in the text
for word in words:
    if word in count_words:
        count_words[word] += 1
    else:
        count_words[word] = 1

# Sort by occurrences in descending order and keep the 20 most used terms
sorted_count = sorted(count_words.items(), key=lambda kv: kv[1], reverse=True)[0:20]
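As an aside, the standard library already ships this exact pattern: collections.Counter (no installation needed, in keeping with the spirit of this post) produces the same top-20 list in one line:

from collections import Counter

# Equivalent one-liner using the standard library's Counter
sorted_count = Counter(text.split(" ")).most_common(20)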

Our final output will be a list called sorted_count holding the 20 most used terms in the text together with their number of occurrences. If we print it, common articles and prepositions such as "the" dominate the first positions.

It would be advisable to add a stop word list so that common prepositions and articles do not take the top positions, as happens in my example.
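Here is a minimal sketch of that filtering, reusing count_words from above (the stop word list is my own illustrative choice, not an exhaustive one):

stop_words = ["the", "a", "of", "to", "in", "as", "we", "is", "and", "it", "for", "that"]

# Keep only the words that are not in the stop word list
filtered_count = dict()
for word, occurrences in count_words.items():
    if word.lower() not in stop_words:
        filtered_count[word] = occurrences

sorted_filtered = sorted(filtered_count.items(), key=lambda kv: kv[1], reverse=True)[0:20]

We can also use the same logic to create another code fragment that returns a list with the most frequently used 2-word terms.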

count_words2 = dict()
words = text.split(" ")

# Build 2-word phrases by pairing each word with the previous one
previous_word = None
for word in words:
    if previous_word is not None:
        phrase = previous_word + " " + word
        if phrase in count_words2:
            count_words2[phrase] += 1
        else:
            count_words2[phrase] = 1
    previous_word = word

# Keep the 20 most used 2-word phrases
sorted_count = sorted(count_words2.items(), key=lambda kv: kv[1], reverse=True)[0:20]

In this case, the final output is a list with the most used 2-word terms in our text, again paired with their number of occurrences.

Finally, if we want to add the keyword density to each of the tuples, we need to iterate through them, take the number of occurrences and divide it by the total number of words; for example, a phrase that appears 2 times in our 86-word text has a density of 2/86, roughly 2%. Since tuples cannot be appended to, we will convert them into lists before adding the keyword density:

for iteration in range(len(sorted_count)):
    # Tuples are immutable, so convert each (term, occurrences) tuple to a list
    sorted_count[iteration] = list(sorted_count[iteration])
    # Keyword density: occurrences divided by total words, as a percentage
    density = round(sorted_count[iteration][1] / len(words) * 100)
    sorted_count[iteration].append(str(density) + "%")

The final output is a list of lists, each one holding a 2-word term, its number of occurrences and its keyword density.

3.- Text Spinners

Finally, we can also use Python to build a text spinner. Although this technique is not as useful as it used to be, since Google’s algorithm has become quite sophisticated and is able to detect this kind of text pattern, it is still a resource we can use to generate different sentence combinations.
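As a quick illustration of the idea, here is a minimal spintax-style sketch using only the standard random module (the option lists are my own invented examples):

import random

# Each inner list holds interchangeable options; pick one per slot at random
sentence_parts = [
    ["Cheap", "Affordable", "Great"],
    ["flats", "apartments"],
    ["for rent in", "available in"],
    ["Barcelona"],
]
print(" ".join(random.choice(options) for options in sentence_parts))
# e.g. "Affordable apartments available in Barcelona"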

So, for instance, imagine that you need to create meta titles for a real estate company where the only difference between them is the neighborhood and whether the flat is for sale or for rent. We can generate those meta titles very easily by putting the different options into lists, nesting the loops and iterating through them.

Let’s have a look at how it would work!

neighborhoods_barcelona = ["Eixample", "Poblenou", "Gracia", "Poblesec", "Bogatell", "Montjuic", "Sants"]
statuses = ["For Rent", "For Sale"]

# Nested loops generate every combination of neighborhood and status
for neighborhood in neighborhoods_barcelona:
    for status in statuses:
        print("Flat " + status + " in " + neighborhood + ", Barcelona - MySite")

This piece of code prints one meta title per combination, 14 in total:
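Flat For Rent in Eixample, Barcelona - MySite
Flat For Sale in Eixample, Barcelona - MySite
Flat For Rent in Poblenou, Barcelona - MySite
Flat For Sale in Poblenou, Barcelona - MySite
...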

That is all, folks! If you happen to have other simple tricks for SEO with plain Python, do not hesitate to share them with me!