How Will Google’s New Algorithm SMITH Affect SEO?

Every time you search a query on Google, it presents the best possible results in seconds. But how? Have you ever wondered how it manages to do so? How, every time you ask Google something, does it deliver relevant content in its top search results? Let me tell you: the main hero behind this picture is its algorithms (Google Algorithms).

If you are a webmaster, you must have heard the word “Algorithm”. Algorithms are nothing but sets of rules and instructions that help solve a particular problem step by step. Google also uses algorithms to provide its users with relevant information, in the form of web pages, on its SERPs.

Recently, there has been a huge buzz in the digital world about Google’s new algorithm, SMITH. In December 2020, Google published a research paper on it but had not rolled it out.

“As per Google, the new SMITH algorithm shows optimum results in understanding long passages within an entire text document.

The AI-based new algorithm from Google, SMITH, understands the meaning of long passages in the context of an entire text document, in a similar fashion to how BERT understands words and sentences.

As per the research paper, the SMITH algorithm can take 400% more text input than BERT, and this makes it better at language processing.”


However, Google has not clarified yet whether it is using the SMITH algorithm or not. But it is typical of Google not to declare which algorithms it is using at any given time.

So, let’s dig deeper and see how it may impact the search results.

What is the “SMITH” Algorithm?

SMITH (not a person named Smith) is an abbreviation of ‘Siamese Multi-depth Transformer-based Hierarchical’. It is a new Google algorithm that works to understand large text documents well. It is a model that understands passages in the context of an entire document, whereas an algorithm like BERT understands words within the context of sentences.

SMITH works with a two-tower structure: one tower works at the document level, dividing long passages into sentences and blocks, whereas the second understands the meaning of those sentences and blocks. This split is used because language processing is a TPU/GPU-intensive task.
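To make the two-tower idea more concrete, here is a minimal Python sketch (my own illustration, not Google’s code) of the first, document-level step: splitting a long text into sentences and grouping them into fixed-size blocks. The real model operates on tokens and learned embeddings; this only shows the segmentation idea.

```python
import re

def split_into_blocks(document, sentences_per_block=3):
    """Toy sketch of SMITH-style hierarchical segmentation:
    split a long document into sentences, then group the
    sentences into fixed-size blocks (illustrative only)."""
    # Naive sentence splitter on terminal punctuation.
    sentences = [s.strip()
                 for s in re.split(r"(?<=[.!?])\s+", document)
                 if s.strip()]
    # Group consecutive sentences into blocks.
    return [sentences[i:i + sentences_per_block]
            for i in range(0, len(sentences), sentences_per_block)]

doc = ("SMITH reads long documents. It splits them into sentences. "
       "Sentences are grouped into blocks. Each block is encoded separately. "
       "A second tower combines the block representations.")
blocks = split_into_blocks(doc, sentences_per_block=2)
for i, block in enumerate(blocks):
    print(f"Block {i}: {block}")
```

Each block can then be processed independently, which is what keeps the TPU/GPU cost manageable for long inputs.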

Both SMITH and BERT are pre-trained models. BERT is trained by predicting randomly hidden words within the context of sentences, whereas SMITH is pre-trained to predict what the next blocks of sentences are.

This kind of training helps the algorithm, and Google, understand the quality of content and index the most relevant information for users.

What are the findings of the research paper about Google’s New algorithm SMITH?

As per the research paper on the SMITH algorithm by Google, BERT has certain limitations: it can understand only short documents well. Therefore, the researchers had to come up with a new algorithm that could do the heavy lifting BERT was unable to do when working on larger documents.

BERT struggles with semantic matching between long texts. According to the researchers, the SMITH algorithm solves this problem. This is why the researchers find SMITH so intriguing: it can overcome the limitations of BERT.

‘As per the research paper, this new SMITH algorithm from Google is not intended to replace BERT. Rather, it can be seen as an extension of BERT, created to cover the cases that BERT is unable to handle.’

What are the Details of Google’s SMITH Algorithm?

The research paper clearly describes SMITH as a pre-trained model, quite similar to the BERT algorithm. I will not go into full detail here, but it is worth understanding that the SMITH algorithm undergoes pre-training to predict random hidden words within the context of a sentence.

Before getting in deep, let’s first understand:

Algorithm Pre-training

Pre-training means training an algorithm on a large data set before it is applied to a real task. The SMITH algorithm is a proposed language-processing model that is pre-trained to predict randomly hidden words in sentences.

Let’s understand with an example. Suppose a sentence is written as “Johny Johny ___ Papa”. Then, the pre-trained algorithm should predict “yes” as the hidden word.

Pre-training is done so that the machine becomes accurate and optimized, and makes fewer mistakes.
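The “Johny Johny ___ Papa” idea can be sketched in code. The snippet below is my own toy illustration of masked-word prediction, using simple context counts rather than the Transformer machinery that SMITH and BERT actually use: it predicts the hidden word from the words on either side of it, based on a tiny training corpus.

```python
from collections import Counter

# Tiny training corpus; a real model trains on billions of sentences.
corpus = [
    "johny johny yes papa",
    "johny johny yes papa",
    "eating sugar no papa",
]

# Count which word appears between each (left, right) context pair.
context_counts = {}
for line in corpus:
    words = line.split()
    for i in range(1, len(words) - 1):
        key = (words[i - 1], words[i + 1])
        context_counts.setdefault(key, Counter())[words[i]] += 1

def predict_masked(left, right):
    """Return the word most often seen between `left` and `right`,
    or None if this context never appeared in the corpus."""
    counts = context_counts.get((left, right))
    return counts.most_common(1)[0][0] if counts else None

print(predict_masked("johny", "papa"))  # → yes
```

A real pre-trained model does the same job with learned representations instead of raw counts, which is what lets it generalize to contexts it has never seen verbatim.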

Is SMITH a pre-trained model?

Yes, SMITH is a pre-trained model. It works with two towers (not the towers you regularly see; they are models, i.e. components of the proposed language-processing technique). It segments longer passages into sentences and blocks, and the SMITH algorithm is trained to predict randomly hidden words and blocks of sentences.

The primary focus of this algorithm is to deduce the relationships between words, then level up to understand the context of sentences and how those sentences are related to each other in longer text documents.

Is SMITH an important event for SEO?

As per SEO experts, SMITH does not have much direct involvement with SEO, as it has not been evaluated from an SEO perspective. SMITH is a new algorithm from Google that outperforms BERT in understanding passages within longer text documents. It is a pre-trained model, and much of its strength comes from predicting random hidden words within texts and understanding what the next blocks are.

Results of SMITH

In the researchers’ own words, Google’s new algorithm SMITH has won the battle of understanding longer text documents compared to BERT. This leads them to conclude that SMITH is the better option for larger text documents.

Whether Google is using SMITH Algorithm or not?

To date, Google has not indicated whether it is using the SMITH algorithm or not. As I mentioned above, Google generally does not publicly reveal which algorithms it is currently using. But, going by the researchers’ words, SMITH ticks all the boxes that BERT is unable to. Until Google clearly states otherwise, whether it is in use remains purely hypothetical.

Final Words: Does SMITH Really Outstrip the BERT Algorithm?

Yes, SMITH is a powerful model that outmatches BERT in understanding longer input texts. However, it would be unfair to say that it outperforms BERT completely.

The research paper shows that language processing on TPUs/GPUs is a resource-intensive task. It demands a lot of computation and advanced AI systems. Therefore, Google might use both SMITH and BERT to gain optimal effectiveness in handling both longer documents and shorter queries.