Flexible Error Detection in Product Titles Using Probability

Computers are good at evaluating conditions that are clearly yes or no. With guidelines for product titles, the answer to "is this an error?" is often a maybe.

Tim Gilbert · 5 September 2017

Have you tried reading the Google Shopping Guidelines for Product Titles and gotten confused?

Even the cliff-notes version of the rules is full of vague instructions that are open to interpretation or that don't provide a full list of potential problems. For example: "Don't use words from foreign languages, unless they are well understood". English has adopted many German, French, and Latin words. How do you know which words still count as foreign? How can you tell whether they are commonly understood across many regions? Or "Don't use capital letters for emphasis... You should still use capitalization when it's appropriate." When is it appropriate to have non-standard capitalization? What if a brand or trademark is normally all caps or capitalized in a non-standard fashion? If that's okay for abbreviations, does it also apply to units of measurement?


In typical programs, computers are even worse than humans at evaluating fuzzy conditions for product titles. There are generally three ways to approach the problem.

The first is to write a very detailed set of rules that are each checked in order to decide whether a problem should be flagged. This is a decision tree, and it can quickly become highly complex, difficult to maintain, and nightmarish to add new conditions to, because each condition must be weighed against all the others to see which should be the deciding factor first. Its benefit is that your program can explain exactly why it flagged or didn't flag a given title.
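To make this concrete, here is a minimal sketch of what such a hand-written rule chain might look like. The brand list, unit list, and rules are purely illustrative, not the actual guideline checks:

```python
# A tiny hand-written rule chain: each rule is checked in order, and the first
# one that matches decides the outcome. Brand/unit lists are illustrative only.

KNOWN_BRANDS = {"IKEA", "LEGO"}          # all-caps brand names are allowed
UNIT_ABBREVIATIONS = {"ML", "OZ", "CM"}  # so are unit abbreviations

def flag_title(title: str):
    """Return a reason string if the title should be flagged, else None."""
    for word in title.split():
        if word.isupper() and len(word) > 1:
            if word in KNOWN_BRANDS or word in UNIT_ABBREVIATIONS:
                continue
            return f"capital letters used for emphasis: {word!r}"
    if len(title) > 150:
        return "title is longer than 150 characters"
    return None

print(flag_title("LEGO Star Wars X-Wing, 731 pieces"))  # None (brand exception)
print(flag_title("AMAZING deal on wireless earbuds"))   # flags 'AMAZING'
```

Even with only two rules, you can see how exceptions pile up; every new condition has to be threaded through the existing chain in the right order.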

Another approach is to use a machine learning model (also called A.I.), where you manually construct a set of thousands to millions of examples of product titles and what should be flagged, and let the computer learn to emulate what you did. Constructing a balanced and extensive training set is time-consuming, and it is usually futile to try to extract an explanation of why a particular title was flagged, which makes debugging very hard. On the plus side, the model can sometimes distinguish between borderline cases that would be hard to articulate a reason for treating differently, and you get a confidence score for how likely the model is to be correct in identifying the problem.

A third approach is a compromise between the first two. In this approach, the conditions are still specified by a human, but instead of telling the computer that a condition means error or not-error, you assign it a probability. Then, instead of maintaining a complex set of decision rules, you simply combine the probabilities (by summing or multiplying them) to come up with a final decision on whether there's a problem. With this method you get the explainability of knowing which factors triggered the error, the rules are easier to maintain, and you also get a confidence score.
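Here is a rough sketch of that third approach. The condition functions, their weights, and the noisy-OR combination rule are my own illustrative choices, not a prescription; each triggered condition simply contributes its probability to an overall confidence score:

```python
# Each condition is hand-written and carries a probability that it signals a
# real guideline violation. Triggered probabilities are combined with a
# noisy-OR into one confidence score. Conditions and weights are illustrative.
import re

CONDITIONS = [
    # (description, probability this condition indicates a real error, test)
    ("all-caps word (possible emphasis)", 0.60,
     lambda t: bool(re.search(r"\b[A-Z]{3,}\b", t))),
    ("exclamation mark", 0.85,
     lambda t: "!" in t),
    ("promotional phrase", 0.90,
     lambda t: "free shipping" in t.lower()),
]

def score_title(title: str):
    """Return (confidence, triggered conditions) for a product title."""
    triggered = [(desc, p) for desc, p, test in CONDITIONS if test(title)]
    # Noisy-OR: probability that at least one triggered condition is a real error.
    not_error = 1.0
    for _, p in triggered:
        not_error *= (1.0 - p)
    return 1.0 - not_error, triggered

confidence, reasons = score_title("BUY NOW with free shipping!")
print(round(confidence, 3), [d for d, _ in reasons])  # 0.994 plus all three reasons
```

The output is both a confidence score and the list of conditions that fired, so the flag stays explainable.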


For product title problems relating to quality in general, and specifically to meeting the Google Shopping campaign guidelines, I've realized that there are really two types of fuzziness.

The first is confidence, as mentioned above: how likely a flag is to be correct.

The other fuzzy factor is relative severity. Not all kinds of product title errors are equally important to fix. Some issues will get a product immediately flagged and blocked by a distribution channel, some have a bigger impact on bottom-line performance, some may affect how professional the company appears to customers, and some may only violate minor grammatical rules with no real impact on the bottom line or customer opinion. We can use the same approach of giving each condition a probability for severity.

Then we can combine the two factors of confidence and severity to prioritize problems within each product title, and to prioritize which product titles have the most severe problems within your catalog's Google Shopping feed.
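As a small illustrative sketch, assume each flagged issue carries a confidence and a severity between 0 and 1, and that multiplying the two gives a workable priority score (the issue data below is made up):

```python
# Combine the two fuzzy factors: confidence (how likely the flag is correct)
# and severity (how much it matters). Their product ranks issues within a
# title and titles within the feed. All numbers here are illustrative.

def priority(issue: dict) -> float:
    return issue["confidence"] * issue["severity"]

flagged_titles = {
    "BUY NOW!! Wireless Mouse": [
        {"reason": "promotional phrase", "confidence": 0.95, "severity": 0.90},
        {"reason": "repeated punctuation", "confidence": 0.80, "severity": 0.30},
    ],
    "Chaise longue cushion, 60 cm": [
        {"reason": "possible foreign-language word", "confidence": 0.40, "severity": 0.50},
    ],
}

# Rank titles by their worst issue, then rank issues within each title.
for title, issues in sorted(
    flagged_titles.items(),
    key=lambda item: max(priority(i) for i in item[1]),
    reverse=True,
):
    print(title)
    for issue in sorted(issues, key=priority, reverse=True):
        print(f"  {priority(issue):.2f}  {issue['reason']}")
```

The result is a worklist: the titles most likely to have the most damaging problems float to the top of the feed review.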
