How Google Plans to Make Autocomplete Safer

Google released the autocomplete feature in 2004 and has been fine-tuning it ever since. It’s a handy tool that streamlines queries by serving suggestions based on the first few letters a user types into the search bar. Some may argue it’s invasive and creepy, but it’s a very useful service and can lead to some interesting queries.

It can also lead to some very offensive searches, and that’s why Google is changing the system with “Project Owl,” a new update that will allow users to police suggestions and make them safer and better.

When a query is made, Google will still make suggestions, but users will have the option to report a particular suggestion as offensive or as fake news. Google will review these reports and add them to its database to try to determine which terms are not relevant to a user’s query. Even so, there is still room to game the system.

What if a rival company reports its competitor’s brand name as an offensive search? What if someone who disagrees with a politician reports that politician’s name, or the name of their bill, as an offensive search term? It’s not clear how Google will police those types of abuse, but the system is designed in good faith.

The hope is that Google will be able to teach its algorithm to distinguish fake news from the real thing and serve more accurate suggestions that aren’t offensive.

Bio: Fix Bad Reputation specializes in assisting clients with the removal of negative reviews, and negative content, from Google and other search engines.