Google’s latest infrastructure change centers on how content is generated and presented to users. For two decades, Google Search has been the invisible hand driving the public’s curiosity and its demand for immediately accessible, real-time information on anything and everything.
But now, the rise of artificial intelligence (AI) has changed everything, including how content is generated and how users want to interact with it. For years, SEO, or search engine optimization, has allowed our world to be “indexed” online through tactical insertions of keywords into URLs, article titles, and, of course, the articles and websites themselves, all for easy discoverability. The closer something sat to Page 1 of Google News, the more likely it was to be discovered rather than lost in the noise.
But today, the value of SEO has undeniably changed: the desire to be on, or as near to, Page 1 as possible has collided with the ethics of doing whatever it takes to get there – even if that means publishing complete spam or bullshit.
Arguably, the power of SEO really came through in 2001, when Google Image Search launched and users began to recognize the power of “Google bombing” – famously used to tie the phrase “talentless hack” to a targeted website.
A recent report from The Verge does an excellent job of breaking down the history and transformation of Google’s algorithm and SEO infrastructure.
In its ever-changing algorithm, Google recognizes that repetitive and/or low-quality AI content could harm the SEO mechanism, and that human contribution remains a crucial element in how content is created and distributed. We have already seen plenty of examples in which AI models like ChatGPT make up information (machine hallucinations) or lift others’ original work, opening the door to defamation, copyright infringement, and other ethical concerns.
Quality of Content Will Now Take the Throne
A recent report from Decrypt indicated that Google’s most recent update involves the search engine “quietly rewriting its own rules” to accept AI’s arrival.
The latest iteration of Google Search’s “Helpful Content Update” replaced the long-standing phrase “written by people” with a statement indicating that Google will continue monitoring “content created for people” in order to better index and rank sites on its search engine.
In other words, regardless of whether a human or a machine produces the content, Google will focus on the quality of the content rather than its author. However, Google still penalizes improper uses of AI, such as those that degrade content summaries.
While lawmakers and regulators grapple with how to build systems that can distinguish human-generated content from AI-generated content, Google is about to change the traditional rules of SEO.
Taking on ChatGPT
Last week, we learned that Google is also reportedly taking steps to rival OpenAI’s GPT-4 model with its own AI software, Gemini.
According to a September 14 report by The Information, Gemini, which hails from Google’s DeepMind AI division, is a large language model (LLM) similar to GPT-4 that will also work as a chatbot and generate “original” text based on text prompts.
Google is currently offering developers early access to a beta version of Gemini, claiming that its larger version under development will directly compete with GPT-4.
Calling Out AI-Generated Images with Watermarks
The search engine giant also revealed a new tool at the end of August that will supposedly help it better identify AI-generated images, including those being used improperly.
SynthID, which is currently in beta, is designed to combat misinformation by adding an invisible, permanent watermark to images to identify which of them were computer-generated. The watermark is embedded directly into the pixels of an image created by Imagen, and remains regardless of whether the image is modified through filters or color alterations.
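Google has not published SynthID’s algorithm, so the mechanics above are all we know. Purely as an illustration of what “embedding a mark directly into pixels” means, here is a naive least-significant-bit (LSB) sketch – a classic steganography toy, not SynthID’s method. Unlike SynthID’s watermark, a naive LSB mark would not survive filters, compression, or color alterations; all function names below are hypothetical.

```python
# Illustrative only: SynthID's technique is proprietary. This naive
# least-significant-bit (LSB) scheme shows the general idea of hiding
# a watermark inside pixel values, but it is far more fragile than
# what Google describes.

def embed_bit(channel: int, bit: int) -> int:
    """Overwrite the least significant bit of an 8-bit channel value."""
    return (channel & ~1) | bit

def extract_bit(channel: int) -> int:
    """Read back the least significant bit of a channel value."""
    return channel & 1

def embed_mark(pixels: list[int], mark: list[int]) -> list[int]:
    """Hide the bits of `mark` in the first len(mark) channel values."""
    out = pixels[:]
    for i, bit in enumerate(mark):
        out[i] = embed_bit(out[i], bit)
    return out

def extract_mark(pixels: list[int], n_bits: int) -> list[int]:
    """Recover an n_bits-long mark from the channel values."""
    return [extract_bit(p) for p in pixels[:n_bits]]

# Example: hide the 4-bit mark 1,0,1,1 in a tiny flattened "image".
image = [200, 37, 118, 255, 64]
marked = embed_mark(image, [1, 0, 1, 1])
print(extract_mark(marked, 4))  # → [1, 0, 1, 1]
```

The pixel changes are imperceptible (each channel shifts by at most 1), which is the visual-invisibility property SynthID also claims; the hard part, and the part Google has not disclosed, is making such a mark robust to edits.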
UK’s CMA Watchdog Issues 7 AI Governance Principles
This week, the Competition and Markets Authority (CMA), the UK’s primary antitrust regulator, published a report containing seven proposed principles for the responsible development and use of AI foundation models (FMs).
Throughout the September 18 report, the CMA cautioned against behaviors that weaken competition or skirt consumer protection laws – including spreading misinformation or enabling AI fraud via machine hallucinations.
Editor’s note: This article was written by an nft now staff member in collaboration with OpenAI’s GPT-3.5.