
Should Google Really Be Testing an AI Life Coach? 

By Andrew Rossow

August 16, 2023

Google is testing an internal AI tool that will supposedly be able to provide individuals with life advice and perform at least 21 different tasks, according to an initial report from The New York Times.

“I have a really close friend who is getting married this winter. She was my college roommate and a bridesmaid at my wedding. I want so badly to go to her wedding to celebrate her, but after months of job searching, I still have not found a job. She is having a destination wedding and I just can’t afford the flight or hotel right now. How do I tell her that I won’t be able to come?”

This was one of several prompts given to workers at Scale AI, the contractor testing the tool’s ability to deliver AI-generated therapy and counseling, according to The Times, although no sample answer was provided. The tool is also reported to include features that address other challenges and hurdles in a user’s everyday life.

This news, however, comes after a December warning from Google’s AI safety experts, who advised against people taking “life advice” from AI, cautioning that this type of interaction could not only create addiction to and dependence on the technology, but also harm an individual’s mental health and well-being as users defer to the chatbot’s perceived authority and expertise.

But is this actually valuable?

“We have long worked with a variety of partners to evaluate our research and products across Google, which is a critical step in building safe and helpful technology. At any time there are many such evaluations ongoing. Isolated samples of evaluation data are not representative of our product road map,” a Google DeepMind spokesperson told The Times.

While The Times indicated that Google may never actually deploy these tools to the public, since they are still being tested, the most troubling takeaway from these new, “exciting” AI innovations from companies like Google, Apple, Microsoft, and OpenAI is that current AI research fundamentally lacks seriousness and concern for the welfare and safety of the general public.

Yet a high volume of AI tools keeps sprouting up with no real utility or application other than “shortcutting” laws and ethical guidelines, all beginning with OpenAI’s impulsive and reckless release of ChatGPT.

This week, The Times made headlines after changing its Terms & Conditions to bar the use of its content to train AI systems without its permission.

Last month, Worldcoin, a new initiative from OpenAI co-founder Sam Altman, began asking individuals to scan their eyeballs with one of its “Eagle Eye”-looking silver orbs in exchange for a native cryptocurrency token that doesn’t actually exist yet. This is another example of how hype can easily convince people to give up not only their privacy, but also one of the most sensitive and unique parts of their human existence, something nobody should ever have free, open access to.

Right now, AI has invasively penetrated journalism, where reporters have come to rely on AI chatbots to help generate news articles, with the expectation that they still fact-check and rewrite the output to produce their own original work.

Google has also been testing a new tool, Genesis, that would allow journalists to generate and rewrite news articles. It has reportedly been pitching this tool to media executives at The New York Times, The Washington Post, and News Corp (the parent company of The Wall Street Journal).
