Google Issues Serious New Warning For All Android And iPhone Users

Originally posted on Forbes.

Android and iPhone updates this year will be dominated by AI—much of it from Google. But this comes with a serious new warning for every user, and it should change how we use our phones.

The rollercoaster AI smartphone ride is now well underway. It was always going to be the case that integrating generative AI into the smartphone apps we use most would eclipse last year’s ChatGPT take-up. And here we now are.

But not so fast—all this comes with a huge risk to your security and privacy.

It seems we all have a blind spot when it comes to generative AI chatbots. We might take care over the apps we install, the permissions we grant, the browsers we use, and the data we share with Facebook, Google, and others. But put us in front of an AI chatbot and we forget all this—suddenly we find ourselves in what feels like a private chat with a helpful new friend. And we want to share.

But this is clearly no friend; it is a front for a multi-billion-dollar computing ecosystem, ultimately funded by advertising and data broking.

I’ve warned about this before, with the introduction of AI chat into our private messaging apps. Now, just as Bard becomes Gemini and a raft of new apps start to make their way onto our phones, Google has itself warned all Android and iPhone users to be very careful with these new technologies.

“Please don’t enter confidential information in your conversations or any data you wouldn’t want a reviewer to see or Google to use to improve our products, services, and machine-learning technologies,” Google warns. “Google collects your Gemini Apps conversations, related product usage information, info about your location, and your feedback.” It says the information is used to “improve, and develop Google products and services and machine learning technologies.”

Fortunately, Google assures that “your Gemini Apps conversations are not being used to show you ads,” though that might change, at which point “we will clearly communicate it to you.”

The risk here is a huge privacy nightmare for anyone over-sharing with the various AI chatbots helping us write our business plans and sales presentations, or cheat on our school homework. When you click off the chat, the questions you have asked and the answers you have been given become part of a stored record, which means they can be retrieved and reviewed. And they can also potentially leak.

The standalone apps are just the start, of course, and Google also warns that “when you integrate and use Gemini Apps with other Google services, they will save and use your data to provide and improve their services, consistent with their policies and the Google Privacy Policy. If you use Gemini Apps to interact with third-party services, they will process your data according to their own privacy policies.”

The risks from generative AI are just now becoming understood. When it comes to messaging, for example, I have warned that Google Gemini (née Bard) will seemingly ask to review all your past private messages to shape the context and content for its suggestions. It will also breach Messages’ end-to-end encryption.

That off-device, open storage is the real issue here. Google says that data will be stored “by default” for up to 18 months, “which you can change to 3 or 36 months in your Gemini Apps Activity setting. Info about your location, including the general area from your device, IP address, or Home or Work addresses in your Google Account, is also stored with your Gemini Apps activity.”

This isn’t limited to Google, of course. This level of data collection and usage is fairly typical across the emerging Gen-AI industry. How we come to judge the security and privacy of a Google versus an OpenAI, for example, remains to be seen.

But, as ESET’s Jake Moore warns, “any data that we share online—even in private channels—has the potential of being stored, analyzed and even shared with third parties. When information is a premium and even seen as a currency of its own, AI models can be designed to delve more deeply into users divulging vast amounts of personal information. Data sharing can ultimately create security and privacy issues in the future and many users are simply unaware of the risks.”

Google says you can turn off the long-term data collection within Gemini in its settings. “Future conversations won’t be sent for human review or used to improve our generative machine-learning models… You can also delete a chat from your pinned and recent chats. When you do this, it also deletes the related activity from your Gemini Apps Activity.” But even when you do this, “your conversations will be saved with your account for up to 72 hours to allow us to provide the service and process feedback. This activity will not show up in your Gemini Apps Activity.”

As I’ve commented before, the on-device/off-device nature of the AI analysis that will drive this new generation of smartphone functionality will become the new great divide. Apple will likely do as much as it can with its own apps and services on-device—if that works. And we know it’s now experimenting with on/off device performance. Google will do much more in its cloud, given its very different setup and focus.

For the millions of Android and iPhone users now with Gemini-powered apps on their devices, there are some hard choices to make. The rest of us won’t be far behind.

We can’t have it both ways. Either we value our privacy and the huge strides made in recent years on private browsing, tracking, and location sharing, or AI integrated into mainstream apps is just too cool to live without and we will use it for everything.

If it’s the latter, then this may become the ultimate “be careful what you wish for.”

Source: Forbes