While there is now an official ChatGPT app that you can download on your best iPhone, Apple has just told its staff that they cannot use ChatGPT – or any generative AI system – in their work for the company. According to an exclusive report in the Wall Street Journal, Apple sent an internal note to all its staff with a simple, clear message – ‘No AI.’
This doesn’t seem to be just about the growing public wariness of AI, or about Apple not wanting a competitor’s product used within its head office – instead, it’s an issue of security.
AI chatbots have a problem
And it all comes down to chat history. Previously, when a user typed something into ChatGPT or another chatbot, their words would be stored and used to train the chatbot in question. This created a problem – saved chat histories were left open to being viewed by bad actors. Since then, OpenAI has addressed the issue with an option not to save chat history, which also stops the AI from being trained on your inputs.
It remains an issue to some degree, however, and Apple is obviously very keen to avoid the security risks involved. Another tool named in the report is GitHub’s Copilot, which can be used to write code.
Apple is worried that confidential company data – or even AI-written code that ends up in its products – could leak from an AI chatbot like ChatGPT. It’s not alone, either – other businesses, such as Verizon and JPMorgan Chase, have also banned AI chatbots for staff, presumably for the same reasons as Apple.
With ChatGPT coming to iPhone, it’s likely that Apple will use AI to make certain aspects of its ecosystem more powerful – something we might see at the upcoming WWDC 2023.