From a Chief Information Officer's perspective, “Star Wars” is simply the story of a massive data breach.
Stolen electronic files were entrusted to artificial intelligence. AI then leaked the most sensitive secrets in the galaxy to a teenage moisture farmer with a side hustle in droid repair.
Two hours later there is one less planet in the galaxy and only a cloud of greeble where the Death Star used to be.
Someone in IT probably got force-choked for that!
There may be a contemporary cybersecurity lesson to apply. Can we trust artificial intelligence? Should we? What controls are available?
At the dawn of the artificial intelligence era, concerns about privacy and trust are paramount. It is essential to understand how our data is handled and whether we can truly trust AI with our most sensitive information. Let's explore these concerns, with a little background, in detail.
Large Language Models, like ChatGPT, have become part of our digital lives since OpenAI made them widely accessible in late 2022. These models are trained on massive datasets containing texts from books, websites, and human interactions. Through extensive analysis, they learn the intricate relationships between words, allowing them to generate human-like text.
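The idea of "learning relationships between words" can be made concrete with a toy sketch. The following is purely illustrative: real large language models use neural networks trained on billions of tokens, but even a simple bigram counter shows the core mechanic of predicting the next word from what it has seen before.

```python
from collections import Counter, defaultdict

# Toy training corpus (invented example text).
corpus = (
    "the droid carries the plans "
    "the droid carries the message "
    "the rebel carries the plans"
).split()

# Count which word tends to follow which.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("droid"))    # "carries"
print(predict_next("carries"))  # "the"
```

A real model generalizes far beyond raw counts, but the principle is the same: whatever text it trained on shapes what it says next.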
We asked ChatGPT to use the tone of Gilbert Gottfried to explain how large language models work and it replied: “These models are like super-duper word wizards that can talk like us thanks to all the internet munching and word math they've done."
Indeed!
Privacy policies and terms and conditions reveal how AI developers treat our data.
Here is an excerpt from Microsoft’s Bing Conversational Experiences and Image Creator Terms §8.
"…by using the Online Services, posting, uploading, inputting, providing or submitting content you are granting Microsoft, its affiliated companies and third party partners permission to use the Captions, Prompts, Creations, and related content in connection with the operation of its businesses (including, without limitation, all Microsoft Services), including, without limitation, the license rights to: copy, distribute, transmit, publicly display, publicly perform, reproduce, edit, translate and reformat the Captions, Prompts, Creations, and other content you provide; and the right to sublicense such rights to any supplier of the Online Services."
In a similar vein, OpenAI indicates that ChatGPT "may use the data you provide us to improve our models": "When you share your data with us, it helps our models become more accurate and better at solving your specific problems and it also helps improve their general capabilities and safety."
In these terms, Microsoft and OpenAI establish the right for their products to train on your prompts and mirror your language.
Your interactions with AI can inadvertently influence its outputs.
Take care that your conversations with artificial intelligence do not include sensitive information: personal information, health information, trade secrets, or client information and data.
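One practical habit is to scrub obvious sensitive patterns from a prompt before sending it. The sketch below is a minimal, illustrative example only; the pattern names and regular expressions are simplistic placeholders, not a complete data-loss-prevention tool, and real deployments would need far broader coverage.

```python
import re

# Hypothetical patterns for common sensitive data (illustrative, not exhaustive).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely sensitive substrings with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Contact leia@rebellion.org or 555-867-5309 about the project."))
# "Contact [EMAIL REDACTED] or [PHONE REDACTED] about the project."
```

Trade secrets and client strategy, of course, rarely match a tidy regex, which is why human judgment about what goes into a prompt remains the first line of defense.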
In your work life, you should be especially careful about trusting artificial intelligence with trade secrets.
The reason? If your trade secrets address a proprietary process or truly unique product, your language will make up a large portion of the total conversation on the topic. The dearth of information (other than yours) on the topic will make your input prominent. The more esoteric your topic and input, the more likely that your concepts, ideas, and speech patterns will become part of the conversation AI has when others stumble onto this topic.
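The mechanism above can be illustrated with the same toy counting approach used to explain language models. In this sketch (all phrases are invented), a single proprietary sentence dropped into a sea of generic text still dominates the model's answer on that topic, because it is the only training data the model has about it.

```python
from collections import Counter, defaultdict

# Lots of generic public text, plus one unique (hypothetical) trade secret.
public_text = "the weather is nice today " * 100
your_secret = "hyperdrive coolant uses formula seven "  # invented example

corpus = (public_text + your_secret).split()
following = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    following[a][b] += 1

# Anyone who prompts about "hyperdrive" gets your proprietary continuation,
# because your input is the only data on that esoteric topic.
print(following["hyperdrive"].most_common(1)[0][0])  # "coolant"
```

Common topics drown any one contributor's phrasing in millions of other voices; esoteric ones do not.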
If you're concerned about your data being used to train AI models, there are protections available.
OpenAI offers a User Opt Out Request form.
Both ChatGPT Enterprise and Bing Chat Enterprise provide enhanced data protection features.
ChatGPT Enterprise licensing includes several such protections. Despite the safeguards expressed in the enterprise agreements for ChatGPT and Bing Chat, however, no system is entirely invulnerable. In fact, in September 2023, Microsoft AI researchers accidentally exposed 38 terabytes of confidential data.
Refrain from sharing sensitive data, and adopt strategies to protect your privacy while harnessing the power of artificial intelligence.
While enterprise-level services like ChatGPT Enterprise and Bing Chat Enterprise offer additional protections, remain cautious about what you share with these tools. There is no guarantee that the services will not be breached. Be prudent. Avoid sensitive specifics in your prompts to these artificial intelligence services. Even though things seemed to work out for the best in the end, our advice is: don't entrust secret blueprints for the Death Star to a robot, no matter how intelligent it seems to be!