
How Much Do You Trust an AI Conversation?

Generative AI chatbots like ChatGPT, Microsoft’s Bing/Copilot, and Google’s Gemini are the vanguard of a significant advance in computing. AI chatbots can be compelling tools, whether you need to find just the right word, draft simple legal documents, start awkward emails, or code in unfamiliar languages. Much has been written about how AI chatbots “hallucinate,” making up plausible details that are completely wrong. That’s a real concern, but worries about privacy and confidentiality have gotten less attention, so let’s think about AI security.

Of course, many conversations aren’t sensitive, such as asking for recommendations of bands similar to The Guess Who or for help writing an AppleScript. But people also use AI chatbots to analyze or summarize information, pasting in the contents of entire files. Services like ChatPDF and features in Adobe Acrobat let you ask questions about a PDF you provide, so AI chatbots offer a good way to extract content from a lengthy document.

From a productivity standpoint, these tools are potentially quite useful. But such conversations also provide a troubling opportunity to reveal personally sensitive data or confidential corporate information. We’re not talking hypothetically here: Samsung engineers inadvertently leaked confidential information while using ChatGPT to fix errors in their code. So what might go wrong?

What Will the Chatbots Do with the Information? (What, Me Worry?)

One significant concern is that sensitive information might be used to train future versions of the large language models behind the chatbots, and could then be regurgitated to other users in unpredictable contexts. People worry about this because early large language models were trained on text publicly accessible online, without the knowledge or permission of the authors of that text. And as we all know, lots of stuff unintentionally ends up on the Internet.

Privacy policies for the best-known AI chatbots state that uploaded data won’t be used to train future versions, but there is no guarantee that companies will adhere to those policies. Even if they intend to, there’s room for error—conversation history could accidentally be added to a training set. And because chatbot prompts aren’t simple database queries, there’s no easy way to determine whether confidential information has made its way into a large language model.

Other Concerns for the Future

Because chatbots store conversation history, anything you add to a conversation ends up in an uncontrolled environment where, at a minimum, employees of the chatbot service could see it, and where it could potentially be shared with partners. (Some chatbots let you turn conversation history off.) Thinking about AI security, such information could also be vulnerable should attackers compromise the service and steal data. These privacy considerations are the main reason to avoid sharing sensitive information with chatbots.

Adding emphasis to that recommendation is the fact that many companies operate under agreements that specify how data must be handled. For instance, imagine a marketing agency tasked with generating an ad campaign for a new product. It should avoid using details about the product in AI-based brainstorming or content generation, because if those details were revealed in any way, the agency could be in violation of its contract with the manufacturer, potentially subjecting it to significant legal and financial penalties.

In the end, although it may feel like you’re having a private conversation with an AI chatbot, don’t share anything you wouldn’t tell a stranger. As Samsung’s engineers discovered, loose lips sink chips. Think about AI security whenever you’re interacting with a bot, and keep your secrets secret.