It’s no secret that artificial intelligence (AI) is on the rise. People and businesses are using it to write their copy, craft their designs, and even strategize their businesses. But when you remove human oversight from the work, it poses a security risk. Who or what is cross-checking the work of AI? What are the copyright or plagiarism rules for material AI writes? Many security questions arise when it comes to artificial intelligence, so let’s dig into three different platforms and their security.

ChatGPT

“ChatGPT” is short for Chat Generative Pre-Trained Transformer. It’s a chatbot that uses artificial intelligence to essentially mimic human conversation. It’s built on a large language model (LLM), which pulls from massive data sets and uses neural networks to process information and generate human-like responses. Just like with any platform, ChatGPT comes with its security risks. When you sign up for a ChatGPT account, you’re registering with OpenAI, the company behind ChatGPT, and handing over your personal information. That information, along with the conversations you have with the bot, then becomes available to OpenAI. In March 2023, OpenAI had a roughly nine-hour outage during which some users could see portions of other users’ chat histories. Another concern is the potential misuse of the platform: ChatGPT spits out code faster than humanly imaginable, and has therefore become an invaluable tool for hackers and other cybercriminals.

With the security risks, however, also come the security measures OpenAI has in place to keep your information safe. Here are some of the measures the company currently has in place:

  • Access control: OpenAI limits access to information to a select group of people within the company to mitigate the spread or exposure of users’ information.
  • Encryption: Communication and storage are encrypted to protect against data breaches.
  • Monitoring and logging: OpenAI monitors usage to detect and deflect any unauthorized activity.
  • Auditing: OpenAI conducts routine security audits and assessments to identify gaps and address vulnerabilities.
  • Authentication: Platform users are required to authenticate their identity. 

A few ways to ensure your safety as you use ChatGPT are to cross-check the information you get or give, be aware of any bias the platform may have, use a strong password, and always, always, always report security issues.

Microsoft Copilot

This artificial intelligence platform is built on the Microsoft Azure OpenAI Service and runs entirely in the Azure cloud. The difference between ChatGPT and Microsoft Copilot is that Copilot is integrated directly into the Microsoft ecosystem, which lets it leverage your data to provide more personalized answers. It’s important to note that Microsoft doesn’t share your information with any third party (including OpenAI) unless you’ve granted permission to do so, and the Microsoft team doesn’t use your data to train Copilot or its AI features, again, unless you’ve granted permission. Instead, Microsoft Copilot works from your existing data, policies, and permissions.

The prompts and responses (basically, the conversations you have with or information you gather from Microsoft Copilot) are not available to other customers and are not used for training purposes. Additionally, your data never leaves the Microsoft cloud. However, there are some privacy concerns. Copilot analyzes user behavior and data to curate responses and make suggestions, and since it’s integrated into the rest of the Microsoft ecosystem, it can use your organizational data (things like documents, emails, contacts, etc.) to generate responses.

Copilot complies with the security measures that protect the entire Microsoft ecosystem, but there are additional considerations given that it’s a large language model with broad access to your data. Since it’s integrated into the entire Microsoft 365 system, it’s important to use your discretion when deciding what this platform can access.

Google AI Studio

This platform is a browser-based integrated development environment (IDE) used specifically for prototyping with generative models, a type of machine learning model that learns the underlying patterns or distribution of a data set in order to generate similar data. Google AI Studio lets you experiment with different prompts until you find one you’re happy with, and once that happens, you can export it as working code in the programming language you need.
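
The exported code is essentially just a call to the Gemini API. As a rough sketch (assuming Google’s google-generativeai Python SDK; the model name and prompt below are placeholders, so check AI Studio’s own export for the exact code):

    import google.generativeai as genai

    # Authenticate with your own API key (placeholder shown here).
    genai.configure(api_key="YOUR_API_KEY")

    # The model name is an example; use whichever Gemini model you prototyped with.
    model = genai.GenerativeModel("gemini-1.5-flash")

    # Send the freeform prompt you refined in AI Studio.
    response = model.generate_content("Summarize our password policy in plain English.")
    print(response.text)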

There’s a wide variety of prompt interfaces that help you with the work you’re doing:

  • Freeform prompts – an open-ended prompting experience that helps you generate content and instructions
  • Structured prompts – these let you guide the model’s output by giving examples of requests and replies, which gives you more control over the structure of the response
  • Chat prompts – these help you build experiences based on back-and-forth conversations (see the sketch after this list)
  • Tuned models – an advanced technique that improves a model’s responses by training it on additional examples
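
A chat prompt, for instance, maps onto the SDK’s multi-turn interface. A minimal sketch, again assuming the google-generativeai Python SDK and a placeholder model name:

    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")
    model = genai.GenerativeModel("gemini-1.5-flash")  # example model name

    # start_chat keeps the conversation history so each reply has context.
    chat = model.start_chat(history=[])
    first = chat.send_message("What should a small business include in a password policy?")
    print(first.text)

    # Follow-up messages automatically carry the earlier turns along.
    followup = chat.send_message("Condense that into three bullet points.")
    print(followup.text)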

Now that we know what Google AI Studio is and what it can do, it’s time to talk about its security. When you use the Gemini API (the service behind Google AI Studio), you can adjust four safety filters: harassment, hate speech, sexually explicit content, and dangerous content. For these categories, Google AI Studio gives you the opportunity to determine what is best for your audience and use case. There are also default settings and built-in safeguards that protect against core issues like child safety, and those core safeguards cannot be adjusted.
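
Those four adjustable filters show up directly in the API. Here’s a rough illustration using the google-generativeai Python SDK; the thresholds chosen below are examples, not recommendations:

    import google.generativeai as genai
    from google.generativeai.types import HarmBlockThreshold, HarmCategory

    genai.configure(api_key="YOUR_API_KEY")

    # Each adjustable category gets its own blocking threshold.
    # These particular thresholds are illustrative; tune them to your use case.
    model = genai.GenerativeModel(
        "gemini-1.5-flash",  # example model name
        safety_settings={
            HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
            HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
            HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
            HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
        },
    )

    response = model.generate_content("Draft a workplace anti-harassment policy.")
    print(response.text)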

When it comes to the original content you create, Google will not claim ownership over it, but you do give Google permission to generate the same or similar content for others. When you use the paid services, Google does not use your prompts or associated files to improve its products. Other data Google collects while you use the paid services is subject to the Google Controller-Controller Data Protection Terms and the overall Google Privacy Policy. When you use the unpaid services, the license you grant to Google under the “Submission of Content” terms also extends to Google AI Studio, and Google is free to use that data to improve and develop its products and services.

The Bottom Line

Just like with any artificial intelligence, it’s important to use your discretion and wisdom. No matter which app you use, you take on some sort of security risk, and if you’re using a free version, you may be subject to more relaxed security standards. So whether you choose to use AI or not, keep your security hat on and use your judgment! If you’re using AI in your business and are uncertain about how it can impact your overall security, work with a trusted IT Managed Services Provider to close any security gaps.