ChatGPT for Business: To Use Or Not To Use
There has been a lot of discussion around the use of AI tools like ChatGPT and the potential efficiencies they could bring to organizations; however, there has been less talk about the security, privacy, and compliance concerns this new technology may raise for your organization.

With any new technology, special care is needed to determine a rationale for its use. Several companies have jumped on the bandwagon to integrate ChatGPT (or other AI solutions) into their products with claims of improved productivity. Many of these claims may be true, but there are other areas of concern when using these technologies, especially in regulated or contractually obligated industries.

Read on for a high-level overview of ChatGPT and an exploration of some of the concerns over its use from a security, privacy, and compliance perspective. There may be other concerns as well, but we are going to focus on the following areas:

  1. Privacy/Security
  2. Vendor Due Diligence/Contractual Obligations
  3. Reputation

What is ChatGPT?

ChatGPT is a by-product of GPT-3, OpenAI's third-generation Generative Pre-trained Transformer. In basic terms, ChatGPT is powered by a Large Language Model (LLM): a statistical tool that predicts the probability of the next word in a sentence. You can ask ChatGPT questions in plain English, and it will respond with a 'statistically plausible' answer based on a very large data set. The answers appear well written and credible, but if you dig a little deeper, you may find it generates incorrect answers by unintentionally stitching the wrong pieces of information together.
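To make the "predict the next word" idea concrete, here is a minimal sketch in Python. It uses simple bigram counts rather than the neural network a real LLM uses, and the tiny corpus is invented for illustration, but the core idea is the same: the next word is chosen by probability, not by understanding.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the web-scale text GPT-3 was trained on.
corpus = (
    "the model predicts the next word "
    "the model learns word statistics "
    "the next word is chosen by probability"
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word_probabilities(word):
    """Return each candidate next word with its estimated probability."""
    counts = following[word]
    total = sum(counts.values())
    return {candidate: count / total for candidate, count in counts.items()}

# In our corpus, 'the' is followed by 'model' twice and 'next' twice,
# so each gets probability 0.5 -- plausible-sounding, not "understood."
print(next_word_probabilities("the"))
```

GPT-3 does this at vastly greater scale, conditioning on thousands of preceding words instead of just one, but the output is still a statistical guess rather than a verified fact.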

Where did GPT-3 get the information it presents? To the best of our knowledge, its training data was drawn from at least five sources:

  1. Common Crawl – obtained by crawling websites, deduplicating the information, and using a ranking process to identify higher-quality sites;
  2. WebText2 – text extracted from websites, again deduplicated and filtered against 'quality' criteria;
  3. Books1 – a collection of unpublished novels;
  4. Books2 – a second collection of books whose exact contents are undetermined; and
  5. Wikipedia – the English-language version.

Again, ChatGPT uses statistics derived from a large body of content to predict the next words in a sentence; it may not be accurate, and it does not 'understand' the context of the information it presents. ChatGPT may be good at answering general questions, but it may not be trustworthy when you rely on it for precise answers.

As we all know, we can’t believe everything we read on the Internet. In formal writing (such as school reports), most teachers and instructors prohibit citing Wikipedia as a reference. You can think of ChatGPT as another kind of 'Google' search, but because it can write and summarize content, people are easily persuaded into accepting its responses as 'truth.' ChatGPT may provide a productivity boost, but it is important to recognize when it responds with inaccurate information.

Privacy/Security Considerations

One major concern regarding the use of ChatGPT is the sharing of personal information (or other sensitive or proprietary information) with it. Any information shared with ChatGPT (or OpenAI) can be used to improve their services. OpenAI also warns users to use the service at 'your own risk,' since it is still in 'beta.' To be fair, OpenAI allows you to opt out of having your content used this way, but doing so could diminish functionality or limit the service's ability to address certain use cases. (See OpenAI’s Terms of Use for further information.)

Organizations need to take care when sharing personal or sensitive information. For instance, ChatGPT may be able to review your code, but your code should be considered proprietary information. When it comes to processing personal data, you must provide privacy notices and obtain consent for processing that data, which in most cases includes processing by subprocessors like OpenAI. OpenAI may execute a Data Processing Addendum, but realize that once information is 'shared' into that huge data lake, it can be hard to secure (or to ensure it isn't shared further for other purposes).
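One common mitigation is to redact obvious identifiers before a prompt ever leaves your environment. The sketch below is a minimal, hypothetical illustration using two regular expressions; a real deployment would rely on a vetted PII-detection tool and a documented data-handling policy, not a pair of regexes.

```python
import re

# Hypothetical patterns for illustration only -- production systems
# should use a dedicated PII-detection library covering many more
# identifier types (names, phone numbers, account IDs, etc.).
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(text: str) -> str:
    """Mask obvious identifiers before the text leaves your boundary."""
    text = EMAIL.sub("[EMAIL]", text)
    text = SSN.sub("[SSN]", text)
    return text

prompt = "Summarize this ticket from jane.doe@example.com, SSN 123-45-6789."
print(redact(prompt))
# -> Summarize this ticket from [EMAIL], SSN [SSN].
```

The design point is where the redaction happens: inside your own systems, before any external AI service sees the data, so the decision about what leaves your boundary stays under your control.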

Vendor Due Diligence and Contractual Obligations

Organizations may be required to perform vendor due diligence activities on the third parties they use. These activities generally include reviews of attestations, certifications, or completed questionnaires to provide assurance that the vendor implements controls ensuring adequate security and privacy.

Organizations may also be under contractual obligations to perform these reviews and obtain assurances. Currently, it doesn’t appear that OpenAI maintains attestations such as SOC 2 Type 1 or Type 2, or certifications such as ISO 27001, which are generally used as demonstrable evidence in vendor reviews.

Furthermore, certain contractual obligations may be passed down to tertiary organizations. For example, you may use a service provider that is required to meet your security and privacy requirements. That provider may in turn use another service provider, which should meet the same security and privacy standards as the first. This may be the case where the service provider uses an API call to OpenAI to process your data.

Unless the service provider has established its own 'private' AI solution, it may effectively be 'subcontracting' work to another AI provider. If this secondary provider doesn't offer the level of security and privacy the end user expects from the primary service provider, the primary provider may be in breach of its contractual obligations by using the secondary provider.
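To see why this matters, consider how little code it takes for a provider's seemingly in-house feature to become a pass-through to a subprocessor. The sketch below is hypothetical (the function name, model choice, and prompt are ours), but it calls OpenAI's actual chat completions endpoint to show the data flow the customer never sees.

```python
import os
import requests

# Illustrative sketch of a primary service provider's "summarize"
# feature. The customer sees only this provider, but their text is
# forwarded to a second provider (here, OpenAI) -- making OpenAI a
# subprocessor whose controls the customer never vetted.

OPENAI_URL = "https://api.openai.com/v1/chat/completions"

def summarize_for_customer(customer_text: str) -> str:
    """What looks like an in-house feature is really a pass-through."""
    response = requests.post(
        OPENAI_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",  # example model name
            "messages": [
                {"role": "user", "content": f"Summarize: {customer_text}"}
            ],
        },
        timeout=30,
    )
    response.raise_for_status()
    # At this point the customer's data has already left the primary
    # provider's security boundary and its contractual guarantees.
    return response.json()["choices"][0]["message"]["content"]
```

If your vendor questionnaire only asks about the primary provider's controls, a data flow like this one can quietly route customer data to a party your contract never contemplated.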

Reputational Concerns

If your job (or organization) depends on being an 'expert' on certain topics, using ChatGPT may pose reputational hazards. Although ChatGPT can produce content at scale, vetting that content for accuracy may end up taking you longer than writing it yourself.

In addition, accurately describing your product or service is essential to protecting yourself against the unfair or deceptive trade practices regulated by the Federal Trade Commission. Using ChatGPT to embellish marketing content without human review could lead to violations.

Final thoughts

ChatGPT may be a useful tool, but that is exactly what it is: a 'tool.' Just like any tool, you should take special care if you plan to use it. There are use cases where ChatGPT could make work more efficient, but others call for more limitations or restrictions on its use.

Organizations cannot ignore the risks associated with any new technology. They must analyze these risks, determine a plan of action to mitigate them, implement appropriate controls, and monitor the use of and processes around these tools. ChatGPT will not replace humans anytime soon and must be treated with a certain level of scrutiny given the risks outlined above. In this Compliance Director’s opinion, organizations shouldn’t fall for the hype; they should perform due diligence on the use of ChatGPT (and other AI solutions) for their particular business case.

If your organization needs help to determine some of the appropriate use cases to maintain compliance with regulatory, contractual, or industry best practices, get in touch. We have experts at Thoropass (formerly Laika) who can help!

*This blog was NOT generated by ChatGPT.