ChatGPT, PII, and Data Protection

Using ChatGPT might be convenient, but it runs the risk of exposing PII


ChatGPT has made headlines and established itself as “the AI technology.” Since its debut, much of the tech industry has scrambled to chase the digital goldmine and follow in its footsteps. No doubt you’ve seen AI and chat models appearing across software and services; everyone wants to be the next big thing. But while enterprises busy themselves fitting ChatGPT into their business practices – whether that’s a good idea or not – they could be overlooking some critical security concerns.

Every enterprise seeking to use ChatGPT in a business setting needs to think seriously about what it inputs, prompts, and receives. Remember, ChatGPT is not a conscious, living mind capable of decisions; it’s an advanced machine-learning tool that generates answers based on its catalog of learned data. It cannot tell whether the information entered is a security risk or puts others’ data in a hacker’s crosshairs.

In a business context, this means using ChatGPT for general operations potentially places PII (personally identifiable information) at risk. That exposure is a serious regulatory offense that can cascade into devastating breaches or loss of customer information.

The flow of PII

All modern businesses using some form of internet-facing tech will encounter PII, the digital form customer information now takes. Names, addresses, and even Social Security numbers are part of the PII web and thus require safeguarding. Depending on the organization, PII is housed in data stores protected by traditional security methods, like network monitoring, firewalls, anti-virus, and even custom solutions.

Personal information turns into a quandary when questions about accessibility arise. Who can see, access, transport, and save this data? Within a business network, where does it go? For example, PII could be accessed by accounting, management, and security staff. Secure networks enclose this data in VLANs, segmenting the network so PII doesn’t “leak” into unsafe, observable areas. But once you introduce ChatGPT into the mix, you create a brand-new problem.

Where automating workflows creates issues

Companies are eager to automate any workflow they can, and they swiftly turn to tools like ChatGPT to fill what they see as time-consuming gaps. This includes items like emails, newsletters, and even responses to customer concerns and queries. Instead of relying on the human element, why not have an automatically generated response do it?

Unfortunately, prompting ChatGPT (or any similar model) to create responses to emails, queries, or other business topics puts PII in the crosshairs of these prompt engines. ChatGPT is not a platform housed or secured on the digital premises of a business and is therefore a “no visibility” zone. Where does that data go? Who can see it? It opens a Pandora’s box of compliance problems and creates numerous opportunities for PII to be exposed to non-business entities, malevolent or otherwise.

ChatGPT combined with human error also makes for a disastrous recipe. If staff members use ChatGPT for email queries, responses, and customer service, what happens if they disclose sensitive information? What if they expose PII? Considering human error is one of the biggest cyber problems out there, it’s not an ideal scenario.

Laws, mandates, and regulatory compliance

ChatGPT and other forms of AI are not exempt from data compliance laws and legislation. Therefore, freely diving into the service without serious consideration of how it could impact your business is a grievous and potentially costly error.

Domestic and international data laws alike define the regulatory requirements for handling customer and private data. Neither ChatGPT nor any similar service is exempt from HIPAA, PCI DSS, or GDPR. Depending on the mandate, serious policies and practices are required around data collection, typically agreements between all relevant parties handling customer data. ChatGPT is not an associated business party – therefore, data “given” to it (or to similar machine-learning models) is effectively handed to unknown parties. That puts the information at serious risk and invites penalties and other legislative repercussions for running afoul of these mandates.

ChatGPT, like any third-party service, presents risk as a blind spot. Any area of organizational data flow that security teams have no visibility into is an immense hazard to privacy, brand reputation, and security.

Know your risks and how to avoid them

The risks involved with ChatGPT do not mean it has no use, nor that you cannot use it. But those risks carry serious penalties if left unaddressed. Guidelines and organizational rules must be established to avoid regulatory pushback, security breaches, and loss of client trust.

Obtain Permissions and Clarify ChatGPT Use

If you plan to use ChatGPT in any part of your business operations, you must be transparent about its use. Any PII collected or processed through ChatGPT also requires the consent of the individuals it belongs to. It’s similar to other disclosures about how data is collected, and all businesses are required to inform customers and web visitors about these collection policies.

However, in the case of ChatGPT, you will need explicit permission and consent. Details about what ChatGPT is used for, and how, are also required – for example, with customer support or email responses.

Stay Updated

AI and the use of machine learning for the purposes we’ve discussed are changing rapidly. Governance surrounding AI and its social and professional impact is still a developing topic. You must therefore stay current with AI policies, laws, and regulatory concerns, because what you can and cannot do with models like ChatGPT within your business framework is expected to shift frequently. Deploy best practices for data collection, safety, and transport in the context of ChatGPT usage.

Maintain Anonymous PII Data

If you’re using ChatGPT or similar services, keep personal information anonymous. Prompting the engine for general responses can prove effective, but no personal data should ever be entered into ChatGPT: strip or replace names, contact details, account numbers, and other identifiers before a prompt is sent, so the data that reaches the model is effectively anonymized. A minimal sketch of that idea follows below.
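As a rough illustration, here is a minimal Python sketch of scrubbing a message before it is handed to any external model. The regular expressions, placeholder tags, and sample message are illustrative assumptions only; a real deployment should rely on a vetted PII-detection or DLP tool, since simple patterns will miss identifiers such as names.

```python
import re

# Illustrative redaction patterns (assumptions, not a complete PII catalog).
# A production pipeline would use a dedicated PII-detection or DLP tool.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),           # US Social Security numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),    # email addresses
    (re.compile(r"\b(?:\d[ -]?){12,15}\d\b"), "[CARD]"),        # likely payment card numbers
]

def scrub(text: str) -> str:
    """Replace obvious identifiers with placeholders before the text leaves the network."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

# Hypothetical customer message used only to demonstrate the scrubbing step.
customer_email = (
    "Hi, this is Jane Doe (jane.doe@example.com). "
    "My card 4111 1111 1111 1111 was charged twice."
)

prompt = "Draft a polite reply to this customer message:\n" + scrub(customer_email)
print(prompt)  # identifiers are replaced before the prompt is sent to any external model
```

The key point is the ordering: redaction happens inside your own environment, so only the scrubbed text ever leaves the network, and the external model never sees the raw identifiers.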

Conclusion

ChatGPT, while a useful service, can suffer breach events like any tech company. Compromised accounts were already being reported as of May 2023, and it’s a trend that will continue.

Therefore, when utilizing this service – or any similar AI service – minding where PII data goes is important to shield customer information.

For more information on best practices, IT, and cybersecurity help, you can reach out to Bytagig today.
