How to Train Your Staff to Use LLMs
Large language models (LLMs) are AI tools that generate human-like text. Employees may already be using them, knowingly or not, to write emails, summarize content, or generate ideas.
We put this guide together to help ensure your staff uses them safely and effectively.
Risks & Guardrails
LLMs are helpful, but careless use can lead to data leaks, legal issues, or inaccurate outputs.
Biggest recommendation (especially if you stop reading here): Set boundaries early.
Here is a great place to start:
- Share your company’s AI Use Policy.
- Define what NOT to input:
  - Personal customer data (names, SSNs, credit card numbers, etc.)
  - Proprietary company information
  - Legal pleadings, contracts, or sensitive internal docs
- Reinforce: Use only approved company accounts/tools, not personal ones.
- Conduct a risk scenario discussion where teams consider: “What could go wrong if this policy isn’t followed?”
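The "what NOT to input" rules above can also be partially automated with a simple pre-submission check. Below is a minimal, illustrative Python sketch; the patterns shown (SSN, card number, email) are assumptions for demonstration, and a real deployment would rely on a proper data loss prevention (DLP) tool rather than ad-hoc regexes.

```python
import re

# Illustrative patterns for a few obvious identifiers. These are NOT
# exhaustive; a production setup should use a dedicated DLP/PII tool.
SENSITIVE_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return labels of sensitive patterns found in a prompt."""
    return [label for label, pat in SENSITIVE_PATTERNS.items()
            if pat.search(prompt)]

def safe_to_send(prompt: str) -> bool:
    """True only if no sensitive pattern matched."""
    return not flag_sensitive(prompt)
```

For example, `flag_sensitive("Customer SSN is 123-45-6789")` flags the SSN, while a prompt like "Please summarize this memo" passes. A check like this works well as a teaching aid in the risk-scenario discussion: it makes the policy concrete without pretending to catch everything.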
Some Quick Gotchas:
- Don’t upload client data, legal docs, or financials: Sharing this type of information with third-party AI tools can violate data privacy laws (like GDPR, HIPAA, or GLBA), risk exposing sensitive customer information, and breach contractual confidentiality agreements.
- Take LLM responses with a grain of salt: LLMs can produce inaccurate or misleading information, and relying on them without human review may expose your organization to legal liability or compliance violations.
- Don’t share outputs directly with customers unless reviewed: Unvetted AI-generated content may include biased or non-compliant language that could damage customer trust or result in regulatory scrutiny.
Like what you see? Download the guide to see the rest.
- Use Cases by Role
- Prompt Writing 101
- Rollout Strategy
- Evaluation and Success Metrics
- Real-World Examples
- Troubleshooting & What-Ifs