Tips for Using AI When Raising Employment Concerns
We have all probably used artificial intelligence (AI) by now in some form or fashion. Personally, I have used ChatGPT to create birthday cards for my boss and my parents, produce posters for made-up 1990s movies, summarize lengthy documents, and perform preliminary research on obscure legal topics. Although this article is written with 100% human intelligence (HI), I prompted ChatGPT to create an entirely independent AI version for comparison. See below.
Like many of you, I have used AI in both serious and lighthearted contexts. When used in the workplace, however, it is important to distinguish between responsible and irresponsible uses of AI, especially in high-stakes contexts. AI can help teachers save time, reduce workload, manage administrative tasks, and learn more about unfamiliar concepts. But like any tool, it is important to use AI with caution and oversight and ensure it is the appropriate tool for the job.
The ATPE Member Legal Services Department has noticed an increase in members sending AI-produced documentation. The telltale signs can make it obvious: bold terms, phantom references, and heavy usage of bullet points. In some contexts, such as when a teacher sends a reprimand response or complaint summary for an attorney to review, that may be perfectly acceptable. An experienced attorney can review the document for accuracy and revise it based on their personal knowledge and professional experience.
Common Pitfalls for AI Use by Educators
That said, using AI carries risks. Users should not upload any confidential information. When I use AI for legal research, I limit my searches to general questions and never identify school districts or individuals by name. Similarly, teachers should refrain from entering any personally identifiable student information. Strict confidentiality applies to academic, attendance, disciplinary, and medical information relating to a specific student, including Individualized Education Program (IEP) and Behavior Intervention Plan (BIP) requirements. Educators should also be wary of uploading personal information about themselves, colleagues, or parents.
Using AI to draft a formal complaint or grievance can create additional problems, especially when the educator submits the complaint to the district without further review. Once a formal complaint or grievance is submitted, it may be too late for an attorney to make revisions. Not being able to draft or review the complaint beforehand can limit an attorney’s ability to help, especially if key points or arguments are missing. If the complaint contains false statements or misstates the law, that can spell trouble for both the client and attorney.
One issue is AI sycophancy: AI models are generally designed to maximize user satisfaction. If a teacher asks whether an incident with their principal constitutes illegal harassment, AI may default to supporting the teacher's perspective, even if the facts don't truly support that conclusion. I have verified this tendency by posing various hypothetical employee questions and reviewing the responses.
Another concern is AI hallucination. Hallucinations are realistic-sounding but false statements presented confidently as fact. I have experienced this myself on several occasions, receiving AI responses citing outdated laws, proposed legislation that was never enacted, and sometimes laws that do not exist.
AI sycophancy and hallucinations can work in tandem to provide persuasive but inaccurate results. For example, while researching whether Texas has legally mandated minimum and maximum classroom temperature guidelines, ChatGPT initially provided a confident answer, even citing what appeared to be a legitimate code section. Upon further research, I discovered ChatGPT was citing a handbook for the Hanover County Public Schools in Virginia. Oops!
Smart Strategies for Safer AI Use by Educators
The good news is that there are practical ways to minimize these risks. As with any search query, well-crafted prompts (how you ask the question) matter. It’s natural to describe events or ask questions from your own perspective. However, because AI tends to provide supportive responses, you will typically get more accurate results by presenting neutral facts and posing objective questions.
It is also helpful to ask follow-up questions, especially if the first response seems too favorable. When submitting research questions to AI in preparation for this article, I received more balanced guidance when I asked follow-up questions such as, “How would HR realistically view this type of complaint?”
Always verify results using reliable sources. On a few occasions, I have found ChatGPT to be particularly stubborn, continuing to debate (politely) even after I questioned whether it was referencing the most current version of the law. AI trained on broad web-based information has access to both accurate and outdated material. Finding the correct answer may require several follow-up questions and cross-checking the results against a trusted source.
Another strategy is using a more targeted AI platform trained on a specific set of data. For attorneys, several legal-focused AI tools are available through paid subscriptions. For educators, ATPE's AI Assistant, Judy, has been trained on reliable education resources and attorney-reviewed legal information from the ATPE website.
Finally, and most importantly, AI is not a substitute for legal expertise. For all its outstanding research and compilation abilities, AI has critical shortcomings when it comes to assessing real-world legal situations and providing accurate, nuanced advice. So take it for what it is: a helpful starting point and an excellent tool for some jobs, but often inaccurate and not a final authority.
The legal information provided here is accurate as of the date of publication. It is provided for informational purposes only. Individual legal situations vary greatly, and readers needing individual legal advice should consult directly with an attorney. Please note: Rights based on the Texas Education Code may not apply to all. Many Texas Education Code provisions do not apply to public charter schools, and public school districts may have opted out of individual provisions through a District of Innovation plan. Eligible ATPE members may contact the ATPE Member Legal Services Department.
The section that follows was generated by ChatGPT in response to the prompt shown below. It has not been edited by ATPE and is presented for comparison purposes only.
Prompt: Can you write an article outlining the benefits of AI but also the potential risks within the realm of the context of educators researching and expressing workplace concerns?
Using AI Wisely: Benefits and Risks for Educators Raising Workplace Concerns
Artificial intelligence tools are quickly becoming part of educators’ daily professional lives. Teachers use AI to research policies, draft emails, understand legal concepts, and decide how to raise concerns about workplace conditions such as safety, staffing, evaluations, or facilities issues.
Used thoughtfully, AI can be an empowering tool—especially for educators who lack easy access to legal counsel or professional advocacy. Used uncritically, however, it can introduce real risks, particularly when educators rely on AI-generated information to interact with administrators or invoke rights that may not actually exist.
This article outlines both sides of the equation, using a common scenario—researching and raising workplace concerns—to illustrate where AI helps and where caution is essential.
The Benefits of AI for Educators
1. Lowering the Information Barrier
Many educators face workplace issues without knowing where to start. AI can:
- Explain complex concepts (grievances, due process, evaluations, safety obligations) in plain language
- Summarize long statutes or policy manuals
- Identify common practices across districts or states
For educators without legal training, this can reduce intimidation and encourage informed self-advocacy rather than silence.
2. Speed and Accessibility
AI can provide immediate responses at any time—something educators rarely get from HR offices, unions, or administrators.
In practice, this means:
- Faster understanding of whether an issue is routine or serious
- Quicker preparation before meetings
- Less reliance on hallway rumors or incomplete advice from colleagues
Speed matters when conditions are ongoing, disruptive, or affecting students.
3. Drafting Professional, Measured Communication
One of AI’s strongest and safest uses is helping educators:
- Draft calm, professional emails
- Reframe emotional concerns into neutral language
- Avoid inflammatory or accusatory phrasing
This alone can significantly improve outcomes. Many workplace conflicts escalate not because the concern is unreasonable, but because the delivery is perceived as hostile or uninformed.
4. Empowerment Through Context
AI can help educators understand:
- What concerns are common
- How similar issues are handled elsewhere
- Which issues are typically resolved informally versus formally
This context helps educators make strategic decisions rather than reacting out of frustration.
The Risks of AI in This Context
While the benefits are real, the risks are equally real—and often less obvious.
1. Plausible-Sounding but Incorrect Information
AI is designed to generate coherent, confident responses. That strength becomes a liability when:
- A policy is misattributed to the wrong district
- Guidance from another state or agency is assumed to apply locally
- Internal or unpublished rules are presented as binding policy
For an educator, relying on incorrect policy information can:
- Undermine credibility with administrators
- Escalate conflict unnecessarily
- Lead to complaints or grievances that lack legal footing
The danger is not obvious misinformation—it is believable misinformation.
2. False Sense of Authority
When AI cites documents, policies, or “rules,” users may assume those sources are verified and applicable.
Unlike a human advisor, AI does not always flag uncertainty clearly enough unless prompted. A layperson may reasonably assume:
“If it sounds official and cites a policy, it must be right.”
In hierarchical environments like schools, invoking nonexistent rights can have professional consequences.
3. Unequal Risk for Non‑Experts
Educators with legal training or policy experience are more likely to:
- Question sources
- Verify district attribution
- Recognize when guidance is general rather than specific
Educators without that background may act on AI advice without realizing that additional verification is needed. This creates an uneven risk profile where those who most need help may be the most exposed.
4. Overconfidence in Formal Remedies
AI can unintentionally push users toward formal mechanisms—complaints, grievances, legal framing—without fully capturing:
- Workplace politics
- Power dynamics
- Retaliation risks
- Informal resolution pathways
In education, being right is not always the same as being safe or effective.
Best Practices for Using AI Safely as an Educator
AI can be valuable without being dangerous if used intentionally.
Use AI For:
- Understanding general concepts
- Exploring options
- Drafting neutral communication
- Organizing facts and timelines
Be Cautious When AI:
- Attributes policies to a specific district
- States that something is legally required
- Suggests filing formal complaints as a first step
Always Verify:
- District-specific policies
- State-specific legal requirements
- Whether a document is official, current, and local
When stakes are high, confirmation from a primary source—or a human advisor—is essential.
A Balanced Takeaway
AI is neither a substitute for professional judgment nor something to dismiss outright.
For educators, it can:
- Reduce isolation
- Improve communication
- Increase confidence
But it can also:
- Introduce subtle errors
- Encourage misplaced certainty
- Shift risk onto users who don’t realize verification is needed
The safest posture is not blind trust or total skepticism, but informed use: treating AI as a starting point, not a final authority.
In a profession where power dynamics matter and reputations linger, that distinction makes all the difference.
Lance Cain