James Fitzgerald Therapy, PLLC

Licensed Clinical Mental Health Counselor (LCMHC)
Positive Psychology & Character Strengths Approach

Credentials:
* Listed on the VT Roster of Psychotherapists by the Allied Board of Mental Health Professionals.
* VT (AAP) Apprentice Addiction Professional by the Office of Professional Regulation.
* PESI Certified telehealth provider.
* NBCC (NCC) National Certified Counselor by National Board of Certified Counselors.
* UVM (MS) Master of Science in Clinical Mental Health Counseling
* CSJ (BA) Bachelor of Arts in Psychology

Payment for services:
I bill insurance electronically and accept Blue Cross/Blue Shield, Cigna, MVP, and Medicaid. I file the claim on your behalf, and once I receive an explanation of benefits, I email you an invoice or send a payment request through Venmo. It is therefore important that you provide consent for electronic communication before the first session.

Important note:
Some commercial insurance plans have limited coverage for outpatient mental health care or substance use counseling, and some plans carry high deductibles. Every insurance company contracts different reimbursement rates with providers. Please call your insurance company before your first session to verify your coverage and ask about covered services. Clients are responsible for paying any out-of-pocket costs, unmet deductibles, and copays before the next scheduled service unless other arrangements are made.

Payment types accepted:
Credit/Debit, HSA/FSA, Venmo, Square, and PayPal.

Fees:
$75 - $120
I offer a sliding-scale fee, and initial consultations are free.

Modalities:
Individual, Couples, Family, Groups

Hours of availability:
Monday: 9:00 am - 6:00 pm
Tuesday: 9:00 am - 8:00 pm
Wednesday: 9:00 am - 8:00 pm
Thursday: 9:00 am - 8:00 pm
Friday: 9:00 am - 6:00 pm
Saturday: 9:00 am - 6:00 pm (upon request)

I blend the following philosophies, theories, and interventions into an eclectic yet cohesive, integrative approach:
* Neuroscience (Neuroplasticity)
* Positive Psychology
* Virtue Ethics and Moral Discipline
* Person-Centered, Trauma-Informed, Environmentally and Culturally Sensitive
* Character Strengths Theory
* Internal Family Systems Theory
* Cognitive Behavioral Theory
* Dialectical Behavior Therapy
* Motivational Enhancement Therapy
* Polyvagal Theory

04/06/2026

https://www.who.int/publications/i/item/9789240084759

This guidance addresses one type of generative AI, large multi-modal models (LMMs), which can accept one or more types of data input and generate diverse outputs that are not limited to the type of data fed into the algorithm.


Ethical Uses of Artificial Intelligence in Mental Health Care: The Technology Is Not the Problem, but Its Design and Use May Be

Artificial intelligence is often discussed as though it were either a miracle tool or a dangerous threat. In mental health care, neither extreme is fully accurate. AI is not inherently ethical or unethical. It is a set of tools, methods, and systems created by people and used within human institutions. The real ethical question is not whether AI exists, but how it is designed, trained, marketed, implemented, supervised, and regulated. Major health and standards organizations have taken this same position. The World Health Organization, for example, recognizes that AI may improve care, research, and health system efficiency, but stresses that ethics, human rights, accountability, and governance must remain central to design and deployment. The National Institute of Standards and Technology (NIST) similarly frames AI risk as something that must be actively managed through trustworthy design, context awareness, transparency, and oversight.

In mental health care, AI already appears in many forms. It may assist with clinical documentation, progress note drafting, treatment planning support, billing workflows, triage tools, symptom screeners, chatbot-style psychoeducation, appointment reminders, translation, transcription, and digital therapeutics. Professional organizations in psychiatry and psychology now openly acknowledge that AI is entering everyday practice and may improve efficiency, patient engagement, and some forms of clinical decision support. At the same time, they warn that ethical and legal responsibilities do not disappear simply because a machine is involved. In other words, AI may be useful, but the clinician and the organization still remain responsible for patient welfare, informed decision-making, and professional judgment.

One reason the statement “AI itself is not the problem” has merit is that the same underlying technology can be used in both helpful and harmful ways. A carefully designed transcription tool operating within a secure, HIPAA-compliant environment, reviewed by a licensed clinician, may reduce clerical burden and free more time for patient care. By contrast, a poorly secured chatbot trained on biased data, marketed as therapy without adequate evidence, and used without informed consent may create real harm. HHS explains that HIPAA’s Security Rule is meant to protect electronic protected health information through administrative, physical, and technical safeguards while still allowing health care organizations to adopt useful new technologies. That point matters: the law does not forbid innovation; it requires responsible protection of privacy, confidentiality, and data security.

The mental health field is especially sensitive because it handles some of the most private information a person can share. Therapy notes, trauma history, substance use, suicidality, sexual concerns, family conflict, and psychiatric symptoms are not ordinary consumer data. Ethical AI use in this setting therefore requires more than general enthusiasm about efficiency. It requires strong privacy protections, careful vendor review, clear data use policies, limits on secondary uses of data, and transparency about who can access information and for what purpose. If an AI product trains future models on patient disclosures without clear authorization, or if sensitive behavioral data are exposed through poor security practices, the problem is not “AI” in the abstract. The problem is irresponsible governance and misuse of patient data.

Bias is another major ethical concern. AI systems learn from data created in human systems that already reflect social inequality, diagnostic bias, and unequal access to care. If the training data underrepresent marginalized groups, or if the labels in the data reflect biased clinical assumptions, then the outputs may reproduce those same distortions. In mental health care, this could affect screening, risk prediction, diagnostic suggestions, language interpretation, or treatment recommendations. APA’s ethical guidance for health service psychology emphasizes that responsible AI development must consider the full range of lived experiences and avoid unfair discrimination. WHO also places equity, accountability, and public benefit at the center of ethical governance. That means the danger is not simply the existence of machine learning. The danger is deploying it without testing for bias, without representative validation, and without correcting for structural inequities.

There is also the problem of overclaiming. Mental health care has seen an explosion of apps, digital tools, and AI-enabled products, but evidence does not expand as quickly as marketing. NIMH notes that thousands of mental health apps are available, yet there is still limited regulation and limited information on effectiveness for many products. SAMHSA similarly states that some digital therapeutics can be effective, but not all mental health applications have a sufficient evidence base for therapeutic use. This is a crucial distinction. Ethical AI use requires that clinicians, organizations, and patients avoid confusing convenience with clinical validity. A tool may be easy to use and commercially attractive while still lacking meaningful evidence that it improves outcomes safely.

Another concern is role confusion. AI can generate language that sounds empathic, insightful, or authoritative. In mental health contexts, this can create the illusion that a system understands a person in the same way a clinician or trusted human does. But language fluency is not the same as wisdom, moral responsibility, relational attunement, or clinical accountability. AI can summarize, predict, classify, and imitate conversational support, yet it does not bear legal or ethical responsibility for adverse outcomes. That remains with developers, vendors, health systems, and clinicians. This is why professional guidance increasingly stresses that AI should support, not replace, human clinical judgment, especially in high-risk situations such as suicide assessment, psychosis, trauma treatment, involuntary care, or medication decisions.

At the same time, it would also be misleading to dismiss AI entirely. Ethical use may expand access, reduce administrative overload, support measurement-based care, improve language access, help identify documentation gaps, and extend evidence-based tools to people who might otherwise receive no help at all. NIMH notes that digital technology can broaden reach, reduce cost barriers, and make support more accessible, including for people in remote areas or those reluctant to seek in-person care. SAMHSA also recognizes that rigorously developed digital therapeutics may extend evidence-based treatments and improve access in behavioral health settings. These are meaningful benefits, especially in a mental health system marked by workforce shortages, burnout, and unequal service distribution.

A balanced view, then, is this: AI in mental health care should not be judged as a single good or bad thing. It should be judged by its evidence, transparency, privacy protections, fairness, regulatory status, level of human oversight, and actual impact on patient welfare. Ethical AI use means patients know when AI is involved, clinicians understand its limits, organizations vet vendors carefully, outputs are reviewed rather than blindly trusted, and high-risk decisions remain under qualified human supervision. It also means that developers and institutions must be accountable when systems fail, discriminate, mislead, or expose sensitive information. FDA guidance on AI-enabled software in health care reflects this same logic by focusing on oversight, safety, performance, and regulatory pathways rather than assuming the technology is either automatically safe or automatically dangerous.

The most accurate conclusion is that AI is not the central moral actor in mental health care. People are. The ethics of AI depend on the intentions behind it, the values embedded in its design, the quality of the evidence supporting it, the protections surrounding patient data, and the wisdom or recklessness of those who deploy it. In mental health care, where trust, confidentiality, dignity, and safety are foundational, that distinction is especially important. AI itself is not the problem. The problem is what human beings choose to build with it, what claims they make for it, what safeguards they ignore, and whether they use it to strengthen care or to cut corners at the expense of vulnerable people.

References

American Psychiatric Association. Artificial Intelligence in Psychiatric Care.

American Psychiatric Association. Position Statement on the Role of Augmented Intelligence in Clinical Practice and Research.

American Psychological Association. Ethical Guidance for AI in the Professional Practice of Health Service Psychology (2025 update).

American Psychological Association. Artificial Intelligence in Mental Health Care.

U.S. Department of Health and Human Services. Summary of the HIPAA Security Rule.

U.S. Department of Health and Human Services. Use of Online Tracking Technologies by HIPAA Covered Entities and Business Associates.

U.S. Food and Drug Administration. Artificial Intelligence in Software as a Medical Device.

U.S. Food and Drug Administration. AI-Enabled Medical Device List.

National Institute of Mental Health. Technology and the Future of Mental Health Treatment.

National Institute of Standards and Technology. Artificial Intelligence Risk Management Framework (AI RMF 1.0).

World Health Organization. Ethics and Governance of Artificial Intelligence for Health (2021).

World Health Organization. Ethics and Governance of Artificial Intelligence for Health: Guidance on Large Multi-Modal Models (2024/2025 listing).

Substance Abuse and Mental Health Services Administration. Digital Therapeutics for Management and Treatment in Behavioral Health (2023).

04/05/2026

There are things your mind has decided you are not ready to know. And it is very, very good at keeping you from finding out. That is the system working exactly as designed in order to protect you.

We tend to think of defenses as negative things. Denial. Repression. Dissociation. Big psychological words for big psychological events. But most of the time, defense is so quiet you never even catch it.

It is not noticing something that is right in front of you. It is getting to the edge of a difficult thought and suddenly remembering something you need to do. It is being able to talk about painful things with complete calm and zero feeling, because understanding something in your head means you NEVER have to feel it in your body. It is never quite letting yourself put two and two together.

At some point in your life, not knowing something was genuinely safer than knowing it.

Maybe the truth about your family was too much for you to hold. Maybe recognizing what was happening to you would have left you with nowhere to go. So your mind learned to look away and avoid it. To redirect. To stay just busy enough that the thing underneath never quite had room to surface or be felt.

That was how you survived.

The problem is that those same defenses do not turn off when the danger passes. They just keep on doing what they do. And unfortunately, they keep you at a distance from the very awareness that could actually change things.

Because you cannot heal what you cannot see.

This isn't about ripping away every protection your system has created. It is about deciding to get curious. To notice when you go blank in a conversation. To notice what you reach for when something uncomfortable starts to surface. To gently ask yourself what you might be working very hard not to know. That is how healing begins.

04/02/2026

The field of psychology has a critical role to play in shaping the future of AI.

As a steward of behavioral science, APA is working to hold technology companies accountable and to bring psychological science into the rooms where these tools are built.

Learn more: https://at.apa.org/33e7b9


Address

359 Dorset Street, Suite 200-2
South Burlington, VT
05403

Opening Hours

Monday 9am - 4pm
Tuesday 9am - 7pm
Wednesday 9am - 7pm
Thursday 9am - 7pm
Friday 9am - 7pm
Saturday 9am - 4pm

Telephone

+1 (802) 855-1209
