04/18/2026
Artificial intelligence is becoming part of many professional systems, including mental health care, and I believe it is important to be transparent about how and why I use it in my practice. My use of AI is grounded in ethics, clinical responsibility, privacy protection, and compliance with professional standards. I do not use AI casually, and I do not use it in ways that replace my clinical judgment, my legal responsibilities, or the human relationship at the center of therapy. I use it as a support tool to strengthen accuracy, organization, continuity of care, and treatment quality.
I did not make the decision to use AI lightly. I had to carefully weigh the pros and cons, and after a great deal of reflection, the pros won. In making that decision, I considered not only the possible benefits to my practice and my clients, but also the much larger personal, cultural, social, economic, political, and environmental issues swirling around AI, data centers, and the rapid development of automation. These are not small concerns, and I do not believe they should be ignored or minimized. I believe thoughtful professionals have a responsibility to think critically about both the promise and the risks of the tools they adopt.
I also maintain business associate agreements with the companies whose systems I use, and the software platforms in my practice are designed to meet HIPAA and HITECH requirements. One platform supports core practice functions such as client account management, insurance claims, medical records, documentation, billing, communication, and client engagement. The other, an electronic health records system, supports treatment planning assistance, treatment outcome measurement, progress tracking, analysis of session data, and discharge planning. These tools help me manage the increasingly complex clinical and administrative responsibilities that are now part of modern behavioral health care, while preserving more of my time, energy, and focus for the therapeutic work itself. Because both platforms already relied on data centers (hardware server facilities), embedding AI in them has produced only a minimal, if any, increase in the resources they consume.
The reality is that the mental health care field is changing. Insurance companies, health systems, regulators, and large organizations are increasingly moving toward the use of AI to streamline auditing, claims processing, utilization review, approvals, and denials. As these systems evolve, clinicians may also need to adapt their business practices and documentation procedures in order to remain compliant, current, and effective. In many cases, practitioners may feel pressure to adopt responsible AI tools to help “AI-proof” their practices so their documentation can stand up to increasingly automated audit and review processes. Whether we like it or not, the systems around health care are changing, and mental health professionals will have to decide how to respond in ways that are both ethical and practical.
At the same time, my view of AI is not simplistic, utopian, or apocalyptic. I have considered the possibility that we may be living through an AI and technology bubble that could burst at some point. I have also considered that much of the hyperbole, both positive and negative, may be overstated. It is entirely possible that many of the grandest claims about AI, and many of the darkest fears about it, are exaggerated. It is also possible that a truly sentient AI, whether benevolent or malevolent, may never emerge at all. I think it is important to remain grounded, skeptical, and balanced, rather than getting swept away by either blind enthusiasm or total fear.
My hope for the future is that we can build systems that reduce or eliminate the personal, cultural, social, economic, political, and environmental harms associated with AI. I hope technological progress moves toward cleaner, wiser, and more sustainable solutions, including advances such as more efficient data centers, smarter electrical grids, quantum computing, and perhaps even forms of energy generation that dramatically reduce the environmental cost of computation. I also hope that AI itself can be used to help solve some of the problems that AI and other technologies are helping create. Just as importantly, I hope technology companies build strong guardrails around these tools so they remain useful, accountable, and benevolent rather than reckless, exploitative, or harmful.
For me, the ethical question is not simply whether AI exists. The real question is how it is used, under what safeguards, and in service of what goals. AI should never replace empathy, informed consent, human accountability, sound clinical judgment, or the therapeutic relationship. It should be used carefully, transparently, and within clear legal and ethical boundaries. When used in that way, AI can help mental health professionals improve documentation quality, strengthen treatment planning, track outcomes more clearly, and remain responsive to the realities of a rapidly changing health care environment. My commitment is to use these tools responsibly, to protect client privacy, and to make sure that innovation serves people rather than the other way around.