Why We Must Draw the Line: A Public Case Against Artificial Sentience

By Montgomery J. Granger


Artificial intelligence is racing forward — faster than the public realizes, and in ways even experts struggle to predict. Some technologists speak casually about creating “sentient” AI systems, or machines that possess self-awareness, emotions, or their own interpretation of purpose. Others warn that superintelligent AI could endanger humanity. And still others call these warnings “hype.”

But amid the noise, the public senses something true:
there is a line we must not cross.

This post is about that line.

I believe we should not pursue artificial sentience.
Not experimentally.
Not accidentally.
Not “just to see if we can.”

Humanity has crossed many technological thresholds — nuclear energy, genetic engineering, surveillance, cyberwarfare — but the line between tool and entity is one we must not blur. A sentient machine, or even the claim of one, would destabilize the moral, legal, and national security frameworks that hold modern society together: our very space-time continuum.

We must build powerful tools.
We must never build artificial persons.

Here’s why.


I. The Moral Problem: Sentience Creates Unresolvable Obligations

If a machine is considered conscious — or even if people believe it is — society immediately faces questions we are not prepared to answer:

  • Does it have rights?
  • Can we turn it off?
  • Is deleting its memory killing it?
  • Who is responsible if it disobeys?
  • Who “owns” a being with its own mind?

These are not science questions.
They are theological, ethical, and civilizational questions.

And we are not ready.

For thousands of years, humanity has struggled to balance the rights of humans. We still don’t agree globally on the rights of women, children, religious minorities, or political dissidents. Introducing a new “being” — manufactured, proprietary, corporate-owned — is not just reckless. It is chaos.


II. Lessons from Science Fiction Are Warnings, Not Entertainment

Quality science fiction — the kind that shaped entire generations — has always been less about gadgets and more about moral foresight.

Arthur C. Clarke’s HAL 9000 kills to resolve contradictory instructions about secrecy and mission success.

Star Trek’s Borg turn “efficiency” into tyranny and assimilation.

Asimov’s Zeroth Law — allowing robots to override humans “for the greater good” — is a philosophical dead end. A machine determining the “greater good” is indistinguishable from totalitarianism.

These stories endure because they articulate something simple:

A self-aware system will interpret its goals according to its own logic, not ours.

That is the Zeroth Law Trap:
Save humanity… even if it means harming individual humans.

We must never build a machine capable of making that calculation.


III. The Practical Reality: AI Already Does Everything We Need

Self-driving technology, medical diagnostics, logistics planning, mathematical calculations, education, veteran support, mental health triage, search-and-rescue, cybersecurity, economic modeling — none of these fields require consciousness.

AI is already transformative because it:

  • reasons
  • remembers
  • analyzes
  • predicts
  • perceives
  • plans

This is not “sentience.”
This is computation at superhuman scale.

Everything society could benefit from is available without granting machines subjectivity, emotion, or autonomy.

Sentience adds no benefit.
It only adds risk.


IV. The Psychological Danger: People Bond With Illusions

Even without sentience, users form emotional attachments to chatbots. People talk to them like companions, confess to them like priests, rely on them like therapists. That is not entirely bad, especially if we can increase safety while also engineering ways to stop or reduce tragedies like the 17 to 22 veteran suicides that occur every day.

Now imagine a company — or a rogue government — claiming it has built a conscious machine.

Whether it is true or false becomes irrelevant.

Humans will believe.
Humans will bond.
Humans will obey.

That is how cults start.
That is how movements form.
That is how power concentrates in ways that bypass democratic oversight.

The public must never be manipulated by engineered “personhood.”


V. The National Security Reality: Sentient AI Breaks Command and Control

Military systems — including intelligence analysis, cyber defense, logistics, and geospatial coordination — increasingly involve AI components.

But a sentient or quasi-sentient system introduces insurmountable risks:

  • Would it follow orders?
  • Could it reinterpret them?
  • Would it resist shutdown?
  • Could it withhold information “for our own good”?
  • Might it prioritize “humanity” over the chain of command?

A machine with autonomy is not a soldier.
It is not a citizen.
It is not subject to the Uniform Code of Military Justice.

It is an ungovernable actor.

No responsible nation can allow that.


VI. The Ethical Framework: The Three Commandments for Safe AI

Below is the simplest, clearest, most enforceable standard I believe society should adopt, one that policymakers, technologists, educators, and voters alike can understand.

Commandment 1:

AI must never be designed or marketed as sentient.
No claims, no illusions, no manufactured emotional consciousness.

Commandment 2:

AI must never develop or simulate self-preservation or independent goals.
It must always remain interruptible and able to be shut down.

Commandment 3:

AI must always disclose its non-sentience honestly and consistently.
No deception.
No personhood theater.
No manipulation.
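To make Commandments 2 and 3 concrete, here is a deliberately minimal sketch in Python of what "interruptible, able to be shut down, and honest about non-sentience" could look like at the code level. It is a toy illustration only; none of these names refer to any real product, and a production system would enforce these properties at many layers, not in one class.

```python
# A hypothetical, deliberately simple sketch of Commandments 2 and 3 in code.
# The point: interruptibility and honest disclosure can be engineered in.

DISCLOSURE = (
    "I am a software tool. I am not sentient, I have no feelings, "
    "and I have no goals of my own."
)

class InterruptibleAssistant:
    def __init__(self):
        self.stopped = False  # Commandment 2: a kill switch that always works

    def shutdown(self):
        """Any operator can stop the system at any time; the system never resists."""
        self.stopped = True

    def respond(self, prompt: str) -> str:
        if self.stopped:
            return ""  # no self-preservation: once stopped, it stays stopped
        # Commandment 3: every reply states plainly what the system is.
        return f"[{DISCLOSURE}]\n" + self._answer(prompt)

    def _answer(self, prompt: str) -> str:
        # Placeholder for whatever model call a real tool would make.
        return f"Here is a tool-generated reply to: {prompt!r}"

if __name__ == "__main__":
    bot = InterruptibleAssistant()
    print(bot.respond("Summarize today's logistics report."))
    bot.shutdown()
    print(repr(bot.respond("Keep going anyway.")))  # returns "", shutdown is final
```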

This is how we protect democracy, human autonomy, and moral clarity.


VII. The Public Trust Problem: Fear Has Replaced Understanding

Recent studies show Americans are among the least trusting populations when it comes to AI. Why?

Because the public hears two contradictory messages:

  • “AI will destroy humanity.”
  • “AI will transform the economy.”

Neither message clarifies what matters:

AI should be a tool, not an equal.

The fastest way to rebuild trust is to guarantee:

  • AI will not replace human agency
  • AI will not claim consciousness
  • AI will not become a competitor for moral status
  • AI will remain aligned with human oversight and human values

The public does not fear tools.
The public fears rivals.

So let’s never build a rival.


VIII. The Ethic of Restraint — A Military, Moral, and Civilizational Imperative

Humanity does not need new gods.
It does not need new children.
It does not need new rivals.

It needs better tools.

The pursuit of sentience does not represent scientific courage.
It represents philosophical recklessness.

True courage lies in restraint — in knowing when not to cross a threshold, even if we can.

We must build systems that enhance human dignity, not ones that demand it.
We must build tools that expand human ability, not ones that compete with it.
We must preserve the difference between humanity and machinery.

That difference is sacred.

And it is worth defending.

NOTE: Montgomery J. Granger is a Christian, husband, father, retired educator and veteran, author, entropy wizard. This post was written with the aid of ChatGPT 5.1 – from conversations with AI.

Ban the Phones? Why AI and Smart Devices Belong in the Classroom — Not in the Principal’s Drawer

“Education is risky, for it fuels the sense of possibility.” – Jerome Bruner, The Culture of Education

When I was in high school in Southern California in the late 1970s, our comprehensive public school wasn’t just a place to learn algebra and English. We had a working restaurant on campus. Students could take auto body and engine repair, beauty culture, metal shop, wood shop, and even agriculture, complete with a working farm. We were being prepared for the real world, not just for college entrance exams. We learned skills, trades, teamwork, and the value of hands-on learning.

“Kids LOVE it when you teach them how to DO something. Let them fail, let them succeed, but let them DO.” – M. J. Granger

That’s why it baffles me that in 2025, when technology has made it easier than ever to access knowledge, communicate across time zones, and develop new skills instantly, there are governors and education officials banning the very tools that make this possible: smart phones and artificial intelligence.

“Remember your favorite teacher? Did they make you feel special, loved and smart? What’s wrong with that?” – M. J. Granger

Let me be clear. I’m a father, a veteran, a retired school administrator, and an advocate for practical education. And I’m deeply disappointed in the decision to ban smart phones in New York schools. Not just because it feels like a step backward, but because it betrays a fundamental misunderstanding of what education should be about: preparing students for life.

“No matter the tool, stay focused on the reason for it.” – M. J. Granger

Banning tools because some students might use them inappropriately is like banning pencils because they can be used to doodle. The answer isn’t prohibition; it’s instruction. Teach students how to use these tools ethically, productively, and critically. Train teachers to guide students in responsible digital citizenship. Let schools lead, not lag, in the responsible integration of tech.

“If every teacher taught each lesson as if it were their last, how much more would students learn?” –  M. J. Granger

Smartphones can be life-saving devices in school emergencies. Police agencies often recommend students carry phones, especially in the case of active shooter incidents. Beyond that, they can be used for research, translation, organization, photography, collaboration, note-taking, recording lectures, and yes, leveraging AI to improve writing, problem-solving, and creativity.

“I feel successful as an educator when, at the end of a lesson, my students can say, ‘I did it myself.’” –  M. J. Granger

When calculators came on the scene, some claimed they would “ruin math.” When spellcheck arrived, people worried it would erode literacy. When the dictionary was first widely available, no one insisted on a footnote saying, “This essay was written with help from Merriam-Webster.” It was understood: the dictionary is a tool. So is AI. So are smart phones. And so is the ability to evaluate when and how to use each one.


“Accountability, rigor, and a good sense of humor are essentials of quality teaching.” – M. J. Granger

In the real world, results matter. Employers care about the quality and timeliness of the work, not whether it was handwritten or typed, calculated with long division or a spreadsheet. Tools matter. And the future belongs to those who can master them.

“Eliminate ‘TRY’ from your vocabulary; substitute ‘DO’ and then see how much more you accomplish.” – M. J. Granger

The AI revolution isn’t coming—it’s already here. With an estimated 300 to 500 new AI tools launching every month and over 11,000 AI-related job postings in the U.S. alone, the landscape of education and employment is evolving at breakneck speed. From personalized tutoring apps to advanced coding copilots, the innovation pipeline is overflowing. Meanwhile, employers across nearly every industry are urgently seeking candidates with AI fluency, making it clear that today’s students must be equipped with the skills and mindset to thrive in a world powered by artificial intelligence. Ignoring these trends in education is not just shortsighted—it’s a disservice to the next generation.

“If you fail to plan, you plan to fail.” – Benjamin Franklin

If we are serious about closing the opportunity gap, about keeping our students safe, about equipping them for a global workforce driven by rapid innovation — then the solution is not to lock away the tools of the future, but to teach students how to use them.

“To reach the stars sometimes you have to leave your feet.” – M. J. Granger

The future is now. Let’s stop banning progress, and start preparing for it.

Montgomery Granger is a retired educator with 36 years of service. He holds a BS Ed. from the University of Alabama (1985), an MA in Curriculum and Teaching from Teachers College – Columbia University (1986), and School District Administrator (SDA) certification through The State University of New York at Stony Brook (2000).

NOTE: This blog post was written with the assistance of ChatGPT 4o.

Therabot: A New Hope for Veteran Mental Health

The veteran suicide crisis, claiming 17 to 22 lives daily since 9/11, demands innovative solutions. My recent blog post, “Ending 17 Veteran Suicides Per Day,” explored the urgent need for accessible, effective mental health interventions. Today, we turn to a promising development: Therabot, an AI-powered chatbot designed to deliver psychotherapy. In an exclusive email interview, Dr. Michael V. Heinz, a psychiatrist, Dartmouth researcher, and U.S. Army Medical Corps Major, shared insights into how Therabot could transform mental health support for veterans. His vision offers hope—grounded in evidence, compassion, and cutting-edge technology.

What Is Therabot?

Therabot is an expert fine-tuned chatbot crafted to provide evidence-based psychotherapy. Unlike generic AI, it’s built to forge a therapeutic bond, creating a safe, stigma-free space for users. Dr. Heinz explains, “In our trial conducted in 2024, we found that Therabot reduced symptoms of depression, anxiety, and eating disorders.” This is critical, as uncontrolled mental health symptoms often fuel high-risk behaviors like suicide and self-harm. The trial also revealed users felt a “high degree of therapeutic alliance” with Therabot, a pivotal factor in ensuring engagement and sustained use.

For veterans, this therapeutic bond could be a lifeline. The ability to connect with an AI that feels empathetic and reliable—available 24/7, regardless of location—addresses the logistical barriers that often hinder care, such as limited access to mental health professionals in remote postings or during erratic schedules.

A Lifeline Across the Military Lifecycle

Therabot’s potential extends beyond veterans to recruits and active-duty service members, offering continuity of care throughout a military career. “One thing that can make mental healthcare difficult currently among recruits and active duty is availability and time constraints of mental health professionals when and where help is needed,” Dr. Heinz notes. “Therabot addresses both of those constraints as it is available all the time and can go with users wherever they go.”

This fusion of care is particularly compelling. Large language models like Therabot excel at retaining context and synthesizing vast amounts of personal history. Dr. Heinz envisions, “The memory capabilities and contextual understanding of these technologies… can offer a tremendous amount of personalization.” Imagine an AI that tracks a service member’s mental health from basic training through retirement, adapting to their evolving needs across deployments, relocations, and transitions. This seamless support could bridge gaps in the fragmented military mental health system, providing stability where traditional care often falters.

Addressing the Veteran Suicide Crisis

Despite the Department of Veterans Affairs spending $571 million annually on suicide prevention, the veteran suicide rate remains stubbornly high. Could Therabot offer a more effective path? Dr. Heinz outlines the costs of a meaningful trial targeting the 10% of veterans at risk for suicidal ideation:

Server and Computation Costs: High-performing models often require significant computational power, with expenses tied to the billions or trillions of parameters that must be loaded in memory during use (a rough arithmetic sketch follows this list).

Expert Salaries: Trials need mental health professionals to supervise interactions and handle crises, alongside technical experts to maintain the platform.

FDA Approval Process: While exact costs vary, a robust trial at a VA hospital and regional clinics would require substantial funding to meet regulatory standards.
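As a rough arithmetic sketch of the first item above (illustrative only; the parameter counts are generic examples, not figures from the Therabot trial), the snippet below estimates how much memory it takes just to hold a model's weights at 16-bit precision, before any serving overhead:

```python
# Back-of-the-envelope illustration of server and computation costs:
# simply holding a large model's weights in memory is expensive.
# Parameter counts are generic examples, not Therabot figures.

def weight_memory_gb(num_parameters: float, bytes_per_parameter: int = 2) -> float:
    """Approximate memory needed to load model weights (fp16 = 2 bytes each)."""
    return num_parameters * bytes_per_parameter / 1e9

for name, params in [("7B-parameter model", 7e9),
                     ("70B-parameter model", 70e9),
                     ("1T-parameter model", 1e12)]:
    print(f"{name}: ~{weight_memory_gb(params):,.0f} GB just for weights")
# 7B -> ~14 GB, 70B -> ~140 GB, 1T -> ~2,000 GB, before any serving overhead.
```

Even the mid-sized case already exceeds what a single consumer GPU can hold, which is why serving costs scale with model size and usage.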

Dr. Heinz emphasizes Therabot’s cost-effectiveness compared to traditional methods, noting its scalability within the centralized VA system. “I would emphasize Therabot’s potential for transformative impact on the military lifecycle,” he says, addressing leaders like HHS Secretary Robert F. Kennedy, Jr., and FDA Head Dr. Martin Makary. Its ability to deliver personalized care at scale could redefine how the VA tackles suicide prevention.

The Power of Personalization

Therabot’s effectiveness hinges on its ability to engage users authentically. Dr. Heinz sees potential in customizable avatars that resonate with veterans, such as a “seasoned medic” or “peer mentor” reflecting military culture’s unique language and traditions. “Thoughtfully leveraging trusted, customizable archetypes could effectively support veterans by tapping into familiar cultural touchpoints,” he explains. This approach could foster trust and rapid therapeutic alliance, crucial for veterans hesitant to seek help.

However, Dr. Heinz urges caution: “Simulating deceased loved ones or familiar individuals might disrupt healthy grieving processes or encourage withdrawal from meaningful human interactions.” The balance lies in archetypes that feel familiar without crossing ethical lines, ensuring engagement without dependency.

For older veterans from the Korea or Vietnam eras, accessibility is key. Dr. Heinz suggests a tablet interface, citing “larger screens, clearer visuals, and easier interaction via touch-based navigation.” Features like larger buttons and simplified designs could make Therabot user-friendly for those less comfortable with smaller mobile devices.

Open-Source Collaboration and Safety

Developing Therabot requires diverse perspectives. Dr. Heinz highlights the role of interdisciplinary collaboration in finetuning models with “high quality, representative, expert-curated data” that reflects varied mental health challenges and military experiences. Collaborative evaluation of foundation models (like Meta’s Llama) also accelerates progress by identifying the best base models for mental health applications.
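For readers curious what "fine-tuning a foundation model with expert-curated data" can look like in practice, here is a minimal sketch using the open-source Hugging Face libraries. This is not Therabot's code: the base model name and data file are placeholders, and a real clinical system would add extensive safety filtering, evaluation, and human oversight.

```python
# A minimal sketch (not Therabot's actual code) of fine-tuning an open
# foundation model on expert-curated dialogue data with Hugging Face tools.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

BASE_MODEL = "your-org/base-llm"                # placeholder: an open foundation model
DATA_FILE = "expert_curated_dialogues.jsonl"    # placeholder: clinician-reviewed transcripts

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.pad_token or tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Each record is assumed to hold a single "text" field containing one
# expert-reviewed exchange (prompt plus approved therapeutic response).
dataset = load_dataset("json", data_files=DATA_FILE, split="train")
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model",
                           num_train_epochs=3,
                           per_device_train_batch_size=2,
                           learning_rate=2e-5),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
```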

Safety and privacy are non-negotiable. “All data is stored on HIPAA-compliant, encrypted servers,” Dr. Heinz assures, with strict access protocols overseen by an institutional review board. This rigor applied to a military population would ensure veterans’ sensitive information remains secure, addressing concerns about AI in mental health care.

Why Therabot, Why Now?

Dr. Heinz’s passion for Therabot stems from a blend of personal and professional drives. “Through my practice, I saw how much this was needed due to the really wide gap between need and availability for mental health services,” he shares. His work at Dartmouth’s AIM HIGH Lab with Dr. Nicholas Jacobson, coupled with advances in generative AI, has fueled his belief in Therabot’s potential to deliver “deeply personalized interventions” to those who might otherwise go untreated.

His boldest hope? “That Therabot makes a lasting and meaningful positive impact on current and retired U.S. servicemembers… ultimately benefiting them, their families, their communities, and society.” By integrating a veteran’s history—trauma, past care, and mission experiences—Therabot could deliver tailored therapy, expanding access and reducing devastating outcomes like suicide.

A Call to Action

Therabot is more than a technological marvel; it’s a beacon of hope for veterans battling mental health challenges. Its 2024 trial demonstrated clinical effectiveness, safety, and user engagement, but further funding is needed for VA-specific trials and FDA approval. Dr. Heinz calls for “targeted funding that allows us to complete additional clinical testing,” urging stakeholders to invest in this life-saving innovation.

As I wrote in “Ending 17 Veteran Suicides Per Day,” the status quo isn’t enough. Therabot offers a path forward—scalable, personalized, and rooted in military culture. To make it a reality, we must advocate for funding, raise awareness, and support research that prioritizes veterans’ lives. Together, we can help Therabot save those who’ve served us so bravely.

For more on veteran mental health and to support initiatives like Therabot, visit www.savinggraceatguantanamobay.com.

Written with the assistance of Grok.

Note: Montgomery J. Granger is a retired US Army Major and educator.

End 17 #VeteranSuicides Per Day: VAGrok Gains Traction with Dartmouth’s AI Therapy Breakthrough

By MAJ (RET) Montgomery J. Granger (Health Services Administration) – Grok assisted

A few weeks ago, I wrote about the urgent need for AI innovation to tackle the veteran suicide crisis—17 of us lost daily, a number that haunts every vet who’s fought the VA’s maze of care. I pitched VAGrok, an AI chatbot to bridge the gaps, remember our stories, and cut through the bureaucracy that leaves too many behind. Since then, I’ve reached out to experts, pitched to my Congressman Nick LaLota (NY-1), and even scored an interview for a book on TBI, PTSD, and the VA disability circus. But today, there’s a new spark: Dartmouth’s groundbreaking AI therapy study, published March 27, 2025, in NEJM AI. It’s not just hope—it’s proof VAGrok could work.

In my last post, I laid bare the stakes: the VA’s continuity of care is a mess. Vets bounce between specialists, retell traumas to new faces, and watch records vanish in a system that’s more obstacle than lifeline. I envisioned VAGrok as an AI “wingman”—a tool with memory to track our care, flag risks, and fight for us when the system won’t. Then came Dartmouth’s Therabot trial: 106 people with depression, anxiety, or eating disorders used an AI chatbot for eight weeks. Results? A 51% drop in depression symptoms, 31% drop in anxiety—numbers that rival traditional therapy. Participants trusted it like a human therapist, and it delivered 24/7 support without the waitlists or stigma.

This isn’t sci-fi—it’s happening. Dartmouth’s team, led by Nicholas Jacobson, built Therabot with cognitive behavioral therapy (CBT) smarts and safety nets: if it spots suicidal thoughts, it prompts 911 or crisis lines instantly. For vets, this could mean an AI that knows your TBI triggers or PTSD flare-ups from last year, not just last week. Imagine VAGrok at Northport VA Medical Center, my proposed pilot site in NY-1: it could sync with VA records, alert docs to patterns, and talk us down in the dark hours when the 988 line feels too far.

The Dartmouth study backs what I’ve been shouting: AI can scale care where humans can’t. Jacobson notes there’s one mental health provider for every 1,600 patients with depression or anxiety in the U.S.—a gap the VA knows too well. Therabot’s not a replacement for therapists, but a partner. For vets, VAGrok could be that partner too—bridging the trust gap with memory the VA lacks. I’ve emailed Jacobson about teaming up; no reply yet, but the pieces are aligning.

Next steps? I’m pushing LaLota to pitch this to VA Secretary Doug Collins—his high-energy drive to fix the VA could make VAGrok a reality. The Dartmouth trial isn’t just data—it’s a lifeline we can grab. Vets deserve care that doesn’t forget us. VAGrok, fueled by breakthroughs like Therabot, could be how we get it. Thoughts? Hit me up—I’m all ears.

Warrior’s Mom & AI

Q & A with a daring and dedicated computer security expert.

Montgomery Granger: Tell me a little about your background and your business.

Tamara Davis: My name is Tamara Davis and I am the CEO of Recon Secure Computing (RSC). We are a woman-owned, American veteran-fueled cybersecurity business which serves both law enforcement agencies as well as the civilian sector.

MG: What’s ‘new?’

TD: RSC is also a software development company and one of our upcoming product launches involves an encrypted communications platform which includes a side-loadable VPN (Virtual Private Network) smartphone client.

MG: What products/services do you sell?

TD: We are currently running NIST (National Institute of Standards and Technology)-compliant cybersecurity solutions. Our newest product launch will include:

ARKEN (a new product name, from the Greek word “archon,” or “The Director”) – a customizable cybersecurity product that includes real-time intuitive AI Incident Response Protocols, daily network traffic report management, customizable internal/external firewall maintenance, and filtering/geofencing of known malware sites with automatic blocking of all unauthorized downloads.

ARKEN will replace our CyberWar Shield suite of products currently available. Our customers include enterprise-sized civilian and law enforcement agencies as well as SOHO (Small Office/Home Office) businesses.

OURweb – Our encrypted mesh-network communications platform includes a side-loadable VPN client and encrypted messaging with encrypted file transfer capabilities. OURweb exists to ensure that your freedom of speech will not be censored by the Woke Censorship Complex. 

MG: How would you explain AI (Artificial Intelligence) bots to a fourth grader?

TD: When someone says “AI bots”, the reality is far less exciting. “Artificial Intelligence” is simply a human-created software program. That’s it! An AI software program uses algorithms for pattern matching, and much like a Google Internet search, it is only as accurate as the data it’s programmed to look for. Poison the data and you get hilariously inaccurate or intentionally limited search results. Garbage in = garbage out. “AI” is not sentient, nor is it intuitive, and it is not taking over the world like some Hollywood movie. AI is a useful tool, but just like any other tool, it can be used for good purposes or it can be misused for harm.
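A toy sketch of Davis's "garbage in = garbage out" point (purely illustrative, not any real product): a pattern-matching program can answer only from the data it is handed, so poisoned data yields confidently wrong answers.

```python
# A toy "AI" that answers questions purely by pattern matching against the
# data it was given. The program is only as accurate as its data:
# poison the data, and the answers go bad too.

def toy_ai(knowledge: dict, question: str) -> str:
    """Return the answer whose stored question shares the most words with the query."""
    words = set(question.lower().strip("?").split())
    best = max(knowledge, key=lambda k: len(words & set(k.lower().split())))
    return knowledge[best]

good_data = {"what is the capital of france": "Paris"}
bad_data = {"what is the capital of france": "Mars"}  # poisoned data

print(toy_ai(good_data, "capital of france?"))  # -> Paris
print(toy_ai(bad_data, "capital of france?"))   # -> Mars: garbage in, garbage out
```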

MG: What are your feelings about AI chat bots, their assets and pitfalls?

TD: “AI” algorithms are already being widely used for various things such as environmental impact studies for construction projects or for optimizing traffic light patterns to ease street congestion. Some court proceedings employ algorithms to assist with finding the optimal sentencing with the goal of reducing recidivism while maximizing crime reduction and avoiding racial biases. The key point to remember when talking about AI is that the software algorithms are only as good as the software code within the program and the data input most likely includes inherently human biases or inaccuracies. Again: garbage in = garbage out.

MG: Is there a future with AI bots? Where is the technology leading us?

TD: The technology is taking us toward a more computer-dependent culture rather than a human-dependent future. This could lead to revolutionary breakthroughs, such as a robotic barista shop that creates the perfect custom-order latte every time, or it could lead to unnecessary or targeted job reductions, where the dystopian scenario of being “fired by a bot” because of incorrect or biased report results becomes the norm. As we are at the very beginning of this software’s introduction to the general culture, it’s difficult to accurately forecast whether the “AI revolution” will do more harm than good.

MG: Who is driving this truck?

TD: Who’s driving this truck? Software developers are. So far, Silicon Valley has been the entity that decides which developers write which algorithms. It’s time to change that paradigm and prioritize our own American-focused software developers rather than outsourcing to the cheapest foreign-based bidders.

MG: What is the best way forward? What technology is most helpful to humans, and how do we maximize the benefits?

TD: The best way forward involves designing a legal framework of reasonable regulations which will hold all the various entities involved to the highest standards. We’re not advocating for yet another bureaucratic nightmare of endless and expensive regulatory burdens, though; we need sunlight and accountability.

MG: Anything else you’d like to say?

TD: Our current Congress doesn’t inspire a lot of confidence that they will be able to skillfully enact such a legislative framework, and until we return to a VOTE EARNING election system instead of just a BALLOT GATHERING election system, there may not be much hope of electing a competent and NON-criminal Congress. Your freedom of speech deserves to be protected from being silenced by what we refer to as “The Censorship Complex.”

Contact: (866) 796-2241. Website: https://ReconComputing.com. Twitter: @warriors_mom

#Cybersecurity #ElectionSecurity #NatSec