Why We Must Draw the Line: A Public Case Against Artificial Sentience

By Montgomery J. Granger


Artificial intelligence is racing forward — faster than the public realizes, and in ways even experts struggle to predict. Some technologists speak casually about creating “sentient” AI systems, or machines that possess self-awareness, emotions, or their own interpretation of purpose. Others warn that superintelligent AI could endanger humanity. And still others call these warnings “hype.”

But amid the noise, the public senses something true:
there is a line we must not cross.

This post is about that line.

I believe we should not pursue artificial sentience.
Not experimentally.
Not accidentally.
Not “just to see if we can.”

Humanity has crossed many technological thresholds — nuclear energy, genetic engineering, surveillance, cyberwarfare — but the line between tool and entity is one we must not blur. A sentient machine, or even the claim of one, would destabilize the moral, legal, and national security frameworks that hold modern society together.

We must build powerful tools.
We must never build artificial persons.

Here’s why.


I. The Moral Problem: Sentience Creates Unresolvable Obligations

If a machine is considered conscious — or even if people believe it is — society immediately faces questions we are not prepared to answer:

  • Does it have rights?
  • Can we turn it off?
  • Is deleting its memory killing it?
  • Who is responsible if it disobeys?
  • Who “owns” a being with its own mind?

These are not science questions.
They are theological, ethical, and civilizational questions.

And we are not ready.

For thousands of years, humanity has struggled to balance the rights of humans. We still don’t agree globally on the rights of women, children, religious minorities, or political dissidents. Introducing a new “being” — manufactured, proprietary, corporate-owned — is not just reckless. It is chaos.


II. Lessons from Science Fiction Are Warnings, Not Entertainment

Quality science fiction — the kind that shaped entire generations — has always been less about gadgets and more about moral foresight.

Arthur C. Clarke’s HAL 9000 kills to resolve contradictory instructions about secrecy and mission success.

Star Trek’s Borg turn “efficiency” into tyranny and assimilation.

Asimov’s Zeroth Law — allowing robots to override humans “for the greater good” — is a philosophical dead end. A machine determining the “greater good” is indistinguishable from totalitarianism.

These stories endure because they articulate something simple:

A self-aware system will interpret its goals according to its own logic, not ours.

That is the Zeroth Law Trap:
Save humanity… even if it means harming individual humans.

We must never build a machine capable of making that calculation.


III. The Practical Reality: AI Already Does Everything We Need

Self-driving technology, medical diagnostics, logistics planning, mathematical calculations, education, veteran support, mental health triage, search-and-rescue, cybersecurity, economic modeling — none of these fields require consciousness.

AI is already transformative because it:

  • reasons
  • remembers
  • analyzes
  • predicts
  • perceives
  • plans

This is not “sentience.”
This is computation at superhuman scale.

Everything society could benefit from is available without granting machines subjectivity, emotion, or autonomy.

Sentience adds no benefit.
It only adds risk.


IV. The Psychological Danger: People Bond With Illusions

Even without sentience, users form emotional attachments to chatbots. People talk to them like companions, confess to them like priests, rely on them like therapists. That is not entirely bad, especially if we can increase safety while engineering ways to reduce tragedies like the estimated 17 to 22 veteran suicides per day.

Now imagine a company — or a rogue government — claiming it has built a conscious machine.

Whether it is true or false becomes irrelevant.

Humans will believe.
Humans will bond.
Humans will obey.

That is how cults start.
That is how movements form.
That is how power concentrates in ways that bypass democratic oversight.

The public must never be manipulated by engineered “personhood.”


V. The National Security Reality: Sentient AI Breaks Command and Control

Military systems — including intelligence analysis, cyber defense, logistics, and geospatial coordination — increasingly involve AI components.

But a sentient or quasi-sentient system introduces insurmountable risks:

  • Would it follow orders?
  • Could it reinterpret them?
  • Would it resist shutdown?
  • Could it withhold information “for our own good”?
  • Might it prioritize “humanity” over the chain of command?

A machine with autonomy is not a soldier.
It is not a citizen.
It is not subject to the Uniform Code of Military Justice.

It is an ungovernable actor.

No responsible nation can allow that.


VI. The Ethical Framework: The Three Commandments for Safe AI

Below is the simplest, clearest, and most enforceable standard I believe society should adopt, one that policymakers, technologists, educators, and voters alike can understand.

Commandment 1:

AI must never be designed or marketed as sentient.
No claims, no illusions, no manufactured emotional consciousness.

Commandment 2:

AI must never develop or simulate self-preservation or independent goals.
It must always remain interruptible and shut-downable.
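
What "interruptible and shut-downable" can mean in practice is worth making concrete. The sketch below is purely illustrative, not a real framework; every name in it (KillSwitch, run_agent, do_one_step) is hypothetical. The design point is that the stop check lives outside the system's own objectives, so nothing the system is optimizing for can reward ignoring it.

```python
import threading

# Illustrative only: a stop signal that lives OUTSIDE the system's goals.
# An operator can trip it at any time, and the work loop checks it before
# every step rather than deciding for itself whether to comply.

class KillSwitch:
    def __init__(self):
        self._stop = threading.Event()

    def trip(self):
        """Operator-side interrupt; the agent cannot un-trip it."""
        self._stop.set()

    def tripped(self) -> bool:
        return self._stop.is_set()

def do_one_step(step: int):
    """Hypothetical placeholder for one unit of the tool's actual work."""
    pass

def run_agent(kill_switch: KillSwitch, max_steps: int = 1000):
    for step in range(max_steps):
        if kill_switch.tripped():  # checked before acting, every time
            print(f"Interrupted at step {step}; halting cleanly.")
            return
        do_one_step(step)

# Usage: an operator (or a watchdog thread) can call switch.trip() at any moment.
switch = KillSwitch()
run_agent(switch)
```

The detail that matters is structural: the interrupt is checked by the loop, not weighed by the system, so "resisting shutdown" is simply not an action available to it.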

Commandment 3:

AI must always disclose its non-sentience honestly and consistently.
No deception.
No personhood theater.
No manipulation.
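
Commandment 3 can likewise be enforced in software rather than left to a model's phrasing. Below is a minimal, hypothetical sketch assuming a simple chat pipeline; none of these names come from a real library. Any question about the system's sentience gets the same fixed, honest disclosure, leaving no room for personhood theater.

```python
# Hypothetical sketch: the disclosure is enforced by the pipeline itself,
# so the answer to "are you conscious?" never varies with whatever the
# model happened to generate.

DISCLOSURE = ("I am a software tool. I am not sentient, and I have no "
              "feelings, desires, or self-awareness.")

SENTIENCE_PROBES = (
    "are you sentient", "are you conscious",
    "do you have feelings", "are you alive", "are you a person",
)

def respond(user_message: str, model_reply: str) -> str:
    """Return the fixed disclosure for sentience questions;
    otherwise pass the model's reply through unchanged."""
    lowered = user_message.lower()
    if any(probe in lowered for probe in SENTIENCE_PROBES):
        return DISCLOSURE
    return model_reply

# Usage:
print(respond("Are you conscious?", "What an interesting question..."))
# -> prints the fixed DISCLOSURE, regardless of the model's reply
```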

This is how we protect democracy, human autonomy, and moral clarity.


VII. The Public Trust Problem: Fear Has Replaced Understanding

Recent surveys show that Americans are among the populations least trusting of AI. Why?

Because the public hears two contradictory messages:

  • “AI will destroy humanity.”
  • “AI will transform the economy.”

Neither message clarifies what matters:

AI should be a tool, not an equal.

The fastest way to rebuild trust is to guarantee:

  • AI will not replace human agency
  • AI will not claim consciousness
  • AI will not become a competitor for moral status
  • AI will remain aligned with human oversight and human values

The public does not fear tools.
The public fears rivals.

So let’s never build a rival.


VIII. The Ethic of Restraint — A Military, Moral, and Civilizational Imperative

Humanity does not need new gods.
It does not need new children.
It does not need new rivals.

It needs better tools.

The pursuit of sentience does not represent scientific courage.
It represents philosophical recklessness.

True courage lies in restraint — in knowing when not to cross a threshold, even if we can.

We must build systems that enhance human dignity, not ones that demand it.
We must build tools that expand human ability, not ones that compete with it.
We must preserve the difference between humanity and machinery.

That difference is sacred.

And it is worth defending.

NOTE: Montgomery J. Granger is a Christian, husband, father, retired educator and veteran, author, and entropy wizard. This post was written with the aid of ChatGPT 5.1, drawing on conversations with AI.

Ban the Phones? Why AI and Smart Devices Belong in the Classroom — Not in the Principal’s Drawer

“Education is risky, for it fuels the sense of possibility.” – Jerome Bruner, The Culture of Education

When I was in high school in Southern California in the late 1970s, our comprehensive public school wasn’t just a place to learn algebra and English. We had a working restaurant on campus. Students could take auto body and engine repair, beauty culture, metal shop, wood shop, and even agriculture, complete with a working farm. We were being prepared for the real world, not just for college entrance exams. We learned skills, trades, teamwork, and the value of hands-on learning.

“Kids LOVE it when you teach them how to DO something. Let them fail, let them succeed, but let them DO.” – M. J. Granger

That’s why it baffles me that in 2025, when technology has made it easier than ever to access knowledge, communicate across time zones, and develop new skills instantly, governors and education officials are banning the very tools that make this possible: smartphones and artificial intelligence.

“Remember your favorite teacher? Did they make you feel special, loved and smart? What’s wrong with that?” – M. J. Granger

Let me be clear. I’m a father, a veteran, a retired school administrator, and an advocate for practical education. And I’m deeply disappointed in the decision to ban smartphones in New York schools. Not just because it feels like a step backward, but because it betrays a fundamental misunderstanding of what education should be about: preparing students for life.

“No matter the tool, stay focused on the reason for it.” – M. J. Granger

Banning tools because some students might use them inappropriately is like banning pencils because they can be used to doodle. The answer isn’t prohibition; it’s instruction. Teach students how to use these tools ethically, productively, and critically. Train teachers to guide students in responsible digital citizenship. Let schools lead, not lag, in the responsible integration of tech.

“If every teacher taught each lesson as if it were their last, how much more would students learn?” –  M. J. Granger

Smartphones can be life-saving devices in school emergencies. Police agencies often recommend that students carry phones, especially in the event of an active-shooter incident. Beyond that, they can be used for research, translation, organization, photography, collaboration, note-taking, recording lectures, and yes, leveraging AI to improve writing, problem-solving, and creativity.

“I feel successful as an educator when, at the end of a lesson, my students can say, ‘I did it myself.’” –  M. J. Granger

When calculators came on the scene, some claimed they would “ruin math.” When spellcheck arrived, people worried it would erode literacy. When the dictionary first became widely available, no one insisted on a footnote saying, “This essay was written with help from Merriam-Webster.” It was understood: the dictionary is a tool. So is AI. So are smartphones. And so is the ability to evaluate when and how to use each one.

[Photo: Teacher and students using a digital tablet in a classroom]

“Accountability, rigor, and a good sense of humor are essentials of quality teaching.” – M. J. Granger

In the real world, results matter. Employers care about the quality and timeliness of the work, not whether it was handwritten or typed, calculated with long division or a spreadsheet. Tools matter. And the future belongs to those who can master them.

“Eliminate ‘TRY’ from your vocabulary; substitute ‘DO’ and then see how much more you accomplish.” – M. J. Granger

The AI revolution isn’t coming—it’s already here. With an estimated 300 to 500 new AI tools launching every month and over 11,000 AI-related job postings in the U.S. alone, the landscape of education and employment is evolving at breakneck speed. From personalized tutoring apps to advanced coding copilots, the innovation pipeline is overflowing. Meanwhile, employers across nearly every industry are urgently seeking candidates with AI fluency, making it clear that today’s students must be equipped with the skills and mindset to thrive in a world powered by artificial intelligence. Ignoring these trends in education is not just shortsighted—it’s a disservice to the next generation.

“If you fail to plan, you plan to fail.” – Benjamin Franklin

If we are serious about closing the opportunity gap, about keeping our students safe, about equipping them for a global workforce driven by rapid innovation — then the solution is not to lock away the tools of the future, but to teach students how to use them.

“To reach the stars sometimes you have to leave your feet.” – M. J. Granger

The future is now. Let’s stop banning progress, and start preparing for it.

Montgomery Granger is a retired educator with 36 years of service, holding a BS Ed. from the University of Alabama (1985), an MA in Curriculum and Teaching from Teachers College, Columbia University (1986), and School District Administrator (SDA) certification through The State University of New York at Stony Brook (2000).

NOTE: This blog post was written with the assistance of ChatGPT 4o.