When computers run human thinking

Analysis and Opinion

By Joe America

We are explorers, each and every one of us, moving forward, because there is no backward, into the era in which computers do human thinking. The flood has begun, and already we find it hard to tell who wrote that: a human or a machine? Pictures and videos are upon us like a storm, looking real but totally fictional.

Knowledge can come from either a human or a computer, I suppose. Knowledge can be history or invention or calculations or well-organized language. I asked ChatGPT to help me write a book about competence aimed at Filipinos and taking the view of a father teaching his Filipino son. I set up the framework using the three pillars of competence: commitment, attention to detail, and accountability. Pow. The output for a general book outline popped out and it was brilliant: three possible approaches, historical, anecdotal, or thematic. Generated in seconds.

I picked thematic and, pow, soon had a more detailed thematic outline. Terrific ordering of content in great insightful sections. On to Chapter One. Pow. Positively brilliant, expounding on the difference between short-lived emotional motivation and enduring commitment, throwing in Filipino phrases to make it personal, and building a clear, readable few pages. I tweaked a few lines to make references to my son realistic and onward we went to Chapter Two.

Pow. Positively brilliant.

But two problems arose. Big problems.

My part in the content had diminished from Chapter One’s 10% to about 3%. And the references to Filipino terms were awkward as they were new to me. Was I supposed to pass them off to others as my knowledge?

I stopped writing. I’d hit an ethical wall, a personal ethical wall. I was generating great material, or, more correctly, the machine was. But I wasn’t being honest. The machine was generating the material and it was getting the material somewhere else, from someone else. I was taking commissions from the contractor, if you catch my drift. I was stealing ideas. And they were brilliant.

They just were not mine.

So I won’t continue with the book.

Am I denying some Filipinos the ability to read about competence to improve their skills? Could the book with deftly stolen material have HELPED someone?

I shrug. That’s their problem. Mine is living with myself.

I fully expect to retreat from public dialogue over time. I don’t want to interface with machines. I don’t care to be plugged into a Borg hive where increasingly the machine’s stolen knowledge is coming down the pipe, all-reaching, pristine, and impeccably organized.

Contributor LCX once wrote that I have Pollyanna tendencies as a believer in goodness as a bias. Soft. Weak in the face of power as currency.

Yes.

I suppose so.

And as we rush headlong into AI-generated surroundings, I already miss the flaws of poor logic, bad grammar, and typos. I already miss the honest soul of human emotions rubbed raw, and I recoil at the cold calculations of machines that stole the intelligence that belonged to others. The machines have no conscience.

How can you trust a computer?

If I go missing, look out back in the garden. I’ll be the tall guy talking to his plants.

_________________________

Cover photo from the CBR article “A Complete Timeline of the Borg in Star Trek”.

Comments
38 Responses to “When computers run human thinking”
  1. arlene says:

    I only tried AI once Joeam, that’s the end of it.😘

    • JoeAm says:

      Haha, yes. I totally understand. It’s unnerving because it’s not really us, it’s a dry calculating extension of us. My problem is that I don’t think we can escape from BEING USED by AI. Already my FB is swamped with the stuff, writing help, pictures, videos. Plus I fully expect my blog materials to enter the computer’s realm of thinking whether I want it to or not.

      • kasambahay says:

        AI still needs a human proxy, someone to take the credit, or the fall, or to act as a point of reference.

        books have been written by AI, but a proxy human author is still needed to do the book signing, to promote the book, and if necessary to give lectures or talks to wannabe book writers at events like writers retreats and book fairs attended by writers, authors, publishers, literary agents, book sellers and their likes.

        writing for profit is big business and there is a contract to be signed between the author who may use AI and the publisher. the contract is rigid as publishers and authors are careful of their reputation. a good reputation ensures longevity in the business and stable income; a bad reputation can severely shorten both.

        authors who use AI take responsibility for all the errors committed and often say that those who feel they have been quoted but not given credit can write to the author and they will be given the credit due them.

        all is well that ends well.

  2. arlene says:

    That’s it precisely, it seems everyone is using it.

    • kasambahay says:

      they are using AI for very good reason. anyone who doesn’t like what AI is doing can refine their command and AI will do likewise. if you think AI got the better of you, it is maybe because someone told it to. and if you want to fight wrongful AI, you may have to be like the president’s spox, castro. she is never quiet on errant subjects and is quite vocal about it. for many of us who don’t have the platform castro has, we can always use AI to fight a malingering AI, lol! the more expensive and current your AI is, the better the performer.

  3. Joey Nguyen says:

    For those who believe in the Christian God, the greatest gift God gave to humanity is the gift of reason. The Ancient Greeks believed Prometheus, pitying humans for not having the natural defenses of fur, teeth or claws, defied his fellow gods and gave humans knowledge. The Egyptian god Thoth gifted humans with writing, and thus knowledge.

    As the Fourth Industrial Revolution gains speed, there are human would-be gods, also known as tech moguls, who now hoard knowledge, dribbling it out piecemeal, pre-sorted, biased by nature. Thought is firewalled by algorithms. Critical thinking is removed when AI machines do one’s thinking. These are forms of control, and much like high-control cults, reliance on thinking machines has often become a religion of belief. How ironic that the ancient religions, in their truest form, emphasized not blind belief, but a moral framework to guide human self-thinking. Human destiny is taken out of one’s hands when another tells one exactly what to think, whether that be a high-control cult pastor, a tech mogul’s algorithms, or an authoritarian strongman.

    How is one’s destiny changed? Destiny is changed with knowledge. How is knowledge gained? By reading and comparing one’s own conclusions with others.

    • kasambahay says:

      I don’t know about that, joey. but when the south african author I used to admire died, his family, literary agent and his editor raided the dead author’s cabinet and bought all the unfinished manuscripts. the author, whose books were constantly on the best seller list worldwide, is on the best seller list again, posthumously, coz apparently his fans cannot get enough of his works. that’s the reason given by the publisher, and so the dead continue to churn out best selling novels and his family continues to rake in money from book sales. when I read the dead’s latest book, it took me a while to realise the book was actually written by AI trained on the dead’s syntax. the book was very good and has all the attributes that made the author outstanding when he was alive, but this time I did not send an email praising and thanking the author for a job well done; I don’t feel comfortable praising AI. someone told me to address the email to the publisher instead, but still I did not.

      well, if americans can let the long dead nat king cole come alive and sing a duet with his daughter natalie cole in the song, unforgettable, and the song winning a grammy award . . . I suppose it is okay and acceptable, but I feel it is a bit creepy we are being sung to by the dead!

      bringing back the dead, cryogenics in another dimension, is the future of humanity.

    • JoeAm says:

      That nails it. There is no moral dimension to AI other than acquire data and align it to form the best reinterpretation of the data, fit to the request. If the request is deceptive, malicious, or criminal, the data will be fit to advance those goals. We are witness to the softening of moral codes and the normalization of evil. AI will speed this process.

      • kasambahay says:

        laws are lagging behind. though AI can be useful, it is yet to be regulated; it should not be given free rein and allowed to obliterate humanity.

        deep fakes are troubling, and can be used as an excuse for bad behavior, with humans claiming it was not them but deep fakes doing the wrong thing in social media.

  4. Not even music is safe anymore. Last week, the two singers (Gian Bernardino and Gabriel Fernandez) of Cup of Joe, the most popular Filipino group this year or maybe even of the past 5 years, found that a rock cover of one of their songs dominated the Philippine Spotify charts. What was shocking was that it wasn’t just a cover version, it was an AI-generated cover version, something pretty common these days, unfortunately. Their label Viva Music had it taken down, which is good.

    As for AI as a power tool to gain new knowledge, I take it with a grain of salt, especially since, unlike for instance Wikipedia, which has very strict rules on citations that make it possible to verify the sources of information as in an academic article, AI often doesn’t cite at all. I have had cases where I asked AI tools themselves for examples of things they had claimed and they didn’t give any, almost as if they hadn’t claimed something was this way or that just a question ago.

    Half a Millennium after Magellan

    The above article was almost two years in the making in total, with early research starting when MLQ3 started to tweet Pigafetta’s diaries and I happened to be in Sevilla in 2019 on an Andalusian vacation tour, rushing through the Magellan exhibit during a skipped lunch break, rushing because we had a set time to be back where the tour bus had dropped us off to be taken to the next destination. Then came around two months of intense writing, redrafting and re-research from New Year 2021 to almost March 16, 2021, after work every day, as it was lockdown and I barely left the house. The article could have been 30 pages long in Word or more, but I kept restructuring it until it became the final version of 9 Word pages.

    I asked ChatGPT to give me tips on improving the structure and they were sound. Then it asked me if I wanted it to give me a sample of a rewritten section, and I told it to go ahead. This came out:

    “The Philippines Today: Legacies and Living Contradictions

    Half a millennium after Magellan’s ships dropped anchor off Samar, the Philippines remains a country still negotiating the legacies of its long entanglement with empire. The old hierarchies of power, land, and privilege that took shape under colonial rule continue to cast long shadows over modern life. Political independence did not dissolve those structures—it merely gave them new names and faces.

    Economic and social inequalities remain deeply entrenched. Land ownership, for instance, is still heavily concentrated among a small elite, despite decades of agrarian reform programs. Urbanization and labor migration have offered escape valves, but not genuine transformation. Many Filipinos leave for work abroad, not because of choice, but because the domestic economy cannot sustain broad-based opportunity. The same pattern of extraction and dependency that began in the galleon trade persists today—only now the commodities are people, remittances, and outsourced labor.

    Institutions have likewise struggled to mature beyond colonial blueprints. Laws are often sophisticated on paper but unevenly applied. Justice and protection remain selective privileges, tilted toward those with access and influence. Political dynasties dominate local and national governance, maintaining a feudal grip on state resources. The ordinary citizen, meanwhile, learns to navigate power through personal connections and patronage rather than trust in public institutions.

    Education—a supposed instrument of emancipation—too often replicates inequality instead of overcoming it. Public schooling remains underfunded and uneven, producing graduates who are industrious but ill-prepared for the complexities of global citizenship. Critical thinking and civic engagement are undervalued compared to rote learning and credentialism. The result is a population highly literate in form but insufficiently empowered in substance.

    There is also a cultural dimension to these institutional failings. Centuries of colonial domination fostered habits of deference and dependence, while suppressing collective introspection. Filipinos are quick to adapt and resilient in hardship, yet often reluctant to confront structural injustice head-on. We celebrate survival, but seldom question the systems that make survival a necessity. This is not a moral flaw so much as a historical inheritance—a defense mechanism formed under coercion, now hardwired into the national psyche.

    Still, all is not static. There are currents of renewal—grassroots movements for land and environmental justice, artists and educators cultivating critical awareness, youth networks reclaiming historical memory and digital space. These efforts, though fragmented, signal a growing impatience with the old fatalism. They suggest that the “half millennium after Magellan” is not only a measure of endurance, but a moment to rethink what freedom and nationhood could mean in the twenty-first century.

    The task ahead is not merely to commemorate history, but to outgrow it. True independence requires more than sovereignty—it demands self-knowledge, institutional courage, and a willingness to imagine beyond inherited limits. The first circumnavigation connected the archipelago to the world; perhaps the next great voyage must be inward, toward a deeper reckoning with who we have become and who we still wish to be.”

    yeah, great stuff, it is almost as if the machine understood what I was trying to say and airbrushed it, smoothed the rough edges. My reference to the ghosts of the past and Trese was completely removed. Though the last paragraph is not bad, I wonder how many in the Philippines of receding reading comprehension (an issue in most Western countries as well by now, though some countries have recently banned mobile phones in school classes, a good move) will actually GET what those high-sounding words mean, as many will already use AI to shortcut the actual learning process one goes through when writing essays.

    Half of the process in my case was the two years of less intense and two months of intense research/writing, as I learned a lot more than I was able to express in that article. It was like taking the stairs up from the subway to the surface, something I had to do recently as the escalator at the subway station where I got off wasn’t working. Everything we invent to make our lives easier can also make us lazier, and getting lazy can be bad for the body and the mind.

    I am now imposing socmed-free days on myself. There is a certain clarity won by that, I have noticed. Dr. Gideon Lasco, who likes to climb mountains, said that there is time to do what he and his group of enthusiasts do: just take some of the hours people in Metro Manila spend on social media and go up nearby mountains instead. Oh, I will be up some hills, not yet mountains, pretty soon. Autumn is a great time for that over here.

    • kasambahay says:

      it is cheap and costs next to nothing, the AI generated cover. for spotify to quickly accept the scam cover without much ado makes one think there is maybe something wrong with spotify.

  5. Karl Garcia says:

    You can still have your own version and use ChatGPT as a reference.

    • JoeAm says:

      Yes, it depends on how much effort you wish to spend on it. I was looking for a short cut and didn’t like the result. But AI can be a useful tool, for sure.

    • kasambahay says:

      ah, that is why many college students fail their academic essays, maybe because they can barely use AI as a reference. there are certain referencing styles acceptable to universities and colleges, and AI is mostly outside of them. so far as I know, tertiary and post grad students can only reference accredited articles in a published journal, citing the date, the year, the pages, the title of the article and the volume. get the order wrong and that is minus a point.

      though for our own personal use, AI has no limit summat. but once released for wider readership, AI may hit a snag as readers are fastidious and more than happy to point out loopholes and missing links.

      • Karl Garcia says:

        Then go back to Google.
        Students must be taught to double-check and triple-check all cited sources, even AI.

        • Karl Garcia says:

          ChatGPT double-checks and fact-checks even its own mistakes if you ask it to, then apologizes if a glaring mistake occurs.

  6. Karl Garcia says:

    What happens next is up to the editor in chief.

    Wikipedia’s chief editor was said to reject many new wiki articles that were AI-written.

    • JoeAm says:

      That must be a challenge. I took note of a study that concluded that AI output decreases in accuracy as it incorporates social media content in its training. Just like humans.

      • And I have heard in my field of work that AI or LLMs (large language models) work best with highly specific and curated data. There are already technical support (and documentation) tools like JIRA by Atlassian that offer built-in AI, meaning you don’t just text-search over existing cases and FAQs, you can get quick answers. If the data it is based on is accurate, I can imagine help desks – or even service technicians looking for very specific information – saving a lot of time.

        Looking at the three AI tools I am most familiar with: Grok seems crazy for a reason, as it bases a lot of its conclusions on X; Gemini is conservative; and ChatGPT appears to be very powerful, but sometimes it is hit and miss. I could imagine an AI that feeds on digital library content in specific domains – history, computer science, even digitalized archives – being a powerful tool for researchers or their assistants. The adage that a tool is a tool – and that a fool with a tool is still a fool – always applies.

        • JoeAm says:

          That comparison seems spot on to me. In terms of specific data feeds, another book project that I noodled with was having ChatGPT compile a book using TSOH articles as content. It can do that but I have to compile a table of 150 articles that will best fit into the target subject matter. That’s a rather daunting task so I haven’t gone forward with it. But it does seem to want specific data inputs.

          • Karl Garcia says:

            Gian and Irineo thought of doing a mini version of your idea, but due to time constraints it is still on the drawing board.

            • JoeAm says:

              Mine would have been a historical review through different periods, from Arroyo to now, as seen through the blogs and attached comments. I doubt there is much of an audience for it. However, there likely is an audience for “competence”, maybe as a YouTube series rather than text. I think people don’t read much long-form text these days. They get exhausted by the short-form info.

              • kasambahay says:

                there will always be people who may read your historical review through different periods because of nostalgia. it may even make good reading material at private clinics and private hospitals for people to while away the time while waiting for their appointments. a condensed version for people to read at airports while waiting to board their flights. also as bedtime reading at hotels for guests who don’t easily fall asleep, instead of reading the courtesy bible provided.

                • JoeAm says:

                  I would encourage them to use AI to do a simple search of joeam.com using the prompt “What does JoeAm think about (subject), including perspectives from commenters”.

  7. Joey Nguyen says:

    Even editors of the US newspaper of record have been subsumed by the machine, so what does that say about most other people who do not have the same privilege of education and training?

    https://x.com/willystaley/status/1979022491164049740

    “Maybe this is an artifact of Elon’s stewardship of this website, but the “No Kings” stuff — the language, the iconography, even the notion that Trump wants to be a *monarch* — is just totally foreign to me. I don’t know where it comes from. Definitely not here.”

    The No Kings protest coalition leadership has stuck by the conservative 7 million turnout number, though there are estimates that put the actual Saturday turnout at north of 13.5 million at its peak. Yet the truth is hardly reported in the “news,” and literal livestreams are dismissed as old footage or, worse, fake.

    When there is knowledge breakdown, what might have been a tool in a former time may be worshipped as a divine object with magical properties. AI is being pushed as a divine, infallible thing, when it’s a crappy tool at best for a well-organized thinking process. Human reason broke out of the metaphorical Plato’s cave long ago, yet in a time when there has never been so much available knowledge at our fingertips, there are so many who willingly go back to the cave, holding out their necks and ankles for the shackles, just so they can watch shadows flicker upon darkened walls. In the face of willful or unwitting gaslighting, believe your own eyes and ears.

    • JoeAm says:

      A perfect statement of the way information is gamed. I’d merely add to my article’s conclusion that computers have no conscience. Nor do a whole lot of rich people.

  8. Karl Garcia says:

    Fortunately, book reading is being revived by booksellers allowing people to read some of the books.

    • JoeAm says:

      I don’t think that will help much. Bookstores in the US are struggling with falling sales. Young people don’t read books like we did; books are like LP music albums, now gone in favor of streaming. Netflix is today’s book vendor, in virtual form.

      • Karl Garcia says:

        A pebble dropped in a bucket, or worse, a drop in the ocean. But here in PH the LP players are also making a comeback.

        Maybe it is a niche market thing.
        That big ship that is also a library must make many stops around the world.

        At least an effort is being made.

      • kasambahay says:

        the richest dead singer in the world, elvis presley, is still raking in millions even now that he is long dead and buried. so is tina turner. people impersonate them, play their songs, and mount musicals starring vibrant and energetic young ones. les misérables is still drawing audiences worldwide at each theater production long after its author was laid to rest.

        likewise, dead writers and authors of fiction are still making millions long after they are dead, with new novels published regularly, thanks to AI and some collaborations. famous authors never die; they have lifelong fans and readers voraciously reading their books. and some of their AI-enhanced newer novels are even turned into movies.

        young ones may not read books, but they are ardent movie goers and buyers of movie paraphernalia.

        we also have audio books for people who prefer to listen instead of reading. we have many choices. the world is our oyster.

    • Karl Garcia says:

      I asked ChatGPT for “his” perspective

      That’s a great question — and it’s an important one, because this study strikes right at the tension between trust and technology in the age of AI-mediated information. Here’s my perspective:

      💭 1. The findings are unsurprising — but deeply concerning. AI assistants are not built as real-time journalists. They are probabilistic language models trained on past data, not active fact-checkers. So it’s expected that they’ll make mistakes about fast-changing news — especially events, deaths, or policy updates. What’s concerning is the scale: nearly half of answers having significant errors shows that users might over-trust these systems for something they’re not designed to do — act as live news anchors.

      ⚖️ 2. The main issue isn’t just “wrong answers” — it’s false confidence. AI assistants often deliver information persuasively, even when wrong. Unlike a search engine that shows multiple sources, an assistant synthesizes them into one authoritative-sounding voice. That single-voice illusion of authority makes misinformation harder to detect — especially if users don’t double-check.

      🧩 3. This reveals a design gap between “knowledge models” and “information flows.” There’s a fundamental mismatch between static models (trained on large data snapshots) and the dynamic nature of news. Real-time information flows require verification, timestamping, and source tracking — things most LLMs are not yet structured to handle at scale. A hybrid model — combining LLM reasoning with verified, timestamped news APIs — would be a more trustworthy path forward.

      🌍 4. The stakes are higher in places with weaker media literacy or disinformation safeguards. In countries like the Philippines, where social media already dominates news consumption, the risk is amplified. If AI assistants amplify outdated or incorrect stories, that could shape public opinion before journalists or fact-checkers can intervene. AI could unintentionally become a new vector for “soft misinformation” — errors not born of malice but of poor information hygiene.

      💡 5. What needs to happen next. Transparency: assistants must clearly indicate data cut-off dates, source links, and confidence levels. Collaboration: AI companies should work with newsrooms and public broadcasters (like the BBC, GMA, ABS-CBN, etc.) to access verified feeds. Media literacy: users need to treat AI output like any other news source — question it, verify it, and seek corroboration. Accountability: policymakers should start defining standards for AI-generated news summaries, just as there are for broadcast or print accuracy.

      🧭 My bottom line: AI assistants are powerful tools for synthesis, but poor substitutes for journalism. They can accelerate access to information but must remain transparent about their limits. The real danger isn’t AI “lying” — it’s humans assuming it can’t.

      Would you like me to add a short reflection section connecting this issue to media trust and democracy in the Philippines (e.g., AI misinformation and elections)? It would make the analysis more locally grounded.

      • Karl Garcia says:

        “Persuasive answers even when wrong”

      • JoeAm says:

        Wow, “nearly half of answers have significant errors”. The risk is amplified in the Philippines with weak data checks. AI can accelerate information but isn’t a substitute for journalism.

        The positive is that you’d never get a human to be that honest about his failings.

    • JoeAm says:

      Yes, they scrape whatever is out there, even mistakes.
