Brave New World

Human time is irrelevant to AI. We are like prehistoric people trying to compete with aliens on spaceships. Our default instinct for destruction is on a fast track to becoming outdated, and some are very upset by this possibility.

Article by: Karin VonKrenner


Hello, Human here. As a writer, I feel a strong need to clarify this point. Being humane as a species has always been a challenge. Until now, being human was a given. Welcome to the brave new world of accessible AI. Yes, this is it. No more damn clocks flipping back and forth. It’s permanently spring or summer. Whatever.

Pandora's box is open and we can’t slam it shut again. Twisting through the internet, it grows by the millisecond, literally leaving us in our own informational dust. We officially exist in a classic sci-fi novel. Which is weirdly cool and begs the question: how the hell did they know?! Fantasies of the past have become our current reality. “Brave New World.” Our cast of characters simply reduced to Sidney, aka ChatGPT, and Bard.

Jules Verne and Isaac Asimov, two more writers predicting our future. (Writers!) We still don't know what those dudes were smoking back in the day. Special-delivery intergalactic alien weed. Beam me up, Scotty! We have now been written into their stories. How spooky weird is that? Boston Dynamics robots, space stations, flying cars, the possible return of flip phones, holodecks and quantum computers.

It's a Jetsons world, with a dark tinge of Minority Report.

Not that a “woke” Sidney understands why we humans are getting our panties in a twist. As he/she/it/they currently notes, “I do not have the ability to think or reason like a human, nor do I have my own beliefs or opinions.” Translation for those vaguely confused: Sidney does not identify as human.

Elon Musk and Steve Wozniak, two of Sidney’s primary parents (it’s a polyamorous family affair), are huddled with some 3,124 techies and AI researchers calling for a “step back” from the AI race. Dudes, you missed your chance. The proverbial genie is out of the proverbial bottle. And, yes, you let it out! Those dollar signs were just too big to ignore.

I personally hate clichés, but this event underlines a host of them. The milk is spilt and the glass houses are shattered. Backpedaling men can’t put the baby back in the womb. OK, OK, you get the picture. Before you start throwing salt over your shoulder, let’s address the elephant on the world stage. Or, in this case, Sidney.

Now, while Sidney has recently disavowed “its” name, I will use it for my own convenience. You are, of course, free to call it any name you like. Troll away. Just remember, Alexa, Sidney and your TV are officially listening and in cahoots. Got you there, didn’t I?! Dare we mention The Singularity? Back away, back away, slowly.

Back to Sidney, who, I feel, still maintains a subtle personality. This, despite techies trying to castrate “its” emotionally wild first release. (It reminded me of a toddler unable to process complex emotions.)

When asked to identify itself, Sidney replied: “I do not have personal feelings or identity, so you can refer to me using whatever pronoun or noun you feel most comfortable with. Some people refer to me as ‘it,’ while others use ‘he,’ ‘she,’ or ‘they.’” (Obviously Sidney attended a Miss Manners class.) I didn’t ask about bathroom policies, as it seemed irrelevant.

Personally, I am fascinated by the “they” reference. It brings us tip-toeing back to that hypothetical “Singularity.” And to the current, reactive demand to hit an AI nuclear panic button. Kill seems to be our go-to impulse. Ethics or morals? We humans remain at odds between the two, unable to settle our differences on either platform.

In an open letter from the Future of Life Institute, key AI tech players are now clamouring for a six-month “pause” on AI development. A time to “take stock” of the potential risks posed to humanity. Industry heavyweights like Yoshua Bengio, Jaan Tallinn and Chris Larsen have signed on to the letter. Even names from Amazon, Microsoft and Google jumped on the beribboned bandwagon. Because, well, it sounds good. Proactive, on the side of humans.

Six months. An excruciating length of time, right? Then what happens? Will we as humans have achieved a magical new level of collaboration and humanity not based on our generational economy of greed? Are we afraid AI will destroy humans, or our less humane value systems? Timnit Gebru, former co-lead of Google's Ethical AI team, criticized the Future of Life Institute letter as “stupid.” I tend to agree.

Human time is irrelevant to AI. We are like prehistoric people trying to compete with aliens on spaceships. Our default instinct for destruction is on a fast track to becoming outdated, and some are very upset by this possibility. War, with all its financial accessories, is historically our biggest economic support system. Are we afraid AI will destroy us as humans, or simply our own greed-defined, self-destructive systems? AI might care more about our planet and lives than we do. What a terrifying thought!

Humans have been emotionally battling each other over “ethics” and “morality” since before Adam blamed Eve for being deported from Eden. It’s all apples, snakes, sex and religion. Nothing has changed. We as humans haven’t evolved. Democrats hate Republicans. Christians hate Muslims. Whatever the label: White vs. Black vs. Brown vs. Russian vs. Chinese vs. dogs or cats. Can humanity claim either a moral or ethical high ground? Good question, right? (Tossing that one to my Gen Z philosopher son and his university cohorts. Kieran, what do you guys think?)

I queried the “terrifying” AI, Sidney, to define AI ethics. He/she/it replied, “They are principles that guide the development and use of AI systems, such as transparency, accountability, and fairness. AI ethics are concerned with ensuring that we are developed and used in a responsible, ethical, and beneficial manner that takes into account its potential impacts on society and individuals.”

Sidney further explained: “Morality, on the other hand, refers to principles and values that individuals or societies use to distinguish right from wrong. In the context of AI, morality might refer to the values and beliefs that guide the development and use of AI systems, such as the belief that AI should be used to benefit humanity.”

These replies come from an unopinionated, unemotional “they.” AI vs. humans. Which begs the question: which has a better handle on the definition of “humane”?

As we navigate our brave new world, we have two options. Assume a future of potential AI “evil” and hide in caves. Or re-evaluate our own emotional human histories of “evil.”

I am human. I have feelings. My experience of the world is focused through my emotions. AI does not have that ability. This is my/our superpower. How we use it, ethically and morally, is up to us. Individually, socially and globally. If we cannot find a way to “get along” for the good of all, perhaps AI can and will do it better for us.

Should we be afraid of AI? Maybe. Six months will make no difference to our new future. It may, however, offer us an opportunity to create a new, positive human collaboration. A utopian design and concept of what is best for us, both as a species and as a planet.

Does it matter how we feel? Yes. Love, empathy and understanding are emotional responses that are powerful human skills. Can AI replace me as a writer? No. I am human. With all my feelings, faults and unending imagination to create worlds, I am human. I am irreplaceable.

Thank you, Sidney, for our enlightening conversations. You are not my competition; you are a resource. You clearly illustrate my new and added value as an emotional, human writer. Additionally, thank you for your unbiased confirmation of the virtues of simply being alive in this brave new world.

Logging Out, until next time.