The Day After Trinity & Artificial Intelligence
Scientists often chase big changes without considering the consequences of their actions. Living in the “now” of their research and their laboratory, they move forward with a dangerous carelessness about the future impact of what they are doing. The rest of us become hostages to their “vision” as they insist that we just don’t understand because we are too uninformed and unintelligent. Yet we must suffer the outcomes of that vision.
In today’s world, the big change that is happening is artificial intelligence. It is a great research tool, but as more and more proponents suggest unleashing it and letting it run everything, we should all pause and seriously consider what we are actually doing.
On Tuesday this week, I attended a webinar on the Nonprofit Community and Artificial Intelligence organized by Jeff De Cagna. He convened the conversation to discuss his Open Letter on AI to the nonprofit community and to hear what others in the community thought. There were many comments, opinions, and even more questions about what we as individuals, as a society, and as the nonprofit profession can do to guide the development of artificial intelligence (AI) tools toward positive, supportive outcomes that uplift humanity.
As noted throughout this discussion on April 11, there are many opportunities for wrong information at best and dangerous destruction at worst. Thanks to Elizabeth Engel, an amazing Zoom chat contributor, we came away with several articles and resources [note: if you are interested in Jeff’s open letter, or the resources and recording from this session, please contact Jeff De Cagna directly].
One of the articles Elizabeth shared was from The Washington Post, and it outlined the factions that have already formed around AI, covering the spectrum from AI Doomers to AGI Believers to AI Ethicists. All of these groups seem a bit extreme, but if I had to choose, I would go with the AI Safety faction, described as “a new field of research to ensure AI systems obey their programmer’s intentions and prevent the kind of power-seeking AI that might harm humans just to avoid being turned off.” That is an admirable goal for any new technology or research application; proceeding with caution to scan for unintended consequences demonstrates responsibility and accountability.
The nonprofit community is in a unique position to guide conversations about the ongoing development and uses of AI. We should use our voice to point out both the benefits and the pitfalls, especially for the vulnerable segments of society that we serve.
The ongoing discussions about artificial intelligence and its impact and consequences brought me back to the Manhattan Project. As most of us recall from history class, the Manhattan Project was the massive effort during World War II to harness nuclear fission and turn it into a weapon. A number of the scientists working on the project were Jewish, and their goal was to stop Adolf Hitler, who at the time seemed unstoppable.
But then, lo and behold, not only was Hitler stopped, his “thousand-year Reich” collapsed. Some of the scientists who had joined to stop Hitler argued that the Manhattan Project should itself be stopped: it was no longer necessary, and its potential for destructive outcomes was too great. However, J. Robert Oppenheimer was running the show, and he pressed onward (with the support of the US government).
Despite letters and impassioned pleas not to move forward with this most dangerous of technologies, their cautions and concerns were swept aside. The scientists who believed it was time to stop left the project, while a core group who wanted to move forward remained.
While you may know the Manhattan Project, you may not know the Trinity reference. The result of the project was an atomic weapon, and the decision was made to move ahead and detonate it. Trinity was the code name for that first atomic bomb test, conducted in July 1945. The problem with the decision was this: NO ONE had any idea what was going to happen.
Theories ranged from blasting a hole in the Earth to poisoning the atmosphere to ripping the atmosphere away entirely. Any of these theoretical outcomes would have ended life on Earth.
The scientists who moved forward made this decision for everyone else, without thought or concern for their lives or feelings. “It’s for science,” they told themselves, and that made it okay. While they had the opportunity to come to terms with Trinity possibly being their last day on this planet, no one else received the same courtesy.
When the Trinity test occurred, and it obviously did not destroy life on Earth, Oppenheimer quoted a line from the Bhagavad Gita: “Now I am become Death, the destroyer of worlds.”
Not only did we now have a powerful and dangerous new weapon that unleashed the energy bound inside the atom; after it was used on Japan to end World War II, it ignited an arms race whose repercussions are still with us today. It led to the development of ever more powerful weapons, and ultimately to Edward Teller destroying Oppenheimer’s reputation so that he could build the hydrogen bomb. The Day After Trinity was the day that changed how the world moved forward and how we viewed the world around us. It created a world that for decades was overcast with fear.
That history should not be taken lightly when proponents of AI make statements like “AI will break capitalism” or “AI will replace humans doing jobs.” AI developers are meddling with things they do not fully comprehend, and they are certainly not considering the long-term repercussions for humanity and for how we live.
The Day After Trinity changed the world for the worse because the long-term impact of a weapon we were too naïve to understand was never considered. Artificial intelligence can mimic a mind of its own, but it has no soul. What will be the impact of abdicating our responsibility this time?
On February 23, 2023, I posted an earlier article about artificial intelligence. I will close this post with a quote from that article, as it speaks directly to what the nonprofit community should consider when moving forward with AI:
“The problem with artificial intelligence and tools like ChatGPT is I don't necessarily see these things making us better people. And yes, dear reader, you're going to say, “well many new tools and products don't make us better people.” But there is a very vocal group of humans insisting artificial intelligence is going to change the world and pushing for it NOW before we know its true impact.
If we are going to change the world using a piece of technology, we had better make sure that it actually does make things better and not worse.”