More on AI-generated content

Current Psychiatry. 2023 June;22(6):e4-e5 | doi: 10.12788/cp.0371

We read with interest Dr. Nasrallah’s editorial that invited readers to share their take on the quality of an AI-generated writing sample. I (MZP) was a computational neuroscience major at Columbia University and was accepted to medical school in 2022 at age 19. I identify with the character traits common among many young tech entrepreneurs driving the AI revolution—social awkwardness; discomfort with subjective emotions; restricted areas of interest; algorithmic thinking; strict, naive idealism; and an obsession with data. To gain a deeper understanding of Sam Altman, the CEO of OpenAI (the company that created ChatGPT), we analyzed a 2.5-hour interview that MIT research scientist Lex Fridman conducted with Altman.1 As a result, we began to discern why AI-generated text feels so stiff and bland compared to the superior fluidity and expressiveness of human communication. As of now, the creation is a reflection of its creator.

Generally speaking, computer scientists are not warm and fuzzy types. Hence, ChatGPT strives to be neutral, accurate, and objective compared to more biased and fallible humans, and, consequently, its language lacks the emotive flair we have come to relish in normal human interactions. In the interview, Altman discusses several solutions that will soon raise the quality of ChatGPT’s currently deficient emotional quotient to approximate its superior IQ. Altruistically, Altman has opened ChatGPT to all, so we can freely interact with it and utilize its potential to increase our productivity exponentially. As a result, ChatGPT interfaces with millions of humans through RLHF (reinforcement learning from human feedback), which makes each iteration more in tune with our sensibilities.2 Another initiative Altman is undertaking is to depart his Silicon Valley bubble for a road trip to interact with “regular people” and gain a better sense of how to make ChatGPT more user-friendly.1

What’s so saddening about Dr. Nasrallah’s homework assignment is that he is asking us to evaluate, by our mature adult standards, an article written at the emotional stage of a child in early high school. But our hubris and complacency are entirely unfounded, because ChatGPT is learning much faster than we ever could, and it will quickly surpass us all as it continues to evolve.

It is also quite disconcerting to hear how Altman is naively relying upon governmental regulation and corporate responsibility to manage the potential misuse of future artificial general intelligence for social, economic, and political control and upheaval. We know well the harmful effects of the internet and social media, particularly on our youth, yet our laws still lag far behind, even as these technological innovations simultaneously enhance our knowledge and erode our souls. As custodians of our world, dedicated to promoting and preserving mental well-being, we cannot wait much longer to intervene and properly parent AI along its wisest developmental trajectory before it is too late.

Maxwell Zachary Price, BA
Nutley, New Jersey

Richard Louis Price, MD
New York, New York

References

1. Fridman L. Sam Altman: OpenAI CEO on GPT-4, ChatGPT, and the future of AI. Lex Fridman Podcast #367. March 25, 2023. Accessed April 5, 2023. https://www.youtube.com/watch?v=L_Guz73e6fw

2. Heikkilä M. How OpenAI is trying to make ChatGPT safer and less biased. MIT Technology Review. Published February 21, 2023. Accessed April 5, 2023. https://www.technologyreview.com/2023/02/21/1068893/how-openai-is-trying-to-make-chatgpt-safer-and-less-biased/

Disclosures

The authors report no financial relationships with any companies whose products are mentioned in this letter, or with manufacturers of competing products.