More on AI-generated content
The AI-generated samples were fascinating. Even on a cursory reading, the spelling, grammar, and punctuation were correct, which is better than one gets from most student compositions. However, the articles were completely lacking in depth or apparent insight. The article on anosognosia mentioned that it can be present in up to 50% of cases of schizophrenia; in my experience, it is present in approximately 99.9% of cases. It clearly did not consider whether anosognosia is also present in alcoholics, codependents, abusers, or people with bizarre political beliefs. But I guess the “intelligence” wasn’t asked that. The other samples also showed shallow thinking and repetitive wording, much like my high school junior compositions.
Perhaps an appropriate use for AI is a task such as evaluating suicide notes. AI’s success at this leaves one feeling nonplussed. Much more disconcerting was a recent news article reporting that AI made up nonexistent references to a professor’s alleged sexual harassment and then generated citations to its own fabricated reference.1 That is indeed frightening new territory. How does one fight against a machine to clear one’s own name?
Linda Miller, NP
Harrisonburg, Virginia
References
1. Verma P, Oremus W. ChatGPT invented a sexual harassment scandal and named a real law prof as the accused. The Washington Post. April 5, 2023. Accessed May 8, 2023. https://www.washingtonpost.com/technology/2023/04/05/chatgpt-lies/
Thank you, Dr. Nasrallah, for your latest thought-provoking articles on AI. Time and again you provide the profession with cutting-edge, relevant food for thought. Caveat emptor, indeed.
Lawrence E. Cormier, MD
Denver, Colorado