The Rise of AI-Generated Content: Should We Trust It?

We now live in a world where some of our companions are robots! They have no physical form; they are artificial intelligences that live around the clock in our mobile phones and the virtual world around us. One of the main ways these robots help humans is by generating text.

Iran (IMNA) - Content-generating robots produce grammatically accurate text incredibly fast. The writing doesn't appear artificial or robotic, and that is confusing. Even though it is error-free, should we present this text as our own?

ChatGPT and other cutting-edge artificial intelligence technologies are now turning into an indispensable part of human life. However, some people are still unaware of the exact functions of this technology and how it works.

Generative AIs are popular tools, and it appears that everyone from students to professional writers is embracing them as jacks of all trades who can fix anything.

However, there is something wrong with them. Writing produced by AI is, naturally, grammatically excellent, but something is off about its style. AI-generated content tends to fall back on a handful of rote stylistic patterns.

In this article, I will give an overview of what generative AI tools do and how they operate. Then we'll look at how, on closer examination, seemingly straightforward language can turn out to be nonsensical.

Generative AI: What Is It?

Advanced AI systems called large language models, or LLMs, are able to comprehend and produce writing that resembles that of a person. Some models that you might be acquainted with include GPT, PaLM, and Dolly.

The majority of users access such LLMs via a chat window that functions as a question-and-answer system. A couple of these tools are Google's Bard and OpenAI's ChatGPT. A prompt is a query or instruction stated by the user, much like an instruction given to a human to perform a task. The tool produces output in response, which may be text, a graph, or a picture. Here, we will focus on text output.

Since generative AI tools are among the first of their kind to be widely accessible to the public without the need for training, their availability is an exciting development. Anybody can try one of these tools without having to set up project parameters, create a dataset, or learn how to use it. But this easy access and low barrier to use also raise concerns about security and privacy, as well as about blind trust in the results.

How Does Generative AI Produce Text?

AI companies trained their LLMs on genuine human-written material so that they would imitate natural-sounding writing. A recent article in The Washington Post claims that generative AI businesses have made use of text that is publicly accessible on the web, particularly online sources, published academic papers, and official government public records. The language and the structures it contains serve as the data representation here; the subjects being written about are not necessarily important to these LLMs. However, since thinking and writing are strongly connected, any bias or error made by a human writer can be reflected in the output the AI produces.

Through text ingestion, LLMs build a mathematical model of ideas using sophisticated computer algorithms, along with rules and instructions created by humans to direct the tools. Using this data, the LLM mathematically predicts which words a writer is most likely to need in response to the user's request.

On the surface, AI writing appears fluent and intelligent, since many of the sources that AI businesses used to train their new tools were formal texts by educated native English speakers, so the word order and word choice look polished. But keep in mind that these technologies generate text from just two factors: word frequency, and the probability that one word will follow another or one sentence another.

Consider it as a more sophisticated kind of text prediction on your smartphone. (The program is obviously much more complex than that!)
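The smartphone-autocomplete analogy can be made concrete with a toy bigram model. This is a drastic simplification of what an LLM actually does (real models use neural networks over enormous corpora), and the tiny corpus below is invented for illustration:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the vast training text an LLM would ingest.
corpus = (
    "the wetlands are home to many birds . "
    "the wetlands are shrinking every year . "
    "the birds are losing their habitat ."
).split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word after `word`, like phone autocomplete."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "wetlands" follows "the" most often in this corpus
```

Note that the model has no idea what wetlands *are*; it only knows which word tends to come next. That is exactly the gap between fluency and understanding discussed above.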

Challenges in Coherence and Contextual Relevance in AI-Generated Writing

Another problem is that the statistically most likely next phrase or sentence is occasionally disconnected or out of context. To avoid this, these AI tools appear to include algorithms that insert extra transition words between phrases so the sentences sound more related. The intention is to improve the writing's flow, yet the opposite occurs when a conjunction is used incorrectly or unnecessarily: the writing becomes disjointed and confusing. Generative AI algorithms frequently produce wordy, formulaic material that approximates business writing.

The end result is writing that sounds like any dull corporate writer: strangely formal, unnecessarily repetitive, rigid, and uninteresting. It's worse when the results are factually incorrect (a common issue with AI-generated text).

Example: Writing an Essay About Wetlands
Let us now apply this information to an example. You might submit something like, "Write an essay about the destruction of the wetlands over the last 40 years," or something more specific, such as "Write a 1000-word university-level essay about the destruction of the wetlands over the last 10 years." In response, ChatGPT may scan its data for the keywords in the prompt and determine which words, sentences, and phrases are most often associated with those keywords.

It rapidly becomes evident that ChatGPT's algorithms instruct the AI tool to perform something like this:

Find the most often used words in paragraphs or essays that involve the phrases wetlands, destruction, and last 10 years, then expand the text using similar structures.
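The hypothetical procedure above can be sketched as a naive co-occurrence lookup. This is a toy illustration of the idea, not ChatGPT's actual algorithm, and the sample "documents" are invented:

```python
from collections import Counter

# Toy stand-in for training data: sentences the model might have seen (invented).
documents = [
    "wetlands destruction accelerated as drainage projects expanded",
    "pollution and drainage drive wetlands destruction worldwide",
    "wetlands support biodiversity but face destruction from development",
]

def associated_words(keywords, docs, top_n=3):
    """Rank words that co-occur with the keywords, skipping the keywords themselves."""
    counts = Counter()
    for doc in docs:
        words = doc.split()
        if any(k in words for k in keywords):
            counts.update(w for w in words if w not in keywords)
    return [word for word, _ in counts.most_common(top_n)]

print(associated_words({"wetlands", "destruction"}, documents))
```

In this tiny corpus, "drainage" co-occurs with the keywords most often, so a generator built this way would be statistically nudged toward writing about drainage, whether or not the user's essay actually needs it.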

The disconnectedness is not just a matter of word choice, but of the cohesion and coherence of the text. If you often use generative AI tools such as ChatGPT, you will notice an overuse of cohesive devices. AI technologies use these terms to create the impression of coherence; the idea is to make the writing appear interconnected. However, merely sprinkling cohesive phrases through AI-generated text does not make the writing coherent. Text is coherent when it makes sense and fulfills its purpose.

Humans are skilled at adjusting their language to the situation at hand. In casual conversation or informal writing, a person uses pronouns and draws on shared prior knowledge to avoid repetition, resulting in better flow and less wordiness. LLMs lack this form of contextual awareness, so they must rely on explicit connecting words to manufacture it. While this strategy may work in a brief email, it is overkill in an essay, producing lengthier output that seems redundant, repetitive, and confusing.

We should keep in mind that human writers can make the same errors as AI text generators; what makes human-written text authentic, however, is the reasoning behind every part of it. That quality is hard to pin down, but an authentic text is something sensible that speaks to the reader.

News ID 659386
