
ChatGPT: Smart Chatbot, or a Tool for Spreading Fake Information?
In call centres, chatbots have begun taking the place of people, but they struggle to respond to clients’ more complicated inquiries. If the launch of ChatGPT is any indication, that might be about to change. The algorithm searches through a massive quantity of data to produce natural-sounding text in response to questions or prompts. It can compose poems and articles, even imitating literary styles, and write and debug code in a variety of programming languages. Some experts have hailed it as a groundbreaking achievement in artificial intelligence with the potential to upend major corporations like Google and replace humans in a variety of tasks. Others caution that tools like ChatGPT could spread false information on the Internet.
1. Who is ChatGPT’s creator?
OpenAI, a San Francisco-based research lab, was established in 2015 by programmer and entrepreneur Sam Altman, Elon Musk, and other affluent Silicon Valley investors to create AI technology that “benefits all of mankind.” OpenAI’s other work includes Dall-E, a programme that produces images from text descriptions, and software that can outperform humans at video games. ChatGPT is the most recent member of the GPT (Generative Pre-Trained Transformer) family of text-generating AI systems. According to OpenAI’s website, it is currently free to use as a “research preview,” but the business aims to find ways to make money from the platform.
Investors in OpenAI include the philanthropic foundation of LinkedIn co-founder Reid Hoffman, Microsoft Corp., which contributed $1 billion in 2019, and Khosla Ventures. Despite being a co-founder and an early donor to the nonprofit, Musk stopped being involved in it in 2018 and no longer has a financial stake, according to OpenAI. OpenAI changed its business model in 2019 to become a for-profit company, but it has a unique financial structure: returns on investment are capped for investors and staff, and any revenues over that amount are returned to the original non-profit.
2. How does it operate?
The GPT technologies are able to read and evaluate large amounts of text and produce sentences that resemble what people say and write. They are trained to identify patterns in a dataset without the aid of labelled samples or explicit instructions on what to look for. This process is known as “unsupervised learning.” In an effort to make its responses current and accurate, the most recent version, GPT-3, absorbed content from a variety of online sources, including Wikipedia, news websites, books, and blogs. GPT-3 is enhanced by ChatGPT with a conversational interface.
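The core idea of learning language patterns from raw text without labels can be illustrated with a toy sketch. The snippet below (a hypothetical, drastically simplified illustration; real GPT models use neural networks trained on billions of words, not word-pair counts) learns which word tends to follow which, using only the text itself as the training signal:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the web-scale text that GPT models ingest.
corpus = "the cat sat on the mat and the cat saw the cat".split()

# "Unsupervised" here means no labels are supplied: the only training
# signal is which word follows which in the raw text itself.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # → cat ("cat" follows "the" three times, "mat" once)
```

A model like GPT-3 does something analogous at enormous scale, predicting the next word from everything that came before it, which is what lets it continue a prompt with fluent, pattern-matched text.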
3. What has the reaction been?
In the days after ChatGPT’s introduction in late November, more than a million people registered to use it. Users have been experimenting with entertaining, low-risk uses of the technology on social media. Some have shared its responses to obscure trivia questions. Others were astounded by its detailed historical arguments, college “essays,” popular song lyrics, cryptocurrency-themed poetry, meal menus tailored to specific nutritional demands, and solutions to programming puzzles.
4. What other purposes might it have?
One possible use is as a substitute for a search engine like Google. Rather than responding with a line of pertinent text from a website and leaving users to read through dozens of pages on a subject, it might provide a more customised answer. It might raise the bar for automated customer care, generating a pertinent response the first time so users aren’t forced to wait to speak to a person. For businesses that would ordinarily need the assistance of a copywriter, it could create blog entries and other kinds of PR content.
5. What are the constraints?
Users could believe that ChatGPT has independently checked the accuracy of its replies, since they sound so authoritative even when assembled from second-hand information. What it’s really doing is producing content that reads well and seems intelligent but may occasionally be nonsense, incomplete, or biased; its output is only as good as the data used to train the system. Stripped of useful context, such as the source of the information, and free of the typos and other imperfections that often signal unreliable material, the content can be a minefield for those who aren’t sufficiently well-versed in a subject to notice a flawed response. This issue led StackOverflow, a computer programming website with a forum for coding advice, to ban ChatGPT responses because they were often inaccurate.
6. What ethical risks are there?
Machine intelligence’s capacity for deception and mischief-making grows as it becomes more advanced. Microsoft’s AI bot Tay was removed from the internet in 2016 after some users trained it to spew racist and sexist slurs. Similar problems arose with a Meta Platforms Inc. product in 2022. In an effort to limit ChatGPT’s capacity to spread misinformation and hate speech, OpenAI has sought to teach it to reject unsuitable requests. OpenAI chief executive officer Altman has urged users to “thumbs down” objectionable or abusive responses to help improve the system. However, some people have discovered workarounds. At its core, ChatGPT simply creates word chains without knowing what they mean. It might miss racial and gender biases in novels and other texts that a person would notice. It can also be used as a tool for deception. College professors are concerned about students using chatbots to complete their assignments. Legislators may receive a flood of letters purporting to be from constituents complaining about proposed legislation, with no way of knowing whether the letters are real or were generated by a chatbot employed by a lobbying firm.