Ethical Implications Of ChatGPT: The Good, The Bad, The Ugly
Humans love easy shortcuts. Don’t we all think that in this age of cutting-edge competition, it is okay to take a little help from a chatbot? After all, it is not as if you are stealing or breaking the law! The introduction of ChatGPT, the generative AI technology, has got many people thinking along similar lines over the past few months. However, the ethical implications of ChatGPT force us to ponder how this seemingly harmless chatbot can bring about undesirable and negative consequences.
In this article, we will shed light on the ethical implications of ChatGPT.
What Is Ethics and How Is It Related to Technology?
In simple terms, ethics is the understanding of right and wrong. However, this sense of right and wrong behavior is not brought into existence by divine intervention; being ethical does not simply mean following the Ten Commandments of Moses.
Instead, ethical behavior is associated with adhering to the law of the land. It ensures that everyone conforms to a mutually agreed, clearly defined set of laws and rules, giving everyone an opportunity to be treated fairly and keeping people out of trouble.
The advent of the internet, however, blurred the boundaries of right and wrong behavior. Initially, there was not enough knowledge about how the internet works, so there were no defined laws. Now, given the speed at which technological advancements take place, it is nearly impossible to create new laws or refine the existing ones in time.
OpenAI’s valuable tool, ChatGPT, has introduced a whole new set of ethical considerations. Let’s have a look at them one by one.
Ethical Implications Of ChatGPT: Social Dilemma
Several instances of ChatGPT use have shown that it promotes racial and gender bias. The AI-based chatbot runs on learning models that are not trained to eliminate bias. When someone gives it a prompt, the chatbot blindly follows its algorithm and returns the data; it is not capable of determining whether that data is biased.
The data fed into ChatGPT is also old and limited. ChatGPT was trained on roughly 570 GB of data, which amounts to approximately 300 billion words. That is not enough to answer queries on every topic in the world from different perspectives. Additionally, this data has not been updated since 2021, so it also fails to reflect more recent social progress.
The problem with ChatGPT’s models is that they perpetuate gender, racial, and other social biases. Many scholars and users have pointed out that when they used ChatGPT to gather data or write articles and essays on certain topics, they received biased output reflecting harmful stereotypes.
For instance, the chatbot seemed to give primacy to male candidates for job opportunities in the tech field. Several users also found that ChatGPT returns results that are discriminatory towards people of color and people of different sexual orientations.
These problems arise because, unlike humans, chatbots and similar technologies cannot exercise judgment or critical thinking. They simply gather information and data from internet sources, and it is left to the discretion of users to exercise caution.
ChatGPT poses a challenge here because it is used by people of almost all age groups. Just imagine a 13-year-old schoolboy who uses ChatGPT to write an essay on the role of women in Indian society. If the chatbot draws on biased data instead of listing the achievements of Indian women, it will not only produce a poor essay but also negatively color the young boy’s perspective.
Ethical Implications of ChatGPT: Jeopardising Personal and National Security
ChatGPT can pose serious threats to personal and national security. Although most people use the chatbot for apparently harmless things, some antisocial elements could misuse the technology. Let’s discuss the most prominent ethical issues regarding security.
Cyber Security Threats with ChatGPT
ChatGPT can be misused by scammers and other vicious elements of society. With nothing more than a simple prompt, scammers can have ChatGPT help them create fraudulent mobile applications or websites to dupe people. Using malicious code, they can mimic officially authorized applications and websites and misguide innocent users.
ChatGPT’s Natural Language Processing (NLP) capabilities can also be used to produce phishing emails and messages in human-like language. Scammers can generate and send such emails to many people at once, making them look as if they were sent from genuine accounts.
The seeming genuineness of these emails carries the risk of convincing ordinary people to share sensitive information. Some examples of sensitive information include:
- Bank details
- OTPs
- Health information
- Aadhaar/PAN/passport details
- Occupation details
Sharing these details allows scammers to easily impersonate people and commit offenses such as identity theft.
National Security Threats with ChatGPT
National security is one of the most sensitive issues of modern times. The two World Wars showed us how technology has the potential to cause endless misery to people, and advancements in Artificial Intelligence pose an even greater threat to internal security.
ChatGPT is a topic of concern among policymakers because they fear terrorists could misuse it. Here are some of the ways in which terrorists could use ChatGPT for unethical activities:
- Carrying out cyberattacks and espionage by sending phishing emails to high-level authorities.
- Spreading misinformation and conducting psychological warfare. If ChatGPT becomes a highly sought-after source of information, it can be used to mobilize people; if it promotes misinformation, the masses may act on it and harm others.
- Using ChatGPT’s research capabilities to study policies and then exploiting that understanding unethically.
- Creating ChatGPT-powered deepfake avatars of highly influential people, such as celebrities, politicians, and activists, and using them to mobilize people to riot and attack others.
Ethical Implications of ChatGPT: Threatening Professionals
In an ideal world, machines and technological inventions assist humans. However, ChatGPT has emerged as a threat to professionals. Worldwide, companies are laying off people because their jobs can easily be automated and replaced by Artificial Intelligence-based technologies like ChatGPT.
For instance, tasks in content writing, code generation, and healthcare consultations for general illnesses can be performed by ChatGPT. This helps companies save the money they would otherwise have spent on salaries for these roles, but it takes away people's source of livelihood.
Among the most disgruntled professionals in this regard are teachers and instructors. They not only fear losing their jobs but are also against how ChatGPT is interfering with students' learning.
The following points highlight how certain uses of ChatGPT are unethical:
- Companies are overriding the terms of employment and laying off employees because they have found an easy way to save money. In the name of ‘upgrading’ their technology, they are replacing hardworking, paid employees with a free technology.
- Using ChatGPT for jobs like content creation, copywriting, and design may also violate copyright laws. The chatbot can generate results similar or identical to already existing original content, and in the absence of acknowledgement of the rightful source, this infringes on intellectual property and copyright laws. It is therefore unethical to use ChatGPT for such work.
- Using ChatGPT to assess symptoms of common illnesses is also unethical. Several illnesses and diseases share common symptoms, but it takes a trained and experienced doctor to judge whether those symptoms are serious. Moreover, prescribing medication requires considering the patient's overall medical history. That is why doctors are licensed by authorized institutions before they start their practice, and why their actions and decisions come under the ambit of the law. If a patient faces complications due to wrong treatment, the doctor has to face legal charges. The same is not true for ChatGPT: you cannot blame the chatbot or the company if you take a medicine it suggests and suffer as a result.
- Even teachers are justified in highlighting the unethical use of chatbots. Students have always been creative in finding new ways to make academics easier. The current generation is tech-savvy, so they will use the internet to understand concepts better or learn things that are not taught in the classroom, and that is not unethical. The problem arises when students use ChatGPT to write essays, solve problems, and complete assignments and academic assessments. In universities, students are marked on assignments and homework, so having an AI-based technology do them counts as plagiarism, which violates the code of academic conduct.
Possible Solutions
It is the responsibility of OpenAI, governments, and various institutions to address the ethical concerns raised by ChatGPT. Let’s look at a few things that can be done:
- There should be a focus on improving ChatGPT’s ability to decode the context of the prompt.
- ChatGPT should be trained on up-to-date data.
- ChatGPT should receive ethical training so that it understands biases and its responses are not discriminatory.
- ChatGPT should warn users to exercise discretion and not take its responses at face value; a minimal developer-side sketch of such a safeguard is shown after this list.
- Governments should make stringent laws regarding the use of and access to ChatGPT. They should ban certain features if those features threaten the safety of users or the security of the country.
- Governments and educational institutions should spread awareness among students regarding the judicious and ethical use of ChatGPT for skill development and career advancement. It should be used for learning, not for plagiarism.
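For developers who build applications on top of ChatGPT, the user-facing suggestions above can already be approximated. The snippet below is a minimal sketch, assuming the official openai Python package (v1+) and an API key in the environment; the model name, disclaimer wording, and refusal message are illustrative assumptions, not OpenAI's prescribed approach.

```python
# Minimal sketch: screen a user prompt with OpenAI's moderation endpoint
# and attach a "use your own judgment" disclaimer to every chatbot reply.
# Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Disclaimer text is an illustrative assumption, not official wording.
DISCLAIMER = (
    "Note: This answer was generated by an AI model trained on data up to 2021. "
    "It may be outdated or biased - please verify important facts yourself."
)

def answer_with_safeguards(prompt: str) -> str:
    # 1. Check the prompt against OpenAI's moderation categories.
    moderation = client.moderations.create(input=prompt)
    if moderation.results[0].flagged:
        return "This request was flagged by the moderation filter and will not be answered."

    # 2. Generate a reply with a chat model (model name is an assumption).
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )

    # 3. Append a user-facing warning so the response is not taken at face value.
    return reply.choices[0].message.content + "\n\n" + DISCLAIMER

print(answer_with_safeguards("Summarize the role of women in Indian society."))
```

This only covers the application layer; the deeper fixes, such as retraining the models to reduce bias and refreshing the training data, remain in OpenAI's hands.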
So, these were some of the ethical implications of ChatGPT that the world has seen so far. As the chatbot is used more and more, we may see even greater ethical implications in the future. For that, however, we will have to wait.
We hope this article helped you understand how humans should pair their powers of critical thinking and creativity with technologies like ChatGPT. If you polish your skills and keep updating your learning, you can easily grab a wide range of opportunities - ethically!