Discuss the ethical impact of Artificial Intelligence (AI) within businesses and online communities.
Ethical analysis of companion robots
Artificial Intelligence (AI) refers to technology that simulates human intelligence in a computer or machine. AI has proved itself in many practical tasks, from labelling photos to diagnosing disease, and it will let robots take on more complicated jobs, such as a shop assistant serving customers. One of the primary concerns of research and development in AI systems is to understand the ethical impact of such systems in businesses and online communities. The emergence of robots will create many risks in the near future, and companion robots are among the top concerns (Cs.bath.ac.uk, 2018). Besides responding to human requests, modern robots are entrusted with decision-making and some level of autonomy, and this emancipation of robots has raised several ethical issues that were previously irrelevant: the delegation of decision-making to AI systems; the moral, societal and legal consequences of AI decisions and actions; and the accountability of AI systems for their actions (Theaustralian.com.au, 2018). Implementing AI systems that follow ethical principles therefore requires an understanding of the different ethical theories that can be applied to decision-making. The following section presents an ethical analysis of one type of AI system, the companion robot. The discussion also proposes a solution to the dilemma from each of the four ethical viewpoints as well as the ACS code of ethics.
An artificial human companion, or virtual companion, is a combination of hardware and software that provides real or apparent companionship to a person. One such companion is the companion robot, which can assist elderly people, autistic children or the disabled in maintaining a socially acceptable standard of life. Present-day companion robots can focus either on task execution or on companionship.
Utilitarianism: This ethical theory states that an action is best when it maximizes the sum of happiness or pleasure. Only the results of actions are considered, not the motivation or other factors behind them. For instance, destroying something or killing someone whom everybody hates would be considered ethically right if that action increased the total happiness of the world. The problem with utilitarianism is thus that it permits immoral, unfair and unjust actions (Dignum, 2018). For utilitarianism, human beings have no absolute value. If a self-driving, AI-based truck has the choice between killing a homeless criminal who sleeps in the street beside a museum or destroying an expensive museum building full of precious art, it would be ethically right to kill the homeless criminal if doing so maximises social happiness or welfare (Dignum, 2017).
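The utilitarian decision rule described above can be made concrete in a few lines of code. The sketch below is purely illustrative: the function name, the two actions and all utility numbers are invented, and real moral weight cannot be reduced to such scores. It only shows how a system that maximizes the sum of utilities can select the troubling option from the truck dilemma.

```python
# Hypothetical sketch of a purely utilitarian chooser: the action with the
# highest total utility wins, regardless of how utility is distributed.
def utilitarian_choice(actions):
    """actions maps an action name to a list of utility changes,
    one per affected party (all values here are illustrative)."""
    return max(actions, key=lambda a: sum(actions[a]))

# The truck dilemma from the text, with made-up numbers: harming one person
# costs less total utility than destroying the museum, so the purely
# utilitarian system picks the action that harms the person.
dilemma = {
    "swerve_into_person": [-100],          # one person harmed
    "crash_into_museum": [-60, -50, -40],  # several parties lose the artworks
}
print(utilitarian_choice(dilemma))
```

The point of the sketch is that nothing in the rule itself blocks the unjust outcome; only the numbers decide.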
However, the situation is certainly different in friendly AI ethics. Here a powerful machine, or group of machines, interprets the given principle in an absolutely literal way. This leads to a common counter-argument to machine ethical systems: the consistent application of the principle leads to a state S that nobody wants. Here S can be the actualization of wicked preferences, such as destroying objects or killing some people to save others, or the pursuit of excessive happiness (Rossi, 2016).
The technique used in preference utilitarianism is therefore to avoid the complexity of human values by defining what the value (goal, purpose or preference) of a person or object means, and stipulating that it is good to protect it. Instead of putting preferences and goals directly into the system, AI ethics generally suggests this approach of defining them, so that the system learns ethics from humanity (Mill, 2016).
Deontology: According to this theory, the morality of an action is judged as right or wrong on the basis of a series of rules, rather than on the basis of the action's consequences. The theory argues that what is good, or what needs to be done, must be explained through human reasoning (Paquette, Sommerfeldt & Kent, 2015). Deontological ethics holds that humans must be regarded as ends in themselves and can never be treated simply as a means to an end, and that decisions must take into account one's duties and others' rights. Unlike utilitarian arguments, deontological ethics does not justify the use of autonomous weapons on the basis of their potential to save lives or of cost-benefit reasoning. Instead, it establishes inherently good rules and does not depend on profit or on achieving a greater public good (Conway & Gawronski, 2013). Deontological ethics creates humanitarian moral rules. Human characteristics and capacities such as self-reflection, practical reasoning, exercising judgment and consideration do not exist in AI systems, so humans must be at the forefront of designing these systems and taking responsibility for their behaviour (Goldsmith & Burton, 2017).
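The contrast with the utilitarian calculus can be sketched in code as well. In the hypothetical fragment below (the rules, field names and benefit figures are all invented for illustration), an action is permissible only if it violates none of a fixed set of rules; its expected benefit is never consulted, which is precisely the deontological stance described above.

```python
# Hypothetical sketch of a deontological filter: an action is permissible only
# if it breaks none of a fixed set of rules; consequences are never weighed.
RULES = [
    lambda action: not action.get("harms_person", False),        # never harm a person
    lambda action: not action.get("uses_person_as_means", False) # never treat a person as a mere means
]

def permissible(action):
    return all(rule(action) for rule in RULES)

# Unlike the utilitarian chooser, a huge expected benefit changes nothing:
print(permissible({"harms_person": True, "benefit": 1000}))   # rejected
print(permissible({"harms_person": False, "benefit": 1}))     # allowed
```

The design choice worth noting is that the rules are inspected before any outcome is computed, so no benefit value can override them.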
Ethical values must be formulated as measurable parameters by ethicists as well as AI researchers. In other words, they have to provide clear and straightforward answers and decision rules for any potential ethical dilemma the AI system might encounter. This entails the challenging task of humans agreeing among themselves on the best ethical course of action in any given situation. Notably, the strong cross-cultural differences in moral values around the world must be considered while designing an AI system. In addition, different ethical approaches are required to handle different situations, and in some cases there may be no single ethical course of action at all; consider, for instance, the lethal autonomous weapons currently being developed for military applications. Crowd-sourcing potential solutions to moral dilemmas from millions of humans (Russell, Dewey & Tegmark, 2015) could help solve this.
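One simple way to aggregate crowd-sourced moral judgments is a plain majority vote. The sketch below assumes a made-up dilemma with invented option names and votes; real aggregation schemes are far more sophisticated, but the fragment shows the basic idea of turning many individual judgments into one collective verdict.

```python
# Hypothetical sketch of crowd-sourcing a moral dilemma: each respondent votes
# for one option, and the most-voted option becomes the aggregate judgment.
from collections import Counter

def crowd_verdict(votes):
    # most_common(1) returns a list with the single (option, count) pair
    # that received the most votes.
    return Counter(votes).most_common(1)[0][0]

votes = ["spare_pedestrian", "spare_passenger", "spare_pedestrian"]
print(crowd_verdict(votes))
```

A majority vote deliberately ignores minority positions, which is exactly why the text stresses cross-cultural differences: the aggregation rule itself is an ethical choice.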
Social contract theory: This theory argues that people live together in society according to a contract that establishes the rules of ethical and political behaviour. On this view, ethics consists of a set of rules governing how people treat one another, accepted for their mutual benefit on the condition that others follow them as well. Since duties and ethical obligations are generally complex in character and can be explained in various ways, social contract theory can be applied as a process of explanation; it is nevertheless hard to decide whether all duties and ethical obligations can be explained by a social contract theory (Schouten, 2013).
Human-in-the-loop (HITL) AI systems embed the judgment of individuals or small groups in order to optimize narrowly defined AI systems (Li et al., 2016), while society-in-the-loop (SITL) AI systems embed the judgment of society as a whole in the algorithmic governance of societal outcomes. SITL can be devised to embed the general will into an algorithmic social contract. Implementing SITL control in governance algorithms presents problems. First, some of these algorithms create negative externalities, meaning that costs are incurred by third parties who are not involved in the decision. For instance, if self-driving vehicle algorithms give priority to passenger safety, they may excessively increase the risk borne by pedestrians. Another problem with SITL implementation is that governing algorithms often implement implicit tradeoffs. For instance, reducing the speed limit on a road diminishes drivers' utility while increasing the overall safety of pedestrians and drivers (Rahwan, 2018).
It can therefore be proposed that constructing SITL requires two distinct processes. First, SITL requires human oversight of the decisions made by algorithmic and data-driven systems. Second, SITL requires the negotiation and implementation of tradeoffs between the objectives of different stakeholders in society. These requirements imply that governance algorithms must be managed in the same manner as people's relationship with government is managed.
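The speed-limit example makes the SITL tradeoff concrete enough to sketch. In the hypothetical fragment below, society supplies the weights and the algorithm then picks the limit that maximises a weighted sum of driver utility and pedestrian safety. The function name, the linear utility models and all numbers are invented; the point is only that the weights, not the algorithm, encode the social contract.

```python
# Hypothetical sketch of the SITL speed-limit tradeoff: society sets the
# weights, the algorithm picks the limit maximising the weighted objective.
def choose_speed_limit(limits, w_driver, w_safety):
    def score(limit):
        driver_utility = limit / 100   # toy model: drivers prefer higher limits
        safety = 1 - limit / 100       # toy model: lower limits are safer
        return w_driver * driver_utility + w_safety * safety
    return max(limits, key=score)

# A safety-first society (weights 0.3 vs 0.7) ends up with the lowest limit:
print(choose_speed_limit([30, 50, 70], w_driver=0.3, w_safety=0.7))
```

Shifting the weights toward drivers (say 0.7 vs 0.3) flips the outcome to the highest limit, which is exactly the kind of negotiated tradeoff the two SITL processes above are meant to surface and oversee.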
Character-based ethics: This theory, also referred to as virtue ethics, determines what makes a character or a person good, rather than what makes an action good. It argues that a good person consistently performs good actions, and it seeks the greatest benefit from decisions and actions. Hence, character-based ethics defines and determines the most ethically desirable goals of the methods or actions undertaken towards those with whom one is associated (Hursthouse, 2013).
Character-based ethics fits nicely with modern AI research and is a promising moral theory as a basis for the field of AI ethics. Taking the virtue-ethics route to building moral machines allows a much broader approach than a simple decision-theoretic judgment of possible actions: it takes other cognitive functions into account, such as attention, emotion, learning and action (Burton, Goldsmith & Mattei, 2015).
Since virtues are an integral part of one's character, an AI system built this way would not desire to change its virtue of temperance. Learning from virtuous exemplars has been a process of aligning values for centuries, so building artificial systems with the same imitation-learning capability appears to be a reasonable approach.
ACS code of ethics: This code consists of the following clauses (Poulsen, 2018).
- The primacy of the public interest, which implies that the public interest must be placed above personal or business interests. Thus, while building an AI system, matters of public interest, including public health, safety and the environment, must be considered.
- The enhancement of the quality of life, which implies that the quality of life of the people affected by the AI technology must be enhanced. The development of AI technology has a remarkable impact on human society and ways of life. While the impact can be beneficial, it may also have negative effects, and an ethical approach is needed to minimize them.
- Honesty, which implies honesty in representing the products, services, skills and knowledge associated with the technology.
- Competence, which implies that the limitations of an AI system must be clearly defined while it is being designed.
- Professional development, which implies that stakeholders must be informed about new technologies, practices and standards relevant to the development of AI systems.
- Professionalism, which implies that any ethical dilemma must be resolved during the development of an AI system.
It is notable that although existing laws were not developed with AI systems in mind, this does not imply that AI-based technological products and services are unregulated. Government must strike a balance between supporting innovation and ensuring consumer safety by holding the manufacturers of AI systems responsible for any harm resulting from unreasonable practices. It can thus be proposed that AI researchers, policy-makers and AI engineers must work together to ensure that AI systems provide humanitarian benefit. Policy-makers must implement ethical guidelines to ensure that decisions concerning ethics become more transparent, particularly with regard to ethical metrics and outcomes.
Conclusion:
From the above discussion, it can be concluded that AI systems should not be presumed to behave morally of their own accord. Humans must define morality and teach it to them, including how it is measured and optimized. This seems a daunting task for AI engineers. It is argued that future AI applications such as robots will be capable of telling right from wrong, will act on their own, and will help humans in distress. What is more important at the moment is that even narrow AI applications require urgent attention regarding the way they make ethical decisions in everyday practical situations.
AI systems have vast potential for furthering human rights, justice and welfare, as well as for nurturing virtues, and thus for improving human lives in many ways. However, there are ethical issues that carry the risk of negative results or impacts, as discussed in the ethical analysis above. In considering these risks, one must keep in mind that the ethical implementation of AI systems is up to humans, and the challenge now is to ensure that everybody benefits from this technology. The primary intention of all these policies is to ensure that society can fully capitalize on the capabilities of AI systems while minimizing their possible negative or undesired consequences for people.
References:
Burton, E., Goldsmith, J., & Mattei, N. (2015, April). Teaching AI Ethics Using Science Fiction. In AAAI Workshop: AI and Ethics.
Conway, P., & Gawronski, B. (2013). Deontological and utilitarian inclinations in moral decision making: a process dissociation approach. Journal of personality and social psychology, 104(2), 216.
Cs.bath.ac.uk. (2018). AI Ethics: Artificial Intelligence, Robots, and Society. [online] Available at: https://www.cs.bath.ac.uk/~jjb/web/ai.html [Accessed 27 Aug. 2018].
Dignum, V. (2017). “Responsible autonomy”. Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI’2017), pp. 4698–4704.
Dignum, V. (2018). Ethics in artificial intelligence: introduction to the special issue.
Goldsmith, J., & Burton, E. (2017, February). Why Teaching Ethics to AI Practitioners Is Important. In AAAI (pp. 4836-4840).
Hursthouse, R. (2013). Normative virtue ethics. ETHICA, 645.
Li, J., Miller, A. H., Chopra, S., Ranzato, M. A., & Weston, J. (2016). Dialogue learning with human-in-the-loop. arXiv preprint arXiv:1611.09823.
Mill, J. S. (2016). Utilitarianism. In Seven Masterpieces of Philosophy (pp. 337-383). Routledge.
Paquette, M., Sommerfeldt, E. J., & Kent, M. L. (2015). Do the ends justify the means? Dialogue, development communication, and deontological ethics. Public Relations Review, 41(1), 30-39.
Poulsen, A. (2018). A Post Publication Review of "Threats to Autonomy from Emerging ICTs". Australasian Journal of Information Systems, 22.
Rahwan, I. (2018). Society-in-the-loop: programming the algorithmic social contract. Ethics and Information Technology, 20(1), 5-14.
Rossi, F. (2016). Artificial intelligence: Potential benefits and ethical considerations. Europe: European Parliament. Retrieved January 28, 2018.
Russell, S., Dewey, D., & Tegmark, M. (2015). Research priorities for robust and beneficial artificial intelligence. AI Magazine, 36(4), 105-114.
Schouten, P. (2013). The materiality of state failure: Social contract theory, infrastructure and governmental power in Congo. Millennium, 41(3), 553-574.
Theaustralian.com.au. (2018). AI and ethics in the boardroom. [online] Available at: https://www.theaustralian.com.au/business/technology/we-need-a-more-open-debate-on-ai-and-ethics-in-theboardroom/news-story/863457fc5886bd552f1c687be1399186 [Accessed 27 Aug. 2018].