Concerns of Unemployment Due to Automation
Discuss the Ethics and Professional Practice for Artificial Intelligence.
According to John McCarthy, artificial intelligence is the science and engineering of making intelligent machines, especially intelligent computer programs. In essence, it is a way of making software think intelligently, or simply of building a computer that performs human-like tasks with greater speed (Cohen & Feigenbaum, 2014). The ethical issues related to the deployment of artificial intelligence in industry are distinctive in nature, since their consequences are felt at a broad level and touch on grave matters such as employment and economic growth.
The recent readiness of Australian industries to replace the human workforce with artificial intelligence has been a matter of serious concern on ethical grounds. In embracing artificial intelligence and replacing about two-thirds of human labor with machine intelligence, the greatest issue companies face is the ethical dilemma associated with it (Opray, 2017). The philosophy behind artificial intelligence is to create machine intelligence comparable to the faculties regarded most highly in human beings. From an economic perspective, Australia’s expenditure on automation is the second largest in the world. Major business houses in the country have been investing large sums to acquire artificial intelligence; in 2016 alone, Australian industries spent $7.98m (theaustralian.com.au, 2017). Since all companies compete at a global level, it is natural for them to seek ever greater implementation of computer technology. However, Australian businesses have been seen to lag in taking up artificial intelligence and in planning for its integrated use. Surveys have found that a considerable number of Australian business leaders feel they lack the competency required to capitalize on artificial intelligence. Beyond this, Australian business houses suffer from a severe ethical dilemma that hinders wide acceptance of the technology. The first objection in this regard is the possibility of unemployment, which is likely to result from automation. Automation of both physical and cognitive labor is likely to take over the greater part of the human workplace (Bostrom & Yudkowsky, 2014). There is, in fact, a specific term, technological unemployment, to describe how artificial intelligence has been replacing human labor over the years.
The view that automation can have a lasting effect on unemployment has long been controversial. Known figures in the fields of economics and management, Gene Sperling and Andrew McAfee, have also considered it a “significant issue,” contemplating the existing dilemma and impending job losses (ide.mit.edu, 2017). However, some believe that the compensation effect will work in favor of human laborers, and that those expecting a long-term effect are committing a fallacy.
Unequal Distribution of Wealth
The risk assessment of deploying artificial intelligence involves another problem: unequal distribution of wealth. Since the economic system relies on remuneration for contribution to the economy, automation of work can enable companies to cut the wages of human laborers (Opray, 2017). Such a drastic reduction in wages will ensure that the revenue generated reaches fewer people. Consequently, the possibility of unequal wealth distribution arises, with the larger share of profit going to those in ownership. This ever-widening wealth gap in society has been another cause of major ethical dilemma for companies.
The increasing level of reliance on artificial intelligence has become another major cause of ethical dilemma, as computer software has already proven effective in directing human choices and attention. The dilemma lies in its potential to prove detrimental if it falls into the wrong hands (Tennyson, 2013). It seems that human dependency has found a new form: tech addiction and reliance on artificial intelligence. Besides this, it is believed that intelligence stems from continuous learning. Whereas human beings can learn from their mistakes, machines are unable to learn anything new once their testing phase is over. Naturally, it is not possible to present all probable examples to the machines within that brief phase, and so there is a chance that, with time, these machines will fall short in security and efficiency (Tennyson, 2013). The question of security is, however, an essential point, for with the implementation of a powerful technology comes the question of safeguarding it. The extent of confidential work entrusted to these machines requires a higher degree of security arrangement in order to avoid any damaging consequences.
Information ethics and computing, like several other areas of applied ethics, require a coherent unification of consequentialist ethical analysis with deontological considerations. The theory of just consequentialism and computing proposed by James H. Moor emphasizes evaluating the effects of policies within the limits of justice (Sverdlik, 2016). The high degree of flexibility offered by computers has enabled their use in novel and unprecedented ways; however, formulated policies to control the use of artificial intelligence are still lacking. Policies describe typical kinds of actions, which are at times contingent upon various situations (Baker, 2016). As far as designing a comprehensive policy is concerned, amalgamating considerations of consequences with the more traditional deontological concepts of duties, justice and rights would be ideal. Moral realists have often attempted this amalgamation of consequentialist and deontological principles in order to deal with conflicts between different rights and obligations. This hybrid ethical approach assists in locating the external practical constraints of the liberal imagination while preserving deontic values in difficult situations (Levy, 2014).
Human Dependency and Misinformation
Business philosophy is governed by the divergence between deontology, which justifies a set of actions according to particular traditional values and duties, and consequentialism, which considers the outcomes of those actions. The traditional critique of consequentialist theory is that it concerns future consequences and makes assumptions accordingly, whereas it is evidently not possible to determine future outcomes beforehand. For example, an unethical business approach may be permitted for an immediate gain when no other way exists (Russell, 2016). The conflict then arises between the view that this short-term deviation from ethical ground will have a lasting impact on the business culture, and the view that contemplating its future consequences is futile as long as it benefits the business. Between act consequentialism and rule consequentialism, the latter being more inclined toward deontological practice, the former is more popular in many cases, as it delivers desirable outcomes even when it is not clear what optimal benefit it will generate. Rule consequentialism, the previously tried and tested method, resolves this issue to some extent by requiring a person to adopt rules that are likely to generate the desired outcome.
In Kantian ethics, the Categorical Imperative lays emphasis on an integrative perspective that considers both deontological values and reflection on consequences (O'Neill, 2013). Obedience to both has been seen to result in the achievement of desired goals. The combined approach of deontology and consequentialism directed at a single purpose is likely to produce two different outcomes. First, the principles of consequentialism may constrain deontological ideals, as when the Categorical Imperative is conceptualized as a rule-consequentialist approach, exemplified in Kant’s model, where deontic rules are assessed as desirable only because they are likely to deliver preferable moral outcomes. The combination can also operate in reverse, with consequentialist ideals constrained by deontological priorities (Stern, 2015). For example, in a serious business situation, too much ethical consideration can hinder the generation of desired outcomes.
All these attempts to merge consequentialism and deontology have provided an ideal opportunity to argue explicitly over which is the more efficient and dependable. They eventually converge on a more liberal theory, so-called just consequentialism, which facilitates accomplishing what is right and what is good at the same time. This seems a perfect balance in a tough business situation: on one hand, the liberty to follow procedural norms that boost autonomy and an individual's capacity to seize opportunities without hesitation; on the other, a commitment to rules as far as possible. Hence, Moor's just-consequentialist framework can rightly be called a defensible ethical theory.
Security Risks and Breaches
The Institute of Electrical and Electronics Engineers (IEEE) Computer Society is a professional organization that works to advance the theory, practice and application of computer and information processing science and technology (ieee.org, 2017). The society has volunteer boards in several program areas, such as professional activities, education, membership, standards, conferences and technical activities, and has its own constitution and bylaws as established in 1971 (ieee.org, 2017).
The Australian Computer Society (ACS) is the supervising organization for the information and communications technology (ICT) profession in Australia (more.acs.org.au, 2017). It represents ICT practitioners across varied sectors such as government, education and business. The ACS works to help its members pursue their ambitions, recognize the extraordinary possibilities of their aspirations and realize them. The organization is known for recognizing professionalism, improving ICT skills and creating an integrated community of ICT practitioners (acs.org.au, 2017). The society also prioritizes the maintenance of the highest ethical standards.
Now, since the two organizations are among the most reputed in the field of information technology, they are bound to share some codes of ethics under which they ideally operate (Johnson, 2014). On the other hand, as they are distinct entities, some sets of rules differ between them. Among the similarities, the first striking parallel that can be drawn between the two codes of ethics is the emphasis on technical competence. Both organizations state plainly that members must work diligently and competently and should strive for continuous improvement. Both IEEE and the ACS include in professional development the responsibility of helping colleagues and students to improve as well (teaching.csse.uwa.edu.au, 2017). Another quite prominent ethical code is the maintenance of professional honesty, which includes rejecting bribery and being truthful in stating claims and representing skills, knowledge and services. Among the codes with social implications, the safety and security of the public receives the highest priority from both. Working for the welfare of people and the environment is considered an obligation by both organizations, and both make it mandatory to preserve integrity and professionalism in the field of information technology. In addition, the organizations seem quite accepting of criticism: while the ACS lays emphasis on regard for others’ perceptions, IEEE proposes to seek and accept honest criticism of technical work (ieee.org, 2017).
Comprehensive Policy for Ethical AI Practices
Contrasting the codes of morals and beliefs of the two organizations, each has certain ethical codes that the other does not share. For example, IEEE is stringent that under no circumstances is injuring another's reputation, property or employment by false or malicious action acceptable; it is a punishable offence (ieee.org, 2017). IEEE also bars any kind of discrimination among members on grounds of gender, race, religion, disability or any other criterion. The ACS, on the other hand, makes it a code of ethics to keep one's distance from a person whose society membership has been terminated. The organization also encourages members to lodge a complaint with the Society authority against a person showing unethical behavior (acs.org.au, 2017). Besides, the ACS is concerned not only with ensuring quality of life but also with respecting the privacy of people who might be affected by the organization's work.
Hence, it can be seen that although the organizations vary in a few areas of their ethical codes, they resemble each other in most respects, aiming to work for people's betterment while maintaining the desired set of social and professional ethics.
References:
About the ACS | Australian Computer Society. (2017). More.acs.org.au. Retrieved 4 April 2017, from https://more.acs.org.au/about-the-acs
ACS Code of Ethics. (2017) (1st ed.). Retrieved from https://www.acs.org.au/content/dam/acs/acs-documents/Code-of-Ethics.pdf
ACS Code of Ethics. (2017). Teaching.csse.uwa.edu.au. Retrieved 4 April 2017, from https://teaching.csse.uwa.edu.au/units/CITS3200/ethics/acs-ethics.htm
Baker, K. (2016). The Consequences of Accepting Consequentialism. Philosophy Now, 115, 38-40.
Big firms embrace artificial intelligence. (2017). The Australian. Retrieved 4 April 2017, from https://www.theaustralian.com.au/news/latest-news/big-firms-embrace-artificial-intelligence/news-story/ce4927d38d379b554b4550903d37a80e
Bostrom, N., & Yudkowsky, E. (2014). The ethics of artificial intelligence. The Cambridge Handbook of Artificial Intelligence, 316-334.
Cohen, P. R., & Feigenbaum, E. A. (Eds.). (2014). The handbook of artificial intelligence (Vol. 3). Butterworth-Heinemann.
IEEE Code of Ethics. (2017). Ieee.org. Retrieved 4 April 2017, from https://www.ieee.org/about/corporate/governance/p7-8.html
IEEE. (2017). Ieee.org. Retrieved 4 April 2017, from https://www.ieee.org/index.html
Johnson, J. A. (2014). From open data to information justice. Ethics and Information Technology, 16(4), 263.
Levy, S. (2014). The Failure of Hooker’s Argument for Rule Consequentialism. Journal of Moral Philosophy, 11(5), 598-614.
O'Neill, O. (2013). Acting on principle: An essay on Kantian ethics. Cambridge University Press.
Opray, M. (2017). Artificial intelligence has arrived, but Australian businesses are not ready for it. the Guardian. Retrieved 4 April 2017, from https://www.theguardian.com/sustainable-business/2017/jan/25/artificial-intelligence-has-arrived-but-australian-businesses-dont-know-how-to-use-it
Russell, B. (2016). Contractualism, Consequentialism and the Moral Landscape: A New Pro-Contractualist Picture of Ethical Theory.
Stern, R. (2015). Kantian Ethics: Value, Agency, and Obligation. Oxford University Press, USA.
Sverdlik, S. (2016). Consequentialism, Moral Motivation, and the Deontic Relevance of Motives. Moral Motivation: A History, 259.
Tennyson, R. D. (2013). Artificial intelligence and computer-based learning. Instructional technology: foundations, 319.
THE MIT AI & MACHINE LEARNING DISRUPTION TIMELINE CONFERENCE. (2017). ide.mit.edu. Retrieved 4 April 2017, from https://ide.mit.edu/sites/default/files/030817%20Agenda.pdf