Are decision-making algorithms always trustworthy?

Discussion

You will write a report or description, but must set forth the article's thesis, mode of argumentation, and ethical theories applied, and provide a critique, applying your technical knowledge, if any, and theories and perspectives from the coursework. Resources for the assignment.

Nowadays algorithms have become so powerful that machines can make decisions based on historical data and prior analysis. The article under review focuses mainly on the negative consequences of such algorithms. Big companies and firms train different algorithms on their datasets to extract useful information that can boost the organisation financially.

The author's main objective is to compare algorithmic decision making with human decision making because, according to many articles and much public opinion, the decisions machines make sometimes reproduce racist, sexist, and classist biases. Machine learning models and algorithms are now widely used for predicting future instances and for forecasting purposes.

There are many instances where decision-making algorithms have performed worse than human decisions and have not made things fairer. One such incident occurred at the medical school at St. George's Hospital in London, where the output of the screening program turned out to be both sexist and xenophobic. This greatly surprised the staff, who had not expected a computer to discriminate at all. It happened because the historical dataset used to train the model was itself sexist and xenophobic: the model simply reproduced the patterns in that data, and so the discrimination was repeated.
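
To illustrate the mechanism described above, the short Python sketch below trains a model on synthetic, hypothetical admission data in which the historical labels already penalise one group; the trained model then gives that group a lower admission probability even at the same score. The features, figures, and threshold are assumptions for illustration, not the actual St. George's system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Synthetic applicants: an academic score and a protected attribute (0 or 1).
score = rng.normal(60, 10, n)
group = rng.integers(0, 2, n)

# Historical decisions: qualified applicants admitted, but applicants in group 1
# are randomly rejected half the time by past human screeners -- the bias lives
# in the labels, not in any explicit rule of the model.
admitted = ((score > 55) & ~((group == 1) & (rng.random(n) < 0.5))).astype(int)

X = np.column_stack([score, group])
model = LogisticRegression().fit(X, admitted)

# Two applicants with the same score but different group membership: the trained
# model assigns a lower admission probability to the group it "learned" to reject.
same_score = np.array([[60.0, 0.0], [60.0, 1.0]])
print(model.predict_proba(same_score)[:, 1])
```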

Clearly, the article uses a comparative mode of argumentation. It is argued that every algorithm has a heart: some algorithms are relatively simple, while others are not properly understandable. Data scientists are assigned to this machine learning work and to different exploratory analyses. They should therefore have a transparent understanding of the trade-offs involved, should build tests, and should monitor the ongoing behaviour of the algorithms, all of which is done to ensure that the decisions the algorithm upholds remain consistent.

One such incident took place in Australia, shortly before the article was written, when the Department of Human Services introduced an automated debt recovery system built on a crude fraud detection algorithm that calculates whether Australian citizens have been overpaid by the welfare system. The algorithm assumed that every recipient's income was steady throughout the year, whereas individuals' incomes often vary over time. This produced thousands of complaints, and the author argues that the errors within the algorithm were never carefully examined or resolved, which increased the false positive rate of the classification.
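
The flaw can be shown with a small, hypothetical calculation in the same spirit: averaging an annual income evenly across fortnights flags a recipient whose real fortnightly earnings never breached the limit while they were on benefits. The thresholds and figures below are illustrative assumptions, not the real welfare rules.

```python
FORTNIGHTS = 26
INCOME_FREE_AREA = 500.0  # hypothetical fortnightly earnings limit while on benefits

# A recipient who worked only half the year: high earnings in 13 fortnights and
# nothing in the 13 fortnights in which they actually claimed benefits.
actual_income = [2000.0] * 13 + [0.0] * 13
annual_income = sum(actual_income)

# The flawed assumption: income is steady, so spread the annual total evenly.
averaged_income = annual_income / FORTNIGHTS  # 1000.0 per fortnight

# The averaged figure exceeds the limit, so a debt is "detected", even though the
# person earned nothing in the fortnights they were paid benefits: a false positive.
flagged_by_average = averaged_income > INCOME_FREE_AREA
truly_overpaid = any(x > INCOME_FREE_AREA for x in actual_income[13:])

print(flagged_by_average, truly_overpaid)  # True False
```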

From the author's point of view, algorithms do not always deliver the exact outcomes expected of them; rather, the author argues that with such statistical and mathematical models humans are effectively building a weapon that will harm humanity itself. Many powerful tools exist today that were not available previously, so in earlier times decisions were made more deliberately by human judgement, whereas now, with this evolution in technology, many mathematicians have recognised that these particular tools and models can turn into mathematical weapons.

Building such algorithms is also costly: to find an optimised algorithm, further layers must be added to an already complex system, and the multi-layer model must be evaluated repeatedly to check whether it produces an optimal result. Moreover, and more importantly, it is expensive because, on any reasonable definition, being nondiscriminatory will generally cut into profit. The author is firmly against these algorithms, stating that decision-making algorithms cannot always be trusted: decision-making procedures are common in real life, and they work well only when the data fits them; more often they perform quite poorly.

From here the question arises: if these accountable algorithms can truly do better than humans and make better, fairer decisions that deliver the expected outcomes, then data scientists must first prove it, because such algorithms often fail badly even though their outputs are generated by machines. Also, there is no neutral algorithm, so the complexity of the algorithms cannot simply be reduced if an optimised solution is needed. Data collection also plays a crucial role, since the outcome of any predictive algorithm depends on the data fed into the model: if the data is sound, the result will be sound, and if the data contains a discriminatory factor, the outcome will also be discriminatory, which can lead to decisions worse than any human decision.
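
A minimal sketch of the kind of proof-of-fairness check the essay calls for is given below: comparing positive-decision rates across groups before trusting a model. The data and the four-fifths threshold mentioned in the comment are assumptions for illustration, not a prescribed standard from the article.

```python
import numpy as np

# Model decisions (1 = favourable outcome) and a protected attribute, both made up
# purely for illustration.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 0])
groups    = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

def positive_rate(decisions, groups, g):
    """Share of favourable decisions given to members of group g."""
    return decisions[groups == g].mean()

rate_0 = positive_rate(decisions, groups, 0)  # 0.6
rate_1 = positive_rate(decisions, groups, 1)  # 0.2

# Disparate-impact ratio: a value well below ~0.8 (the common "four-fifths" rule of
# thumb) signals that the model, or the data it was trained on, needs investigation.
ratio = min(rate_0, rate_1) / max(rate_0, rate_1)
print(rate_0, rate_1, ratio)
```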

References

O'Neil, C. (2017, August 7). Why we need accountable algorithms. Cato Unbound. Retrieved 13 March 2020, from https://www.cato-unbound.org/2017/08/07/cathy-oneil/why-we-need-accountable-algorithms

"Neutrality" Isn't Neutral. (2020). Retrieved 13 March 2020, from https://www.cato-unbound.org/2017/08/11/laura-hudson/neutrality-isnt-neutral
