This Assignment assesses the following module Learning Outcomes (from Definitive Module Document):
1. acquire detailed knowledge of the ethical standards from relevant professional bodies (such as the BCS Code of Conduct) to which a computing professional is expected to adhere;
2. be aware of the wide range of International and UK Laws within which a computing professional should operate;
3. understand the current computing technological, commercial and economic contexts where social, ethical and legal issues and dilemmas may arise;
4. demonstrate awareness of software and content licensing practice;
5. be able to work in a team to articulate the evaluation and application of management techniques to address the social, ethical and legal problems and commercial risks, together with the opportunities, inherent in the use of computing technologies and the deployment of computer-based systems;
6. be able to produce outputs and documentation of those outputs to demonstrate compliance with legal, ethical and professional standards;
7. be able to summarise high-profile cases where the meaning of social, legal and ethical issues has been elaborated in detail.
Assignment Brief:
Please see below for the detailed assignment brief.
Submission Requirements:
Your group has opted for either an online presentation or a face-to-face presentation.
In both cases, you must submit the sequence for your presentation.
Online presentation:
You are required to produce a PowerPoint presentation that covers each of the areas.
All members of your group are required to take part in the presentation; at a minimum, each member should present one slide of the presentation.
Face-to-face presentation:
You are required to produce an A1 paper poster that you can attach to a wall and present as a group.
The digital version of the poster must be submitted on Canvas. This will be presented via an online conferencing tool (for example, MS Teams or Zoom).
All members of your group are required to take part in the presentation.
Marks awarded for:
Type of Feedback to be given for this assignment:
Score and comments on Canvas
Briefing
You must choose ONE of the following scenarios and investigate the ethical and legal considerations for your chosen scenario. All of the scenarios are based in the European Union.
Your team’s role in all scenarios is that of a consultancy firm focussed on ethical and legal risk. You are being contracted to identify ethical and legal issues that may arise from the hiring company’s work. They want to know which ethical and legal issues they may need to consider and what you recommend they do about them.
To do this work, you will need to focus on your chosen scenario as presented and identify the ethical and legal issues that relate directly to it. You will also need to extrapolate from your chosen scenario to identify potential ethical and legal risks.
You are expected to refer to relevant areas of legislation that apply (for example, data protection, cybercrime or disability legislation) and provide examples of laws that may apply (note: you are not expected to state whether or not the laws actually do apply).
You are also required to create guidance on the ethical risks and remedies. You should refer to relevant frameworks, codes of conduct and examples of similar cases where relevant.
You are required to present your findings to members of the teaching team.
Scenarios
As a group you must choose ONE of the following scenarios to analyse and present.
1. Deep Fake videos
You have been contracted by a large film company that is planning to use software that can produce fake videos. The film company wants to use the technology to:
· reduce the need for actors and stunt performers to perform dangerous stunts
· pay homage to dead actors in remakes of films they appeared in
· keep recording going when an actor is temporarily unavailable for filming
· recreate real-life events for dramatic reconstructions in documentaries about real-life crime
They see this as a starting point and intend to use the software to also reduce the number of actors required in films, for example in extra roles. Another revenue stream is to work closely with the software company to develop the software’s capabilities so that less expertise is required to create the artificial videos; this would enable the company to market the software to film-makers with limited budgets for editing expertise.
2. Image recognition algorithm
A small software firm has developed an algorithm for image recognition that is intended to detect known troublemakers in football crowds. The firm wants to sell the technology to football clubs so that they can prevent troublemakers from getting into stadiums, or track them if they do get in. Additionally, the company has received a lot of interest from outside football, and their business plan involves diversifying early into these different opportunities. Initially, a free query tier will be made available to get people to try out the system and to upload their images into the image bank. Outside of the free tier, companies can pay per query. To allow the application to scale easily to meet the anticipated demand, it is being deployed to a cloud computing service.
3. Virtual chatbot
A start-up has developed a virtual chatbot that is designed to act as a first point of contact for customers. However, they have made the chatbot available at a discount to companies who agree not to tell their customers that the chatbot is a virtual agent rather than a human. They have made the chatbot’s core libraries publicly available in a GitHub repository to encourage researchers to use the chatbot in their own software and run experiments to test whether people can detect that the chatbot is not human. The company is particularly interested in research into the efficacy of the chatbot for triaging patients booking medical appointments online.
4. Misinformation remover
A collection of independent programmers has produced a bot that can go through the systems of social networking sites to identify and tag misinformation posted by the sites’ users. It is further intended that the bot will install itself on mobile platforms and detect whether users are producing and posting misinformation. Users found to be doing so could automatically be identified and, potentially, flagged to the social networking sites. A number of actions could be taken once misinformation is detected, including rebutting the content publicly, forwarding it to law enforcement organisations or simply deleting the content.
The bot is also distributed as a browser plugin that can be used by other users of the social networking sites who are concerned about misinformation. This is the point at which you become involved: a user has contacted your group to check whether it would be ethical, or even legal, for them to use the tool.