(Don't read this page. It is a work in progress for a Fall'19 graduate automated SE subject at NC State. Come back in mid-October!)
Quiz 1
One
As Fatma Aydemir and Fabiano Dalpiaz have listed, software developers face numerous ethical choices in
their day-to-day work:
- Privacy: Handling, storing, sharing user data only under the circumstances and for the purposes that the user sets
- Sustainability: Energy consumption of the software artifact, caring about energy throughout the SE process and in the documentation
- Transparency: Transparent decision-making procedures of intelligent systems, publicly available ethics policies by software development organizations
- Diversity: Gender, race, and age distribution of professionals in a development team
- Work ethics: Decisions on which bugs to fix and how quickly, ensuring quality of the code before release
- Business ethics: Informing users of a changed business model, including revenue models
- Accountability: Who should be held responsible for the harm caused by software?
- Dependability: Decision to maintain and/or keep a software product in the market
- Common goods: Contributing to, using, promoting open source software
Pick any two of the above areas. For each area, write 2 or 3 lines about
how a software developer could make:
- An ethically positive decision in that area.
- An ethically negative decision in that area.
Two
Suppose you decide that, ethically, you should produce software that:
- consumes limited energy;
- produces models that humans can read, understand, and audit;
- keeps human data secure.
Within that ethical context, list two design choices that are
ethically positive. Justify each choice (using 2 or 3 lines of text).
Repeat this exercise, this time for ethically negative choices.
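To make the kind of answer we are after concrete, here is a minimal sketch (in Python, assuming scikit-learn is available; the dataset and depth limit are illustrative choices, not part of the quiz) of one design choice that could count as ethically positive under the criteria above: preferring a small, human-readable model over a large black-box one.

```python
# A minimal sketch, assuming scikit-learn is installed; the dataset,
# model, and depth limit below are illustrative choices only.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()

# A shallow decision tree keeps the learned model small: cheap to
# train (limited energy) and small enough for a human to read and audit.
model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(data.data, data.target)

# The exported rules can be inspected line by line by a human auditor.
print(export_text(model, feature_names=list(data.feature_names)))
```

An ethically negative counterpart might be the opposite choice: an unnecessarily large model, retrained many times over, whose decisions no one can inspect.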
Three
The Institute of Electrical and Electronics Engineers (IEEE)
has discussed general principles for implementing autonomous and intelligent systems (A/IS).
The IEEE makes the following points about A/IS:
- Human Rights: A/IS shall be created and operated to respect, promote, and protect internationally recognized human rights.
- Well-being: A/IS creators shall adopt increased human well-being as a primary success criterion for development.
- Data Agency: A/IS creators shall empower individuals with the ability to access
and securely share their data, to maintain people’s capacity to have control over their identity.
- Effectiveness: A/IS creators and operators shall provide evidence of the effectiveness and fitness for purpose of A/IS.
- Transparency: The basis of a particular A/IS decision should always be discoverable.
- Accountability: A/IS shall be created and operated to provide
an unambiguous rationale for all decisions made.
- Awareness of Misuse: A/IS creators shall guard against all potential misuses and risks of A/IS in operation.
- Competence: A/IS creators shall specify and operators shall adhere to the knowledge and
skill required for safe and effective operation.
Other organizations, like Microsoft, offer their own principles for AI:
- Transparency: AI systems should be understandable
- Fairness: AI systems should treat all people fairly
- Inclusiveness: AI systems should empower everyone and engage people
- Reliability & Safety: AI systems should perform reliably and safely
- Privacy & Security: AI systems should be secure and respect privacy
- Accountability: AI systems should have algorithmic accountability
Draw a diagram that maps the IEEE guidelines onto the Microsoft principles. Label each edge. Write one sentence
justifying each edge.
Note: there is no right answer for this one. Ethics is an evolving field; as evidence,
the Aydemir and Dalpiaz list differs from both the IEEE and Microsoft lists.
One way we
speed that evolution is to think hard about how all these terms connect.
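If drawing the diagram by hand is inconvenient, one possible (entirely hypothetical) way to record your mapping is as a labelled edge list; the single edge in the sketch below is only a formatting example, not a suggested answer.

```python
# A minimal sketch: recording guideline-to-principle edges as tuples of
# (IEEE guideline, Microsoft principle, one-sentence justification).
# The single edge here is a formatting example only, not "the" answer.
edges = [
    ("IEEE: Transparency", "Microsoft: Transparency",
     "Both ask that the basis of a system's decisions be discoverable."),
]

for ieee, msft, why in edges:
    print(f"{ieee} -> {msft}: {why}")
```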
© 2019 timm + zimm