Department of Information Technology

Abstracts IIa, 10 December; Sunstein, Shalvi, Cockton, Friedman, Jonas, Schumacher, Neumann, Spafford, Moor, Johnson, Stallman, Górniak-Kocikowska, Weckert

Anette, anette.lofstrom@it.uu.se

Abstract

Through the literature readings it has become clear that the concept of ethics is frequently used among researchers, but its meanings vary and are disparate. Perspectives such as respect, sharing, ethics as irrelevant, ethics versus security, privacy, and good taste are chosen among the authors. Ethics is described as complex, and it is also a subject of thematization, for example computer ethics. The literature readings have also given examples of uses of the term ethics without definitions, as well as of uses that build on implicit, taken-for-granted understandings of what ethics really means. Following this issue of "what is ethics", it is important to explore how different interpretations of ethics might influence conclusions. Below I quote a description given by a respondent in my fieldwork and discuss it ethically through some of the understandings that have been revealed in the course literature. What happens if I discuss the same quote through different understandings of ethics? Will I arrive at the same conclusion?

Quote for analysis
I think it's important to give pedagogues opportunities to participate, and as a leader, I think it is important to build an organization where you can actually access employees' knowledge and skills and to focus on being an organization for learning. This is not easy since it requires a lot of time. We need to meet, to discuss things. We also need to get together and reason before decisions and so on. Time at the pre-school is quite limited. We are limited because we have children to care for. We must find a balance in our use of time (leader at a preschool)

Let us discuss this through the lens of ethics as sharing. For this leader it is important to give employees the opportunity to share their knowledge and skills, and to meet and discuss things before decisions are taken. In this perspective his leadership is ethically correct, since he wants to share the power of decision-making with his employees. If we understand ethics as respect, we need to ask ourselves: respect towards whom? Here we need to add another complexity, because it is respectful towards children and parents to prioritize time and effort with the children, but if employees do not get time to influence decisions, it is a kind of disrespect towards them, their knowledge and their skills. Thus, if we analyze the quote through the lens of ethics as respect, it can be regarded as both ethically correct and unethical. This discrepancy is entirely context dependent and rooted in the realities of time. If we view the quote through the lens of ethics as privacy we might run into problems, since privacy is technology related in this case. However, let us think about the very aim of this leader. He wants to build an organization that allows employees to be influential and to share their knowledge. What if he used social media for this goal? Would this be ethically correct from a privacy perspective? I would say that it depends. Is it important for employees to be anonymous in this work? If so, it might be more ethically correct to offer a web-based tool where employees can discuss anonymously than to meet them face-to-face and thereby limit their privacy.

In short: these examples show that ethics is deeply complex. The short analysis above suggests that the very same feature can be both ethical and unethical depending on how ethics is defined in the specific context and situation.

Håkan, selg@nita.uu.se

WHAT IS COMPUTER ETHICS?

Deborah G. Johnson: Computer ethics requires respect for Proprietary Rights

Proprietary Rights have no moral basis according to Johnson; their nature is socially constructed. They must be considered from a utilitarian perspective of what is good for society in the long run. The legal institution of ownership allows the owner of an invention to put it on the marketplace and profit when the invention is successful. However, due to the innumerable patents that have been granted, it is very difficult for a developer to get an overview, so the risk of lawsuits is considerable. There are also problems with preventing "algorithms" or "software building blocks" from being patented, which may harm further innovation. Thus Johnson concludes that patent and copyright laws are not bad, but they seem to lack the conceptual tools to handle the issues posed by computer technologies.

Richard Stallman: The essence of computer ethics is sharing

A good citizen is one who cooperates. Rules of property rights with respect to software favour an antisocial spirit by rewarding those who successfully take from others to make a personal fortune.

Eugene H. Spafford: Computer ethics irrelevant because the general public is too uneducated

The results of the act should be considered separately from the act itself, especially when we consider how difficult it is to understand all the effects resulting from such an act. Too often, we view computers simply as machines and algorithms, and we do not perceive the serious ethical questions inherent in their use. Our use (and misuse) of computing systems may have effects beyond our wildest imagining. Thus, we must reconsider our attitudes about acts demonstrating a lack of respect for the rights and privacy of other people's computers and data. The members of society need to be educated so that they understand the importance of respecting the privacy and ownership of data. If locks and laws were all that kept people from robbing houses, there would be many more burglars than there are now; the shared mores about the sanctity of personal property are an important influence in the prevention of burglary.

Peter G. Neumann: Computer ethics to be replaced by security

There are a number of expectations about human behaviour that system designers and administrators must consider. At one extreme are supposedly cooperative and benign users, all of whom are trusted within some particular limits; at the other extreme is the general absence of assumptions about human behavior, admitting the possibility of "Byzantine" human behavior such as arbitrarily malicious or deviant behavior by unknown and potentially hostile users. It is convenient to consider both forms of human behavior within a common set of assumptions, with benign behavior treated as a special case of Byzantine behavior. Thus any behaviour, arbitrarily malicious or deviant, could be expected from users, with ethical behaviour as a mere special case.
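
As a minimal sketch of this design stance (my illustration, not from Neumann's text), a system that treats benign behaviour as a special case of Byzantine behaviour validates every request identically, assuming nothing about the sender's intent; the names and limits below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    action: str
    payload: str

ALLOWED_ACTIONS = {"read", "write"}   # illustrative whitelist
MAX_PAYLOAD = 1024                    # illustrative size limit

def validate(req: Request) -> bool:
    """Apply the same checks to every user; 'trusted' users get no shortcut.

    Under the Byzantine assumption any request may be arbitrarily malformed
    or malicious, so benign traffic is simply the subset of requests that
    happens to pass validation.
    """
    return req.action in ALLOWED_ACTIONS and len(req.payload) <= MAX_PAYLOAD

print(validate(Request("alice", "read", "report.txt")))  # benign user: True
print(validate(Request("mallory", "exec", "x" * 4096)))  # hostile user: False
```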

John Weckert: Computer ethics a complex matter as actions may or may not be wrong depending on choice and context

An action may be wrong according to laws or cultural norms. For such an action to be offensive, individuals or groups must take offense. Thus there is a subjective dimension.
This dimension is influenced by choice. When the attributes of a person or a group are such that there is no choice, the offences are of more concern than when the attributes are due to choices. Having one's political views ridiculed is less offensive than having one's race or gender ridiculed.
Context also makes a difference. Individuals or groups in vulnerable positions are more likely to feel offended than those in positions of strength. Actions may be wrong but not necessarily offensive, if the subjective dimension rejects the taking of offence. There is also a duty not to take offence; that is, a demand for tolerance. But actions may be wrong because they are offensive, depending on choice and context, although they are not wrong in themselves.

James H. Moor: Computer ethics first and foremost about privacy

No ethical problem involving computing is more paradigmatic (typical) than the issue of privacy. Moor first presents two standard ways of justifying privacy. From the perspective of instrumental value (something good as a means), privacy offers us protection against harm, so that privacy leads to something important. Privacy also enables us to form intimate bonds with other people that might be difficult to form and maintain in public. As an intrinsic value (a value in itself, something good as an end), privacy might be defined as an essential aspect of autonomy.
Moor then suggests that we need to create zones of privacy, a variety of private situations. Then we have to make sure that the right people, and only the right people, have access to relevant information at the right time. The proposed restricted access account is a combined "control/restricted access" approach, with the advantage that it can be fine-tuned; that is, different people may be given different levels of access to different kinds of information at different times.
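
A minimal sketch of such a fine-tuned policy (my illustration, not Moor's; the roles, information kinds and time windows are hypothetical) could key access on who is asking, what kind of information is requested, and when:

```python
from datetime import time

# policy: (role, kind of information) -> permitted time window
POLICY = {
    ("physician", "medical_record"): (time(0, 0), time(23, 59)),
    ("insurer", "billing_summary"): (time(8, 0), time(17, 0)),
}

def may_access(role: str, info_kind: str, at: time) -> bool:
    """Allow access only if this role/information pair is in the policy
    and the request falls inside its permitted time window; everything
    else is restricted by default."""
    window = POLICY.get((role, info_kind))
    if window is None:
        return False
    start, end = window
    return start <= at <= end

print(may_access("physician", "medical_record", time(3, 15)))  # True
print(may_access("insurer", "medical_record", time(10, 0)))    # False: wrong kind
```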

Swedish Banks: Computer ethics a question of good taste

The ethical policies of major Swedish banks imply that the banks do not "co-operate", that is, accept as clients of their payment system services, such firms or individuals as engage in illegal activities, or in activities that may be related to "offensive behaviour" in the eyes of the public. Such practices seem to be in line with international terms of agreement within banking. What is to be considered offensive is not specified, which leaves much autonomy to individual banks for decisions.

Krystyna Górniak-Kocikowska: Computer ethics of global character

The Computer Revolution causes profound changes in people's lives worldwide. Because of the global character of cyberspace, problems connected with computer technology, such as ethical problems, have to be regarded as global matters. So far in human history there has been no successful attempt to create a universal ethic of global character. The ethic of the future will have a global character in the sense that it will address the totality of human actions and relations. Therefore computer ethics should be regarded as one of the most important fields of philosophical investigation.

NSF-Ethics of Human Enhancements: Ethical issues lag (far) behind

No one knows which visions (utopian, dystopian, or pedestrian) will ultimately be realized. But insofar as there are good reasons to think that many of these visions are plausible, it seems prudent to at least begin a conversation about the many ethical and social issues associated with human enhancement, especially since ethics seems historically to lag (far) behind technology and other quickly evolving events. By planning ahead, we can be better prepared to enact legislation or regulation as deemed fit.

Council of Europe: Computer ethics?

On the agenda: How do the management of Internet infrastructure and the interference with Internet access affect the right to freedom of expression? How can children be protected from sexual abusers that use the Internet for their crimes? Which tools exist to prevent and prosecute cybercrime? Can the marketing of counterfeit medicines on the net be stopped? How can privacy be safeguarded?

Mareike, mareike.gloss@im.uu.se

Abstract

The articles presented here deal with different ethical issues that have been discussed since the rise of the information society. I want to focus on two aspects in particular: the issue of property and the issue of privacy.
Even though the two themes seem different at first, they have a clear common denominator: both concern data and the changed accessibility of data in its different forms.

Property
Several of the presented articles discuss the matter of property, and in particular the issue of copyright and ownership of software. This has been a widely discussed issue, in particular when it comes to music and film "piracy". I want to focus on two articles that deal with software.
Stallman argues that "all software should be free". He underlines that he does not mean free as in "gratis", but free for other developers to work with the underlying code.
On the other hand, Johnson argues from the existing law and concludes that the current legal situation is appropriate. Both argue from the perspective of the harm that is done.
Johnson explains that changing the property law for software - i.e. allowing copying - would harm the owner to an extent that outweighs the harm done by NOT allowing the copying of software. To do so, she uses examples from the physical world - the swimming pool. Here lies her weakness, because Stallman, on the other hand, makes the case for allowing access to the source code, going deeper into the specific nature of virtual data instead of seeking analogies with "real life".
Thereby his argument is stronger, since he leaves the level of legal argumentation. Johnson does not get beyond the argument that the current legal system is the best possible, but fails to give clear reasoning for this. While she might be right that the current legal system is adequate to protect the owner's rights in the classical sense, she fails to account for the changed circumstances and the different character of virtual property. Stallman, however, succeeds in showing the advantages of virtual data, going further than the simple financial argument.

Privacy
Moor discusses privacy as a value, looking at it from three different perspectives: privacy as instrumental value, as intrinsic value, and as a value assembled from a set of core values, which he claims are "empirically grounded" and found in all cultures. In particular, the core value of security leads to privacy.
However, his argumentation is flawed. There is a gap to fill when it comes to an understanding of what privacy actually is. At the same time, his conclusion about the relationship between security and privacy appears deficient. He argues that a person who has all the information about someone else must appear as a threat, even when that person does not intend to harm the other.
But this argument is drawn from his personal experience of privacy. He argues that because of privacy's intrinsic value, the observed person MUST feel a sense of discomfort. Here he enters a circular argument: privacy has an intrinsic character because it derives from the core value of security, and the core value of security leads to a need for privacy because of privacy's intrinsic value.
I want to show with an example how this argumentation does not work in reality. When Google Street View launched in Germany in 2008, large parts of the population as well as the media saw it as a big risk for privacy. In the public discourse it was argued - very much in line with Moor's argument - that Street View's intrusion into privacy would be a threat to security. Whether this was really the case does not matter. What is important here is that this is what large parts of the population experienced.
In many other countries in the world - among them Sweden - these concerns did not play a role at all. Quite the opposite: when telling my friends about the public discussion in my home country, I got surprised reactions about how Germans could reject such a convenient service. The big gap between the Swedish and the German perception of Street View's privacy implications shows that there cannot be one global definition of privacy, because privacy is not connected to security in equal proportions in people's perceptions.
Even though Moor acknowledges these cultural differences in privacy, he does not manage to clearly state what effect these differences have on the overall argumentation. This is because he does not consider the other side clearly enough. For him, data that is public MUST be at risk of being abused - like medical records by insurance companies - without considering whether the other side really would do so. That is because he is arguing from his own cultural standpoint.

A new kind of ethics?
The central question in all of this is the role of the changed perception of what is ours and - following from that - what harm can be done by intruding into what is ours. It seems that the border of what we experience as belonging to us - our personal data as well as our creations - is blurring. Ethical considerations have to fail when they take clear borders (of the old condition) as a point of departure for their argumentation.
Through the Internet we have created a form of "Agora", not only for values and ideas but also as a place where virtual property is merging. This can be seen as a disadvantage, as potentially harmful (it is, from a material point of view). On the other hand, it can also be embraced.

Joseph, joseph.scott@it.uu.se

Abstract

Is it morally wrong to give offense? And if so, why? These are the questions that Weckert addresses in "Giving and Taking Offense in a Global Context." The author is attempting to demonstrate that offense in and of itself can be immoral; that is, an otherwise moral action may in fact be immoral if that action is offensive to others. The argument is not that some things are objectively offensive - such as, to use the author's example, racism - as the author quite reasonably points out that it seems counterintuitive to call anything offensive if there is no one who is offended by it. But this leaves us with a variation on the problem always faced by subjectivist ethics: how do we reconcile the immorality of offending others with the fact that people are often unreasonable about what they find offensive? Is it our responsibility to avoid offending everyone, no matter how easily offended they are? Is there no weight to be given based on the rationality of the offendee? With no constraints on what offense we must try to avoid giving, we are effectively and immediately paralyzed by the probability that someone, somewhere will find our actions offensive in some way.

Weckert gives us two measures by which to judge the importance of an offense. The first is that offending someone in regard to something which was not a matter of choice is always worse than in regard to something the person chose; so attacking someone's reasoned position is more ethical than making fun of their weight. The second is the context of the offense, specifically as it regards the power of those on the receiving end; the powerless have more moral right to take offense than do the powerful.

These are not entirely unreasonable guidelines, but an example will help to demonstrate that they are not sufficient. Online bullying is an issue which threatens to derail the commenting sections of many popular websites, with some participants hiding behind the relative anonymity of the web to engage in hostile behavior and personal attacks; this would seem to fall within Weckert's scope of "things that are wrong because they are offensive." And yet, it does not appear to be particularly offensive by either provided measure. All online social interaction is, by design, voluntary; and while some personal attacks may be based on non-chosen criteria the more common form of attack seems to be on the expressed opinions of others, the very quintessence of a chosen position. And it is by no means clear that traditional power differentials obtain between cyber-bullies and their prey.

I would suggest that the problem stems from the identification of the taking of offense as the crux of the ethical problem. Offense is the symptom, not the disease; we are offended by actions we sense to be morally wrong, and if we cannot define the exact nature of this wrongness, that is a failure of our ethical system. The point is, I think, neatly illustrated in Weckert's example of offensive smells, which he uses to demonstrate that there are no objective offenses. Bad smells are a so-called "primitive" offense - as opposed to a "norm-driven" offense - and Weckert views them as essentially unexplainable. But there are reasons to find some smells offensive; we may not know the reason that infected flesh, human waste, or rotten food is "bad," but our response to those smells has evolved to keep us away from unhealthy things. I think the same thing applies to "norm-driven" offense: we may not be able to define why something which offends us is unethical, but we've evolved a heuristic, and we know it when we see it.

Sofia, sofia.cassel@it.uu.se

Abstract

I have chosen to focus on Moor's article 'Towards a Theory of Privacy in the Information Age'. Specifically, I find the interplay of privacy vs. personal relationships very interesting.

Moor mentions an example where he calls a pizza place, and the staff immediately asks if he would like to order the same kind of pizza he ordered last time. In his case, the reason the pizza place knows his favorite kind of pizza is because they have caller ID (i.e., a computer remembers what pizza he likes). Moor feels slightly uncomfortable about this, and reflects that he would not have been if he had been a frequent customer at a fancier restaurant where the waiter had memorized his favorite dishes.
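
Mechanically, the pizza place's memory could be as simple as a lookup keyed on the incoming phone number. A minimal sketch (my illustration, not from Moor's article; the numbers and orders are hypothetical):

```python
from typing import Optional

# hypothetical order history, keyed on caller ID
ORDER_HISTORY = {
    "+1-555-0142": ["margherita", "margherita", "quattro stagioni"],
}

def suggest_order(caller_id: str) -> Optional[str]:
    """Return the caller's most frequent past order, if any."""
    history = ORDER_HISTORY.get(caller_id)
    if not history:
        return None
    return max(set(history), key=history.count)

print(suggest_order("+1-555-0142"))  # 'margherita'
print(suggest_order("+1-555-9999"))  # None: unknown caller
```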

The example made me think about when we feel uncomfortable and when we feel complimented by another party having access to private information about us. Assuming that computers and humans cannot have personal relationships, in Moor's case it appears that the major difference between the waiter and the computer is that the waiter has a personal relationship with Moor, whereas the computer does not. Thus, we would presume that Moor would also be very uncomfortable if a stranger approached him and (correctly) told him what color underwear he was currently wearing (as I assume most people would be).

This leads to another question, which is where the information comes from. In the restaurant example, suppose that the waiter in fact has a very bad memory and notes information about the restaurant's guests in a computer so that he can look them up whenever they visit. Is this still a violation of Moor's privacy?

In the stranger-underwear example, suppose that the stranger has statistics telling him that most men wear green underwear, presumes that Moor does so too, and happens to be correct. Is this still a violation of Moor's privacy, even though the stranger has no actual information about him?

By Moor's reasoning, it appears that the waiter noting information in his computer would actually be a violation of privacy, since it would be somehow analogous to the peeping Tom example. But Moor does not know this, and is still happy about the waiter recommending the food he loves. The stranger commenting on his underwear seems like it could be the opposite case: Moor feels uncomfortable because he thinks it is an invasion of his privacy, although it is not (the stranger might be going about telling everyone they are wearing green underwear, and being correct only some of the time).

Moor emphasizes informed consent in his conclusion. I think that he should have further explored the difference between what appears to be a violation of privacy and what actually is a violation of privacy: The stranger who tells everyone they are wearing green underwear is perhaps not violating anyone's privacy (but appears to be) whereas the waiter who is noting information about guests in his computer is perhaps violating their privacy (but appears to not be).

Jing, jing.liu@it.uu.se

Abstract

High technologies, human values and human rights
High technologies bring both conveniences and problems. As people start adopting high-tech products into daily life, the problems come along with them. Ethical issues always accompany human actions.
1. Computer-related ethics issues
There are things to think about when discussing computer-related ethics issues, for example privacy, security, copyright, etc. Neumann [1] suggests that there are three basic gaps between human values and computers, and he discusses security in terms of protection against internal misuse and penetrations, and also against undesirable system and user behavior. Moor [7] discusses privacy issues in the information age. He states that privacy is one of our expressions of the core value of security, and he suggests several principles for privacy: the publicity principle, which says that rules should be clear and known to the persons affected by them; the justification of exceptions principle, which says that an action is moral only if the harm it causes is much less than the harm it prevents; and the adjustment principle, which says that the rules should be adapted to special circumstances.
Spafford [2] analyzes several justifications for computer break-ins and argues that they are unethical, even when no obvious damage results. Neumann [1] also states that computer break-ins are antisocial and represent a serious potential for misuse.
Johnson [3] and Stallman [4] discuss moral issues surrounding the ownership of computer software. Johnson [3] states that it is wrong to make illegal copies of software because it harms the rights of the authors, and therefore computer software should be protected. She also states that copyright and patent law seem right on target but lack the conceptual tools to handle computer-related issues. In contrast, Stallman [4] argues that software should be "free": he states that "programmers have the duty to encourage others to share, redistribute, study, and improve the software", and that society shouldn't have owners of software, since the negative effects of ownership are widespread and important.
With the revolution of computer technology in mind, Górniak-Kocikowska [5] considers the computer and global ethics. She states that computers do not know borders and affect all areas of human life, so computer ethics should be a global ethic rather than a professional ethic. Weckert [6] considers global ethics too. He discusses offence in a global context and suggests that tolerance of the views of others is always important in a global community; making more information readily available to a large number of people, and not taking action against those who have given offence, also helps.
2. Human Enhancement
There is much discussion of human enhancement. Allhoff et al. [8] suggest that enhancement and therapy are different, although for me the distinction between the two is not obvious, because people always try to challenge themselves. In the paper, 25 questions about human enhancement are discussed. Whether human enhancement is moral or not should take into account what kinds of human-enhancing devices and treatments will be invented. Personally speaking, I would like to try some human enhancement devices, if they are not harmful to my health.
References:
[1] Neumann P.G., Computer Security and Human Values; Computer Security, The Research Center on Computing and Society at Southern Connecticut State University
[2] Spafford E.H., 1992, Are Computer Hacker Break-ins Ethical? Journal of Systems and Software, Vol.17, Issue 1, pp 41-47
[3] Johnson D.G., Proprietary Rights in Computer Software: Individual and Policy Issues; Software Ownership and Intellectual Property Rights, The Research Center on Computing and Society at Southern Connecticut State University
[4] Stallman R., Why Software Should be Free, Free Software Free Society: Selected Essays of Richard M. Stallman, 2nd Edition
[5] Gorniak-Kocikowska K., 1996, The computer revolution and the problem of global ethics; Science and Engineering Ethics, Vol.2, Issue 2, pp 177-190
[6] Weckert J.,2007. Giving and Taking Offence in a Global Context; International Journal of Technology and Human Interaction, Volume 3, Issue 3
[7] Moor J.H., 1997, Towards a Theory of Privacy in the Information Age; Computer and Society
[8] Allhoff F., Lin P., Moor J., and Weckert J.; 2010; Ethics of Human Enhancement: 25 Questions and Answers; Studies in Ethics, Law, and Technology, vol.4, issue 1, article 4.

Thomas, thomas.lind@it.uu.se

Abstract

In the NSF report by Allhoff, Lin, Moor, and Weckert, 25 issues, or questions, are identified as likely to need increasing attention following developments in technological applications for human enhancement.

I believe that new applications of technology for human enhancement will continue to be developed and diffused in human society. Legislation and regulations arising out of ethical concerns may serve to slow down the process in a semi-controlled fashion. However, given the current moral state regarding the preservation of human life, where the ends seem to justify the means in the effort to save and prolong human lives, I believe that limits on the use of such technology will be modest at best.

The authors mention the concern about "playing God with world-changing technologies", which is an interesting notion. An area that has carried the label of playing God for quite some time is the genetic manipulation of humans, but while this label has not worn off, the applications of gene therapy are numerous and increasing. My point and argument here is that while we can imagine how many emerging technologies could be applied in ways comparable to "playing God", and agree not to strive towards these applications, we may slowly advance towards them through the iterative development of applications close to current ones that seem a safe distance away from "playing God". In effect, we apply our current cultural values to determine what "playing God" means and to agree that such actions are not condoned, but small advancements in technology are affecting our culture, shifting it in a direction we cannot foresee. While we characterize playing God as bad in our culture, we are playing God in the eyes of the culture of our ancestors (by our use of technology), just as the culture of our descendants will most likely condone what looks like playing God in our eyes.

As such, much of the discussion on human enhancement relates to future issues where any solution we can comprehend today would be controversial given our current state of society; but will it (can it, even) be controversial in a future society where such human enhancements are actually on the verge of being implemented? The first question posed by the authors concerns the very definition of human enhancement, and the idea that enhancements should perhaps only serve to bring human performance to a normal level. However, with the introduction and proliferation of technologies such as Google Glass for augmented reality, exoskeletons as aids for work involving heavy lifting, and other "voluntarily temporary" human enhancements, what will stop the boundary between permanent normative enhancements and temporary performance-boosting enhancements from fading?

Benny Avelin, benny.avelin@math.uu.se

Abstract

I am going to consider the text written by Deborah Johnson "PROPRIETARY RIGHTS IN COMPUTER SOFTWARE: INDIVIDUAL AND POLICY ISSUES"

In this paper she argues that, although she does not state whether the law against copying proprietary software is right or wrong in itself, the legal system of which it is a part is roughly just, and as such we have a prima facie obligation to follow that law.

I am going to look at some of these arguments.
First she argues that copying a piece of software harms the owner of the copyright, using the argument that someone who does not think that this does any harm should talk to small business owners who have gone out of business because customers copied their software instead of buying it.

This argument somehow suggests that a software company can never grow big. If a company's software is bad, no one would want to buy it, so copying could do no harm from a financial point of view; moreover, why would one copy bad software anyway? On the other hand, if the software is good, the company would supposedly go out of business because people copy its software. But this is somewhat contradictory: most often, good software survives.
By this reasoning, the act of copying as a whole would serve the purpose of filtering bad software out, doing a better job than a mere monetary barrier.

Later she argues that the harm caused to the owner by copying is greater than the good gained by copying. However, this does not hold in every case. Consider, for example, software directed towards other companies: usually such a company charges so much for its software that no individual would ever buy it out of curiosity. In such a case, copying the software and experimenting with it for personal use does more good than harm, since there is de facto no monetary harm to the owner.

Regarding casual copying (friend to friend)
Nissenbaum puts it well in "Is Casual Copying Immoral? - A Plea for Casual Copying"

'With casual copying, I submit that the right of the purchaser of consumer software to be free of interference in responding to personal obligations of kindness and generosity in the private domain should truncate the competing claim of the software owner to completely determine copying practice. That is, full determination over who can copy and when is no longer one among the bundle of rights accorded to software owners.'

In arguing against copying, one could consider the case of supporting free software. Suppose that a free program carries out some task less well than a proprietary one, and that free software ought to be supported. Suppose further that, if no illegal copy of the proprietary software existed, one would choose the free option. In this case, if one chooses to illegally copy the proprietary software, one has done more harm than good by not supporting the use of the free software.

Ruth Lochan, ruth.lochan@im.uu.se

Abstract

Why software should be free
by R. Stallman
In this abstract I would like to reflect on Stallman's article on the GNU Operating System website. There are ongoing and often heated discussions on whether software should be proprietary or free. Stallman argues that it should be free and presents a series of reasons and examples to support his points. The main argument rests on the premise that software should serve "the prosperity and freedom of the public in general". Developers often use two reasons in support of the public having to pay to use software: the emotional argument or the economic argument. There is a price for everything, even in software development, where even the most emotional developer will eventually sell if the price is right. So it would appear that ethics and morality are not the foremost concepts which developers have in mind when creating new software; often the driving force is the possible financial gain. The author points to three levels of harm that arise as a result of obstruction:
- "Fewer people use the program.
- None of the users can adapt or fix the program.
- Other developers cannot learn from the program, or base new work on it."

Undoubtedly, attaching a cost to software use limits accessibility. This is especially challenging, for example, in developing countries where the economic infrastructure to use various proprietary software may not be in place. The author refers to this as "psychosocial harm". A divide is created among users and potential users. Without the release of the source code, the software remains a "black box". Consequently, rather than using and further developing what already exists, other developers have to recreate software that performs functions similar to existing software. Developing software that is free does not mean that creativity will be compromised; in fact, it is possible that there will be even better software when the incentive is not only monetary. Furthermore, much of the software development in academia is funded by universities or by taxpayers, and the developed software should therefore be made available to the stakeholders, or to wider society. It is not that ethical developers should not be rewarded for their work; they should receive monetary compensation. Accessibility to free software can be beneficial, since it will lead to wider use of already developed software. Another interesting point in the text is the difference between competition and combat: "proprietary software causes a form of combat in the society" instead of mere competition among developers. The author also highlights what can be viewed as a difference in ideology between the United States and Europe. The effects of not allowing free software have far-reaching consequences and indeed affect all aspects of modern life.

Tao Qin, tao.qin@angstrom.uu.se

Abstract

In "Why Software Should Be Free" by R. Stallman, the author points to three levels of harm that arise as a result of obstruction:
- "Fewer people use the program.
- None of the users can adapt or fix the program.
- Other developers cannot learn from the program, or base new work on it."
However, this can't solve any problem for us!
Software is not easy to create. In fact, it's a whole heck of a lot of work, and takes somewhere between months and years.
1 Software teams are constantly working on improving and updating the software to keep up with changing technologies. It's a continuous process.
2 It costs money to put out a software product. We have to spend years creating it, paying people's salaries, renting office space, purchasing computers, etc. If we want you to actually find out about our product, we often need to spend money on advertising as well.
3 People who make software have more to do once your purchase has been made. We are here for you when you run into issues, providing a support team to answer questions, walk you through troubleshooting steps, fix bugs, etc.
4 We do our best to price software affordably, just like a sandwich shop owner figures out how much to charge for a sandwich based on a price that adequately covers the cost of ingredients, running the store, and paying the employees. Most of us price our software as reasonably as possible.
5 Software is created by hard-working people, like you. Do you get paid for your work?
6 You pay for your clothes, gadgets, movie tickets, lunch, plane tickets, etc. So why not your software?
7 Hopefully, knowing this, you understand that software does not create itself. It's made by hard-working people just like you.
8 We often work in small teams and we put a lot of time, money, and effort into creating software for you. It's our technological work of art. We're not perfect, but we do our best.
If we were paid by a government department, I might agree with R. Stallman. In fact, we are paid by our software customers! If we don't get paid, how do we move forward? As Benny said, by this argument a software company can never get big. He explained it like this:
If a company's software is bad, no one would want to buy it, so copying could do no harm from a financial point of view; moreover, why would one copy bad software anyway? On the other hand, if the software is good, the company would supposedly go out of business because people copy its software. But this is somewhat contradictory: most often, good software survives.
I think he made his point for us! This is what our software companies are thinking! This is how our software companies survive!

Maria, msvedi@kth.se

Abstract

Are there inherent ethical issues in technology, or do they rather relate to our use of it? Computers might be merely hardware and algorithms, but there are ethics regarding our use of them, e.g. privacy concerns. Moor (1997) mentions that the issue of privacy is the most paradigmatic regarding computing and ethics, where our challenge is "to take advantage of computing without allowing computing to take advantage of us". The convenience of computerised systems has to be weighed against, for example, improper exposure of information. This exposure might be due to neglecting security aspects, improper use of them, or not enough of them. If someone takes advantage of that, are there harmless cases of unauthorized computer intrusions? Spafford (1992) claims there are not, no matter whether no significant damage results or whether they have a useful purpose. Shalvi et al. (2012) claim that dishonest behaviour is based both on self-interest and justification, and that refraining from it requires both time and a lack of justifications for self-serving unethical behaviour. They found that when faced with temptations people tend to lie, especially when under time pressure. Serving self-interest may in itself be critical to one's survival, but unethical behaviour can have negative (social) costs. The results of the act should be considered separately from the act itself, especially when we consider how difficult it is to understand all the effects resulting from such an act (Spafford).
Regarding computer systems, there are gaps that permit misbehaviour from the user or the computer (Neumann, 1991):
- Technical gap: the initial purpose of a particular device can differ from all its other potential uses.
- Socio-technical gap: policies regulating technology innovation and use do not always follow social policies.
- Social gap: differences between expected and actual human behaviour.
Can these gaps be narrowed? The technical gap could be narrowed by careful design, where development, administration and use respect the requirements. The socio-technical gap depends on narrowing the technical one, but value sensitive design, with well-defined and enforceable policies, seems a solution. The social gap depends on the two former, but also on education that informs for better understanding (Neumann, 1991).
Transhumanism is a movement and a philosophy, with an emphasis on "progress (its possibility and desirability, not its inevitability)", and it "goes well beyond humanism in both means and ends" (More, 2013). It advocates using technology as a means to enhance and extend human nature, including overcoming biological limitations, but not to perfect the human. As with all philosophy there are some shared and general views, but there are wide variations in "assumptions, values, expectations, strategies, and attitudes". More (2013) recommends working proactively and recognising risks by using rationality and a concomitant acknowledgment of uncertainty.
There are technologies and tools that are nowadays routinely utilized for the goal of enhancing human capacities, such as eyeglasses, caffeine or fitness routines. Enhancement can thus be seen as improvements in personal welfare, within boundaries where they are placed outside of sanctioned interventions (Juengst & Moseley, 2016). Working towards human 'normality' poses problems when defining boundaries and deciding where to draw the line for enhancements, due to the diverse nature of humanity. Is human enhancement cheating? If a student obtains good grades due to genetics, socio-cultural background, disciplined study or enhancement (drugs), only the latter would challenge the way we view cheating today.
There are two ways to go: prohibiting policies, or evaluation that does not reward cheating (Juengst & Moseley, 2016). If a student takes aid in Ritalin and manages to get through a tertiary education, is the judgement based on whether or not the student has documented ADHD, or on whether the student takes on a disciplined and mature stance after graduation? Would the answer differ if the enhancement was non-temporary and thus made a part of the individual? The same powerful technologies that can transform human nature for the better could also be used in ways that, intentionally or unintentionally, cause direct damage or more subtly undermine our lives (More, 2013). Could enhanced humans create a new digital divide? Regulation and/or restriction have to take into account personal freedom, autonomy and authenticity. While we may not want to create "hollow victories for authentic achievements" (Juengst & Moseley, 2016), we have to consider the possibility of both stringent regulation and individual liberty. To come to terms with this, Juengst & Moseley (2016) define three grounded considerations for a more moderate position:
- To govern use rather than development
- To either punish unauthorized uses of technology or protect the interests of those disadvantaged by those uses
- To recognize the trade-off: often a step forward for some purposes is also a step back in other contexts
Not changing isn't always the best policy, but we have to be aware of both sides. In order not to create a digital divide concerning technological human enhancements, we may have to treat them the way Stallman talks about software: to make their development a public concern, where we as individuals have the duty to encourage the welfare of society through voluntary cooperation. On the other hand, as Neumann (1991) expresses it: "Holistically, we need a kinder and gentler society, but realistically that is too utopian".
References
Juengst, E. & Moseley, D. (2016). "Human Enhancement", The Stanford Encyclopedia of Philosophy (Spring 2016 Edition), Edward N. Zalta (ed.). http://plato.stanford.edu/archives/spr2016/entries/enhancement/ [2016-10-19]
Moor, J.H. (1997). Towards a Theory of Privacy in the Information Age. Computers and Society. CEPE'97.
More, M. (2013). The philosophy of transhumanism. The transhumanist reader: Classical and contemporary essays on the science, technology, and philosophy of the human future, 3-17.
Neumann, P.G. (1991). Computer Security and Human Values. The National Conference on Computing and Values (NCCV). http://rccs.southernct.edu/computer-security/
Shalvi, S., Eldar, O., & Bereby-Meyer, Y. (2012). Honesty requires time (and lack of justifications). Psychological Science, 23(10), 1264-1270.
Spafford, E.H. (1992). Are computer hacker break-ins ethical? Journal of Systems and Software - Special issue on computer ethics, 17(1), 41-47.
Stallman, R. Why software should be free. https://www.gnu.org/philosophy/shouldbefree.en.html

Christiane, christiane.gruenloh@fh-koeln.de

Abstract

Cass R. Sunstein discusses in his paper the use of moral heuristics [13]. Heuristics are well known in cognitive psychology - especially in problem solving. Zimbardo et al. define heuristics as
"simple, basic rules - so-called "rules of thumb" that help us cut through the confusion of complicated situations. Unlike algorithms, heuristics do not guarantee a correct solution, but they often start us off in the right direction." [15, p. 224]

As the general definition of heuristics indicates, they cannot guarantee correct results, but they help to shorten the way to a solution, whereas algorithms, for example, perform an exhaustive search and eventually come up with a correct result. Sunstein states that heuristics play a "pervasive role in moral, political, and legal judgments". Sometimes moral heuristics can work well, especially when they represent generalisations from a range of problems, but problems occur when "the generalizations are wrenched out of context and treated as freestanding or universal principles". Another problem lies in the framing. Semantic framing can influence and affect people's intuition, at least for unfamiliar questions of morality, law and politics. Sunstein presents a catalogue of moral heuristics to investigate the relationship between moral heuristics and questions of law and policy. Using these cases he illustrates that rules of thumb are applied which may generally be sound and work well, but which in these cases lead to systematic error. [13, p. 531ff]
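
The heuristic/algorithm contrast can be made concrete with a small sketch (my illustration, not from Sunstein's paper): a greedy rule of thumb is fast and usually adequate, but only the exhaustive search guarantees a correct answer. With the hypothetical coin set {4, 3, 1}, the heuristic fails for amount 6:

```python
from itertools import combinations_with_replacement

COINS = [4, 3, 1]  # hypothetical denominations, largest first

def greedy_change(amount):
    """Heuristic: always take the largest coin that still fits."""
    used = []
    for coin in COINS:
        while amount >= coin:
            amount -= coin
            used.append(coin)
    return used

def exhaustive_change(amount, max_coins=10):
    """Exhaustive search: try all coin multisets of growing size,
    so the first match uses the fewest coins."""
    for k in range(1, max_coins + 1):
        for combo in combinations_with_replacement(COINS, k):
            if sum(combo) == amount:
                return list(combo)
    return None

print(greedy_change(6))      # [4, 1, 1]: three coins, a systematic error
print(exhaustive_change(6))  # [3, 3]: two coins, guaranteed optimal
```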

According to Shalvi et al., if people have enough time and cannot justify unethical behaviour, they don't cheat. It is assumed that people "first act upon their initial automatic intuition and only later deliberately reason about their action" [10, p. 3], and further that people's automatic tendency is to serve their self-interest. This raises again the question of whether there is no such thing as altruistic people who are intrinsically good. The authors conducted experiments to test their hypotheses that time pressure increases lying (because participants have no time to reason about their behaviour) and that only when participants have time do private justifications impact ethical behaviour. They conclude from their results that, in accordance with Haidt, moral judgement is "driven first by an intuitive emotional reaction" and is followed by a "post-hoc search for reason justifying the initial intuition". They suggest that if people have the time to reason about their actions and fail to justify their unethical behaviour, they feel bad about lying. [10]
Krystyna Gorniak-Kocikowska discusses the emergence of a new ethical theory from computer ethics as a response to the computer revolution: information ethics [3]. She was inspired by an article by James Moor ("What is computer ethics?") and by her work on the problem of a global ethic. According to Gorniak-Kocikowska, computer ethics has to be regarded as a global ethic, because computers do not know borders and therefore the problems with regard to computer technology have a global character. She states that the future global ethic will be a computer ethic and therefore "should be regarded as one of the most important fields of philosophical investigation". [3]

John Weckert deals with the concept of the giving and taking of offence in order to assist "our understanding of what is necessary in a global ethics" [14, p. 18]. Weckert gives as an example the case of the Danish cartoons and states that if offence is taken seriously, it is not easy to distinguish right from wrong. Here two values clashed: freedom of expression and religious belief. These values are not rated equally highly in different countries. In some countries freedom of expression outdoes other values; in some countries this freedom doesn't even exist. Weckert describes what offence is, what is wrong with giving offence, what elements are involved in taking offence (hurt, a judgement that the action was wrong, and some action), why people take offence, and discusses whether there is a duty not to take offence. Although one might feel hurt, and we might consider the action wrong, that doesn't imply the right or duty to demand action as a result of the offence taken. According to Weckert, choice and context "provide some way of distinguishing between offence which is a serious moral issue and that which is not". [14]

Hans Jonas emphasises in his book [5] that all commandments and maxims, however different they are, concern actions in the present. The underlying assumption is that the action takes place between people within the same present. Nobody would be held responsible if an action with good intentions led to bad consequences. [5, p. 23ff] But times have changed, because modern "technology has introduced actions of such novel scale, objects, and consequences that the framework of former ethics can no longer contain them." Although the old rules still stand (for example justice, honesty, etc.), there are some issues, because we have to deal with a "growing realm of collective action where doer, deed, and effect are no longer the same as they were in the proximate sphere, and which by the enormity of its powers forces upon ethics a new dimension of responsibility." [5, p. 26ff] Similar to Rachels, who claimed that the moral community is neither limited to people in one place nor to one time, and that the conception of the moral community must be expanded across space, time and the boundaries of species [8], Jonas introduces a new imperative. According to Jonas, Kant addresses his categorical imperative to individual conduct rather than to public policy, and real consequences are not considered at all. Therefore Jonas extended Kant's categorical imperative to take the existence and happiness of proximate generations into account. This new imperative by Jonas is: "Act so that the effects of your action are compatible with the permanence of genuine human life". [5, p. 36ff]

Present actions can have a big impact on future generations. Ernst Friedrich Schumacher describes in his book the changes with regard to work due to modern technology [9]. The idea of modern technology is to reduce work. A modern economist would therefore welcome reducing the work load, for example through the 'division of labour'. Schumacher compares this to the point of view of a Buddhist. Buddhists value the work of man very differently, and a division of labour into minute parts, meaningless, boring and stultifying to carry out, would be regarded as "little short of criminal". Later Schumacher states that "modern technology has deprived man of the kind of work that he enjoys most, creative, useful work with hands and brains, and given him plenty of work of a fragmented kind, most of which he does not enjoy at all" [9]. Buddhists and modern economists differ in terms of their values; according to Schumacher, the modern economist might go as far as claiming that economic laws are as free from values as the law of gravitation. He reflects on what would happen if, instead of reducing productive time, we increased it to twenty per cent of total social time. He assumes that this would have a therapeutic and educational value, because there would be more time for any piece of work, and people working this way wouldn't know the difference between work and leisure, because they would enjoy working like that.

The relation between values and technology is described by Gilbert Cockton ("Value-Centred HCI") [1] and Friedman et al. ("Value Sensitive Design and Information Systems") [2]. Cockton claims that there is something wrong with the existing definitions of HCI, and that HCI therefore should "look beyond computing, psychology and sociology to a design movement that seeks value above all else" [1, p. 149]. According to Cockton, only a focus on delivering value can be the basis for a complete and effective HCI, and HCI will become a true discipline only when it "develops, expresses, discusses, agrees and integrates a set of core values" [1, p. 149, 151]. He states that value can take many forms, and that a designer has to understand the values of the stakeholders and "support them in delivering this value" [1, p. 155]. The problem is how a designer can do that. Rather than defining a value, the designer has to talk about it with the people [1, p. 157].

One way to design technology that accounts for human values is introduced in [2]. With "value" the authors refer to "what a person or group of people consider important in life" [2, p. 2]. Friedman et al. introduce an integrative and iterative tripartite methodology that consists of conceptual, empirical, and technical investigations. The aim of conceptual investigations is to learn who the stakeholders are, what values are implicated and how one should engage in trade-offs among conflicting values, to carefully develop working conceptualisations of specific values, and so forth. Empirical investigations of the specific human context of the technical artefact intend to inform the analysis, evaluate the success of a particular design, and try to answer questions regarding the prioritisation of competing values in design trade-offs. Technical investigations aim to determine how technology can support or hinder human values. While empirical investigations focus on individuals, groups or larger social systems, "technical investigations focus on the technology itself". [2, p. 3ff]

Moor maintains that there is a set of core values which are found in all human cultures (e.g. life, happiness, freedom, knowledge, ...) [6, p. 29]. According to Moor, our "challenge is to take advantage of computing without allowing computing to take advantage of us". In contrast to humans, who forget most information, the computer can store lots of information, which doesn't exactly facilitate privacy. In this information age, the human being who uses credit cards and online shopping increasingly becomes the Transparent Man. Electronically captured information can be used (and misused) for any purpose. "Therefore, to protect ourselves we need to make sure the right people and only the right people have access to relevant information at the right time" [6, p. 31]. Moor proposes a control/restricted access theory of privacy, in which policies for privacy can be fine-tuned. He suggests creating zones of privacy, which help in deciding how much personal information one wants to keep private or public. Moor claims that, although privacy is not a core value, it can be seen as the expression of a core value, namely the value of security.

Peter G. Neumann addresses human values as well (for example in terms of personal privacy protection), but focuses more on computer security than on the different values that should be taken into account [7]. Trying to provide increased security can compete with users' needs such as ease of system use, performance, etc. With regard to values he states that there is a need for better "education relating to ethics and values, in the context of the technology, particularly in relation to computer and communication systems, and also relating to the risks of computerization". Three basic gaps are introduced, which "may permit computer and / or human misbehavior". Neumann refers in his article to Eugene H. Spafford, who deals with the question whether some computer hacker break-ins are ethical and whether there is such a thing as a "moral hacker" [11]. Some people argue that those break-ins might be useful if no harm is done. Spafford presents some motivations with which hackers may try to rationalise break-ins, for example The Hacker Ethic, which states in part that all information should be free. In this respect he refers to Richard Stallman's GNU Manifesto and criticises that, according to that manifesto, intellectual property doesn't exist and therefore there would be no need for security. Spafford claims that privacy would no longer be possible if all information were free to everyone. But Stallman's stance doesn't imply publishing medical data, as Spafford implies. The Chaos Computer Club, for example, added two points to the ethical principles of hacking in the 80s: "Don't litter other people's data" and "Make public data available, protect private data". Among other things, Spafford tries to refute the security argument, that break-ins illustrate security problems and thereby perform a service, by comparing it to breaking into houses to demonstrate their susceptibility to burglars. I don't think that one can conclude that the security argument is without merit. In my opinion, when no harm is actually done, showing that the system is insecure and that a break-in was successful can sensitise the owner or admins. I think it depends on the dimension of the break-in.

Deborah G. Johnson focuses in her article on the two issues that surround the ownership of computer software: the individual moral issue (is it wrong to make an illegal copy of a piece of proprietary software?) and the policy issue (does the system of copyright, patent, and trade secrecy protection produce good consequences?) [4]. Although copying a piece of software seems harmless, she is compelled to conclude "that it is morally wrong to make an illegal copy of a piece of software, because it is illegal"; the key issue lies in the relationship between law and morality. Johnson examines the strongest arguments for the moral permissibility of individual copying and refutes them. She concludes that illegal copying deprives the owners of their legal rights, which does them harm. Johnson further argues that computer algorithms should not be patentable, because so many patents have already been granted that they hinder invention. In this context she refers to Richard Stallman and notes that he has proposed a law that would exclude software from the domain of patents.
Richard Stallman is the founder of the Free Software Foundation. In his essay he discusses why software should be free [12]. Like Johnson he addresses the question of whether one is allowed to copy a computer program in order to give it to someone else, but he comes to a different conclusion. While Johnson refers to the law (it is illegal, therefore it is morally wrong), Stallman states that current law cannot decide the matter. He uses "the prosperity and freedom of the public in general" as his criterion and claims that "the law should conform to ethics, not the other way around". He concludes that "programmers have the duty to encourage others to share, redistribute, study, and improve the software we write: in other words, to write 'free' software".

References
[1] Gilbert Cockton. Value-centred HCI. In Proceedings of the Third Nordic Conference on Human-Computer Interaction, pages 149-160. ACM, 2004.
[2] Batya Friedman, Peter H. Kahn Jr, and Alan Borning. Value sensitive design and information systems. Human-Computer Interaction in Management Information Systems: Foundations, 4, 2006.
[3] Krystyna Górniak-Kocikowska. The computer revolution and the problem of global ethics. Science and Engineering Ethics, 2(2):177-190, 1996.
[4] Deborah G. Johnson. Proprietary rights in computer software: individual and policy issues. Software Ownership and Intellectual Property Rights, Research Center on Computing & Society, 1992.
[5] Hans Jonas. Das Prinzip Verantwortung. Versuch einer Ethik für die technologische Zivilisation. Suhrkamp, 2003.
[6] James H. Moor. Towards a theory of privacy in the information age. Computers and Society, 27(3):27-32, 1997.
[7] Peter G. Neumann. Computer security and human values. In Computer Ethics and Professional Responsibility. Blackwell, Malden, 2004.
[8] James Rachels. The Elements of Moral Philosophy. McGraw-Hill, 4th edition, 2002.
[9] Ernst Friedrich Schumacher. Small Is Beautiful: Economics as if People Mattered (1973). New York, 1989.
[10] Shaul Shalvi, Ori Eldar, and Yoella Bereby-Meyer. Honesty requires time (and lack of justifications). Psychological Science, 2012.
[11] Eugene H. Spafford. Are computer hacker break-ins ethical? Journal of Systems and Software, 17(1):41-47, 1992.
[12] Richard Stallman et al. Why software should be free, 1992.
[13] Cass R. Sunstein. Moral heuristics. Behavioral and Brain Sciences, 28(4):531-573, 2005.
[14] John Weckert. Giving and Taking Offence in a Global Context. IGI Global, Hershey, PA, USA, 2009.
[15] Philip G. Zimbardo, Robert L. Johnson, and Vivian McCann. Psychology: Core Concepts. Pearson, 7th edition, January 2012.

Nino, nino.amvrosiadi@geo.uu.se

Abstract

Ethical issues arise around every human action and creation, and technology is no exception. Fortunately, humankind is gifted not only with creativity but also with a sense of responsibility, which makes us wonder whether our actions are right. So how do we decide what is right, and is modern technology built and used according to that notion of right?
Technology seems to be evolving quickly, and the happiness and safety of the people using it are often taken into account [1,2,6]. Value sensitive designs have been introduced, and technology assisting humans is reaching the revolutionary limit of merging people with robots; artificial vision is one of these many admirable achievements. There is considerable movement towards the other end of the scale as well, where privacy is a value that tends to go extinct without our noticing, when other people or organizations judge that it is better to keep and expose individuals' personal information whenever they deem it necessary [3,5,8].
It is probably easier to judge whether or not technology keeps up with human values. But the question of what is the right thing to do still remains unanswered.
Moral heuristics [10] are very often the guideline that people follow. One can argue that previous knowledge and experience quickly pop up and save us hours of unnecessary thinking when we face a dilemma, yet we may still step into serious errors due to lack of information. If, for example, the information source is the ICSU declaration on the freedom and exchange of scientific research and similarly high ideals, one will be surprised how far the scientific community has come in the field of ethics; but this impression is easily ruined by an objective look at how the scientific community actually functions all over the world (recently, for example, Iranian students were denied access to TU Delft on political grounds). Most commonly, people rely on the existing laws to form their list of 'right by definition'.
Is it then moral to grant unlimited rights to the creators of technology because this is what the laws state? In our society it is a widely accepted idea that it is good and fair to become rich by selling one's ideas, and unethical to deprive a person of the right to sell his or her research or creative achievements. But where is the limit between self-recognition and the common good? The end should be adding to global knowledge and technological development, rather than becoming rich and famous [4,9]. It is a fact, though, that while most countries today do not practice conquering in physical space, they approve conqu
