Department of Information Technology

Abstracts A-M

Matilda Andersson, Chemistry, matilda.andersson@mkem.uu.se

Abstract

Tatiana Chistiakova, Information Technology, tatiana.chistiakova@it.uu.se

Ethics in research

Morality and ethics are an essential part of being human and an important constituent of life in society. Ethical issues can be found everywhere, from everyday situations between two friends to serious problems at the national level.

Moral values are individual to every person; however, they are mainly established by society. As a consequence, every individual should follow certain rules and be responsible for his or her actions. For some groups of people, such as doctors and clergy, this ethical responsibility is accompanied by duties of privacy and secrecy. Since researchers often deal with various types of data which might uncover personal information, ethics is very important for research in any area as well. Moreover, in some cases researchers are responsible not only for deciding whether or not to reveal the studied data, but also for the results they obtain. Hence, researchers should follow ethical rules and regulations.

There are many cases in research where ethics is a hot issue. Research processes and results may lead to serious consequences. Take, for example, the recent case of the Italian scientists: an earthquake was predicted not to be serious, and ordinary people were assured of their safety. The outcome, however, was the opposite, and seven scientists were subsequently found guilty of making wrong predictions.

In my opinion, the judge should not have been able to take this type of decision. This is the nature of research: scientists cannot see the future, they can only study how events might evolve. That is why research experiments always include error calculations or probabilistic estimates of the results. The results might be more or less accurate, but there is always a chance that they are wrong. The real proof of an experiment is shown only in practice. Moreover, the results can never be taken as ground truth when the case concerns something as unpredictable as natural phenomena.

As mentioned before, one component of ethics is secrecy. Since researchers may work with various types of data, including medical data, it is obligatory to follow rules that regulate how, and on what basis, the data can be opened to the public. It is also important to keep the data anonymous, so that no surveyed person can be identified.

The main points of ethics are to protect privacy and to respect the person. Nowadays, since research covers every sphere of our lives, careful and precise investigation is in demand. Every scientist should carefully state the results of his or her studies, and thoughtfully weigh all the pros and cons of future studies.

Börje Dahrén, Earth Sciences, borje.dahren@geo.uu.se

Abstract
The CUDOS code and Disinterestedness
One of the defining works in the field of research ethics is the CUDOS code as defined by Robert Merton. The foundations of this code are Communism, Universalism, Disinterestedness and Organised Scepticism, and this abstract will focus exclusively on the term Disinterestedness.
Merton proposed that science should be conducted with the sole purpose of gaining new knowledge and contributing to science with no ulterior motives. This is a very idealistic view, and it is questionable if this part of the CUDOS code has ever been applied at a large scale at any point in time. In reality, there are a multitude of factors other than the “pure search for knowledge” that influence the decisions of individual researchers as well as research institutions. In my view, the two most influential factors are career advancement and policy making.
1. Career advancement. Today, researchers are under immense pressure to deliver high-impact research at an ever-increasing rate. The impact of the science is measured mainly using bibliometrics, looking at factors like citations and where the studies are published. This means that if you are conducting research in a not-so-active field, you cannot be cited as many times as if you were working in a very active one. This might dissuade researchers from entering a "cold" field of research, for fear of being stuck in a scientific dead end, also known as "career suicide". As long as research is a paid job, career advancement will be a highly influential factor, at least partly overruling the code of disinterestedness.
2. Policy making. Politicians as well as private and governmental research foundations make strategic decisions on how to distribute research money, to advance the areas of research that they feel are the most important, or conversely, they might choose to actively not fund or even abolish a certain area of research. There are two prominent examples of this: a) the Bush administration banned federal funding for new embryonic stem cell research in the US, abruptly stopping this relatively new field of research in its tracks, and b) during the last decade in Europe, the field of climate research has been extremely "hot", meaning that many researchers in the natural sciences have tried to squeeze different climate aspects into their research and grant proposals. In other words, the strategic decisions taken by policy makers distinctly influence what the individual researcher chooses to direct his or her attention toward.
The question is firstly to what extent these factors influence the individual researcher, but also whether this influence is necessarily a good or bad thing, i.e. whether disinterestedness really is relevant for conducting ethical research.
When policy makers choose what research field to fund (or not fund), they purposefully alter the landscape of research. This means firstly that science is by definition no longer a free entity, as it is dependent on the goodwill of policy makers. But also, it means that science becomes more focused, and more effective at generating new knowledge in the chosen area(s) of research.
In short, policy making undoubtedly overrules the code of disinterestedness. The net effect of this might be positive, i.e. more effective research, but the tricky question is who gets to set the agenda, and on what grounds. In the case of embryonic stem cell research in the US, the shutdown was a decision based on the religion-derived morals of the policy makers, which definitely adds yet another layer of complexity on top of the already complex domain of research ethics!

Brendan Frisk Dubsky, Mathematics, brendan.frisk.dubsky@math.uu.se

Abstract

The truth, the whole truth and nothing but the truth

Among the examples of research-ethical problems listed in the literature for the upcoming seminar, a central theme is discernible: that of the putative role of the researcher as a herald of truth, and the ethical issues associated with instances, suspected or confirmed, of incomplete information, falsities or even deliberate deceit in the scientific community. To the first category belong the exceptional measures of secrecy taken in connection with the graduation from Uppsala University of the Swedish crown princess, the case of the Italian seismologists convicted of manslaughter, and the shady circumstances under which two mathematicians were pressured to leave their department here at the university; to the second, the observed tendency of publishers to promote spectacular positive results at the expense of truth, the possibility of fabricated DNA evidence, and pseudoscience; and to the third, the supposedly suspicious-looking e-mails among climate researchers, the purportedly bogus lie detectors and, finally, the lucrative business of open-access journals publishing flawed articles which are falsely claimed to be properly peer-reviewed.

I argue that the presence of this common theme is no coincidence, but that it may on the contrary serve as a reasonable blanket definition of those ethical values which are intimately connected to science, as distinguished from those of society in general. Indeed, of the more elaborate CUDOS norms of Robert Merton, the first and last may be viewed as asserting that the scientist should strive to spread and advocate scientific truth, while the remaining two are to ensure the absence of scientifically irrelevant considerations in scientific matters.

Assuming this definition, examples of ethical issues related to research but not to the ethics of science can be found in the ways research is conducted (experiments involving considerable suffering in humans or animals are in general frowned upon by society, for instance) and in the potential consequences of the advances in knowledge in which it results, such as technology which is viewed as dangerous, or which is otherwise opposed for political or religious reasons and the like. In support of the practical validity of the proposed definition, one may note that while ethically loaded discussions concerning truth and scientific rigour are part of everyday research life, the loudest voices of concern regarding the remaining aspects of ethics and research are arguably most often heard from outside the scientific community.

The potential for conflict between these "norms of science" and those of society (which in addition of course vary somewhat throughout the world) is thus clear, and as an aspiring researcher one should reflect on the resulting ethical dilemmas, ranging from the question of whether or not to illegally download copyrighted material, to that of whether or not to publish a paper which would in effect supply the reader with a recipe for constructing biological weapons. (Pertaining to the last example, the publication of results describing how to modify the bird flu virus into one which could spread directly from human to human caused some controversy about a year ago.)

Samuel Edwards, Mathematics, samuel.edwards@math.uu.se

Abstract
Criminal Scientists

The prosecution of the Italian scientists ([1]) raises several interesting questions regarding the ethical and legal responsibilities of scientists when communicating their scientific opinion on matters relating to the health and safety of the general public.

On 6 April 2009, a magnitude 6.3 earthquake struck the town of L'Aquila, situated in the Apennine mountain range in central Italy, killing 309 people and causing substantial damage to the city's infrastructure. A series of smaller tremors leading up to the earthquake led a government panel, the "Serious Risks Commission", to make an assessment of the likelihood of a large earthquake occurring. A week before the 6 April quake, the panel issued a public statement that it "was not possible to predict whether a stronger quake would occur", as well as a recommendation that buildings in the area be strengthened to withstand seismic activity. From the minutes of the panel's meeting, it was determined that some of the scientists on the panel made statements such as "there was no reason to believe that a series of low-level tremors was a precursor to a larger event" and "that just because a number of small tremors had been observed, it did not mean that a major earthquake was on its way" ([2]).

It was these statements that led Italian prosecutors to charge the scientists on the panel with manslaughter, of which they were found guilty in October 2012, receiving six-year jail terms (as of October 2013, none of the scientists had served any jail time, pending appeals against the sentences). Many international groups of scientists have protested against the prosecution of the scientists, arguing that the science itself was being put on trial. The prosecutors argued (and the judge agreed) that this was not the case, but rather that the scientists were being held accountable for the statements they made.

I find this an interesting question: what level of responsibility do scientists have to make accurate statements in matters that relate to public safety? Scientists generally use a much more formal language, making statements based on precise probabilities and statistics, which are often hard for laypeople to understand. Great care must be taken when this language is abandoned. Indeed, great trust is placed in the statements made by experts in many delicate situations. It is therefore quite clear that scientists have an ethical responsibility to avoid making statements that will erode this trust; every statement should be accompanied by conditions and caveats so as to help preserve the public's trust in science. Moreover, I believe an argument can be made that it is negligent to deviate from this language in a way that clearly puts people's lives at risk (by which I mean cases where there is a possibility of misinterpretation, along with a possibility of harm to the public). Doctors can be held responsible for egregious errors that result in a loss of life, and it is not unimaginable that there should be some form of legal recourse to discourage scientists from making irresponsible statements.

Emma Eriksson, Chemistry, emma.eriksson@kemi.uu.se

Abstract
Research ethics in observational studies
Observational studies can be the best way to obtain appropriate results for certain kinds of research problems. Such studies are preferably done when conventional experiments would be unethical, or when some kind of behaviour is to be monitored. For example, the effects of a long-term smoking habit on certain medical conditions cannot be investigated using conventional methods with exposed subjects and a reference group; this would not be ethical. On the other hand, it is possible to study people who already smoke and compare their results to non-smokers: this is an observational study. Studying different kinds of behaviour, interactions between people, and people's reactions in certain situations are also forms of observational study.
Many ethical issues are coupled with observational research. One important aspect is how to maintain the integrity of the observed people. It is always important that the data is stored in such a way that the results cannot be paired with a specific person, and that all subjects are protected from harm. To the greatest extent possible, the subjects should also be informed in advance and give written consent. Under certain circumstances, however, it is not possible to tell the participants before the observation, due to possible changes in their behaviour, or because of the large number of people observed (for example when studying large crowds). The question is where the line should be drawn: when is covert observation acceptable and when is it not?
Ethical problems also arise when children are included in a study. All studies should, to the greatest extent possible, be done on adults, but sometimes this is not possible. When a child is involved, written consent is needed from both the child and the guardians, and it should preferably be written in a way the child can understand. Naturally, it is not always possible for a child to give its consent. An ethical dilemma then arises: is it acceptable to give the guardians all the power to decide whether the child should participate in an observational study? What happens if money is involved as compensation for participation? And are the parents always entitled to the results, or could that violate the child's integrity?
In conclusion, many ethical questions will always arise when including people in research, especially when studying children. Giving the subjects appropriate information about the research seems to be one of the most important things for assuring good research practice. Another thing to consider is constructing the experiments in the right way, so that accurate conclusions can be drawn. If, for example, large epidemiological studies are performed and wrong conclusions are drawn, serious consequences can follow once the general public is informed.

Jonas Flodin, Information Technology, jonas.flodin@it.uu.se

Quality in scientific publications

Scientific advancements play a vital role in the improvement of the quality of life for humans and animals. As such, it is important that research is focused on topics that are relevant and that the results can be trusted to be correct. To this end, peer reviewing, a system to ensure that results are scrutinized by experts in the field, has been put in place. However, peer review seems to be either insufficient in filtering out research of bad quality, or not in use at all.

In [1], J. Bohannon describes how he successfully got a fatally flawed paper accepted for publication by 157 open-access journals. Bohannon claims that "Any reviewer with more than a high-school knowledge of chemistry and the ability to understand a basic data plot should have spotted the paper's short-comings immediately. Its experiments are so hopelessly flawed that the results are meaningless". This means that, at most of the journals where it was submitted, the paper did not undergo the scrutiny necessary to find obvious shortcomings. I argue that this is a natural consequence of the incentives surrounding publication. Open-access journals have an incentive to publish more papers because authors pay a fee to have their papers published. The authors have an incentive to churn out as many papers as possible, regardless of quality, because the rate of publications per unit of time is a common performance metric. This works by feeding off the trust that the scientific community has built up over decades.

[2] is an even more alarming example of editors failing to detect an obviously flawed paper. The authors were invited to speak at the World Multiconference on Systemics, Cybernetics and Informatics. The paper was "written" by the computer program SCIgen, which was created by scientists at MIT. Just by reading the abstract, it is easy to see that the paper is pure nonsense (although rather amusing). Searching the internet, it is easy to find more generated papers that were nevertheless accepted.

These two examples show that quality control is severely lacking in places, but how can we know which journals and conferences we can trust to perform high-quality scrutiny? It falls to the individual researcher to use their own judgement when deciding which publications to trust. I argue that this is not desirable, since it allows researchers to ignore the lack of quality in papers with arguments such as "It's published in a journal, of course I think it has been scrutinized by experts in the field".

It is clear that current practices do not measure up to expectations when it comes to quality control. In the interest of keeping the trust between researchers, something must be done, whether it is some kind of enforced quality control or reshaping incentives to account for quality, and not only quantity, in scientific publications.

[1] J. Bohannon, "Who's Afraid of Peer Review?", Science, 2013.
[2] J. Stribling, D. Aguayo and M. Krohn, "Rooter: A Methodology for the Typical Unification of Access Points and Redundancy", 2005.

Joakim Karlsson, SP Technical Research Institute of Sweden, Joakim.Karlsson@sp.se

Abstract
The large small world
By the time one is a fully grown researcher, a long time has passed since one's first thoughts and impressions were formed. During the early days of our lives we are raised and taught right and wrong mainly by our parents and family. As we continue to grow, we learn about laws and moral values mainly from society. During the final stages of our pursuit to become researchers, we learn how to act through the standards set by our research surroundings, such as the university, funders, representative bodies, government and society. These moral values, laws and pieces of ethical guidance are what form human character. However, all of these characterizing values are highly individual. Even with the same parents or the same teacher, two persons will have completely different thoughts.
With the increasing use of technology and improved communications, researchers are given more room to collaborate worldwide. In many cases collaboration is encouraged. With this freedom come new perspectives and opportunities for improvement. However, such collaborations can cause issues and dilemmas when it comes to ethics and morals. Even though moral values and ethical codes are accepted all over the world, they differ depending on which part of the world and which culture you are in. Cultural differences and differences in laws between countries and cultures will lead to different points of view for individual researchers. To provide guidance on global morals and ethics, guidelines such as the Declaration of Helsinki have been accepted in the research community worldwide.
As the final responsibility for decision-making is in the hands of the researcher, the judgment of a moral or ethical issue depends on the individual researcher. I contend that the decision made will reflect the experiences and values the individual has gained throughout his or her life up to that date. Most of the decisions one makes in life, and the consequences one has to face, shape one's further decision-making. Depending on your previous experiences, a decision you find morally correct may be an insult in someone else's eyes.
An example of differing points of view can be seen in the case of the professors laid off in a secretive way at Uppsala University. Although evidence was presented concerning the researchers, they were not able to have their say in the matter. At first, my sense was that the researchers had done wrong, but as I studied the transcripts, my view changed because of the strict line kept by the principal, with no room for the researchers to present their defence.
A moral dilemma encountered by two researchers from two different parts of the world may end up with two completely different outcomes, and both of them might, in their respective eyes, be correct. This is an example of how global guidelines and moral laws are interpreted differently depending on one's local situation.

Emil Kieri, Information Technology, emil.kieri@it.uu.se

B. A. Nosek, J. R. Spies and M. Motyl, "Scientific Utopia: II. Restructuring Incentives and Practices to Promote Truth Over Publishability", Perspect. Psychol. Sci., 7:615-631, 2012.

The research community is highly competitive. Especially for early-career scientists, facts on the ground in terms of journal publications are crucial. This often puts the concepts of getting it right and getting it published in conflict. Furthermore, the spirit in the research community, and in the editorial process, promotes progress: negative results, and verifications or falsifications of previous results, are less likely to get published than new, positive results.

The need to publish at the highest possible rate induces a risk that false results are published. The reluctance of journals to publish replications and falsifications may mean that the false results that do get published linger uncontested for a long time. This article discusses these mechanisms. The authors argue that there is a gap between what is good for the scientist, at least on a short time-scale, and what is good for science. They discuss previous attempts to improve the situation, and make some new propositions.

Part of the cause of problems of this kind lies in the mindset of researchers, both those conducting the research and those scrutinising and ultimately publishing it. Replication is given less credit than novel results, and journals publishing studies which confirm previous results are often seen as less prestigious.

One aspect which is discussed in some detail is the role of the reviewers. The authors recognise peer review as a good way of detecting potential flaws. The review process is also acknowledged to be hard work, and it is noted that the reward for the reviewer is small, if it exists at all. Therefore, simply asking reviewers to scrutinise the research harder is not seen as a solution.

One common type of error in published research, in particular in the behavioural sciences where the authors reside, is bad handling of statistics. They propose more widespread use of checklists in the reviewing process in order to make the detection of such errors easier. Such checklists help the reviewer, allowing the detection of severe, but commonly occurring, shortcomings in manuscripts with little extra effort.

The authors also discuss the lack of transparency in the research process as an incentive for poorly conducted research. They argue that researchers should, to a greater extent, publish not only summaries of their research findings in the form of journal articles, but also the underlying data and details about their tools, methods and workflow. They cite a study where psychologists were asked to share their data for a replicative experiment, and less than a third complied. Furthermore, the study found a correlation between reluctance to share data and both weak evidence against the null hypothesis and errors in the statistical analysis. An analogy can be drawn to the perpetual debate within the scientific computing community regarding the publication of computer code, see, e.g., [R. J. LeVeque, SIAM News, April 1, 2013].

Jan Klosa, Information Technology, jan.klosa@it.uu.se

Abstract
The complexity of human existence has made it necessary to develop rules and guidelines to make life secure and to keep it going. Hundreds and even thousands of years ago, such rules were formulated in books (like the Bible) or dictated by leading persons, such as kings. Nowadays, guidance is often based on laws and morals. Over the past decades, the call for guidelines in the area of research has become more and more important, since their absence would most likely lead to misconduct and mistrust. Moreover, since research affects the environment and society, this could drop humanity into a never-ending, unpredictable scenario.
But the interaction of legal and ethical approaches, as well as the freedom of research, has to be sensitively balanced. For example: "certain ethical criteria make it harder [...] and certain ethical criteria make it impossible to reach new and valuable knowledge." (Good Research Practice, Vetenskapsrådets Rapportserie 3:2011, page 24)
On the other hand, moral values can vary from person to person and are influenced by culture, tradition and religion, so that ethical codes can only have a voluntary character. By contrast, "what a certain law prescribes, is very clearly and precisely formulated" (Good Research Practice, Vetenskapsrådets Rapportserie 3:2011, page 19), so a law is meant to be binding.
So, which cases should be covered by laws to make sure that effective and high-quality research is still possible? And how much freedom should researchers have, so as not to cause any kind of harm, neither to research subjects nor to the environment and society?
Those questions cannot be answered without intense discussion. One prominent attempt to cover certain aspects of research arose in the form of the so-called "Declaration of Helsinki", an ethical code addressed to the medical community.
Even though morals and laws provide a chance to make research more trustworthy, they do not completely prevent research misconduct, such as fabrication, falsification or plagiarism. But developing a concept of research misconduct is not as straightforward as it might seem. For example, how can one ensure that both "intentional and unintentional behaviour" (Good Research Practice, Vetenskapsrådets Rapportserie 3:2011, page 108) will be covered and treated appropriately? Which kinds of behaviour are better encompassed by ethical codes, and which by legislation? Should the answer to the latter question depend on the research area? Who is responsible when misconduct has been detected, and what sanctions should be imposed?
Answering the last question is of special interest when focusing on one of the given examples: the case of the Italian researchers who were found guilty of manslaughter for incorrectly and contradictorily predicting the earthquake that occurred on 6 April 2009 in L'Aquila, Italy, in which 309 people died. Did these researchers use research material (e.g. computational methods) incorrectly? Did they intentionally spread contradictory information? Neither question can plausibly be answered in the affirmative. Hence, it can't be denied that the whole process was driven by a moral urge to sentence someone held responsible for this horrible tragedy.

Konstantinos Koukos, Information Technology, konstantinos.koukos@it.uu.se

Abstract
This course tries to draw up basic guidelines for morality and ethics. When a human being has a decision to take, there is no simple right or wrong and no well-defined procedure (no algorithm for correct decision-taking)! We have only our intuition on very complex problems and no rule to follow, assuming that law exists only for well-known issues and itself derives from widely accepted ethical decisions of the past. Beyond these general guidelines, the course focuses on ethical decisions for good research. It discusses the basic ideas of Merton's CUDOS norms (communalism, universalism, disinterestedness, and organized scepticism) and their application in research. Additionally, it addresses codes of ethics and moral laws, and touches on issues sensitive for science, such as animal ethics in research. Some examples of peer review and its role in research are also discussed as part of the articles and presentations. As another example, the destructive results of a scientist's wrong prediction, which led to the death of more than 300 people, raise new ethical dilemmas.

Anastasia Kruchinina, Information Technology, anastasia.kruchinina@it.uu.se

Responsibility of scientists

Publishing and sharing knowledge is essential to the success of every scientist. Researchers should inform the general public about new research results, and address and discuss current scientific issues. But when scientists publish results, they must be aware that they bear the ultimate responsibility for the content presented in the work. Such responsibility exists not only for published results, but also in practical work which influences the community.
I wish to illustrate my point about sharing results with the example of the L'Aquila earthquake that occurred in the region of Abruzzo, in central Italy [3]. The agency for risk prevention (mainly scientists) gave inexact and incomplete information about the danger and did not warn people about possible consequences. This resulted in many victims.
If a problem like earthquake prediction is under consideration, a scientist should never say that something will or will not happen for sure. There is always a possibility of error. In the case of the Italian scientists, they should have been more careful and warned people that science is not able to predict earthquakes for sure: with current knowledge it is simply impossible. People who are not experts in a specific area trust scientists. A similar situation occurs when we are ill and go to the doctor. If the doctor says that we are healthy and do not need to worry, we will trust him or her. We will relax and believe that everything is fine. But there is always a possibility that the doctor was wrong, and that just one additional test would have shown a dangerous disease...
In the situation of the Italian scientists, if they had explained to the people that anything could happen, that there is always a possibility of a wrong prediction, people would have been more careful and the number of victims could have been reduced. People would not have been so relaxed, and would have prepared for a possible catastrophe.
But of course, there is also the other side of the coin: panic, which can be dangerous as well. Another illustrative example is scientists in medicine. When should they declare a pandemic? Should they tell people that a dangerous infection is spreading quickly nearby? It is very hard to explain the real situation to people, and it is very easy to cause panic instead of prevention.
However, if scientists are overly cautious every time and protect themselves with disclaimers about incomplete information and the possibility of error, the community will stop believing them, which will cause many problems for science.
In conclusion, scientists need to be aware of both problems when sharing their results and must think carefully before making any decision. It is a very important ethical question. Should scientists publish their results if they could be used in the wrong way? (E.g. Einstein, whose physics was later used for nuclear weapons.) Should scientists tell the community the results, but point out that there is still a large possibility of error? (This can instead cause people's disbelief in the scientist and reduced funding for the research.)
So before publishing or speaking, every scientist needs to think carefully: Is what I am going to say really true? Is it necessary? Is it helpful? Is it dangerous?

Dorothea Ledinek, Solid State Electronics, dorothea.ledinek@angstrom.uu.se

Abstract

Carl Leonardsson, Information Technology, carl.leonardsson@it.uu.se

I will focus on discussing the responsibilities of scientists suggested in the VR report on good research practice and the ICSU booklet on freedom, responsibility and universality of science.

The first principle of the CUDOS code is Communalism. It states that scientific results and data should be made openly available. First, that relates to the responsibility to publish. I.e. the scientist who has produced a scientific result is obliged to make it public. Both reports bring up the difficulty of adhering to the responsibility of publishing when carrying out research funded by commercial interests. Especially so when the achieved results happen not to be the desired ones. Another imaginable difficulty in following the responsibility of publishing, which is less covered in the two documents, is when the scientist is willing to publish, but the research community, conferences and journals, are not willing to accept the results. This may be because the results are politically incorrect, or deemed uninteresting, as may be the case for negative results. Second, communalism states that published results should be openly available. This contradicts the common practice of publishing in conferences and journals where a private publisher takes ownership of the text and exacts fees from whoever wants to read it. VR now requires that results of research funded by VR grants be published using open access.

The second principle of CUDOS is Universalism. ICSU stresses that the principle of universality guarantees the right to perform research. They state this as the freedom of movement, association, expression and communication, as well as the right to equitable access to research data and materials. The VR document focuses on the corresponding responsibility to judge scientific results objectively without taking into account aspects of the one who produced them such as ethnicity or social standing.

The third principle of CUDOS is Disinterestedness, i.e. to perform research only for the purpose of gaining knowledge. VR seems to take a rather weak stance on this principle. The VR document stresses that there are many valid reasons for performing research, and that the research community should be generous in accepting any reason for participation.

The last CUDOS principle is that of Organized Scepticism, i.e. to continually question and scrutinize both the results of others and one's own results. This principle is mostly covered as rather vague general rules in the documents of VR and ICSU. The adherence to the principle by the research community is, however, questioned in some of the other recommended literature, specifically the articles in Science and The Economist regarding peer reviewing and reproducibility of results, respectively.

Another responsibility brought up by both VR and ICSU is to "minimize the misuse of science." This seems to be a difficult thing to do, since much research, and in particular basic research, can be used or misused in many ways, without the scientist being able to control the usage.

Kristina Lidayova, Information Technology, kristina.lidayova@it.uu.se

Research ethics is an area which is quite difficult to define, because new questions keep arising as research develops. Some kind of "moral consensus" was defined nearly 70 years ago by the sociologist Robert Merton. He formulated four main principles. The norm of communalism tells us that society and other researchers have the right to be informed about research results. The norm of universalism concerns evaluating scientific work only from a scientific point of view, without regard to gender, position in society, etc. According to the norm of disinterestedness, the main reason for doing research should be contributing new knowledge. The last norm, organized scepticism, says that questioning and scepticism are important until we have sufficient evidence on which to base our assessment.
The researcher's position and the general perception of research have changed since these principles were formulated. It is difficult to live up to them in today's reality, but they still provide important cornerstones for discussions about honesty, openness and research misconduct.

One category of research ethics problems is finding the equilibrium between protecting the individuals participating in the research and conducting good research that can lead to important results. There is a need for ethical rules or regulations stating what to do before the research, how to perform the actual research, and what to do afterwards. A collection of such rules is called a research ethics code. It attempts to formulate what morality demands from the researcher in certain situations. It is also important to know what regulations the law dictates to researchers. Therefore, there exists ethics legislation which addresses specific situations and conditions.

From a general point of view it is important to distinguish between morals and the law. Morals are not clearly and precisely formulated; they are based on our own values and strongly connected with our feelings and needs. They differ from person to person and from culture to culture. It is also not clear what sanction should follow a breach of morals.
The law can be clearly and precisely formulated and has an explicit system of sanctions. Compared to morals, it always applies to a certain geographical territory and from a certain point in time. Both the law and morals are important, even though they are different. There are situations where researchers break moral norms but the law has nothing to say, and the other way around, where the law forbids something which seems morally neutral to us. The researcher should therefore constantly distinguish between the law and morals, and follow both ethics codes and ethics legislation in order to perform good research.

Jonatan Lindén, Information Technology, jonatan.linden@it.uu.se

Publishing and peer review

An increasingly alarming problem in the scientific community is the perceived declining quality of research. The reproducibility of experiments has recently been reported to be embarrassingly low in certain areas: as stated in the Economist article, two studies that undertook to reproduce experiments presented in biomedical articles succeeded in less than 25% of the cases.

At the heart of the problem lies a conflict of interest. The research community as a whole, and the publisher, on the one hand, desire nothing but truthful accounts of successful experiments, while the individual researcher, on the other hand, may desire a career in academia and, in the shorter term, a publication. The problem is that research with novel and positive results tends to be easier to publish, as it is deemed more interesting. Hence a researcher has strong incentives to produce accounts of successful experiments. This may tempt researchers either to deliberately hide or trivialise problems with their research, or to become subject to motivated reasoning, i.e., since one outcome is more desirable, results may be interpreted in favour of the expected outcome.

The authors of the article Incentives for Truth consider peer review to be the currently best method of improving (or safeguarding) research quality. However, several flaws are enumerated: reviewers are volunteers, reviewing is tedious and many errors are overlooked, and it is only the research article that is reviewed, not the research itself. The latter is of course much more extensive than the former, and the probability that an error would show up can be expected to be much higher if the whole research process were inspected, instead of only the final article. They do not mention other problems with peer review, such as conflicts of interest: the reviewer might know the authors of the article under review, and may alter his or her review accordingly.

In the same article, one interesting idea put forward to solve the problem mentioned earlier is to make it much easier to publish articles. This would remove the article count as a meaningful measure of success, as it would be trivial to publish. Hence the incentive to publish would be steered more by an interest in communicating research progress, and all sorts of experimental results would be welcome. Furthermore, it would remove part of the rigidness of the current publication model, where a published article is as if written in stone, the end product of the research. Instead, it would allow for a more dynamic view of the research content produced. However, this would of course raise questions about attribution of research efforts and how merit should be distributed.

Fei Liu, Information Technology, fei.liu@it.uu.se

Abstract
Ethics contains moral precepts that are conscious, reflected on and motivated, which one formulates as clearly as possible and presents in a systematic way. Ethical considerations in research are largely a matter of finding a reasonable balance between various interests that are all legitimate. The questions research ethics strives to tackle are dynamic, though. They change over time. They vary between disciplines. When research ethics is placed in a cultural and social context, it manifests itself differently. On the surface, it seems that research ethics conflicts with scientific advancement, since certain ethical criteria make it harder (taking a longer time, costing more), even impossible, to reach new and valuable knowledge. However, examples of poor scientific quality, like ill-formulated research questions, use of incorrect methods and omission of negative observations and data, overlap with poor ethics. Therefore, there is actually no conflict between demands for good scientific quality and good research ethics. Ethical regulatory systems ensure participants' interests during the quest for knowledge. The question is how one should act in a complicated reality, where different principles and interests can stand in opposition.

Since the subject for this group seminar is research issues, in the following text I will try to enumerate various ethical issues that could arise during the course of research, according to the course materials. If researchers realize that they are working with research that has or can have dangerous consequences, an important problem arises (continue or not? publish or not?). A common and tempting mistake is overestimating the significance of the results one has arrived at, and exaggerating their holding power far beyond the area in which one has found them to apply. To be allowed to conduct certain types of research, it is necessary to obtain approval from an ethics review according to law; but what if researchers perform their work in countries with lower ethical standards, just to take advantage of this? There are four concepts regarding accessibility of research materials: secrecy, professional secrecy, anonymizing and confidentiality. How to handle requests for materials covered by these concepts can be an issue. Research collaboration has all the usual large-team-project problems: participants who do not contribute; undertaking tasks against one's wishes; authorship distribution; big-team management; funding expenditure. One of the main tasks of Swedish universities is to inform the general public about research. Problems with this include a blunt and oversimplified way of presenting important research problems in the media, and researchers announcing results prematurely and even exaggerating their importance. Since the number of published works plays a decisive role when the merits of researchers are compared, researchers can be tempted to strive for quantity rather than quality. A typical example of such a temptation is to break research results down into "least publishable units", so as to be able to present a larger number of titles.
A problem can also arise when someone makes a significant contribution to the work effort during the research itself, but is not given the opportunity to be included on the author list. Finally, researchers are often invited to do peer reviews, and there are cases of peer reviewers abusing the trust that access to a colleague's work entails.

Yu Liu (Ernest), Mathematics, yu.liu@math.uu.se

Abstract
Example III: Deception (Milgram Experiment)
Milgram Experiment
Aim: to see whether people really would obey authority figures, even when the instructions given were morally wrong. At a deeper level, it is like asking: is human nature inherently evil, or can reasonable people be coerced by authority into unnatural actions?
Settings:
Milgram and the "teacher" (the participant) sit in one room, and the "student" (Milgram's assistant) sits in another room. The teacher cannot see the student.
Milgram asks the teacher to pose questions to the student, one at a time. If the student gives a wrong answer, the teacher pushes a button to give the student an electric shock. The more wrong answers the student gives, the higher the voltage the teacher is ordered to apply, up to a deadly 450 volts.
In reality, the student (the assistant) is not shocked. He plays sound recordings to simulate the student's reactions, such as groans, screaming and "let me out". If the teacher insists on quitting several times, the experiment is stopped; otherwise, he continues up to the maximum voltage.
After the experiment ends, Milgram tells the teacher the real purpose of the experiment.
Outcome: about 65% of all participants continued to the maximum voltage, although they felt uncomfortable.

Ethics Issues: Deception or not??
Fact:
He did not tell the participants the real purpose beforehand, but he did tell them right afterwards.
Criticism:
1. How could he deceive participants who even wanted to help him with his research?
2. The fact that the participants thought they had caused suffering to other human beings could have caused severe emotional distress.
That is, they realized: "How can human beings be so evil, just obeying evil orders so easily?"
Arguments:
1. The purpose seems reasonable: he was trying to establish whether the claim of war criminals (e.g. from World War II) that they were just obeying orders was a reasonable defense or not.
2. After the experiment, he told the participants everything.
3. Subsequent research indicated that there were no long-term psychological effects on the participants (in contrast to the Stanford Prison Experiment, which did have long-term effects).

Current Rules
Here is a summary of the APA (American Psychological Association) code of ethics and informed consent policy; most other research areas use similar codes of practice.
1. Must not perform experiments using deception unless the research has a valid use, and there is no viable alternative.
2. There are some risks that must never be concealed, such as physical risk, severe emotional distress and discomfort. For example, a subject volunteering for a sleep deprivation experiment must be informed, rather than ordered to participate, as happened within the military.
3. Any deception should be revealed as soon as possible, and certainly no later than the conclusion of the experiment.

If a researcher designed an experiment similar to Milgram's today, it would not be allowed.

Do they work??
Corresponding to the rules:
1. "Unless the research has a valid use": it is extremely hard to define "valid use", and to tell whether an experiment is useful before we actually see its effects, especially in blank areas of research such as "how likely are we to obey orders?"
2. "Some risks must never be concealed": for many risks, even potential risks, we have no idea. In research, many unexpected things can happen. How can we tell whether we have disclosed all the risks?
Besides, most participants are non-scientists, and it is all too easy to inadvertently mislead them because they do not fully understand the consequences.
One can never wait for them to become professors of the area before asking them to participate.
3. This can be done easily, and it has recently been adapted: a subject signs an initial consent form but, after the experiment has been explained at the end, signs a second one and can ask for their contribution and records to be destroyed.

Debates still go on
1. There may be times when an experiment loses some validity due to informed consent, that is, without deception.
2. If we had 100% belief that we human beings are reasonable all the time, the Milgram experiment would never have made such a stir. Actually, we more or less doubt how likely we are to obey authorities regardless of the consequences of the orders.
So it is a meaningful experiment. Should we stop that kind of experiment? Actually, the rationality and scientific validity of the experiment itself are also under debate.

Lina Meinecke, Information Technology, lina.meinecke@it.uu.se

The responsibility of science.

The main aim of science and research is to enlarge our knowledge. You might, however, ask to what extent you can hold scientists responsible for their research.
The issue that researchers should always be aware of what their research can be used for has long been discussed for prominent examples: nuclear fission and stem cell research.
How responsible a scientist is for uncertain information has not been discussed as much. A recent event illustrates this dilemma:
In April 2009 a heavy earthquake hit the Italian town of L'Aquila and caused about 300 deaths. The city lies in a seismically active region and experienced a swarm of smaller tremors in the preceding months. In response to the rising panic among the area's population, seismologists discussed the increased risk of an earthquake at a local meeting. Their conclusion was that L'Aquila always faces a higher risk than other regions, and that the tremors were indeed a sign of increased seismic activity, but that the probability of an earthquake occurring in the coming weeks was still around 0.1%. The problem then was communicating this to the public, which resulted in a statement by a government spokesman. Apparently some scientists also gave interviews about the low probability of an earthquake in the near future. Parts of the population felt reassured by these declarations and interpreted them as meaning there was no chance at all of an earthquake hitting L'Aquila. So they stayed in their houses when it happened, which often ended fatally. The six scientists and the government official were sentenced to six years in prison for manslaughter, because they did not present all the knowledge they had and misjudged the situation.
Can you hold scientists responsible in such a way? And to what extent are scientists supposed to make decisions or recommendations based on their results? Furthermore, do researchers, who are not trained to give public or even political speeches, especially in front of a big crowd at a delicate time, bear the responsibility if their scientific presentation is misinterpreted by the public?
The sentence caused protests from seismologists around the world. They argue that there was no way this earthquake could have been predicted, and many have signed a petition for the release of the six convicted. To understand the unpredictability of the earthquake, one should look at southern California, where a big quake has been predicted for a long time and a similar swarm of tremors occurred, but nothing happened.
It is clear that scientists have a responsibility towards the public. It is often the public who funds research, so there should be some feedback into society. But where does the role of the objective analyst end, and the role of the active decision maker, who takes one side or the other, begin? This question becomes even more difficult in fields where science has not yet evolved far enough to give certain answers, or maybe never will, such as seismology.

Slobodan Milovanovic, Information Technology, slobodan.milovanovic@it.uu.se

Peer review - an honor or just extra work?

Communicating scientific results to the public, and exchanging them efficiently between researchers, is of great importance for the development of society, as well as one of the primary tasks of each researcher. The concept known as peer-reviewed (or refereed) publishing has been established and strongly accepted in the scientific community as the dominant means of sharing the latest scientific findings. The peer-reviewing process is thought to be the core of the solution to the problem of finding an effective self-regulatory system for distributing scientific information. The key element of the process is the reviewing of scientific results by qualified (usually anonymous) members of a profession within the relevant scientific field, in order to determine their suitability for publication. This way, standards of quality, performance and credibility are supposed to be maintained.

It is in the common interest of every member of the scientific community to do the work assigned in this process responsibly. Given that, it would seem a sound assumption that such a system should not experience any serious problems. Nevertheless, according to the article published by Science under the title "Who's Afraid of Peer Review?", it seems that "from humble and idealistic beginnings a decade ago, open-access scientific journals have mushroomed into a global industry". The main flaw appears to lie in the role of the reviewer, the core role of the process. If scientists are offered the chance to review a paper for a journal, they should, as rational members of the scientific community, feel honoured by the trust invested in them. Such a gesture implicitly recognizes the work of the scientist being offered the reviewer role. However, a social experiment conducted by J. Bohannon (which can itself be questioned from an ethical point of view), which resulted in the article quoted above, reveals that reality deviates severely from this assumed scenario. By creating an artificial scientific paper with a serious, easily noticeable methodological flaw and submitting it to many different peer-reviewed journals, he was able to test the established system for possible illness. He arrived at results that many would not think twice before describing as scary.

Namely, a significant percentage of the reviewers seemed not even to have tried to understand the material presented in the paper. The paper was accepted for publication by a non-negligible number of journals, and many journals that returned the paper for further editing suggested only improvements related to grammar and style.

The publishing process has turned into a highly competitive, hyper-productive business. Scientists are under constant pressure to produce more publishable material and are in a constant race with their colleagues. That creates a strong platform for many sorts of misbehaviour, including lack of time for reviewing, sabotage of other scientists' work, and the loss of true creative motivation.

Updated  2013-10-28 11:16:54 by Iordanis Kavathatzopoulos.