We apologize for the intermittent audio on Dr. Barbour's line. A transcription of the webcast is provided below to help clear up any sections that are inaudible. Thank you for your understanding.

Transcription

Jason: Welcome to today’s webcast, 5 Biggest Challenges on the Front Lines of Scholarly Publishing. My name is Jason Chu and I will be your host and moderator today. I’m very pleased to introduce to you our two special guest speakers. Dr. Virginia Barbour is chair of the Committee on Publication Ethics, otherwise known as COPE. She is also a co-founding editor of the journal PLOS Medicine. Welcome, Ginny!

Ginny: Hello, very nice to be here. I am looking forward to the webinar.

Jason: Thank you. Next we have David Moher. Dr. Moher is University Research Chair at the Ottawa Hospital Research Institute and also Co-Editor-in-Chief of the journal Systematic Reviews. Welcome, David!

David: Thank you, looking forward to the session.

Jason: Fantastic. Let’s go ahead and jump to today’s agenda. As a way to kick things off, we’re going to do a poll. Then we’re going to go through our countdown of the 5 Biggest Challenges. These are challenges that were surfaced via a survey of editors and editorial staff that iThenticate conducted in March 2013. During the course of that countdown, we’ll have the benefit of insight from both Ginny and David, who will share their experience as well as how to address some of these specific challenges. They’ll be speaking from the context of their own work, and then finally we’ll have the opportunity for some Q&A. At the end of the session we’ll also have a link to the full survey, so if you are interested in the full survey results, you can get access to those as well.

Jumping right into things, I’d like to start us off with a poll. What’s going to happen is that I’m going to drop a poll into the window here, and what I ask you to do is just provide us with your response. We’d love to get a sense of who is joining us today. We’ve got one more poll question I’d like to ask you. This one is really around what you see as the biggest challenge. I’m going to change the poll here and ask you, ‘Which of the following do you think is the biggest challenge for scholarly publishing?’ It looks like we have “Pressure to Publish” coming in pretty high here, with “Plagiarism” following close behind. Ok, I’m going to take that poll away. Thanks for your input, folks. That’s going to help inform our session today.

Let’s start off by sharing some of the survey results that we uncovered. The goal of the survey was to learn more about the attitudes of editors and editorial staff and their experiences with ethical issues. Through the survey, 5 top concerns and challenges were surfaced. You can see here the broad representation of our respondents. We had 120 respondents weighing in. Medical and Scientific were fairly well represented, followed by STM, Education, Research, Engineering, Humanities and then, of course, a couple of outliers under Other.

#5: Technological advances that simplify image or data falsification

Now, let’s jump right into the challenges. Again, we conducted the survey and surfaced 5 specific challenges. Number 5 on our list is “Technological advances that simplify image or data falsification.” According to the survey, image/data falsification was deemed serious by more than a third of our respondents. Ginny, I think I want to kick things off with you. Can you give us a description or definition of what characterizes this type of falsification?

(approx 4:15)

Ginny: We put up here a slide, which is taken from the guidelines at the PLOS journals, but these were derived from work done originally at the Journal of Cell Biology by Mike Rossner, who was one of the people that drove this. There are some very specific features around what constitutes image manipulation, and it’s worth taking some time to look at these. Essentially, they are: introducing, removing, or inappropriately enhancing a specific feature; putting together unmarked groupings of images; splicing items together where they shouldn’t be spliced; and making inappropriate adjustments of contrast or brightness in such a way that the information in the figure is misrepresented. It sounds fairly obvious, but there are of course degrees of this. I really urge anyone who is looking at figures, whether as a researcher in your own laboratory or as an editor at a journal, to have very clear guidelines and a very clear idea of which specific adjustments are acceptable. It’s worth spending time to map these out very clearly. If in doubt, what we say is that if you have replaced an image or done something to it, make it very clear at the time exactly what you’ve done. So these are the broad outlines of what is acceptable and what is not acceptable.
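
The distinction Ginny draws between a whole-image adjustment and a selective one can be made concrete in a few lines of code. The sketch below is a minimal illustration only; the grayscale float-array representation and the function names are our own assumptions, not part of the PLOS or Journal of Cell Biology guidelines.

```python
import numpy as np

# Minimal sketch (illustrative only, not any journal's screening tool).
# Assumes a grayscale image stored as a float array with values in [0, 1].

def global_brightness(img: np.ndarray, gain: float = 1.2) -> np.ndarray:
    # Applied identically to every pixel; generally acceptable if it
    # obscures nothing and is disclosed in the figure legend.
    return np.clip(img * gain, 0.0, 1.0)

def selective_brightness(img: np.ndarray, region: tuple, gain: float = 1.2) -> np.ndarray:
    # Applied to one region only, enhancing a specific feature relative
    # to the rest of the image -- the kind of edit the guidelines flag.
    r0, r1, c0, c1 = region
    out = img.copy()
    out[r0:r1, c0:c1] = np.clip(out[r0:r1, c0:c1] * gain, 0.0, 1.0)
    return out

img = np.random.default_rng(0).random((64, 64))
whole = global_brightness(img)                        # uniform, disclosable
edited = selective_brightness(img, (10, 30, 10, 30))  # selective enhancement
```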

Jason: Thank you, Ginny. David, I know that you have some data that you wanted to share as well. Let me just advance that slide here.

David: Thank you. I’m not sure whether folks are aware of the science image integrity website. I’ve found it to be quite good and quite comprehensive, and for journal editors there is a nice section on best practices for managing image data; it really helps editors evaluate the risks they might face. For example, in the journal that I run, there’s probably a low risk because we get only one or two types of images from the kinds of studies that we tend to publish. For other journals, that risk may be much higher. So I think the science image integrity website is a useful resource to go to. There’s also a really nice paper by Fanelli, originally published in PLOS ONE, that doesn’t speak specifically to the falsification of images. I think it is a good paper because it really speaks to the issue that this is not a very rare problem. Editors and others really need to be quite aware of it. I feel this problem is not specific to images or to any one type of research; it probably goes across the board, and we all need to be aware of it. Some of the data in the Fanelli paper is somewhat troubling in terms of the prevalence with which falsification and fabrication go on. I would recommend people read it. It is in PLOS ONE, which is open access, so people should have no trouble getting access to it.

Jason: David, just a quick follow-up question on that note. Do you recall what the findings were? Would you care to share those with our attendees?

David: If my memory is correct, in that survey he noted that 2% of authors admitted to the problem of falsification. If you take 2%, and you consider that maybe upwards of 1.5 million to 2 million papers are published a year and multiply that by 2%, you’re getting a very large number. He also noted that approximately a third of respondents reported other questionable behavior by themselves and/or their colleagues. So, it’s quite significant.

Ginny: What’s really fascinating about that paper, and I agree with David here, is that although 2% of people admitted it themselves, a much larger proportion say that they’ve observed it in other individuals, which suggests that the prevalence is actually quite high.
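
For scale, here is the back-of-the-envelope arithmetic behind David’s “very large number”, using only the figures he cites above:

$$0.02 \times (1.5\ \text{to}\ 2.0) \times 10^{6} \approx 30{,}000\ \text{to}\ 40{,}000\ \text{papers per year.}$$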

#4: Conflicts of interest between researchers and industry

Jason: Yes, thank you David and thank you Ginny. Let’s continue our countdown here. At number 4, we have "Conflicts of interest between researchers and industry." According to the survey we conducted, close to half of respondents believe that conflicts of interest are a serious challenge. Ginny, you had some considerations for researchers that you wanted to share. I want to turn to that slide and ask you to weigh in here.

(approx. 9:15)

Ginny: There are absolutely conflicts of interest between researchers and industry. One of the interesting observations is that these conflicts, and the understanding of them, have often been driven by information from within the medical research community. But we know increasingly that this is not just a facet of medical research. It extends across the whole of scientific research, and many of the things that we are now aware of are as applicable to basic science as they are to medical research. One of the places where this has been discussed publicly very extensively is at this site, and I would encourage you, even if you’re not a medical journal editor, to have a look at it. This is the International Committee of Medical Journal Editors (ICMJE), a small group of medical journal editors that meets regularly. They have discussed very extensively the difficulties around financial conflicts of interest and, more importantly, come up with suggestions on how to manage them. That’s the first place I would suggest you start looking. And it can be a question of having sufficient awareness of there even being a conflict in the first place. That’s certainly an issue for many early-stage researchers: they’re simply not aware, right at the beginning, that there is a potential for conflict, and they may not take it into account when they are developing relationships with pharmaceutical companies or indeed in any situation where there is some sort of conflict. It doesn’t, of course, have to be financial. It can be professional or otherwise.

Jason: Yes, thank you, Ginny. I want to turn back here to you, David. You had some studies that you wanted to share as well.

David: I just wanted to follow up on Ginny’s point. Most people tend to think of conflict of interest as purely financial, and while that might be true in many cases, it’s by no means the only kind. There is intellectual conflict of interest as well, and I think people ought to be aware of that. Similarly, many people tend to think that conflicts of interest arise only with the pharmaceutical industry. That may be one possibility, but there are a lot of other industries that researchers are involved in, and those may also raise issues of conflict of interest. My own view, borne out by some others, is that when there is a conflict of interest, we shouldn’t jump up and down and make a big fuss, because many of us have conflicts of interest. The key issue is management: how do you manage conflict of interest? I think two things help. One is the paper by Paula Rochon and others, which I was involved in. It is actually a checklist for looking at financial conflicts of interest from a variety of perspectives. I know journals tend to use the International Committee of Medical Journal Editors checklist, but this is another one that I certainly feel is worth looking at. There is a preponderance of data to suggest that there are issues, particularly with the pharmaceutical industry, in terms of conflicts of interest. Studies that are funded solely by the pharmaceutical industry, for example, tend to report more positive outcomes than those that aren’t funded by the pharmaceutical industry. Folks can get a good review of the evidence on what the pharmaceutical industry funds in a very nice systematic review in the Cochrane Library, and I’ve given a reference to it as well. It’s quite a recent one, so I think it will provide editors and others with an up-to-date sense of what’s going on.

Jason: Thank you, David. Ginny weighed in here on a question that one of our attendees asked about what you meant by “intellectual conflict of interest”. Ginny, would you care to share that? I think it would be great to have you share it with all our attendees.

(approx 13:35)

Ginny: Yes, as David said, it’s very much not the case that conflicts of interest are always financial. They can be intellectual. This comes up very frequently, for example, if you are involved in the review of a paper. A journal might ask if you have a conflict of interest in reviewing that paper, and that can be because, for example, you know somebody who is on the paper, you may have co-published with them, you may have some sort of personal relationship, or you may be a direct competitor of theirs in such a way that you’re not actually able to judge the work objectively. What most journals recommend is to make that explicit to somebody outside, because what’s really hard in all of these cases is drawing those conclusions yourself.

#3: Poorly designed studies

Jason: Thank you. Continuing on our countdown, we’re at #3 now: ‘poorly designed studies.’ Again, according to the survey that we conducted, close to half of editors believe that poorly designed studies are a serious challenge. I’m going to pass the ball over to you, David, and ask you to weigh in with some of the research studies that you shared with us.

David: Sure, and this is sort of related to issues of potential conflict of interest, the previous problem that we discussed. To take one example of the issues around poorly designed studies: in clinical research on osteoarthritis of the knee, we know that there are a large number of randomized trials, and almost all of them are pharmaceutically based, that is, trials of drugs. But if you look at focus groups and patients, what are they interested in? They’re interested in the effectiveness of physiotherapy, alternative medicine, and so on. So the studies are, in some ways, designed inappropriately, because they’re not getting at the questions that patients want answered. Again, this could be industry driving it, so there may be intellectual conflicts of interest here, because industry may want to push a particular drug or class of drugs, when patients and consumers are not interested in drugs per se; they are interested in a whole range of other issues. So that’s one example of where poorly designed studies can be a problem. I think I had another slide that I wanted to mention. This is detailed and I’m only using it as a backdrop. We only know about poorly designed studies from the report of a study, and reports that are clear and transparent allow readers, editors, peer reviewers and others to assess whether the design is appropriate or inappropriate. So we could have a study where the authors say that they conducted a randomized trial and that they randomized by day of the week. That is clear reporting, which is what we want, but we also know that the study is inappropriately designed, because allocating by day of the week is not true randomization. There is an abundance of evidence that authors are very unclear in how they describe the randomization process, and the descriptions that do exist often show that a study is not only badly reported but perhaps also badly designed and not so well executed.

(approx 18:00)

The EQUATOR Network is a sort of clearinghouse, particularly for reporting guidelines for all different types of studies. In this slide, for example, if you go to the top right-hand corner, you’ll see the Library for health research reporting, and authors, peer reviewers, and editors can go to that and see a whole set of checklists, for example for reporting case-control studies and for reporting randomized trials. These checklists are all about reporting, but the flipside of reporting is conduct. People can go to them and see how they might be used to help design a study.

Ginny: The point I want to make here is, exactly as David said, that one of the most critical things, particularly for journals, is having good policies around the reporting of studies. You can’t know anything about the quality of a study unless it’s well reported. The analogy I often hear is that good reporting can’t cure the bad conduct of a study, but it’s a bit like going into an untidy room and turning the light on: you can see where the problems are. It will help identify whether or not you can understand what is in a study and how it was conducted, and in the end you hope that this will allow better conduct in the future. There are a number of initiatives in this area; again, many of them have been led by medical journals and medical publishing, including CONSORT for clinical trials, which David has been intimately involved with, PRISMA for systematic reviews, and so on. But what’s particularly great is that this is now being extended to the basic sciences, with examples such as ARRIVE, the guideline for the reporting of animal studies. All of these are really fantastic examples of how better clarity allows a better understanding of how studies were done.

#2: Pressure to publish

Jason: Thank you, Ginny. I am going to turn us to #2 here in our countdown, which is the ‘pressure to publish.’ Returning to the poll we conducted a short while ago, pressure to publish was the one that you all recognized as a big challenge. And according to our survey, 58% think that ‘pressure to publish’ is ‘serious’ and 20% deemed it ‘very serious’, so a total of 78% think it’s a problem. I want to turn to you here, Ginny. You had a couple of consequences that you wanted to share.

(approx 20:40)

Ginny: This is something that we hear frequently from researchers, particularly when I go and talk to them. One of the issues is a constant pressure to publish; this very often comes from the institutions. Researchers are under enormous pressure to be productive. We see this reflected particularly in how individuals will chase publication in journals that are perceived to be high impact, and this can lead to very perverse behavior. In fact, it can also lead to unfortunate behavior by journals themselves. And there is a discussion going on about whether this pressure can ultimately drive researchers to research misconduct.

Jason: Thank you, Ginny. David, let me turn to you now. You have a study you wanted to share, and you also wanted to mention the Council of Science Editors.

David: I think there is no doubt about it: there is a pressure to publish, and that’s well recognized. There is a quite nice systematic review, which I’ve quoted here. I think it gives a good history of the situation from the Australian perspective and discusses some interventions related to increasing publication rates. One of the disconnects, interestingly, is that we have very few students and very few professors on the webinar today, and of course those two groups are the ones who really feel this to some extent. I participated in a Council of Science Editors panel some years ago, and I think the sense was to try to get deans of schools to rethink tenure in terms of quantity versus quality. One of the arguments was that instead of people having to submit their top 20-30 papers, they could select perhaps 3 and explain why those 3 papers are important for them in terms of the tenure track. That might in some fashion reduce the onus on just publishing and publishing and publishing, and all the consequences that go along with that. Unfortunately, part of the issue with the pressure to publish is reflected in a very active discussion going on on the World Association of Medical Editors listserv about predatory journals, and whether people are, unfortunately, not aware of these journals and are submitting articles to them, with a whole set of problems associated with that.

Jason: Well, actually, David, I wanted to pose a question to you that came in from one of our attendees, Kevin. Given this recommendation about moving towards quality vs quantity, for someone who is doing research right now and is interested in publishing -- and Ginny, feel free to weigh in here, too -- how do they go about deciding which journal to submit their work to, given that there are so many journals available in similar areas or the same area?

(approx 24:25)

David: What I often hear people say is “go to the top journal”, meaning the highest impact factor journal. I hear that a lot and I think it’s really unfortunate. I think people need to take the time to think about the article and where it fits best. When you write to the editor about the paper, you need to make a strong case for why this paper, and why this paper in this journal. I think this helps orient the editor, who typically gets a lot of papers. So my strong recommendation is: ask where the paper fits best, and whether the journal accepts the types of articles that you are submitting. For example, if somebody doing a systematic review wanted to submit it to The New England Journal of Medicine, I would caution them, because The New England Journal of Medicine has rarely published systematic reviews over the past 12 months. So I think one needs to use a lot more “intelligence” about where the paper might go.

Ginny: It’s important to understand who you are trying to reach with your papers -- who the appropriate audience is. I personally feel that it’s tremendously important to think about the access that people will have to a paper; many people nowadays are in fact required to publish in open access journals. As David said, look at the journal you are considering submitting to and make sure your paper fits within it, and make sure you explain to the editor why you want to publish there. Now, David raised an interesting issue about the many journals appearing online with an uncertain provenance, which many people are concerned about. The term ‘predatory journal’ came up in the Q&A here, too. There is an active discussion going on about whether there should be a list of criteria by which you can judge a journal, to decide whether or not it has a reputable editorial service and whether it’s the type of place where you would want to publish your research. I would caution anyone who is looking at a journal they are unfamiliar with to do due diligence on it and make sure it does appear to be genuine. There are a number of organizations looking at this critically to come up with criteria. Unfortunately we can’t police the internet; that’s the long and short of it. But at the same time we can do our best to try and focus on good criteria.

Jason: On that note, in terms of general guidance, would you say that a journal asking an author to pay to have a paper published is the mark of a predatory journal?

(approx 28:00)

Ginny: No, absolutely not. I could do a whole other webinar on open access -- in fact I’d love to, but this isn’t the place for it. There are business models whereby a publication fee is charged when a paper is accepted. That is the model for the journals that I work for. There are also open access journals that do not charge a publication fee. The critical thing, though, is that the publication fee is separate from the editorial decision making. So, for example, at the journals that I work for, I don’t know whether authors have the ability to pay -- that's very important, because you don’t want to refuse papers because the author can't pay. So, no, there isn’t anything intrinsically wrong with it. But at the same time, as an author, if there is a publication fee, you do need to understand exactly what that means. There is a big discussion going on around that at the moment. The truth is that publishing requires finance from one end or another, whether it’s a subscription model or open access via publication charges. So it’s not something that is in itself a problem, but it is something that you, as an author, need to be very aware of.

Jason: Okay, Ginny, thank you. I want to touch on one more question before we move on to #1 here. We had a question come in from Peter, who wants to know: Is there a shift from the Thomson journal impact factor to the impact factor calculated by Google Scholar?

Ginny: I think there is a rather more interesting discussion going on, which is around developing article-level metrics as opposed to journal-level metrics. The impact factor is a journal-level metric: it tells you something, on average, about the journal itself. Actually, it’s much more interesting to know the individual-level metrics of a paper, so you know the number of citations, the number of downloads, and so on. Increasingly, journals are displaying this information. So I think that at this point in time there are a number of different ways that researchers can have their work assessed for impact. An example in the UK right now is the Research Excellence Framework, which is how higher education establishments are judged. They will not be using the journal-level impact factor as part of their assessment; they will be making use of individual article metrics. So I would actually say there is a shift toward article-level metrics as opposed to journal-level metrics.

Jason: Thank you, Ginny. David, did you want to weigh in as well?

David: I would just like to say that if we were having this discussion in 18 months’ time, I think the landscape would be quite different. There are a lot of other metrics coming out that are far more relevant to notions of dissemination and knowledge translation. For example, many journals, including mine, use a ‘highly accessed’ label for articles that are downloaded frequently within a short period of time. That tells you something about the content and the likely dissemination of the article, and it’s a very important piece of information regardless of the impact factor of the journal. There’s a whole set of other metrics coming out, so I think we are seeing a shift in the landscape, and it will be quite different in 18 months’ time.

#1: Plagiarism

Jason: Thank you, David. Moving on here in our countdown, we are at #1. The #1 challenge that our survey respondents identified was 'plagiarism'. Among the general challenges presented to respondents, plagiarism and misconduct ranked the highest, with 82% of editors deeming it serious or very serious. I just want to share some additional survey information here. In terms of the types of plagiarism our respondents see most frequently, you can see ‘self-plagiarism’ ranks the highest, followed by ‘blatant plagiarism’, ‘paraphrasing’ and then ‘double submissions.’ We also asked our respondents to weigh in on the nature of the plagiarism they encounter: 38% indicated that they thought it was ‘intentional’; 29% thought it was ‘accidental’; and 33% thought that it was ‘self-plagiarism.’ On this point of self-plagiarism, there seems to be quite a bit of a question: is this a case of inadvertent plagiarism, or is it intentional? I wanted to toss this to you first, David. What are your thoughts in terms of the work you’ve seen in your experience? When authors self-plagiarize, have you seen this as inadvertent, or is there some intentionality in that sort of misconduct?

David: I would say that our journal is too new for me to be able to make a comment on it, so I would pass it to Ginny.

(approx 33:30)

Ginny: Yes, self-plagiarism is interesting. It’s a term that many people dispute or regard as not appropriate. We have had some very active discussions, both on the COPE website and at the World Association of Medical Editors (known as WAME), about self-plagiarism, and it is felt that sometimes it’s a good thing. The reason, of course, is that in some cases you actually want people to use the same form of words to describe the same thing. So you may have some content that is exactly the same across a number of different papers (for example, in the methods section), where it’s absolutely appropriate to use the same words, and that should be encouraged, not punished. There are also times when, for example, the introduction to a paper may well use the same form of words, and again that’s appropriate; there are only a certain number of ways you can say some things. Of course, once you are talking about the discussion, and certainly the results, any repetition of words is problematic. So there is a very active discussion going on about whether self-plagiarism is an appropriate phrase to use, and it’s highly context dependent.

Jason: Thank you, Ginny. I’m going to continue our march through our deck here. I want to do a recap of the survey results, and then we are going to turn to recommendations and resources. So, quickly, in terms of survey results: again, the top 5 threats to the integrity of scholarly publishing identified by our survey respondents were ‘plagiarism’, ‘pressure to publish’, ‘poorly designed studies’, ‘conflict of interest’, and ‘image/data falsification’. We also show editors’ top concerns about researchers:

- Not hiring translators
- Not understanding the subject when designing a study
- Splitting studies across publications
- Publishing the bare minimum
- Focusing on the number of publications rather than making advances
- Plagiarism

In terms of prevention, we asked our survey respondents to weigh in on the effectiveness of different methods of preventing plagiarism. What came in very high was plagiarism detection software. What’s also interesting is that publicizing the use of such software also seems to have some significance in terms of efficacy. Advising authors on avoiding plagiarism, maintaining a blacklist, and informing an author’s employer are other approaches that seem to have some efficacy. David or Ginny?

David: I think I put on one of my slides that the population I’m most interested in reaching is students at graduate school who are embarking on a career in research. I think we need to seriously educate them about the issues of plagiarism and a whole set of other issues that have come up today, and other issues that haven’t even come up today. It seems to me that the use of software is useful, but in a way it’s after the fact. We need to really teach a lot of people research integrity and publication ethics, and that plagiarism is not appropriate. That’s not going to get rid of everything, but if you look at universities and research institutes, there are no courses on these issues. There is very little that we are doing for the next generation on how to become better researchers, not simply from the perspective of doing better science, but on the whole notion of research integrity. What is wrong with plagiarism? Why shouldn’t you participate in such activities? So I think it’s very important, and it’s incredibly important for publishers to fund these sorts of initiatives. I’m not talking about a one- or two-day affair that many students and professors wouldn’t be able to get access to; I think it needs to be funded, and publishers are certainly one group that needs to do that. I also advocate for adding a new professional person, in the form of a publications officer, who should be available at universities and research institutes. One of their roles could definitely be to facilitate this teaching: making people aware of the problems, aware of the software that is out there, that many journals use it, and that their manuscripts will be submitted to it. So I think there is a host of things that need to be done.

Jason: Thank you, David. Before I turn it to you, Ginny, I want to make a quick note and mention the good work that McGill University is doing, particularly around that bullet you have, David, on universities and research institutes needing to promote this avenue of learning. McGill actually dedicates a day or a day and a half to academic integrity, where graduate students come in from different departments and work through case studies. They look at different scenarios, e.g. a graduate student is asked to co-author a paper or to do research on behalf of someone else, and they actually talk through these case studies. It helps them get a better understanding of how to own your work and take responsibility for it in a way that is appropriate, both for the institution while they are in the course of study and for publication. So I recommend taking a look at the good work they are doing on that front. Ginny, now turning to you. I know you have some associations and resources that you would like to share with our attendees.

(approx 40:45)

Ginny: To start, I’ll mention COPE, which is the organization that I’m involved with. But I would absolutely echo David’s point that this is about education very early on. My kids always joke that they are the only people in their class who have to put attribution at the bottom of the slides they make for school. But the truth is that if you don’t start teaching children how to attribute things early on, they won’t do it when they get to university or go on to do research. There is certainly something very different about how children are learning these days, and it affects what they’ll do as professionals. I think this debate needs to happen very early on. Children are very adept at manipulating things on the web nowadays, and that accounts for a large amount of what we are talking about here -- the ability to manipulate text and manipulate figures. That is something that comes rather naturally and is always rewarded. So that’s something I would throw in as a suggestion. Now I’d like to talk a little bit about COPE, the Committee on Publication Ethics, which I’m currently Chair of. We are essentially a very large group of editors, and our primary role is the education of editors and giving them advice on issues. At the same time, all of our resources, except for one very specific thing, are fully available, and I would encourage anyone who is interested to take a look, because publication ethics is a part of research ethics. Perhaps one of the more visible resources, and one that often comes to mind, is, as you can see here on the left-hand side of the slide, the “COPE Ethical Guidelines for Peer Reviewers,” which were launched last month after we received feedback on our initial guidelines. I hope these will prove useful. They essentially take people who are not particularly experienced with ethical issues and lead them through it.

Some of the other resources we have are flowcharts. These flowcharts are tremendously helpful when people are having issues. They were the brainchild of my predecessor, Liz Wager, who I believe is on this call. They cover a number of past publication ethics cases and are translated into a number of different languages. In the future we will consider updating them, but for now they are an excellent resource for anyone to use. On the next slide I’ll show you briefly what they look like. I don’t expect you to read this, but it is the one on plagiarism. It not only takes you through what to do at the beginning, when you suspect plagiarism, but also shows, very straightforwardly, what editors might try to do to limit the damage. One of the earlier questions, on a previous slide, that was interesting was about the use of software and whether it has an effect on plagiarism. Certainly one of the things that we see with the editors we work with is that even just having some sort of system in place, where you indicate that you are screening for plagiarism, is sufficient to deter plagiarism. So that’s something worthwhile to keep in mind.
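
For readers curious what automated text screening involves at its most basic, here is a toy sketch of the kind of shared-phrase matching such tools perform. It is our own illustration under stated assumptions: the example sentences and function names are invented, and real detection services compare a submission against large indexed databases of published content rather than scoring a single pair of texts.

```python
# Toy illustration of overlap-based text screening: score two passages by
# the Jaccard similarity of their word trigrams. This only demonstrates
# the basic idea of shared-phrase matching, not any specific product.

def ngrams(text: str, n: int = 3) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(a: str, b: str, n: int = 3) -> float:
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    # Fraction of distinct trigrams the two passages share.
    return len(ga & gb) / len(ga | gb)

submitted = "patients were randomized by day of the week at each centre"
published = "participants were randomized by day of the week at two centres"
print(f"trigram overlap: {overlap_score(submitted, published):.2f}")
```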

(approx 44:00)

Another resource is the cases. These go back to the very beginning, when COPE first came into being 15 years ago, and cover everything from plagiarism, which is a very big issue, to various other kinds of misconduct. They are very useful when you are trying to understand whether a situation you have come across is unusual or common, and how it might be handled. One of the earlier slides called out some associations that are tremendous resources:

  • WAME.org
  • Council of Science Editors
  • ICMJE
  • Equator-Network.org
  • ISMTE
  • EASE.org.uk

All of these organizations have quite concrete and practical advice on the issues that we were discussing today.

Jason: Thank you so much, Ginny. I want to turn now to the Question & Answer portion of our session. Liz had a question early on about intellectual conflict of interest. She wants to know: How do you think intellectual conflict of interest applies to the peer review process?

David: I think a peer reviewer could have a strong belief that a particular intervention works in a particular way. They may have other strong beliefs, and they may simply find ways to reject a manuscript without declaring their conflicts of interest. In that situation I would suggest, or perhaps recommend, that such peer reviewers not look at the manuscript. They may be worried about a colleague, or they may be reviewing someone, perhaps not a close colleague, who is going up for review or tenure, or they may have some other issues. Who knows; but my sense is to suggest to people that if you feel there are any issues, do not participate.

Ginny: Yes, I would very much agree with that. And again, to go back to the peer reviewer guidelines that were just published on the COPE website, it’s often very hard for an individual to know whether they have a conflict of interest. So the position at the journals that I’ve worked for is that, when in doubt, tell the editor what the actual issue is and let them decide; you don’t have to decide for yourself. Sometimes quite unusual situations can make the decision about peer reviewing a manuscript uncomfortable, and it is not always straightforward. If you are a direct competitor, you should think very carefully about doing a review.

(approx 48:10)

Jason: Yes, thank you, Ginny. I’m going to turn to our next question here. It’s from Heidi, and it touches on content that we covered earlier in the session. Heidi wants to know: Would you consider globally adjusting the brightness of a dim CT scan, MRI, etc. to be improper image manipulation if the adjustment was noted in the figure legend?

Ginny: That is probably acceptable, provided that in doing so you don’t obscure the image or an item within it, or inappropriately make something visible. But the key there, as noted in the question, is noting it in the figure legend, so the editors are able to make decisions for themselves and, if necessary, ask you for the unadjusted image.

David: I would ask the author why they are doing this. There may be a completely legitimate reason and we may want to put that in the footnote.

Jason: I’m going to turn to our next question here, from Frederic, who wants to know: “What do you both think about the problem of multi-lingual plagiarists who mash up their submitted papers with bits and pieces from a variety of sources?”

Ginny: First of all, one thing that we’ve had to make clear is that there are instances where people consider copying and pasting text to be a form of flattery, and I think you have to be culturally sensitive to where that can occur. But at the same time, within a publication that you are responsible for, or if, for example, you come across this in the laboratory, you should make it clear that this is not acceptable. The problem, of course, is that if text is not attributed, it doesn’t really matter whether the copying was unintentional or whether the text was translated: it is still plagiarism and has to be treated as such. It also suggests a number of things, including probably not a tremendously good grasp of the literature itself, which can itself be a warning sign. So it’s a case for education, certainly, but potentially also a matter for the institution the authors came from.

Jason: Yes, thank you. Turning to the next question here, touching back on peer review. How would you deal with this situation: a study has been deemed poorly designed, but it’s been deemed so principally because the reviewer wanted the study to focus more on his or her own interests?

David: Well, I think that peer reviewer obviously perhaps shouldn’t have reviewed the manuscript; that is effectively an undeclared conflict of interest. And the only way that we know a study is poorly designed is principally from the report. So we really need to get editors and journals to promote reporting guidelines; Ginny has mentioned the EQUATOR Network library, and many journals endorse reporting guidelines. There is accumulating evidence that when journals endorse reporting guidelines and authors use them, the quality and completeness of the reporting is superior. So I think that’s really the way we should try to improve the situation immediately.

Ginny: Yes, I completely agree.

Jason: We’ve come up to the end of our session for today. I want to take this opportunity to thank you Ginny and David for joining us. Thank you so much for your insights and for sharing your expertise with us today.

 
