Guidelines for Reviewing Manuscripts
Please note that the manuscript you have been asked to review is a privileged (confidential) communication. As outlined in the APA Publication Manual, an unpublished manuscript is entitled to copyright protection from the moment it is fixed in tangible form – for example, typed on a page.
Further, ". . .the author owns the copyright on an unpublished manuscript, and all exclusive rights due the owner of the copyright of a published work are also due the author of an unpublished work." Therefore, you may not circulate, quote, cite, or refer to the unpublished work, nor may you use information from the manuscript to advance your own work or instruction unless you obtain specific permission for such use from the author.
Manuscripts should not be given to students for educational purposes. You must request permission in advance from the Action Editor to share the manuscript with any other person, for example, if you wish to seek assistance from a colleague in preparing your review.
You are strongly encouraged to destroy any paper copies you have made and delete any electronic copies of the manuscript as soon as your review is completed. However, you may retain one copy of the manuscript until such time as the Action Editor sends you a copy of the editorial decision letter and other reviews of the manuscript. Within 48 hours of receiving these materials, you must destroy or delete any remaining copies of the manuscript.
Although JCP uses a masked review process, if you suspect that one of the manuscript's authors is a person whose relationship to you might present a conflict of interest, for example (but not limited to), a recent collaborator, faculty colleague, or student; or if the acceptance or rejection of this manuscript might result in your own financial gain, you must inform the Action Editor immediately so that the manuscript can be promptly reassigned.
As a reviewer, you are asked to serve two different roles: that of gatekeeper and that of consultant.
First, you are asked to make a recommendation to the Action Editor as to whether the manuscript should be accepted, rejected, or returned to the authors with an invitation to revise and resubmit.
Second, and equally important, you are asked to provide a detailed, educative narrative evaluation that the Action Editor will send to the authors.
These two tasks reflect the dual functions of the scientific peer review process.
The gate-keeping role requires you to render a judgment about whether this manuscript should eventually appear as a published article in JCP, thereby becoming part of the permanent body of scientific literature that will influence the journal's readers for many years to come, not only in terms of future research, but also in terms of the clients, counselors, students, and organizations and communities served by the journal's readers.
Viewed from this perspective, it is difficult to conceive of a more important professional responsibility. Reviewers must remain constantly mindful of their obligation to future generations of researchers, counselors, and clients and communities to make these judgments with all the wisdom, fairness, and fidelity that they can bring to the task.
The role of consultant is no less important than that of gatekeeper. Detailed, educative, respectful reviews are a hallmark of JCP that serve to improve the general quality of research in our field. The journal can accept only about one of every four manuscripts submitted, but we aspire to provide sufficiently high quality feedback so that authors whose manuscripts are rejected are encouraged to continue submitting their best work to the journal, and authors whose work is accepted receive valuable ideas for their next project.
We encourage you as a reviewer to see yourself as an anonymous consultant for every author, even (and especially) for manuscripts you believe should be rejected. Aspire to provide sufficiently helpful, educative feedback so that the next manuscript submitted by these authors makes a substantial contribution to the literature and is of publishable quality. In this way, you will fulfill the second vitally important role of a JCP reviewer, that of contributing to the development of future scholarship.
If you are relatively new to providing reviews for JCP, you may find that the mindsets of gatekeeper and consultant are difficult to assume simultaneously. Many experienced reviewers find that in order to render the most objective, rigorous recommendation about the publication value of a manuscript, they must temporarily set aside their consultant role, and conversely, to provide a sufficiently helpful narrative review for the authors, they must temporarily push to the background their gate-keeping role.
In our experience as Action Editors, the latter conflict tends to crop up more frequently in reviews, that is, occasionally reviewers seem unable to transcend their gate-keeping role. The result can be a caustic, punitive review that is infused with a "gotcha" tone, pointing out errors with no suggestions for improvement, and lacking respect for the authors as scientific colleagues.
Of course, the consultant role is not incompatible with rigorous evaluation. Reviews must be critical. It is a basic task of reviewers to point out flaws in the manuscript, but this should be done with tact and respect for the authors. In addition, the consultant role requires making helpful suggestions that address flaws in the present work, or making suggestions to minimize the problem in future projects.
Considering this basic tension in roles, many reviewers find that they must read the manuscript with "two sets of eyes," scanning in one pass with an emphasis on their publication recommendation, and scanning in another pass with an emphasis on collecting points that will become the basis for educative feedback in their narrative evaluation.
With experience as a reviewer, you will discover which sequence works best for you and, with time, a way to more completely integrate the two roles. However, our first suggestion for preparing a high quality review for JCP is to be aware of the inherent tensions between the two roles of gatekeeper and consultant so that you will be better able to fulfill each of these important tasks for the journal.
To further differentiate the roles of gatekeeper and consultant (and to make the Action Editor's job easier when quite discrepant recommendations are received) we ask that you communicate your publication recommendation only on the standardized evaluation form, which is not shared with authors. Please do not include an explicit recommendation about acceptance, revision, or rejection in your narrative evaluation that will be shared with the authors.
In making your publication recommendation, please consider these guidelines developed by the APA Publication and Communication Board:
To merit publication each manuscript must make an original, valid, and significant contribution to an area of psychology appropriate for the journal to which it is submitted. That is: (1) A manuscript cannot have been published, in whole or in part, in another journal or readily available work. (2) A manuscript must be accurate, and the conclusions and generalizations must follow from the data. (3) A manuscript must be more than free of major fault—it must be an important contribution to the literature. (4) A manuscript must be appropriate for the journal to which it is submitted. For a manuscript not meeting all those criteria, you will usually recommend rejection, with detailed reasons for your recommendation. (emphasis in the original)
As you consider these policies in formulating your publication recommendation to the Action Editor, it may be helpful to think in terms of the answers to three sequential questions:
1. Is the topic of the manuscript appropriate for JCP?
If the Editor believes that a manuscript is clearly outside the scope of the journal, it is rejected without peer review. However, you may receive a manuscript to review because the Editor has some question about its appropriateness for JCP. It is helpful for the Editor to have your opinion on this question. The standardized rating form contains an item assessing fit. You might also decide to address this question in your narrative. A statement describing the topics appropriate for publication in JCP is included inside the front cover of each issue, and is also available on the JCP homepage.
2. Does the manuscript make a significant scientific contribution?
JCP has a rejection rate of around 80%, and as such the bar for having a manuscript accepted is high. A key determination is thus: Is the manuscript important? This question can be difficult to answer, but perhaps these alternative versions of the importance question can help:
- Does it add significantly to the literature in the field?
- Will it stimulate more research/theory in the area?
- Will it be cited frequently?
- Does it offer a new/creative approach that has the promise of serving the field well?
Many manuscripts represent sound work using common methods and designs, but soundness alone is not a sufficient criterion for acceptance.
The manuscript should add significantly to the field. This is not a simple judgment, but it is perhaps the central issue in the publication recommendation. In practice, many well-done studies will not be accepted because they do not surpass the importance criterion. Because our knowledge is ever increasing, this bar is ever changing: what was new and creative three years ago may now be standard. So the key assessment is "Will the manuscript move the field forward significantly?"
The JCP Manuscript Evaluation Form contains items for you to rate the scientific contribution of this study.
3. Can the flaws in this manuscript be remedied in a revision?
Separate from the determination of overall importance is the issue of "Can the manuscript be improved?" All research is inevitably flawed; despite an investigator's best efforts, some flaws will remain in every published study.
Although the initial version of a manuscript may contain many problems and require extensive reworking, JCP Action Editors are encouraged to invite a revision if
- the manuscript has the potential to make a significant contribution to the literature (see above), and
- there is a reasonable chance that all the serious issues could be successfully addressed.
So if a manuscript is not potentially important enough, or there is not a reasonable chance that the serious issues can be addressed, then it will be rejected. Certainly, if there is a "fatal flaw" in the study, it cannot be accepted, but this is relatively rare. More commonly, the issues are those of importance and the amount of alteration required.
The crucial point is that your recommendation to reject the manuscript or invite a revision should hinge primarily on your judgment about importance and only then on whether it is possible to address all the major flaws you have found in a revision.
It can be kinder to the author to recommend rejecting a manuscript the first time around than to invite revisions that have little chance of correcting the identified flaw(s).
Regardless of your assessment of the importance of the manuscript (the gatekeeper role), it is still vitally important to provide quality feedback on the study to the authors (the consultant role). Such feedback is crucial not only for refining the manuscript for eventual publication in JCP or elsewhere, but also for aiding the knowledge and future work of the authors.
In the sections below, we offer guidelines that we hope will be helpful in preparing your narrative evaluation. Please note that our intention is not to be prescriptive. We acknowledge that there are many ways to approach the task of reviewing a manuscript, and we recognize that individual reviewers may disagree, perhaps strongly, with some of the points below. Our intention is not to impose rigid conformity, but rather to provide general suggestions for those who are relatively new to the task of reviewing for this journal.
The narrative should be phrased as a communication between you and the Action Editor about the manuscript. Please refer to the authors of the manuscript sparingly and, when doing so, use the third person. Critical feedback tends to be easier to accept when a review refers to some aspect of "the manuscript" and avoids phrasing in the second person. Consider the difference in the two examples below. Which would you rather receive as an author?
- (a) The manuscript could be improved considerably by updating the review of literature. . .The sample needs to be described in more detail so that readers can make a determination about its generalizability. . .A planned hierarchical multiple regression should be performed instead of using a stepwise approach.
- (b) You need to update your literature review. . .You haven't described the sample in adequate detail for readers to make a determination about generalizability. . .Your choice of a stepwise hierarchical regression was inappropriate, instead, use a planned hierarchical regression.
Many reviewers use phrasing comparable to example (a) above when making suggestions for revisions. They refer to the authors only in connection with positive features of the work, for example, "The authors used an innovative analogue design to explore this important research area."
Many reviewers begin their narrative with a paragraph summarizing the study. This practice serves two useful functions. First, it provides a brief statement of your essential understanding of the study and its findings, reassuring authors that you have read the manuscript in detail. (The positive effect on authors is not unlike the effective use of an accurate paraphrase in a counseling session.) Second, this paragraph serves to remind you about the study weeks later when you receive copies of the editorial decision letter and other reviews.
Next, many reviewers add a paragraph or two commenting generally on the manuscript. Commendable features can be mentioned in this section. The strengths of the manuscript should be described in some detail. It is important for authors to know what you think they have done well, together with your comments about what should be changed. This paragraph might also be the best place to point out problems with APA style that crop up throughout the manuscript, for example, biased use of language.
For both the Action Editors and the authors, clear separation of major and minor points is crucial. A more effective review will emerge when you are able to separate the forest from the trees.
What are the most salient points regarding the manuscript that you wish to share? Listing and explaining these clearly is very helpful. Please help the authors and Action Editor step back to see the forest, or larger issues, and help differentiate them from the smaller, more specific issues. Often a simple organizational structure, listing and explaining the major points first and then detailing the minor ones in a separate section, is the most helpful.
It is also very helpful if you number each substantive point or request for revision in your narrative. These numbers facilitate the Action Editor's reference to your review in the editorial decision letter.
What follows is a fairly thorough listing of questions and issues to think about in reviewing the manuscript. Again, the key is not to go through the manuscript answering each question in turn, but to look at the larger issues and whether the study addresses them. A review that marches through the following issues point by point is not especially helpful, because it makes no determination of what is important and what is not.
So, as much as possible, it is important to make clear which of your comments and evaluations are more important and which are less so. It is also helpful to differentiate your opinions and preferences for how something should be done from clear statements about what exactly needs to be attended to in any revision.
Certainly, reviews must provide enough information for the Action Editor and authors to understand the evaluation. This almost always requires more than a single-spaced page. Most reviews provide much more than minimal evaluative comments, and it is not uncommon for reviews to run several pages.
In my experience as an Action Editor and an author, reviews longer than three single-spaced pages are not especially helpful. Generally, that level of detail means the larger and smaller issues are not being distinguished. A good guideline, then, is to keep the review between one and three single-spaced pages. This allows enough detail to explain the points made without so much minutiae that the key issues are lost.
Key Issues to Remember
- Is the manuscript important?
- Is the manuscript fixable? How could it be altered?
- What are the key points and what are the more particular points and are these differentiated?
- In general, good reviews do not exceed three single-spaced pages (there are always exceptions).
- Keep in mind that you too are an author, and ask yourself what a good review would look like if this were your manuscript.
After the reviewers submit their recommendations, the Action Editor must weigh them and make an editorial decision about acceptance. While this decision often agrees with the majority of the reviewers' recommendations, this is not a requirement.
The Action Editor has an independent say as to the disposition of the manuscript. If the Action Editor decides that a manuscript is not appropriate for publication, then this message will be delivered to the authors along with clear justification of the reasoning for the decision. If the Action Editor decides that the manuscript is appropriate for resubmission and reconsideration, then it is his or her task to make it very clear what exactly is needed in the revision.
The decision letter will certainly draw on many of the points raised by the reviewers, as well as any separate ones the Action Editor deems appropriate. However, it is equally important that the Action Editor make clear which reviewer points do not need to be addressed in the revision. Being as clear as possible about what is and is not required helps the authors craft a revision.
The Collaborative Review Model (CRM) is the approved P&C Board mentoring model for manuscript reviewing and general introduction into the APA publications pipeline.
The CRM asks a participating reviewer to consider using others in a mentor/mentee review process.
The CRM requires
- prior notification to and approval from the inviting editor
- the reviewing mentor to train the graduate student/early career psychologist mentee about the scholarly, legal, and ethical parameters prior to distributing a confidential manuscript
- that the reviewing process is a collaborative product developed at the direction of the senior reviewer
- that all reviewers' names are submitted with the review so that students and early career psychologists receive credit for their work
The P&C Board recommends Reviewing Scientific Works in Psychology (Sternberg, 2005), which offers intelligent, "how-to-review" guidance for experienced and novice reviewers alike. Note that the P&C Board does not accept the senior review/student review paired process, which generates two separate reviews.
Thus, the Collaborative Review Model mentorship program involves a total collaborative "working together jointly" type of review in which the senior person walks the student through the review process step-by-step and mentors the student in a repeated series of meetings.
On submission of a review via the JBO, reviewers will now be able to add one co-reviewer to their paper, and on submission the co-reviewer's contact information will be added to the JBO pool automatically. If the co-reviewer is still a predoctoral student, that individual will be marked as a student and his/her name will not display in future search results. However, if the person is a postdoctoral fellow or working professional, the name will be available for future reviewer requests.
Elliott, R., Fischer, C. T., & Rennie, D. L. (1999). Evolving guidelines for publication of qualitative research studies in psychology and related fields. British Journal of Clinical Psychology, 38, 215–229.
Haverkamp, B. E. (2005). Ethical perspectives on qualitative research in applied psychology. Journal of Counseling Psychology, 52, 146–155.
Maher, B. A. (1978). A reader's, writer's and reviewer's guide to assessing research reports in clinical psychology. Journal of Consulting and Clinical Psychology, 46, 835–838.
Mallinckrodt, B. (2006). Editorial. Journal of Counseling Psychology, 53, 126–131.
Morrow, S. L. (2005). Quality and trustworthiness in qualitative research in counseling psychology. Journal of Counseling Psychology, 52, 250–260.
Ponterotto, J. G. (2005). Qualitative research in counseling psychology: A primer on research paradigms and philosophy of science. Journal of Counseling Psychology, 52, 126–136.
Sternberg, R. J. (Ed.). (2005). Reviewing scientific works in psychology. Washington, DC: American Psychological Association.
Vacha-Haase, T., & Thompson, B. (2004). How to estimate and interpret various effect sizes. Journal of Counseling Psychology, 51, 473–481.
Wilkinson, L., & the Task Force on Statistical Inference, APA Board of Scientific Affairs. (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist, 54, 594–604.
Below is a set of issues to consider in any manuscript. They should not be used as an outline for constructing your review, but as issues to consider in your evaluation.
Although the guidelines below generally follow the order of the components in a manuscript (i.e., introduction, method, results), after the opening paragraphs your narrative evaluation need not follow the same order, nor is it necessary for your evaluation to comment on each section of the manuscript. In fact, Action Editors generally find reviews more helpful when all the most serious concerns are collated together into a single series of points early in the review, followed by a clearly separated section of less serious concerns.
The basic task of this section of the manuscript is to make a persuasive case for the importance of this study. Consider a typically well-informed, regular reader of JCP, but one who is not an expert in this specific research domain.
- Would such a reader be convinced by this introduction that the study addresses an important research problem, and that the research questions are well-justified?
- Does the study advance scientific inquiry in counseling psychology and constitute an original and substantive contribution to the field?
- Do the authors provide justification for the study based upon a review and incorporation of relevant literature, both qualitative and quantitative?
- Does the author articulate the goals of the study and the contribution of the study in extending or addressing gaps in the literature?
- Are research questions stated clearly and derived logically either from theory, conceptual framework, thorough literature review, anecdotal evidence, and/or clinical experience (or some combination thereof)?
- In some cases, more exploratory studies may not have a clearly defined framework. For example, some qualitative studies would support having the participant observer enter into a community to determine community need. If no conceptual framework is given, has the author given a clear justification?
In quantitative designs
- Are research questions/hypotheses well grounded in theory and previous research?
- If hypotheses are proposed, is each one phrased as a falsifiable statement, and is each one logically derived from the theory and previous research presented?
- If research questions are posed, is this choice justified in your view, or do you believe the state of knowledge in this research area warrants a specific hypothesis instead of a general research question?
- Have the authors clarified the critical assumptions that underlie the logic of their design, or are there important assumptions left implicit that should be directly addressed?
- Have authors justified convincingly why this quantitative approach is the most appropriate methodology for their study?
For qualitative designs
- Are the research questions logically derived from previous research, theory, and/or experiential evidence?
- Do the authors locate their research questions and methods in an appropriate research paradigm (Ponterotto, 2005)?
- Are the authors clear about whether their study is confirmatory/verification oriented or discovery oriented?
- If the former, do the authors anchor their research questions in established theory?
- If the latter, are the authors careful not to let theoretical postulates overly focus the research questions and limit discovery?
- Have authors justified convincingly why the chosen qualitative design is the most appropriate methodology for their study? [Note: There is a misunderstanding among some that qualitative researchers neglect, avoid, or dismiss theory. Qualitative researchers must, of course, master theory and research in their topical area, but some qualitative paradigms (e.g., more constructivist) address and integrate theory more in the discussion to maintain the "discovery" focus of the research questions, whereas other paradigms (e.g., more postpositivist) anchor their research questions (and interview protocols) in theory.]
- Have the authors justified convincingly why this qualitative approach is the most appropriate methodology for their study?
One basic task of this section is to provide sufficient information so that future researchers could replicate this work.
- If you were required to replicate this study with only the method section as a resource, would you be able to do so with sufficient fidelity?
- Has the population of interest been defined and described in sufficient detail?
- Are participants adequately described?
- Is the sample adequate for answering the questions posed?
In quantitative designs
- Have the sampling methods, demographic characteristics, and attrition of the sample been described in sufficient detail so that readers can reach an informed conclusion about what generalizations are possible?
- Have the proportions of ethnic/racial minority members of the sample been adequately described?
- When a sample of convenience is used, are the generalizations proposed by the authors reasonable?
- How have the purposes of the study been described to participants, and how might these procedures influence the results?
- Did considerations of statistical power contribute to an a priori determination of adequate sample size?
- In your view, does this sample afford sufficient statistical power and can meaningful generalizations be drawn to the population of interest?
In experimental studies
- Have appropriate manipulation checks and experimental controls been included in the design?
- Has the procedure for assigning subjects to conditions introduced possible confounds?
In quantitative designs
- Have all variables been appropriately operationalized, and the measures used to assess them adequately described?
- Are the measures appropriate for the participants in this sample?
- Have appropriate psychometric characteristics (e.g., scoring, dimensionality, reliability, and validity) been reported for all measures and subscales used? (For example, has retest reliability been reported for all measures of constructs the researchers have conceptualized as traits? Has predictive validity been reported for any measure used as a screening tool?)
- Are the reports of psychometric properties that are cited from other studies relevant for the sample used in this study? [An error still frequently seen in manuscripts submitted to JCP is that reliability and validity are ascribed to a measure without reference to the sample from which estimates were derived. See Wilkinson, L. and the Task Force on Statistical Inference APA Board of Scientific Affairs (1999)].
- How is reliability for the study sample addressed? For example, are reliability coefficients for the current sample reported?
- If measures were developed for this study, is sufficient information provided about the psychometric integrity of these measures?
- If rating scales are used, have the raters been adequately trained and has the reliability of the rating scheme been sufficiently documented (e.g., Kappa or Intraclass Correlation Coefficients)?
- As you consider the methodology in total, and keeping in mind that no research design can be free of flaws, does the design adequately control threats to internal and external validity?
For qualitative studies
- Has the paradigm underpinning the research been clearly articulated; or, if not specified, do the research question, research design (method), data gathering, and analysis reflect a congruent paradigmatic approach?
- Have variants of the method or analysis approach been cited?
- Has the research method or design been clearly identified and justified as appropriate for the research purpose?
- Has the researcher's stance in relation to the participants, community, and phenomenon been articulated?
- Is there evidence of reflexivity on the part of the researcher?
- Have paradigm-appropriate strategies been described for managing subjectivity (e.g., self-reflective journal, research team or peer debriefers, auditors, participant feedback)?
For qualitative studies
- Are the procedures involved in the qualitative method described thoroughly?
- Have the procedures been described in enough detail so that the readers can judge that the investigation was carried out in a trustworthy manner (Morrow, 2005)?
- For example:
- Is the sampling/selection process thoroughly described; was sampling purposeful; are selection criteria described, and are they informed directly by the research questions; are decisions about sample size — e.g., redundancy, saturation — articulated; are recruitment strategies described adequately?
- Are issues of entry into the field and the use of gatekeepers described?
- Are data management aspects such as recording, transcription, and compilation of the data corpus spelled out?
- Are data collection strategies described in detail? Has the author specified who was involved in data collection? Are monitoring processes described (e.g., checking of interviews, analytic memos, site visits, discussion, observation)?
For qualitative studies
- Are the investigator's interviewing stance (e.g., structured, semistructured, unstructured) and approach described?
- Are interview questions included, either in the text or an appendix?
- Are the training and supervision of interviewers described?
- Are other forms of data described adequately (e.g., observations, focus groups, documentary evidence, field notes, participant follow-ups)?
- Are data analysis steps described in detail sufficient for understanding how results were generated and so that the analysis could be replicated?
- Have the researchers provided for ways to check the adequacy of the analysis (e.g., audits, peer reviewers, triangulation of data, involvement of participants)?
- Have the authors specified how participants were involved in the interpretation of the data? For example, if participant checks are used, does the researcher indicate how disagreements (different perspectives on interpretation) are resolved?
- Are authors' interpretive statements congruent with the results obtained, and with supporting quotes?
- Are data analysis software packages used for data analysis identified, and are reasons given for their selection?
- Are standards of trustworthiness (rigor) clearly articulated, either in a separate section or embedded in the text (Morrow, 2005)?
- Has the investigator identified particular ethical considerations in regard to the study, including steps taken to reduce potential risks to participants, especially with regard to issues of confidentiality and researcher/participant roles that are unique to qualitative research (Haverkamp, 2005)?
Results, Statistical Analyses, Figures, and Tables
- Do the results follow closely from the goals described previously, that is, have the researchers studied the questions set forth in the introduction?
- Do the results provide answers to the research questions that have been posed?
Quantitative manuscripts submitted to JCP should conform to the guidelines for reporting statistical analyses published in 1999 by the APA Task Force on Statistical Inference (American Psychologist, 54, 594–604). Reviewers not familiar with these guidelines should consult this resource.
For quantitative studies
- Have the data been screened for coding errors and outliers?
- Is the treatment of missing data appropriate?
- Are the quantitative methods used the most appropriate choices for testing the hypotheses or research questions?
- Are they appropriately matched to the nature of the data (e.g., random vs. fixed effects, longitudinal vs. cross-sectional)?
- Have the authors demonstrated that requirements and underlying assumptions of each statistical test have been fulfilled by the data (for example, assumptions of independence, and normality – both univariate and multivariate)?
For quantitative studies
- Reports of statistical significance should be coupled with an appropriate estimate of effect size (see Vacha-Haase & Thompson, 2004).
- Effect size estimates should be reported for each statistical test of each hypothesis or research question.
- Reports of point estimates should be accompanied by appropriate confidence intervals.
- Have appropriate corrections for the inflation in Type I error been used in reporting results of multiple statistical tests?
- Is each figure and table clear, accurately labeled, and essential, or could the material be presented more efficiently in text?
- Is the material in each table or figure self-explanatory?
- Does the text unnecessarily duplicate material that readers can glean more efficiently from the table?
For qualitative studies
- Is the Results section consistent with the defining paradigm and approach, producing results consistent with what was anticipated in the Introduction?
- Do the results seem logical and clear to the reader given the detailed description of the procedures and given the detail and organization of the Results?
- Do the category labels fit with the examples, and have adequate definitions of categories been provided?
- Can the reader say, "Yes, I can see how these themes were generated"?
- Are the results characterized by "thick description" (i.e., rich, complex descriptions set in the context of participants' lives) and saturation of categories?
- Are the results concise, fluid, and interesting rather than laborious to read and digest?
- Has the researcher provided a sufficient number of examples so that the results come to life?
- Are participant voices (quotes) presented in sufficient detail, and are they logically connected and placed thematically?
- Where appropriate, are results presented in a figure or table consistent with the paradigm and design?
- Does the Discussion provide an integration of the findings, referencing literature and theory presented in the Introduction, rather than merely restating the Results?
- Are the results discussed in the context of the available qualitative and quantitative literature?
- Have the authors noted the unique contributions of this study to theory and method?
- Is an integrative summary provided noting how the present study and its operating paradigm have advanced the science given previous research on the topic?
- Have the authors noted all the important limitations of the study?
- Have the authors developed conclusions and recommendations that are justified by the data and results, appropriately limiting and clearly identifying as speculative those inferences or conclusions that go beyond the data?
Specifically for quantitative studies
- Is the discussion confined to interpretation of findings directly relevant to the hypotheses or research questions posed in the introduction, or do sections stray from this focus?
- Are the interpretations justified by the findings?
- Is language implying causal relationships, if it is used at all, used appropriately?
- Is each hypothesis or research question thoroughly addressed in terms of the results of this study?
- Are these results compared sufficiently with the results of other studies? For example, how do the effect size intervals obtained in this study compare to those reported in previous research? Practical significance of the effects should be discussed together with statistical significance.
- When results of this study differ from previous research, are plausible explanations offered?
- Have alternative explanations for results of this study been presented?
Specifically for qualitative studies
- If it is a constructivist study, does the Discussion now examine theory, conceptual models, and further research studies that were omitted from the Introduction in order to maintain the "discovery attitude"?
- In a constructivist or critical study, is there a statement as to how the researcher herself/himself was impacted or changed by participation in the study?
- Has the author been clear in the results and discussion about the proper uses of qualitative findings (e.g., that findings are not generalizable); has the author avoided using generalized statements about people who have experienced the phenomenon (thus implying generalizability)?
- Has the author distinguished between limits inherent to qualitative methods (e.g., smaller sample sizes, researcher subjectivity, which are legitimate components of a qualitative study) and limitations of the study itself?
Before completing your evaluation, please review the abstract once more.
- Does the abstract provide a balanced description of the most important findings?
- In quantitative studies, is an account of hypotheses that were and were not supported provided?
- Does any portion of the abstract overstate the strength of effects or the magnitude of support for a specific hypothesis?
- For qualitative studies, is the approach or paradigm guiding the research mentioned?
- Have the findings been accurately summarized?
- Finally, is the abstract as informative as possible?
- Do you have suggestions for portions that could be deleted and replaced, within the space limitations, to increase the information value of the abstract?
- Is the abstract followed by up to five keywords to guide the PsycINFO indexing process?
Revised August 2010
In preparing this document, I have slightly modified an earlier guide developed by Brent Mallinckrodt (dated 8/4/2006). There have been many versions of this document and thus this version owes much (most) to previous authors including Charles Gelso, Sam Osipow, Clara Hill, and Jo-Ida Hansen.