“Preoccupations of a Journal Editor”: Still Preoccupied

David M. Schultz, Chief Editor

https://orcid.org/0000-0003-1558-6975

© 2022 American Meteorological Society. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).


“I have always been intrigued by the remarkable ubiquity of that unit of communication, the scientific paper.” Batchelor (1981, p. 6)

I do not recall exactly when or how, but in the 1990s I discovered “Preoccupations of a Journal Editor,” by the noted fluid dynamicist G. K. Batchelor (Batchelor 1981). “Preoccupations” was a 25-page editorial published on the 25th anniversary of the Journal of Fluid Mechanics, a journal that he founded and where he served as chief editor for that 25-year period (and would go on to serve for more than 40 years). The editorial was engaging, insightful, honest, and bold, openly discussing the often-secretive world of peer review. That editorial inspired me to become an editor someday. It turns out I was not the only one, as I later met other editors who were similarly inspired.

I have served 3 years as associate editor, 4 years as editor, and 15 years as chief editor of Monthly Weather Review. I am honored and humbled to have served for so long, especially during Monthly Weather Review’s sesquicentennial. The year started with a historical review of the journal’s 150 years (Schultz and Potter 2022), cowritten with Sean Potter, biographer of Cleveland Abbe (Potter 2020), who was Monthly Weather Review’s longest-serving editor and arguably its most influential one. Nearly every month since, Monthly Weather Review has been publishing editorials (Table 1). Some described topics from Monthly Weather Review’s 150-yr history (e.g., Schultz 2022a,b; Potter and Schultz 2022). Other editorials connected our sesquicentennial with the centennial of the publication of L. F. Richardson’s Weather Prediction by Numerical Process (Schultz and Lynch 2022) and the sesquicentennial of the Leipzig Meteorological Conference (Börngen et al. 2022)—a crucial early step in securing international collaboration in the collection, analysis, and sharing of weather observations, and one that makes meteorology a model for global open science. A third type of editorial provided practical tips for reviewers (Schultz 2022c) and authors (Schultz et al. 2022), the latter written with the current Monthly Weather Review editors.

Table 1. Editorials during the 150th year of Monthly Weather Review.

In this final editorial of the year, I delve into the mind of an editor, having had two decades to reflect on scientific journals and still preoccupied with the topics about which Batchelor wrote 41 years ago, but brought into a modern, and Monthly Weather Review–specific, context. Batchelor (1981) structured his editorial around six topics: the scope of a journal, the unit of communication, the statistics of acceptance of papers, the evaluation of papers for Journal of Fluid Mechanics, the role of referees, and the future of journals. I will focus on the last four topics, following his structure here.

1. The statistics of acceptance of papers

Batchelor (1981) wrote about the decision-making process at the Journal of Fluid Mechanics, discussing how they would calculate individual editors’ rates of acceptance and compare them regularly to ensure broad agreement about what constitutes a publishable paper. The mean acceptance rate at that time was 47%, and Batchelor (1981) compared that rate with those in other disciplines, finding that acceptance rates were highest among the sciences and lowest among the social sciences and humanities.

When I started as an editor at Monthly Weather Review in 2004, I was amazed at how many submissions were being rejected: 34%. Even more surprising to me was that these rejections included submissions written by well-respected scientists at prestigious research laboratories and by professors who were teaching and supervising students. Could so much science be unpublishable, I wondered? Could Monthly Weather Review’s decision-making really be so harsh? I got some pushback for talking about rejection rates openly. For some editors, the first rule of “Editors’ Club” was not to talk about Editors’ Club. However, like Batchelor, I wanted the inner workings of Monthly Weather Review to be more transparent. My thinking was that the more people knew about the peer-review process, the better the system would be for all involved.

In 2008, I conducted a survey of 63 journals that published articles on atmospheric science at the time, obtaining rejection rates from 47 (75%). Monthly Weather Review’s rejection rate was actually around the average for all journals (38%) and was typical of those published by the American Meteorological Society (AMS) (range of 21%–39%). So, neither Monthly Weather Review nor AMS journals as a whole were atypical in this regard (Schultz 2010a).

Further analysis of the results challenged some conventional wisdom about scientific journals (Schultz 2010a). In particular, more prestigious journals did not necessarily have higher rejection rates than less prestigious journals. Apart from certain journals with rejection rates exceeding 90%, many prestigious journals had low rejection rates (15%–30%). The method of determining prestige was unimportant—whether determined by one’s subjective assessment or by impact factor (i.e., the mean number of citations received in a given year by the articles that the journal published in the previous two years). It is possible that prestigious journals received higher-quality submissions and so were less likely to reject them. On the other hand, one factor that was related to rejection rates was the publisher, with journals owned by commercial publishers at that time having higher rejection rates than those owned by professional societies and other nonprofits. Because publishing in journals from commercial publishers was often free at that time, such journals might have been flooded with manuscripts by authors seeking to minimize costs, resulting in high rejection rates.
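For readers less familiar with how this metric is constructed, the standard two-year form can be sketched as follows (the symbols are my own shorthand, introduced only for illustration):

```latex
% Two-year impact factor for year Y (standard Journal Citation Reports form);
% C_Y, N_{Y-1}, and N_{Y-2} are notation introduced here for illustration only.
\mathrm{IF}_{Y} = \frac{C_{Y}}{N_{Y-1} + N_{Y-2}}
```

Here C_Y is the number of citations received in year Y by items the journal published in years Y−1 and Y−2, and N_{Y−1} and N_{Y−2} are the numbers of citable items published in those two years. For example, a journal that published 200 citable articles across 2019 and 2020, and whose articles from those two years received 600 citations during 2021, would have a 2021 impact factor of 3.0.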

The study of rejections continued to interest me, leading to further work. In a discussion of perceived differences in manuscripts (here and throughout, “manuscript” indicates a paper that has been submitted for review but not yet accepted or rejected) between Monthly Weather Review and Weather and Forecasting, then-Chief Editor Bill Gallus asked me whether manuscripts that had three reviewers were more likely to be rejected than those that had two. A dataset of 500 manuscripts at Monthly Weather Review showed that having more reviewers did not lead to more rounds of reviews or to higher rejection rates (Schultz 2010b).

Another question that arose was the problem of multiple-part manuscripts (those with Part 1, Part 2, etc., in the title). Regardless of whether the multiple parts were submitted together or separately, these manuscripts often seemed to struggle in the peer-review process, and editors and reviewers did not like them much either because of the additional workload. An investigation into 308 of these manuscripts at Monthly Weather Review revealed that their rejection rate was not substantially different from the overall rejection rate for Monthly Weather Review (Schultz 2011). Reading the reviewer comments and the editors’ decisions led to the following recommendations: authors should write the parts to be sensibly independent of each other, should avoid references to unsubmitted content, and should ensure sufficient and substantiated scientific content within each manuscript. I have been pleased to see that apparently fewer multiple-part manuscripts have been submitted to Monthly Weather Review over the past few years.

2. The evaluation of papers for Monthly Weather Review

Batchelor (1981) expounded on the criteria to determine suitability for publication in Journal of Fluid Mechanics, including the subjective criterion of what constituted a significant contribution. He also discussed the scenario in which “an editor might fail to recognize the value of a paper and so a journal with a high rejection rate might unwittingly suppress an important development; and that this risk is so serious as to justify a less rigorous selection” (Batchelor 1981, p. 16). Indeed, what is the chance that a rejected manuscript is a diamond that editors failed to recognize? Batchelor (1981, p. 17) concluded that this was unlikely at Journal of Fluid Mechanics, in part because “there are not many gems even among the papers that are accepted!” To investigate that question at Monthly Weather Review, I looked at the most recent 100 rejections from 2022. I tallied the reason or reasons for the rejections given by the editor and reviewers. The reasons fell into four categories (Fig. 1). Because multiple reasons could have been stated by the editor for the rejection, the numbers in Fig. 1 add up to more than 100%.
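As a minimal illustration of how such a tally produces percentages that sum to more than 100% (the reason labels and counts below are hypothetical, not taken from the actual decision letters), each manuscript can carry several reasons, and each reason is counted once per manuscript:

```python
from collections import Counter

# Hypothetical reasons listed in five decision letters (one set per manuscript).
rejections = [
    {"experimental design", "poor communication"},
    {"unsuitable topic"},
    {"experimental design"},
    {"minimal new knowledge", "poor communication"},
    {"experimental design", "minimal new knowledge"},
]

# Count how many manuscripts list each reason.
counts = Counter(reason for reasons in rejections for reason in reasons)

# Percentages are relative to the number of manuscripts, so because a
# manuscript may list several reasons, the percentages can total more than 100%.
for reason, n in counts.most_common():
    print(f"{reason}: {100 * n / len(rejections):.0f}%")
```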

Fig. 1. Principal reasons for rejection of the 100 most recently rejected manuscripts at Monthly Weather Review from 2022. The reasons fall into four categories (colored): inherent problems with the manuscript (blue), an improved approach to the science is needed (black), new knowledge is poorly presented or minimal (green), and poor communication (red).

One category of rejections was for inherent problems with the manuscript (Fig. 1).

  • Ten of the manuscripts were rejected, in part or solely, because the authors did not adequately respond to reviewer comments from previous rounds of reviews. Authors are free to disagree and present evidence to refute reviewer comments, but decision letters are clear that authors must respond to all comments by reviewers and editors, with failure to do so leading to possible rejection.

  • Ten of the submissions were rejected because the manuscript addressed a topic unsuitable for Monthly Weather Review. Although there are subtleties about what might be suitable, reading the terms of reference (AMS 2022a) and looking through typical published articles in Monthly Weather Review could have avoided most of these rejections.

  • Eight of the manuscripts were rejected for plagiarized content. Plagiarism is unacceptable in scientific articles and is addressed by the AMS Author Expectations (AMS 2022b).

  • Eight of the manuscripts were rejected in part because the language was unacceptably poor; in three of those cases, language was the sole reason for rejection. Although Monthly Weather Review welcomes the international exchange of scientific knowledge, we do not expect our readers to have to struggle to read submissions. Schultz et al. (2022, section 8) offer suggestions to address these issues.

  • One manuscript was rejected because it was much too long (50% over the 7500-word limit), and the editor did not approve the request for the length waiver.

A second category included issues that could have been identified early in the research through an improved approach to the science (Fig. 1). For example, the most common reason for rejection across all categories was the need for a better experimental design and choice of methods (29 rejections). Other concerns were about ensuring that the dataset and calculations were appropriate for the study (11), using a single case study for modeling or data-assimilation experiments where multiple cases or a longer period of simulation would be expected to obtain generalizable results (9), not comparing results with standard baselines (2), and not verifying assimilation experiments against independent datasets (i.e., those that are not being assimilated) (2). One study was just plain wrong from its formulation to its execution.

A third category of rejections included those in which the new knowledge was unstated, was not presented clearly enough in the manuscript, was relatively small, or was nonexistent (27 rejections) (Fig. 1). Reviewers and editors will often refer to this as “novelty,” although “new knowledge” or “contribution to the literature” is perhaps a more accurate description. Authors wishing to avoid this problem can ensure that they ask a good scientific question, make a clear statement as to what the new knowledge is in the manuscript, and distinguish their results clearly from previously published results.

A fourth category of rejections included those with poor communication, poor presentation, or poor understanding of the science (Fig. 1). These included superficial understanding of the results or the inability to communicate them clearly (27 rejections), a mostly descriptive presentation of the results with little to no accompanying physical explanation (20), methods not explained or not explained well enough (15), a poorly motivated and written introduction (13), relevant literature not being cited (11), statements made without evidence from the manuscript to support them (3), and lack of coherence in the writing (2). Our editorial from last month provides guidance to correct these kinds of problems (Schultz et al. 2022).

Of the 58 rejections that received reviewer reports, 5 received no reviewer recommendations of rejection. In these cases, the editor rejected the manuscript because the scope of the work requested by the reviewer(s) meant that the manuscript would need to undergo substantial revisions (2 rejections), the editor saw flaws in the manuscript that none of the reviewers identified (2), or the review process was not converging after a number of rounds (1). These decisions are some of the hardest ones that editors make, and they are not made lightly. Monthly Weather Review is certainly not the only journal where such decisions are made (e.g., Hargens 1988; Hargens and Herting 2006; Bornmann and Daniel 2010), but such decisions are part of the value that the experience and expertise of editors bring to the position.1

Given that these 100 manuscripts were recently rejected, it is too early to see whether they will be revised, resubmitted, and published at Monthly Weather Review or elsewhere, let alone what their impact on the field might be (e.g., through citations or downloads). So, to address the question of what happens to manuscripts rejected from Monthly Weather Review, 50 manuscripts rejected from Monthly Weather Review in 2018 were examined. The year 2018 was chosen to allow sufficient time for the authors to revise, resubmit, and publish the manuscript. Of those 50 manuscripts, 30 (60%) were later published: 18 (36%) in other journals and 12 (24%) in Monthly Weather Review. All 30 appeared in print in 2019 or 2020. Thus, returning to Batchelor’s question of whether editors failed to recognize the value of rejected manuscripts, these statistics suggest that the rejections were for reasons meriting a substantial rethink of the manuscript, considerable revision, and resubmission (although some rejected manuscripts may have been resubmitted to a different journal relatively unchanged and eventually published). Nevertheless, publishable science and dedicated authors can eventually find a home for their revised manuscript.

Why manuscripts get rejected is not merely an academic question, however. At another journal for which I served, the commercial publishing partner asked whether rejected manuscripts could go to a new proposed journal where these second-class manuscripts would be published. This proposed second-class journal would serve two purposes: to bring more money to the publisher and to bring more money to the publisher. What the publisher did not realize is that this other journal already publishes (as do AMS journals) manuscripts that meet minimum standards. Manuscripts that were rejected did not meet those minimum standards. Given that there were valid reasons for rejection, and that differences of opinion or newsworthiness tended not to factor into the decisions (Fig. 1) as can happen at some other journals, such a proposed second-class journal would be full of articles with substantial methodological flaws or communication problems.

Taking this argument one step farther, one might argue that peer review could be skipped with the submissions going right to press. The market has already solved this problem in two ways: preprint servers and predatory publishers. Preprint servers are the most effective way of getting non-peer-reviewed research out into the public domain more quickly—and are widely recognized as such. On the other hand, predatory journals are those that will take submissions, give them minimal, if any, peer review, and then take the authors’ money. The bar is very low to get published, with even fictional and nonsense manuscripts getting accepted in such journals (e.g., Bohannon 2013; Allf 2020; Jan and Gul 2022). Perhaps predatory journals are just the natural extension of the view that peer review is not a determination of whether a paper is right or wrong but merely that two to three reviewers and an editor have examined the manuscript, have found no obvious flaws in it, and have found it to meet minimum standards. Predatory and other low-quality journals have just taken off this mask for the sake of money. Batchelor (1981, p. 16) was strongly opposed to such a perspective, writing:

Papers of poor quality do more than waste printing and publishing resources; they mislead and confuse inexperienced readers, they waste and distract the attention of experienced scientists, and by their existence they lead future authors to be content with second-rate work.

Indeed. We at Monthly Weather Review agree.

3. The role of referees

Batchelor (1981) next discussed the peer reviewers. (I prefer the term reviewer to referee as a more accurate description of the role. Whereas referees in a football game make calls and enforce them, peer reviewers merely comment on the manuscript and provide recommendations to the editor, whose job it is to make the editorial decisions.) Batchelor’s comments were wide-ranging, addressing whether the burden of reviewing should fall uniformly across the whole discipline or should be concentrated among recognized authorities, how long performing a review takes, whether the number of reviewers on a manuscript affected its likelihood of rejection, his “pleasant experience” (Batchelor 1981, p. 20) that most authors value reviewer comments, and whether the peer-review system was biased by the perceived status of referees and authors. Batchelor (1981) even raised the issue of whether revealing reviewer names and publishing the peer reviews would remove biases. He concluded that such efforts to address bias should fall to the editor and that few would want to read the peer-review exchanges and manuscript drafts. The success of present systems of open peer review (i.e., journals archiving the manuscript drafts, reviews, and responses to reviews for future readers) challenges that notion.

Like Batchelor’s experience, my experience is that reviewers are genuinely sincere in wanting to be helpful to authors. Negativity occasionally creeps in, but these instances are thankfully infrequent and are easily dealt with. Serving as a reviewer is a way to give back to the community. The best reviewers who write numerous high-quality reviews and submit them on time are typically excellent candidates for associate editorships. Being an associate editor (i.e., someone who provides a large number of high-quality reviews and provides special assistance to decision-making editors) is a way to be recognized for those reviews. Associate editors are often early-career researchers, with some academics having served in this role before receiving tenure, for example. Our Editors Award winners are almost entirely drawn from our associate editors.

Serving as an associate editor is also a step for those wishing eventually to become editors (i.e., those who oversee the peer-review process and make decisions on manuscripts). In an editor, we additionally look for people who (i) demonstrate an ability to be prompt in completing reviews (and, hence, will be prompt in handling manuscripts as editors), (ii) write exceptionally insightful reviews that get to the heart of a manuscript’s suitability for publication and provide comments to make it publishable or to improve it, (iii) demonstrate an ability to discriminate between manuscripts that require rejection and those that require revision, (iv) have success in getting most, if not all, of their submitted manuscripts published, (v) have the personal confidence to identify manuscripts that should not be sent to peer reviewers because of their poor quality, (vi) have domain expertise that fills a scientific gap on the editorial board for which a reasonable number of manuscripts are received, and (vii) have the enthusiasm to serve in the position.

Writing reviews goes beyond just service to the community, however. Authors have the responsibility to perform reviews (ideally, two or three times their number of submissions). Moreover, learning to read others’ papers, to see the strengths and the flaws, and then to write a critique is an essential part of being a scientist. Like most skills, reviewing improves most effectively through practice, and it even feeds back into writing manuscripts. The large number of reviews I did early in my career made me a more efficient writer of reviews and made it easier for me to quickly spot the weaknesses in others’ papers and my own. The feedback I was giving to others meant that I was internalizing those comments and (it is hoped) not making the same mistakes in my own papers. In this way, active engagement with the peer-review process is a way to further one’s own career.

I close this section with a plea for our community to recognize the need for professional conduct during the peer-review process. The peer-review process can sometimes become adversarial, pitting reviewers and editors against authors, and rejection of one’s manuscript can elicit understandably strong emotions. However, lashing out at the editor or reviewers with accusations of bias is not helpful. Fortunately, the number of incidents is small, but they do undermine Monthly Weather Review’s professionalism. In the past, aggressive behavior by authors toward editors was often excused or dismissed. Since 2019, AMS has had a Code of Conduct that governs the ethical conduct of members (AMS 2022c). As such, examples of bullying behavior, abusive or demeaning language, or exerting author privilege can now be addressed by the professional society. Debates about science are acceptable; there may be multiple valid viewpoints, and we encourage debate both within the peer-review process and in exchanges in the Correspondence section (i.e., comments and replies) where appropriate. Personal attacks, however, as well as attempts to bully, intimidate, or harass editors, are unacceptable. Although such behavior occurs infrequently, I hope that AMS takes steps to address it in the future.

4. The future of journals

In this section, Batchelor mulled the future of journals, in particular describing a study from Leicester University about the future possibilities of an “electronic journal” (Batchelor 1981, p. 24):

A newly submitted paper is fed into the computer system, perhaps at the author’s own institution, and all subsequent communications between ‘editor’, referees, author and readers can be conducted via terminals of the computer system. When a paper has been ‘accepted’, perhaps after revision suggested by referees, it is put on one of the regular lists promulgated by the editorial centre, and readers in universities and research institutions can then call up the abstract, or the whole paper if they wish, on the display screen of their communications terminal.

This prescience mirrors what came to pass at AMS journals starting in the 1990s (as reviewed in Schultz and Potter 2022, 28–30) and scholarly publications in general. Although electronic peer review and dissemination have brought myriad benefits, there is a downside: the development of predatory and low-quality electronic journals (as discussed in section 2).

Another result of the growth of electronic journals is the ubiquity of bibliometrics, or statistics on journals, articles, and authors. These metrics have been useful in the sense that it is easier to track publications and who is citing them. What has been arguably less useful is the importance placed on these so-called objective measures of science. Such metrics, especially the impact factor, have many well-documented problems: they are discipline dependent, they are easily gamed by a few highly cited articles, their calculations have changed over time, and not all journals have an impact factor (e.g., PLOS Medicine Editors 2006; Panaretos and Malesios 2009; Bornmann et al. 2012; Ioannidis and Thombs 2019; Davis 2020; Allen and Iliescu 2021). Nevertheless, these metrics hold sway over many authors, often to the authors’ own detriment. For example, one scenario is when Monthly Weather Review is an author’s first choice for publication not because it is the most appropriate journal for the work or because it would yield the largest, most relevant readership, but because our metrics are good enough to meet some arbitrary standard important to the author (and possibly imposed by rules or expectations from governments, funding agencies, employers, or even the scientific community). This scenario becomes clear when authors decline to transfer to a more appropriate journal, but one without the same metrics: such submissions are almost always rejected thereafter.

Another issue is that the impact factor is designed to capture citations only within two years after publication (although there is also a five-year impact factor). However, a single metric cannot represent the rich diversity of quantitative information about a journal. As Roebber (2009, 601–602) wrote about forecast-verification metrics, “summary measures of forecast quality are the start rather than the endpoint of investigations intending to understand and improve performance.” Specifically, metrics other than the impact factor show Monthly Weather Review near the top of the rankings in the Web of Science category “Meteorology & Atmospheric Sciences” (originally compiled by the Institute for Scientific Information, later Thomson Reuters, and now Clarivate). In particular, of those 108 journals, Monthly Weather Review has the fourth-highest cited half-life, at 16.4 years, meaning that half of the citations that Monthly Weather Review received in the most recent year, across all articles in all journals tracked by Web of Science, were to articles published more than 16.4 years ago. Thus, Monthly Weather Review publishes articles with enduring impact. The cited half-life is perhaps not a well-known metric, but its existence demonstrates that a single metric such as the impact factor cannot, and should not, be used in isolation to characterize a journal.
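To make the cited half-life concrete, here is a minimal sketch of the underlying idea, using entirely hypothetical publication years (and a simple median rather than the interpolated cumulative distribution that Web of Science actually uses):

```python
import statistics

def cited_half_life(cited_pub_years, citing_year):
    """Median age, in years, of the articles cited during `citing_year`.

    Each entry in `cited_pub_years` is the publication year of one cited
    article, repeated once per citation it received in `citing_year`.
    """
    ages = [citing_year - year for year in cited_pub_years]
    return statistics.median(ages)

# Hypothetical publication years of the articles cited in 2022.
example_years = [2021, 2019, 2015, 2010, 2004, 1998, 1995, 1987]
print(cited_half_life(example_years, citing_year=2022))  # prints 15.0
```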

A further comment about such metrics is warranted. In 2008, Monthly Weather Review was ranked 13th of 52 journals (25th percentile) by impact factor in the category “Meteorology & Atmospheric Sciences.” In 2021, Monthly Weather Review was ranked 44th of 108 journals (41st percentile). Is this drop real? Do we have less prestige now? Monthly Weather Review’s impact factor has increased during this time, but take a look at some of the journals that have been added to this category since 2008: Earth System Science Data; Earth’s Future; Communications Earth and Environment; Climate Risk Management; International Journal of Disaster Risk Reduction; Aerosol Science and Technology; Elementa: Science of the Anthropocene; Climate Services; Geoscience Letters; Ocean Science; Journal of Operational Oceanography; Geomatics, Natural Hazards and Risk; Nonlinear Processes in Geophysics; Geoscience Data Journal; Journal of Space Weather and Space Climate; City and Environment Interactions; Earth Systems and Environment; and Progress in Disaster Science. Although many of these journals publish some atmospheric-science research, most practitioners in meteorology and atmospheric science would not consider them atmospheric-science journals, raising questions about the suitability of the Web of Science categories and the meaning of such journal rankings. Yet, these types of journals are affecting the ranking of atmospheric-science journals, and their addition is one reason for the drop in the ranking of Monthly Weather Review. Despite their problems, bibliometrics will continue. Understanding the pros and cons of such metrics is the best way to avoid being led astray and to challenge those who use them in irresponsible ways.

Another change in the journals, and one that will continue in the future, has been the need to communicate scientific results more broadly than just to other specialists. When I became chief editor, I advocated for the publication of more review articles (Schultz 2008). I am pleased to report that Monthly Weather Review has published 15 review articles since 2008 (AMS 2022d), doubling the number since their start by Chief Editor Roger Pielke Sr. in 1983 (Schultz and Potter 2022, p. 35). The addition of significance statements to AMS journals (Huntington and Lackmann 2020; Schultz et al. 2020) and the creation of the @MonWeaRev Twitter account were also important steps in broadening our reach. I look forward to seeing how science communication changes in the future and how the AMS journals respond.

5. Conclusions

I conclude with a statement about why I volunteer for AMS. Not only does my volunteer work help maintain the standards of the discipline, but it also allows me to give back to the professional society rather than to for-profit publishers. Page charges can be thousands of dollars, and none of that money at AMS goes to editors or reviewers; we are all volunteers. In return, authors receive quality copyediting and hosting, with articles becoming open access after a year. Any surplus from publishing goes back into the Society rather than to corporate shareholders. Page charges also go toward maintaining the free archive of journal articles older than a year (Rauber 2017), allowing a policy of no page charges at the AMS journal Weather, Climate, and Society (Lynch 2016), offsetting publication charges for authors from countries who require waivers to publish, and advancing the educational mission of the Society. These are important missions that I am pleased to support.

In closing, I acknowledge the important contributions that the AMS Publications Department staff have made not only to Monthly Weather Review but to all of their journals over the years as the publication process has become increasingly automated and online. I thank them for their hard work on behalf of authors, reviewers, and editors. In particular, Gwendolyn Whittaker and Michael Friedman have been incredibly helpful and supportive to me and the other editors. I thank the recent commissioners of the AMS Publications Commission for their support and mentorship: David Jorgensen, Robert Rauber, and Anthony Broccoli. I thank the 47 editors, hundreds of associate editors, and thousands of reviewers associated with Monthly Weather Review with whom I have had the pleasure to work since 2004. I value their contributions to the successes of Monthly Weather Review. One of those editors has been Dr. Ron McTaggart-Cowan of Environment and Climate Change Canada, who I am pleased to announce will become the chief editor for the 151st volume. I thank him for his comments on this editorial, and I hope that he will find the role to be as educational and enjoyable as I have.

1 The flip side is those submissions for which one of the reviewers recommends rejection, but the editor decides upon revision, and the manuscript is eventually revised and accepted. Here, the editor’s experience and expertise are what determine that the manuscript is a potential contribution to the literature and that it can be improved and eventually published.

REFERENCES
