Tuesday, 21 November 2017

The fight for the future of science in Berlin



A group of scientists, scholars, data scientists, publishers and librarians gathered in Berlin to talk about the future of research communication. With the scientific literature being so central to science, one could also say the conference was about the future of science.

This future will be more open, transparent, findable, accessible, interoperable and reusable.


The open world of research from Mark Hooper on Vimeo.

Open and transparent sound nice, and most seem to assume that more is better. But openness can also be oppressive: it helps those powerful enough to have the resources to mine the information efficiently.

This is best known when it comes to government surveillance, which can be dangerous; states are powerful and responsible for the biggest atrocities in history. The right to vote in secret, to privacy, to organize and protections against unreasonable searches are fundamental protections against power abuse.

Powerful lobbies and political activists abuse transparency laws to harass inconvenient science.

ResearchGate, Google Scholar profiles and your ORCID page contribute to squeezing scientists like lemons by prominently displaying the number of publications and citations. This continual pressure can lead to burnout, less creativity and less risk taking. It encourages scientists to pick low-hanging fruit rather than do the studies they think would bring science forward the most. Besides this bad influence on publications, many other activities that are just as important for science suffer from this pressure. Many well-meaning people try to solve this by also quantifying those activities, but in doing so they only add more lemon presses.

Science is a creative profession (even if many scientists do not seem to realise this). You have good ideas when you relax under the shower, in bed with a fever or on a hike. The modern publish-or-perish system is detrimental to cognitive work: work that requires cognitive skills is performed worse when you pressure people; it needs autonomy, mastery and purpose.

Scientists work on the edge of what is known and invariably make mistakes. If you are not making mistakes you are not pushing your limits. This needs some privacy because, unfortunately, making mistakes is not socially acceptable for adults.



Chinese calligraphy with water on a stone floor. More ephemeral communication can lead to more openness, improve the exchange of views and produce more quality feedback.
Later in the process the ephemeral nature of a scientific talk requires deep concentration from the listener and is a loss for people not present, but early in a study that same ephemerality is a feature. Without the freedom to make mistakes there will be less exciting research and slower progress. Scientists are also human, and once an idea is fixed on "paper" it becomes harder to change, while the flexibility to update your ideas to the evidence is important and most needed in the early stages.

These technologies also have real benefits; for example, they make it easier to find related articles by the same author. A unique researcher identifier like ORCID especially helps when someone changes their name, or in countries like China where one billion people seem to share about 1000 unique names. But there is no need for ResearchGate to put the number of publications and citations in huge numbers on the main profile page. (The prominent number of followers on Twitter profile pages also makes it less sympathetic in my view and needlessly promotes competition and inequality. Twitter is not my work, so artificial competition is even more out of place there.)

Open review is a great option if you are confident about your work but fear that reviewers will be biased. Sometimes, however, it is hard to judge how good your work is, and it is nice to have someone discreetly point out problems with your manuscript. Especially in interdisciplinary work it is easy to miss something a peer reviewer would notice, while your network may not include someone from the other discipline you could ask to read the manuscript.

Once an article, code or dataset is published, it is fair game. That is the point where I support Open Science. For example, publishing Open Access is better than pay-walled. If there is a reasonable chance of re-use, publishing data and code helps science progress and should be rewarded.

Still I would not make a fetish out of it; I made the data available for my article on benchmarking homogenisation algorithms. This is an ISI highly-cited article, but I only know of one person having used the data. For less important papers publishing data can quickly be additional work without any benefits. I prefer nudging people towards Open Science over making it obligatory.

The main beneficiary of publishing data and code is your future self; no one is more likely to continue your work. This should be an important incentive. Another incentive is Open Science "badges": icons presented next to the article title indicating whether the study was preregistered and provides open data and open materials (code). The introduction of these badges in the journal Psychological Science quickly increased the percentage of articles with available data to almost 40%.

The conference was organised by FORCE11, a community interested in future research communication and e-scholarship. There are already a lot of tools for the open, findable and well-connected world of the future, but their adoption could go faster. So the theme of this year's conference was "changing the culture".

Open Access


Christopher Jackson; on the right. (I hope I am allowed to repeat his joke.)
One of the main addresses was by Christopher Jackson. He has published over 150 scientific articles, but only became aware of how weird the scientific publishing system is when he joined ResearchGate, a social network for scientists, and was not allowed to put many of his articles on it because the publishers hold the copyright and do not allow this.

The frequent requests for copies of his articles on ResearchGate also created an awareness of how many scientists have trouble accessing the scientific literature due to pay-walls.

Another keynote speaker, Diego Gómez, was threatened with up to eight years in jail for making a scientific article accessible. His university, the Universidad del Quindío in Colombia, spends more on licenses for scientific journals ($375,000) than on producing scientific knowledge itself ($253,000).



The lack of access to the scientific literature makes research in poorer countries a lot harder, but even I am regularly not able to download important articles and have to ask the authors for a copy or ask our library to order a photocopy elsewhere, although the University of Bonn is not a particularly poor university.

Non-scientists may also benefit from being able to read scientific articles, although when it is important I would prefer to consult an expert over mistakenly thinking I got the gist of an article in another field. Sometimes a copy of the original manuscript can be found on one of the authors' homepages or in a repository. Google (Scholar) and the really handy browser add-on Unpaywall, which uses an Open Access DOI database, can help find those copies.

Sharing passwords and Sci-Hub are also solutions, but illegal ones. The real solutions to making research more accessible are Open Access publishing and repositories for manuscripts. By now about half of recently published articles are Open Access, and at this pace all articles would be Open Access by 2040. Interestingly, the largest fraction of the publicly available articles does not have an Open Access license; this is also called bronze Open Access. It means that the download possibility could be revoked again.

The US National Institutes of Health and the European Union mandate that the research they support is published Open Access.

A problem with Open Access journals can be that some are only interested in the publication fees and do not care about the quality. These predatory journals are bad for the reputation of real Open Access journals, especially in the eyes of the public.

I have a hard time believing that the authors do not know that these journals are predatory. Next to the sting operations that reveal that certain journals will publish anything, it would be nice to also have sting predatory journals that openly email authors that they will accept any trash, to see whether that scares the authors away.

Jeffrey Beall used to keep a list of predatory journals, but had to stop after legal pressure from these frauds. The publishing firm Cabell now launched their own proprietary (pay-walled) blacklist, which already has 6000 journals and is growing fast.

Preprint repositories

Before a manuscript is submitted to a journal, the authors naturally still hold the copyright. They can thus upload the manuscript to a database, a so-called preprint or institutional repository. Unfortunately some publishers claim this constitutes publishing the manuscript and refuse to consider it because it is no longer new. However, most publishers accept the publication of the manuscript in the version it had before submission. A smaller number are also okay with the final version being published on the authors' homepages or in repositories.

Where a good option for an Open Access journal exists we should really try to use it. Where it is allowed, we should upload our manuscripts to repositories.

Good news for the readers of this blog is that a repository for the Earth Sciences was opened last week: EarthArXiv. This fall, the AGU will also demonstrate its preprint repository at the AGU Fall meeting. For details see my previous post.  EarthArXiv already has 15 climate related preprints.

This November a new OSF archive also started: MarXiv, not for Marxists, but for the marine-conservation and marine-climate sciences.
    When we combine the repositories with peer review organised by the scientific community itself, we will no longer need pay-walling scientific publishers. This can be done in a much more informative way than now, where the reader only knows that the paper was apparently good enough for the journal, but not why it is a good article nor how it fits into the (later published) literature. With Grassroots scientific publishing we can do a much better job.

    One way the reviews at a Grassroots journal can be better is by openly assessing the quality of the work. Now all we know is that the study was sufficiently interesting for some journal at that time, for whatever reason. What I did not realise before Berlin is how much reviewing time this wastes. Traditional journals spend reviewing resources on manuscripts that are valid but get rejected because they are seen as not important enough for the journal. For example, Frontiers reviews 2.4 million manuscripts and has to bounce about 1 million valid papers.

    On average scientists pay $5,000 per published article, while scientists do most of the work for free (writing, reviewing, editing) and the actual costs are a few hundred dollars. The money we save could be used for research. In the light of these numbers it is actually amazing that Elsevier only makes a profit of 35 to 50%. I guess the CEO salary eats into the profits.

    Preprints would also have the advantage of making studies available faster. Open Access makes text and data mining easier, which helps in finding all articles on molecule M or receptor R. The first publishers are using text mining and artificial intelligence to suggest suitable peer reviewers to their editors. (I would prefer editors who know their field.) Text mining would also help in detecting plagiarism and even statistical errors.

    (Before our machine overlords find out, let me admit that I did not always write the model description of the weather prediction model I used from scratch.)



    Impact factors

    Another issue Christopher Jackson highlighted is the madness of the Journal Impact Factor (JIF or IF). It measures how often an average article in a journal is cited in the first two or five years after publication. That is quite useful for librarians to get an overview of which journals to subscribe to. The problem begins when the impact factor is used to determine the quality of a journal or of the articles in it.
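    For concreteness, here is a minimal sketch of the standard two-year calculation; the numbers below are invented for illustration, not real journal data.

```python
def two_year_impact_factor(citations, published):
    """Journal Impact Factor for year Y (two-year variant):
    citations received in year Y to items from years Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    return (citations["Y-1"] + citations["Y-2"]) / (published["Y-1"] + published["Y-2"])

# Invented numbers for illustration only.
print(two_year_impact_factor(citations={"Y-1": 150, "Y-2": 180},
                             published={"Y-1": 60, "Y-2": 70}))  # about 2.5
```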

    How common this is, is actually something I do not know. For my own field I would think I have a reasonable feeling for the quality of the journals, which is independent of the impact factor. More focussed journals tend to have smaller impact factors, but that does not mean they are less good. Boundary-Layer Meteorology is certainly not worse than the Journal of Geophysical Research; the former has an Impact Factor of 2.573, the latter of 3.454. If you did a boundary layer study, it would be madness to publish it in a more general geophysical journal where the chance is smaller that the relevant colleagues will read it. Climate journals will have higher impact factors than meteorological journals because meteorologists mainly cite each other, while many sciences build on climatology. When the German meteorological journal MetZet was still a pay-walled journal it had a low impact factor because few outside Germany had a subscription, but the quality of the peer review and the articles was excellent.

    I would hope that reviewers making funding and hiring decisions know the journals in their field, take these kinds of effects into account and read the articles themselves. The [[San Francisco Declaration on Research Assessment]] (DORA) rejects the use of the impact factor. In Germany it is officially forbidden to judge individual scientists and small groups based on bibliographic measures such as the number of articles times the impact factor of the journals, although I am not sure everybody knows this. Imperial College recently adopted similar rules:
    “the College should be leading by example by signalling that it assesses research on the basis of inherent quality rather than by where it is published”
    “eliminate undue reliance on the use of journal-based metrics, such as JIFs, in funding, appointment, and promotion considerations”
    The relationship between the number of citations an article can expect and the impact factor is weak because there is enormous spread. Jackson showed this figure.



    This could well be a feature and not a bug. We would like to measure quality, not estimate the (future) number of citations of an article. For my own articles, I do not see much correlation between my subjective quality assessment and the number of citations. Which journal you can get into may well be a better quality measure than individual citations. (The best assessment is reading articles.)

    The biggest problem is when journals, often commercial entities, start optimising for the number of citations rather than for quality. There are many ways to get more citations, and thus a higher impact factor, other than doing the best possible quality control. An article that reviews the state of a scientific field typically gets a lot of citations, especially if written by the main people in the field; nearly every article will mention it in the introduction. Review papers are useful, but we do not need a new one every year. Articles with many authors typically get more citations. Articles on topics many scientists work on get more citations. For Science and Nature it is important to get coverage in the mainstream press, which is also read by scientists and leads to more citations.

    Reading articles is naturally work. I would suggest reducing the number of reviews.

    Attribution, credit

    Traditionally one gets credit for scientific work by being author of a scientific paper. However, with increased collaboration and interdisciplinary work author lists have become longer and longer. Also the publish or perish system likely contributed: outsourcing part of the work is often more efficient than doing it yourself, while the person doing a small part of the analysis is happy to have another paper on the publish or perish list.

    What is missing from such a system is credit for a multitude of other important tasks. How does one value non-traditional outputs supplied by researchers: code, software, data, design, standards, models, MOOC lectures, newspaper articles, blog posts, community-engaged research and citizen science? Someone even mentioned musicals.

    A related question is who should be credited: technicians, proposal writers, data providers? As far as I know it would be illegal to put people in such roles on the author list, but they do work that is important, needs to be done and thus needs to be credited somehow. A work-around is to invite them to help edit the manuscript, but it would be good to have systems where various roles are credited. Designing such a system is hard.

    One is tempted to make such a credit system very precise, but ambiguity also has its advantages in dealing with the messiness of reality. I once started a study with one colleague. Most of this study did not work out and the final article was only about a part of it. A second colleague helped with that part. For the total work the first colleague had done more, for the part that was published the second one had. Both justifiably felt that they should be second author. Do you get credit for the work or for the article?

    Later the colleague who had become third author of this paper wrote another study where I helped. It was clear that I should have been the second author, but in retaliation he made me the third author. The second author wrote several emails that this was insane, not knowing what was going on, but to no avail. A too precise credit system would leave no room for such retaliation tactics to clear the air for future collaborations.

    In one session various systems of credit "badges" were shown and tried out. What seemed to work best was a short description of the work done by every author, similar to the detailed credit roll at the end of a movie.

    This year a colleague wrote on a blog that he did not agree with a sentence of an article he was an author of. I did not know that was possible; in my view authors are responsible for the entire article. Maybe we should split the author list into authors who vouch with their name and reputation for the quality of the full article and honorary authors who only contributed a small part. This colleague could then be an honorary author.

    LinkedIn endorsements were criticised because they are not transparent and because they make it harder to change your focus, as the old endorsements and contacts stick.

    Pre-registration

    Some fields of study have trouble replicating published results. These are mostly empirical fields where single studies — to a large part — stand on their own and are not woven together by a net of theories.

    One of the problems is that only interesting findings are published, and if no effect is found the study is aborted. In a field with strong theoretical expectations, finding no effect when one is expected is also interesting; but if no one expected a relationship between A and B, finding no relationship between A and B is not interesting.

    This becomes a problem when there is no relationship between A and B, but multiple experiments/trials are made and some find a fluke relationship by chance. If only those get published, that gives a wrong impression. This problem can be tackled by registering trials before they are made, which is becoming more common in medicine.

    A related problem is p-hacking and hypothesis generation after results are known (HARKing). A relationship that would be statistically significant if only one outlier were removed makes it tempting to find a reason why the outlier is a measurement error and should be removed.

    Similarly, the data can be analysed in many different ways to study the same question, one of which may be statistically significant by chance. This is also called "researcher degrees of freedom" or "the garden of forking paths". The Center for Open Science has made a tool with which you can pre-register your analysis before the data is gathered or analysed, to reduce the freedom to falsely obtain significant results this way.
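    A toy simulation shows the size of the problem: even when there is no real effect, trying enough analysis variants and reporting the best-looking one produces "significant" results far more often than the nominal 5%. This is only an illustrative sketch, not a model of any particular study; in practice the analysis variants reuse the same data and are correlated, but the multiplicity problem is the same in kind.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_studies, n_variants, n_samples = 1000, 20, 30
false_positive_studies = 0

for _ in range(n_studies):
    # Two groups drawn from the same distribution: there is no true effect.
    group_a = rng.normal(size=(n_variants, n_samples))
    group_b = rng.normal(size=(n_variants, n_samples))
    # Treat each row as a different "reasonable" analysis choice
    # (outlier rules, subgroups, transformations, ...).
    p_values = [stats.ttest_ind(group_a[i], group_b[i]).pvalue
                for i in range(n_variants)]
    if min(p_values) < 0.05:  # report only the best-looking variant
        false_positive_studies += 1

# Roughly 0.6 instead of the nominal 0.05.
print(false_positive_studies / n_studies)
```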



    A beautiful example of the different answers one can get when analysing the same data for the same question. I found this graph via a FiveThirtyEight article, which is also otherwise highly recommended: "Science Isn't Broken. It's just a hell of a lot harder than we give it credit for."

    These kinds of problems may be less severe in the natural sciences, but avoiding them can still make the science more solid. Before Berlin I was hesitant about pre-registering analyses because in my work every analysis is different, which makes it harder to know in detail in advance how the analysis should go; there are also outliers that legitimately need to be removed, selecting the best study region needs a look at the data, and so on.

    However, what I did not realise, although it is quite trivial, is that you can do the pre-registered analysis and also additional analyses, and simply mark them as such. So if you can do a better analysis after looking at the data, you can still do so. One of the problems of pre-registration is that quite often people do not do the analysis in the registered way and reviewers mostly do not check this.

    In the homogenisation benchmarking study of the ISTI we will describe the assessment measures in advance. This is mostly because the benchmarking participants have a right to know how their homogenisation algorithms will be judged, but it can also be seen as pre-registration of the analysis.

    To stimulate the adoption of pre-registration, the Center for Open Science has designed Open Science badges, which can be displayed with the articles meeting the criteria. The pre-registration has to be done at an external site where the text cannot be changed afterwards. The pre-registration can be kept undisclosed for up to two years. To get things started they even award 1,000 prizes of $1,000 for pre-registered studies.

    The next step would be journals that review "registered reports", which are peer reviewed before the results are in. This should stimulate the publication of negative (no effect found) results. (There is still a final review when the results are in.)

    Quick hits

    Those were the main things I learned, now some quick hits.

    With an [[annotation system]] you can add comments to all web pages and PDF files. People may know annotation from Hypothes.is, which is used by Climate Feedback to add comments to press articles on climate change. A similar initiative is PaperHive. PaperHive sells its system as collaborative reading and showed an example of students jointly reading a paper for class, annotating difficult terms and passages. It additionally provides channels for private collaboration, literature management and search. It has also already been used for the peer review (proof reading) of academic books. Both now have groups/channels that allow groups to make or read annotations, as well as private annotations, which can be used for your own paper archive. Web annotations aimed at the humanities are made by Pund.it.

    Since February this year, web annotation is a World Wide Web Consortium (W3C) standard. This will hopefully mean that web browsers will start including annotation in their default configuration and it will be possible to comment on every web page. This will likely lead to public annotation streams going down to the level of YouTube comments. Also for the public channels some moderation will be needed, for example to combat doxing. PaperHive is a German organisation and thus removes hate speech.
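    To give a rough idea of what such an annotation looks like under the W3C Web Annotation Data Model, the sketch below builds one as a plain dictionary; the target page and the quoted sentence are invented examples.

```python
import json

# A minimal annotation following the W3C Web Annotation Data Model.
# The target URL and the quoted sentence are invented for illustration.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "body": {
        "type": "TextualBody",
        "format": "text/plain",
        "value": "This sentence misreads the cited study.",
    },
    "target": {
        "source": "https://example.org/news/some-climate-article",
        "selector": {
            "type": "TextQuoteSelector",
            "exact": "global temperatures have not risen since 1998",
        },
    },
}

print(json.dumps(annotation, indent=2))
```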

    Peer Community In (PCI) is a system to collaboratively peer review manuscripts, which can later be sent to an official journal.

    The project OpenUp studied a large number of Open Peer Review systems and their pros and cons.

    Do It Yourself Science. Not sure it is science, but great when people are having fun with science. When the quality level is right, you could say it is citizen science led by the citizens themselves. (What happened to the gentlemen scientists?)

    Philica: Instant academic publishing with transparent peer-review.



    Unlocking references from the literature: The Initiative for Open Citations. See also their conference abstract.

    I never realised there was an organisation behind the Digital Object Identifiers for scientific articles: CrossRef. It is a collaboration of about eight thousand scientific publishers. For other digital sources there are other organisations, while the main system is run by the International DOI Foundation. DOIs for data are handled, amongst others, by DataCite. CrossRef is working on a system where you can also see the web pages that cite scientific articles, what they call "event data". For example, this blog has cited 142 articles with a DOI. CrossRef will also take web annotations into account.
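    CrossRef also runs a public REST API through which anyone can look up the metadata behind a DOI. A minimal sketch; the DOI below is the homogenisation benchmarking paper mentioned elsewhere on this blog, and which fields come back depends on what the publisher deposited.

```python
import requests

# Look up one DOI via the public CrossRef REST API.
doi = "10.5194/cp-8-89-2012"
response = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
response.raise_for_status()
work = response.json()["message"]

print(work["title"][0])                     # article title
print(work["container-title"][0])           # journal name
print(work.get("is-referenced-by-count"))   # citation count known to CrossRef
```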

    Climate science was well represented at this conference. There were posters on open data for the Southern Ocean and on the data citation of the CMIP6 climate model ensemble. Shelley Stall of AGU talked about making FAIR and Open data the default for Earth and space science. (Et moi.)



    In the Life Sciences they are trying to establish "micro publications", the publication of a small result or dataset, several of which can then later be combined with a narrative into a full article.

    A new Open Science Journal: Research Ideas and Outcomes (RIO), which publishes all outputs along the research cycle, from research ideas, proposals, to data, software and articles. They are interested in all areas of science, technology, humanities and the social sciences.

    Collaborative writing tools are coming of age, for example Overleaf for people using LaTeX. Google Docs and Microsoft Word Live also do the trick.

    Elsevier was one of the sponsors. Their brochure suggests they are one of the nice guys, serving humanity with cutting-edge technology.

    The Web of Knowledge/Science (a more selective version of Google Scholar) moved from Thomson Reuters to Clarivate Analytics, together with the Journal Citation Reports that computes the Journal Impact Factors.

    Publons has set up a system where researchers can get public credit for their (anonymous) peer reviews. It is hoped that this stimulates scientists to do more reviews.

    As part of Wikimedia, best known for Wikipedia, people are building up a multilingual database with facts: Wikidata. As in Wikipedia, volunteers build up the database and sources need to be cited to make sure the facts are right. People are still working on software to make contributing easier for people who are not data scientists and do not dream of the semantic web every night.

    Final thoughts

    For a conference about science, there was relatively little science. One could have run a randomized controlled trial to see the influence of publishing your manuscript on a preprint server. Instead, the estimated citation advantage for articles also submitted to ArXiv (18%) was based on observational data, and the difference could simply be that scientists put more work into spreading their best articles.

    The data manager at CERN argued that close collaboration with the scientists can help in designing interfaces that promote the use of Open Science tools. Sometimes small changes produce large increases in adoption of tools. More research into the needs of scientists could also help in creating the tools in a way that they are useful.

    Related reading, resources

    The easiest access to the talks of the FORCE2017 conference is via the "collaborative note taking" Google Doc

    Videos of last year's FORCE conference

    Peer review

    The Times Literary Supplement: Peer review: The end of an error? by ArXiving mathematician Timothy Gowers

    Peer review at the crossroads: overview over the various open review options, advantages and acceptance

    Jon Tennant and many colleagues: A multi-disciplinary perspective on emergent and future innovations in peer review

    My new idea: Grassroots scientific publishing

    Pre-prints

    The Earth sciences no longer need the publishers for publishing
     
    ArXivist. A Machine Learning app that suggest the most relevant new ArXiv manuscripts in a daily email

    The Stars Are Aligning for Preprints. 2017 may be considered the 'year of the preprint'

    Open Science


    The State of OA: A large-scale analysis of the prevalence and impact of Open Access articles

    Open Science MOOC (under development) already has an extensive resources page

    Metadata2020: Help us improve the quality of metadata for research. They are interested in metadata important for discoverability and reuse of data

    ‘Kudos’ promises to help scientists promote their papers to new audiences. For example with plain-language summaries and tools measure which dissemination actions were effective

    John P. A. Ioannidis and colleagues: Bibliometrics: Is your most cited work your best? Survey finds that highly cited authors feel their best work is among their most cited articles. It is the same for me, but looking at all my articles the correlation is not strong

    Lorraine Hwang and colleagues in Earth and Space Science: Software and the scientist: Coding and citation practices in geodynamics, 2017


    Saturday, 7 October 2017

    A short history of homogenisation of climate station data



    The WMO Task Team on Homogenisation (TT-HOM) is working on guidance for scientists and weather services who want to homogenise their data. I thought the draft chapter on the history of homogenisation would double as a nice blog post. It is a pretty long history, starting well before people were worrying about climate change. Comments and other important historical references are very much appreciated.

    Problems due to inhomogeneities have long been recognised and homogenisation has a long history. In September 1873, at the "International Meteorologen-Congress" in Vienna, Carl Jelinek requested information on national multi-annual data series ([[k.k.]] Hof- und Staatsdruckerei, 1873). Decades later, in 1905, G. Hellmann (k.k. Zentralanstalt für Meteorologie und Geodynamik, 1906) still regretted the absence of homogeneous climatological time series due to changes in the surroundings of stations and new instruments, and pleaded for stations with a long record, "Säkularstationen", to be kept as homogeneous as possible.

    Although this "Conference of Directors" of the national weather services recommended maintaining a sufficient number of stations under unchanged conditions, these basic inhomogeneity problems still exist today.

    Detection and adjustments

    Homogenisation has a long tradition. In early times, for example, documented change points were removed with the help of parallel measurements. Differing observing times at the astronomical observatory of the k.k. University in Vienna (Austria) were adjusted using multi-annual 24-hour measurements of the astronomical observatory of the k.k. University of Prague (today Czech Republic). Measurements of Milano (Italy) between 1763 and 1834 were adjusted to 24-hour means by using measurements of Padova (Kreil, 1854a, 1854b).

    However, for the majority of breaks we do not know the break magnitude; furthermore, it is most likely that series contain undocumented inhomogeneities as well. Thus there was a need for statistical break detection methods. In the early 20th century Conrad (1925) applied and evaluated the Heidke criterion (Heidke, 1923) using ratios of two precipitation series. As a consequence he recommended the use of additional criteria to test the homogeneity of series, dealing with the succession and alternation of algebraic signs: the Helmert criterion (Helmert, 1907) and the "painstaking" Abbe criterion (Conrad and Schreier, 1927). The use of Helmert's criterion for pairs of stations and of Abbe's criterion was still described as an appropriate tool in the 1940s (Conrad, 1944). Some years later the double-mass principle was popularised for break detection (Kohler, 1949).


    German Climate Reference Station which was founded in 1781 in Bavaria on the mountain Hohenpeißenberg.

    Reference series

    Julius Hann (1880, p. 57) studied the variability of absolute precipitation amounts and of ratios between stations, and used these ratios for quality control. This inspired Brückner (1890) to check precipitation data for inhomogeneities by comparison with neighbouring stations, although he did not use any statistics.

    In their book “Methods in Climatology” Conrad and Pollak (1950) formalised this relative homogenization approach, which is now the dominant method to detect and remove the effects of artificial changes. The building of reference series, by averaging the data from many stations in a relatively small geographical area, has been recommended by the WMO Working Group on Climatic Fluctuations (WMO, 1966).
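    In modern terms, a composite reference is simply a weighted average of neighbouring series, and the candidate is compared to it as a difference series (temperature) or ratio series (precipitation). A minimal sketch; the correlation-based weights are one common choice, not the only one.

```python
import numpy as np

def composite_reference(candidate, neighbours):
    """Weighted average of neighbouring series as a reference for the candidate.
    Weights are squared correlations of the first-difference series; distance-based
    weights or kriging are common alternatives."""
    dc = np.diff(candidate)
    weights = np.array([np.corrcoef(dc, np.diff(nb))[0, 1] ** 2 for nb in neighbours])
    weights /= weights.sum()
    return np.average(np.vstack(neighbours), axis=0, weights=weights)

# The difference series candidate - composite_reference(...) is then tested for breaks.
```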

    The papers by Alexandersson (1986) and Alexandersson and Moberg (1997) made the Standard Normal Homogeneity Test (SNHT) popular. The broad adoption of SNHT was also due to the clear guidance on how to use this test together with reference series to homogenise station data.
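    As a rough sketch of the single-shift SNHT: the difference (or ratio) series relative to the reference is standardised, the statistic T(k) = k*mean(z[1..k])^2 + (n-k)*mean(z[k+1..n])^2 is maximised over all possible break positions k, and the maximum is compared to tabulated critical values, which are left out of this sketch.

```python
import numpy as np

def snht_single_break(q):
    """Standard Normal Homogeneity Test for a single shift.
    q: candidate minus (or divided by) reference series, as a numpy array.
    Returns the most likely break position and the test statistic;
    the statistic still has to be compared to critical values."""
    z = (q - q.mean()) / q.std(ddof=1)          # standardise the series
    n = len(z)
    t = np.array([k * z[:k].mean() ** 2 + (n - k) * z[k:].mean() ** 2
                  for k in range(1, n)])
    k_best = int(np.argmax(t)) + 1              # break after element k_best
    return k_best, float(t.max())

# Example: a series with an artificial shift of one standard deviation halfway.
rng = np.random.default_rng(0)
series = rng.normal(size=100)
series[50:] += 1.0
print(snht_single_break(series))
```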

    Modern developments

    SNHT is a single-breakpoint method, but climate series typically contain more than one break. A major step forward was therefore the development of methods specifically designed to detect and correct multiple change-points and to work with inhomogeneous references (Szentimrey, 1999; Mestre, 1999; Caussinus and Mestre, 2004). These kinds of methods were shown to be more accurate in the benchmarking study of the EU COST Action HOME (Venema et al., 2012).

    The paper by Caussinus and Mestre (2004) also provided the first description of a method that jointly corrects all series of a network simultaneously. This joint correction method was able to improve the accuracy of all but one contribution to the HOME benchmark that was not yet using this approach (Domonkos et al., 2013).

    The ongoing work to create appropriate datasets for climate variability and change studies promoted the continual development of better methods for change point detection and correction. To follow this process the Hungarian Meteorological Service started a series of “Seminars for Homogenization” in 1996 (HMS 1996, WMO 1999, OMSZ 2001, WMO 2004, WMO 2006, WMO 2010).

    Related reading

    Homogenization of monthly and annual data from surface stations
    A short description of the causes of inhomogeneities in climate data (non-climatic variability) and how to remove it using the relative homogenization approach.
    Statistical homogenisation for dummies
    A primer on statistical homogenisation with many pictures.
    Just the facts, homogenization adjustments reduce global warming
    Many people only know that climatologists increase the land surface temperature trend, but do not know that they also reduce the ocean surface trend and that the net effect is a reduction of global warming. This does not fit too well with the conspiracy theories of the mitigation sceptics.
    Five statistically interesting problems in homogenization
    Series written for statisticians and climatologists looking for interesting problems.
    Why raw temperatures show too little global warming
    The raw land surface temperature probably shows too little warming. This post explains the reasons why: thermometer screen changes, relocations and irrigation.
    New article: Benchmarking homogenization algorithms for monthly data
    Raw climate records contain changes due to non-climatic factors, such as relocations of stations or changes in instrumentation. This post introduces an article that tested how well such non-climatic factors can be removed.

    References

    Alexandersson, H., 1986: A homogeneity test applied to precipitation data. J. Climatol., 6, pp. 661-675.
    Alexandersson, H. and A. Moberg, 1997: Homogenization of Swedish temperature data. Part I: Homogeneity test for linear trends. Int. J. Climatol., 17, pp. 25-34.
    Brückner, E., 1890: Klimaschwankungen seit 1700 nebst Bemerkungen über Klimaschwankungen der Diluvialzeit. E.D. Hölzel, Wien and Olmütz.
    Caussinus, H. and O. Mestre, 2004: Detection and correction of artificial shifts in climate series. Appl. Statist., 53, Part 3, pp. 405-425.
    Conrad, V. and C. Pollak, 1950: Methods in Climatology. Harvard University Press, Cambridge, MA, 459 p.
    Conrad V., O. Schreier, 1927: Die Anwendung des Abbe’schen Kriteriums auf physikalische Beobachtungsreihen. Gerland’s Beiträge zur Geophysik, XVII, 372.
    Conrad, V., 1925: Homogenitätsbestimmung meteorologischer Beobachtungsreihen. Meteorologische Zeitschrift, 482–485.
    Conrad V., 1944: Methods in Climatology. Harvard University Press, 228 p.
    Domonkos, P., V. Venema, O. Mestre, 2013: Efficiencies of homogenisation methods: our present knowledge and its limitation. Proceedings of the Seventh seminar for homogenization and quality control in climatological databases, Budapest, Hungary, 24 – 28 October 2011, WMO report, Climate data and monitoring, WCDMP-No. 78, pp. 11-24.
    Hann, J., 1880: Untersuchungen über die Regenverhältnisse von Österreich-Ungarn. II. Veränderlichkeit der Monats- und Jahresmengen. S.-B. Akad. Wiss. Wien.
    Heidke P., 1923: Quantitative Begriffsbestimmung homogener Temperatur- und Niederschlagsreihen. Meteorologische Zeitschrift, 114-115.
    Helmert F.R., 1907: Die Ausgleichrechnung nach der Methode der kleinsten Quadrate. 2. Auflage, Teubner Verlag.
    Peterson T.C., D.R. Easterling, T.R. Karl, P. Groisman, N. Nicholls, N. Plummer, S. Torok, I. Auer, R. Boehm, D. Gullett, L. Vincent, R. Heino, H. Tuomenvirta, O. Mestre, T. Szentimrey, J. Salinger, E.J. Forland, I. Hanssen-Bauer, H. Alexandersson, P. Jones, D. Parker, 1998: Homogeneity adjustments of in situ atmospheric climate data: A review. Int. J. Climatol., 18, 1493-1517.
    Hungarian Meteorological Service (HMS), 1996: Proceedings of the First Seminar for Homogenization of Surface Climatological Data, Budapest, Hungary, 6-12 October 1996, 44 p.
    Kohler M.A., 1949: Double-mass analysis for testing the consistency of records and for making adjustments. Bull. Amer. Meteorol. Soc., 30: 188 – 189.
    k.k. Hof- und Staatsdruckerei, 1873: Bericht über die Verhandlungen des internationalen Meteorologen-Congresses zu Wien, 2.-10. September 1873, Protokolle und Beilagen.
    k.k. Zentralanstalt für Meteorologie und Geodynamik, 1906: Bericht über die internationale meteorologische Direktorenkonferenz in Innsbruck, September 1905. Anhang zum Jahrbuch 1905. k.k. Hof- und Staatsdruckerei.
    Kreil K., 1854a: Mehrjährige Beobachtungen in Wien vom Jahre 1775 bis 1850. Jahrbücher der k.k. Central-Anstalt für Meteorologie und Erdmagnetismus. I. Band – Jg 1848 und 1849, 35-74.
    Kreil K., 1854b: Mehrjährige Beobachtungen in Mailand vom Jahre 1763 bis 1850. Jahrbücher der k.k. Central-Anstalt für Meteorologie und Erdmagnetismus. I. Band – Jg 1848 und 1849, 75-114.
    Mestre O., 1999: Step-by-step procedures for choosing a model with change-points. In Proceedings of the second seminar for homogenisation of surface climatological data, Budapest, Hungary, WCDMP-No.41, WMO-TD No.962, 15-26.
    OMSZ, 2001: Third Seminar for Homogenization and Quality Control in climatological Databases, Budapest.
    Szentimrey, T., 1999: Multiple Analysis of Series for Homogenization (MASH). Proceedings of the second seminar for homogenization of surface climatological data, Budapest, Hungary; WMO, WCDMP-No. 41, 27-46.
    Venema, V., O. Mestre, E. Aguilar, I. Auer, J.A. Guijarro, P. Domonkos, G. Vertacnik, T. Szentimrey, P. Stepanek, P. Zahradnicek, J. Viarre, G. Müller-Westermeier, M. Lakatos, C.N. Williams,
    M.J. Menne, R. Lindau, D. Rasol, E. Rustemeier, K. Kolokythas, T. Marinova, L. Andresen, F. Acquaotta, S. Fratianni, S. Cheval, M. Klancar, M. Brunetti, Ch. Gruber, M. Prohom Duran, T. Likso,
    P. Esteban, Th. Brandsma. Benchmarking homogenization algorithms for monthly data. Climate of the Past, 8, pp. 89-115, doi: 10.5194/cp-8-89-2012, 2012. See also the introductory blog post and a post on the weaknesses of the study.
    WMO, 1966: Climatic Change, Report of a working group of the Commission for Climatology. Technical Note 79, WMO – No. 195. TP.100, 79 p.
    WMO 1999: Proceedings of the Second Seminar for Homogenization of Surface Climatological Data, Budapest, Hungary, 9 – 13 November 1998, 214 p.
    WMO, 2004: Fourth Seminar for Homogenization and Quality Control in Climatological Databases, Budapest, Hungary, 6-10 October 2003, WCDMP-No 56, WMO-TD No. 1236, 243 p.
    WMO, 2006: Proceedings of the Fifth Seminar for Homogenization and Quality Control in Climatological Databases, Budapest, Hungary, 29 May – 2 June 2006. Climate Data and Monitoring WCDMP- No 71, WMO/TD- No. 1493.
    WMO, 2010: Proceedings of the Meeting of COST-ES0601 (HOME) Action, Management Committee and Working groups and Sixth Seminar for Homogenization and Quality Control in Climatological Databases, Budapest, Hungary, 26 – 30 May 2008, WMO reports on Climate Data and Monitoring, WCDMP-No. 76.

    Sunday, 1 October 2017

    The Earth sciences no longer need the publishers for publishing



    Manuscript servers are buzzing around our ears, as the Dutch say.

    In physics it is common to put manuscripts on the ArXiv server (pronounced: archive server). A large part of these manuscripts are later sent to a scientific journal for peer review, following the traditional scientific quality control system and assessment of the importance of studies.

    This speeds up the dissemination of scientific studies and can promote informal peer review before the formal peer review. The copyright of a manuscript has not yet been transferred to a publisher, so this also makes the research available to all without pay-walls. Because the manuscripts are expected to be published on paper in a journal later, ArXiv is called a pre-print server. In these modern times I prefer the term manuscript server.

    The manuscript gets a time stamp, so a pre-print server can be used to claim precedence, although the date of journal publication is traditionally used for this and there are no rules about which date is most important. Pre-print servers can also give the manuscript a Digital Object Identifier (DOI) that can be used to cite it. A problem could be that some journals see a pre-print as prior publication, but I am not aware of any such journals in the atmospheric sciences; if you know of one, please leave a comment below.

    ArXiv has a section for atmospheric physics, where I also uploaded some manuscripts as a young clouds researcher. However, because most meteorologists did not participate, it could not perform the same function as it does in physics; I never got any feedback based on these manuscripts. When ArXiv made uploading manuscripts harder to get rid of submissions by retired engineers, I stopped and just put the manuscripts on my homepage.

    Three manuscript archives

    Maybe the culture will now change and more scientists will participate, with three new initiatives for manuscript servers in the Earth sciences. All three follow a different concept.

    This August a digital archive started for Paleontology (paleorXiv, twitter). If I see it correctly they already have 33 manuscripts. (Only a part of them are climate related.) This archive builds on the open source preprint server of the Open Science Framework (OSF) of the non-profit Center for Open Science. The OSF is a platform for the entire scientific workflow from idea, to coding and collaboration to publishing. Also other groups are welcome to make a pre-print archive using their servers and software.

    [UPDATE. Just announced that in November a new ArXiv will start: MarXiv, not for Marxists, but for the marine-conservation and marine-climate sciences.]

    Two initiatives have just started for all of the Earth sciences. One grassroots initiative (EarthArXiv) and one by AGU/Wiley (ESSOAr).

    EarthArXiv will also be based on the open source solution of the Open Science Framework. It is not up yet, but I presume it will look a lot like paleorXiv. It seems to catch on, with about 600 Twitter followers and about 100 volunteers in just a few days. They are working on a logo (requirements, competition). Most logos show the globe; I would include the study of other planets in the Earth sciences.

    The American Geophysical Union (AGU) has announced plans for an Earth and Space Science Open Archive (ESSOAr), which should be up and running early next year. They plan to be able to show a demo at the AGU's fall meeting in December.

    The topic would thus be somewhat different due to the inclusion of space science, and they will also permanently archive posters presented at conferences. That sounds really useful; now every conference designs its own solution and the posters and presentations are often lost after some time when the homepage goes down. EarthArXiv unfortunately seems to be against hosting posters. ESSOAr would also make it easy to transfer manuscripts to (AGU?) journals.

    A range of other academic societies are on the "advisory board" of ESSOAr, including EGU. ESSOAr will be based on proprietary software of the scientific publisher Wiley. Proprietary software is a problem for something that should function for as close to an eternity as possible. Not only Wiley, but also the AGU itself is a major scientific publisher. They are not Elsevier, but this quickly leads to conflicts of interest. It would be better to have an independent initiative.

    There need not be any conflict between the two "duelling" (according to Science) servers. The manuscripts are open access and I presume they will have an API that makes it possible to mirror manuscripts of one server on the other. The editors could then remove the ones they do not see as fitting their standards (or not waste their time on this). Beyond esoteric (WUWT & Co.) nonsense, I would prefer not to have many standards; that is the idea of a manuscript server.



    Paul Voosen of Science magazine wonders whether "researchers working in more sensitive areas of the geosciences, such as climate science, will embrace posting their work prior to peer review." I see no problem there. There is nothing climate scientists can do to pacify the American culture war; we should thus do our job as well as possible, and my impression is that climatology is easily in the better half of the Open Science movement.

    I love to complain about it, but my impression is that sharing data is more common in the atmospheric sciences than on average. This could well be because it is more important: data is needed from all over the world, and the World Meteorological Organization was one of the first global organizations set up to coordinate this. The European Geosciences Union (EGU) has had open review journals for more than 15 years; the initial publication in a "discussion" journal is similar to putting your manuscript on a pre-print server. Many of the contributions to the upcoming FORCE2017 conference on Research Communication and e-Scholarship that mention a topic are about climate science.

    The road to Open Access

    A manuscript server is one step on the way to an Open Access publishing future. This would make articles better accessible to researchers and the public who paid for it.

    Open Access would break the monopoly given to scientific publishers by copyright laws. An author looking for a journal to publish his work can compare price and service. But a reader typically needs to read one specific article and then has to deal with a publisher with monopoly power. This has led to monopolistic profits and to commercial publishers that have lost touch with their customers, the scientific community. That Elsevier has a profit margin of "only" 36 percent thus seems to be mismanagement; it should be close to 100 percent.



    ArXiv shows that publishing a manuscript costs less than a dollar per article. Software to support the peer review can be rented for 10 dollars per article (see also Episciences.org and Open Journal Systems). Writing the article and reviewing it is done for free by the scientific community. Most editors are also scientists working for free; sometimes the editor-in-chief gets some secretarial support or some money for a student assistant. Typesetting by journals is highly annoying as they often add errors doing so. Typesetting is easily done by a scientist, especially using LaTeX, but also with a Word template. That scientists pay thousands of dollars per article is not related to the incurred costs, but due to monopoly brand power.

    Publishers that serve the community, articles that everyone can read and less funding wasted on publishing is a desirable goal, but it is hard to get there because the barriers to entry are large. Scientists want to publish in journals with a good reputation and, if the journals are not Open Access, with a broad circulation. This makes starting a new journal hard: even if a new journal does a much better job at a much lower price, it starts with no reputation, and without a reputation it will not get the manuscripts to prove its worth.

    To make it easier to get from the current situation to an Open Access future, I propose the concept of Grassroots Scientific Publishing. Starting a new journal should be as easy as starting a blog: make an account, give the journal a name and select a lay-out. Finished, start reviewing.

    To overcome the problem that initially no one will submit manuscripts, a grassroots journal can start with reviewing already published articles. This is not wasted time because we can do a much better job communicating the strengths and weaknesses as well as the importance of an article than we do now, where the only information we have on the importance is the journal in which it is published. We can categorise and rank articles. We can have all articles of one field in the same journal, no longer scattered around many different journals.

    Even without replacing traditional journals, such a grassroots journal would provide a valuable service to its scientific community.

    To explain the idea and get feedback on how to make it better I have started a new grassroots publishing blog.
    Once this kind of journal is established and has shown that it provides superior quality assurance and information, there is no longer any need for pay-walled journals and we can simply review the articles on manuscript servers.

    Related reading

    Paul Voosen in Science: Dueling preprint servers coming for the geosciences

    AGU: ESSOAr Frequently Asked Questions

    The Guardian, long read: Is the staggeringly profitable business of scientific publishing bad for science?

    If you are on twitter, do show support and join EarthArXiv

    Three cheers for gatekeeping

    Peer review helps fringe ideas gain credibility

    Grassroots scientific publishing


    * Photo Clare Night 2 by Paolo Antonio Gonella is used under a Creative Commons Attribution 2.0 Generic (CC BY 2.0) license.

    Friday, 22 September 2017

    Standard two letter codes for meteorological variables

    For the file names of the Parallel Observations Science Team (ISTI-POST) we needed short codes for the meteorological variables in the file.

    Two-letter codes are used quite often in the scientific literature, which suggests that a standard exists, but I was unable to find a WMO standard. Thus we suggest the two-letter codes below, which follow common conventions.

    Code | Element                                | Unit
    -----|----------------------------------------|--------------------
    Wind
    dd   | wind direction                         | degrees; calm = -1
    ff   | wind speed                             | m/s
    Temperatures
    tm   | mean temperature                       | °C
    tn   | minimum temperature                    | °C
    tx   | maximum temperature                    | °C
    tr   | read temperature (at a specific time)  | °C
    tw   | wet-bulb temperature                   | °C
    ts   | surface temperature                    | °C
    Radiation
    sn   | sunshine duration                      | h
    sd   | solar radiation flux down              | W m-2
    su   | solar radiation flux up                | W m-2
    hd   | infra-red (heat) radiation flux down   | W m-2
    hu   | infra-red (heat) radiation flux up     | W m-2
    Snow
    ts   | total snow                             | mm
    ns   | new snow                               | mm
    Diverse
    rh   | relative humidity                      | %
    pp   | pressure                               | mbar
    rr   | precipitation                          | mm per day
    nn   | cloud cover                            | %

    If you know of an existing system, please say so. Many of these codes are quite common in the literature, but some less so. If you have suggestions for other codes, or would like to propose abbreviations for additional variables, please contact us or write a comment below.
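    For the file names this boils down to a simple lookup. Below is a minimal sketch with a handful of the codes from the table; the file-name pattern is only an illustration, not the actual POST convention.

```python
# A few of the proposed two-letter codes (see the table above).
VARIABLE_CODES = {
    "dd": ("wind direction", "degrees; calm = -1"),
    "ff": ("wind speed", "m/s"),
    "tm": ("mean temperature", "degrees Celsius"),
    "tn": ("minimum temperature", "degrees Celsius"),
    "tx": ("maximum temperature", "degrees Celsius"),
    "rr": ("precipitation", "mm per day"),
}

def post_filename(station_id, code):
    """Build a file name from a station identifier and a variable code.
    The pattern is an invented example, not the official POST convention."""
    if code not in VARIABLE_CODES:
        raise ValueError(f"unknown variable code: {code}")
    return f"{station_id}_{code}.txt"

print(post_filename("DE_Hohenpeissenberg", "tx"))  # DE_Hohenpeissenberg_tx.txt
```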

    Monday, 18 September 2017

    Angela Merkel wins German election



    After my spectacular success as a UK election pollster, let me try my luck with a prediction for the election here in Germany next Sunday: Angela Merkel will win the election and stay Chancellor. I admit that it would have been more fun to make a claim that goes against the punditry, but that is harder to do for Germany than for the UK or the USA; the quality of German (public) media is quite high. The pundits also do not have a hard time this election; the only question is who Merkel is going to govern with, and that depends on details we will only know on election night.

    [UPDATE on the eve of the election. Something I did not think of, because I have not heard anyone talk about it, is that Merkel may step down if her party loses more than 5% and the coalition more than 10%. She took quite some time considering whether she would run again. My impression is that that was not just theatre; it is a tough job. Losing ground in the election may well be the right excuse to hand power to the next generation.]

    Germany is a representative parliamentary democracy. The voters select their representatives in parliament, like in the UK, and parliament elects the prime minister (Bundeskanzler). The prime minister is the most powerful politician, although officially ranked third after the president (Bundespräsident) and the president of the parliament. The advantage of this system is that when you notice your leader is an incompetent, ignorant fool with no interest in working, you can get rid of them. To avoid ending up with a power vacuum, a new prime minister has to be elected to remove the old one; just voting against the old one is not enough.


    Advertisement & song for a major supermarket chain. The title literally translated is: "Super horny", but more accurate is: "terrific". On the other hand, we have less gory violence on TV than the USA.

    Germans get two votes: one for their local representative, just like the districts in the UK or USA, and a second vote for a party. This way you have politicians that represent their district, which some people seem to see as important; I have never understood why. The second vote determines the proportions in parliament. Parties make lists of candidates and they are added to the directly elected candidates to get a proportional result. This way all voters count, parties have to campaign everywhere and [[gerrymandering]] does not help. Win, win, win.

    The only deviation from a representative result is that there is an [[election threshold]] of 5%. If a party gets less than 5%, their votes are unfortunately lost, except for elected direct candidates. In the last federal election 16% of the votes were lost that way. The election threshold should reduce the number of parties, but also conveniently limits competition for the existing parties.
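    To illustrate how the 5% threshold and the proportional second vote interact, here is a minimal sketch of a divisor-method seat allocation (Sainte-Laguë, which as far as I know is what Germany has used since 2009), ignoring direct mandates, overhang seats and changes to the nominal house size; the vote shares are invented and only roughly inspired by the polls.

```python
def allocate_seats(votes, seats, threshold=0.05):
    """Proportional allocation with an election threshold, using the
    Sainte-Lague (highest averages) method with divisors 1, 3, 5, ..."""
    total = sum(votes.values())
    eligible = {party: v for party, v in votes.items() if v / total >= threshold}
    allocation = {party: 0 for party in eligible}
    for _ in range(seats):
        # The next seat goes to the party with the highest quotient votes/(2*seats+1).
        winner = max(eligible, key=lambda p: eligible[p] / (2 * allocation[p] + 1))
        allocation[winner] += 1
    return allocation

# Invented vote shares; parties below 5% get no seats from the second vote.
shares = {"CDU/CSU": 36, "SPD": 22, "AfD": 11, "FDP": 9,
          "Die Linke": 9, "Greens": 8, "Pirates": 2, "Other small parties": 3}
print(allocate_seats(shares, seats=598))
```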

    Political parties

    The latest polls are shown below.


    Election polls over the last 4 years. For comparison the results of the 2013 election were: CDU/CSU: 41.5%, SPD: 25.7%, Greens: 8.4%, FDP: 4.8%, Die Linke 8.6%, Pirate party: 2.2%, AfD: 4.7%.

    It is expected that six parties will cross the election threshold. The largest party will be the Christian Democrats, or Conservatives, of Chancellor Merkel. They are actually two parties who caucus together in parliament: the Christian Social Union (CSU), running in Bavaria, and the Christian Democratic Union (CDU) in the rest of Germany.

    The second largest party will be the Social Democrats (SPD), similar to Labour in the UK. The upward jump of almost 10% in spring this year almost made the party as large as the Christian Democrats. This was when their new party leader Martin Schulz was elected and suggested treating unemployed people as humans again and getting rid of the policy package called [[Hartz IV]].

    This peak went away when Schulz explained that he actually only wanted to make a few small Clintonite tweaks. The Hartz IV package was made by Germany's Tony Blair, the neo-liberal Gerhard Schröder, who is now living on Vladimir Putin's paycheck. The party strategists must have seen the movement in the polls, but threatening the middle class that they can fall really deep into poverty if they do not conform was apparently more important to them than being Social Democrats.


    The Doctors: Man & Woman. die ärzte - M&F

    The four small parties are about the same size this time. It is the policy of the Conservatives to be a sufficiently nationalistic big-tent party to keep purely racist parties below the 5%, but this time the anti-Muslim party Alternative for Germany (Alternative für Deutschland) will likely make it into parliament. It started as a Euro-currency-sceptical party whose leader opened it up to racists to pass the 5% threshold and was then kicked out by them.

    The latest polls show a few percent less for the two main parties and the Alternative for Germany at or above 10%. The easily excited punditry immediately talks about 15 or 20%. People tend to worry whether respondents answer polling questions honestly when it comes to racist parties. The evidence shows that there is no bias, but the noise can be larger, especially for new parties. Racist parties typically are new parties, as they do not last long being coalitions of unreasonable people, often with violent criminal pasts.

    The other right-wing small party is the pro-business FDP. They are officially classical liberals, but unfortunately in practice often crony capitalists. They were kicked out of parliament in the last election because their coalition government with the Conservatives was so disastrous. Their new leader Christian Lindner resurrected the party by stressing the pro-human parts of their liberal heritage. All these terms should be interpreted in a German perspective: not even this classical liberal party would deny people health care, and Barack Obama could be a good replacement for Lindner.

    On the left we have a party called "The Left", Die Linke. They are mostly the Social Democratic party the SPD once was. Their main campaign promise is to get rid of the Hartz IV package. However, they were born out of the communist party of Eastern Germany, which has left its traces. Due to old ties and maybe kompromat they are very pro-Russia. They are against NATO and German military actions, but were not particularly worried about the Russian occupation of Crimea. Because of their communist past, and officially because of their foreign policies, most other parties are not willing to govern with them. It could be that this taboo will be broken this election or the next; about time, almost three decades after the fall of communism.


    Election billboard of the German Green party: Environment is not everything, but without the environment everything is nothing.

    The German Greens are traditionally seen as part of the left, being born out of the hippy movement, but for a Green party they are very conventional; the old geezers have become much like the parents they once revolted against. Half of the party would like to be in the middle and the party is flirting with the idea of a coalition government with the Conservatives. In one of the most conservative German states, Baden-Württemberg, the Green politician Winfried Kretschmann leads the coalition government with the Christian Democrats. I mostly mention this to emphasize that politics in Europe is a bit different than in corrupt Washington.

    Coalitions

    I am not expecting any large changes in the last week and German polls are normally quite good.

    [UPDATE.
    Now that the preliminary final result is in, we can see that the last polls were reasonably good. The stated uncertainty of 2 to 3% was met. Still, the difference for the Conservatives is rather large and the larger percentage for the racist party is sad.
    Party                       Polling   Result   Difference
    Conservatives (CDU/CSU)       35.8%    33.0%      -2.8%
    Social Democrats (SPD)        21.8%    20.5%      -1.3%
    Greens (B'90/Grüne)            7.8%     8.9%      +1.1%
    Classical liberals (FDP)       9.6%    10.7%      +1.1%
    The Left (Linke)               9.5%     9.2%      -0.3%
    Racists (AfD)                 11.0%    12.6%      +1.6%
    Others                         4.6%
    ]

    Theoretically Schulz could discover his inner Jeremy Corbyn and still announce that he will get rid of Hartz IV, but even that would likely not change the coalition options much. The results will be very similar to those of 2013, but the two big parties will likely lose a few percent and the AfD and the FDP will likely pass the threshold this time. Because of this, about 10% fewer votes will be wasted and the other parties will get fewer seats for the same percentage of votes. Thus all current parties will likely lose seats.

    Currently the Bundeskanzler is Angela Merkel and she will likely be the next one as well. There is no limit to how often one can become Bundeskanzler; Helmut Kohl did it four times. Merkel has already made coalition governments with the Social Democrats (SPD), with the FDP and currently again with the SPD. Each time her coalition partner suffered clear losses.

    To govern, normally a coalition of parties is needed. The best part of election night in 2013 was watching Angela Merkel's face when the exit polls suggested she might have a majority without any coalition partner. She clearly did not look forward to having to implement her platform without being able to blame a coalition partner for softening it.

    The right parties (CDU and FDP) will not want to make a coalition with the racists (AfD). The left parties (SPD and Greens) will likely not be willing to make a coalition with the former communists (Die Linke) and this coalition is also likely not big enough. Also a coalition of Social Democrats, Greens and classical liberals is likely too small.

    So whatever coalition is possible, it will include the Christian Democrats of Merkel. If it is possible to make a coalition with the classical liberals she will do so. This is likely only possible if the AfD stays below the election threshold. Due to this threshold it would perversely be best for people on the left if the racists get into parliament.

    If a coalition with the liberals is not possible, Merkel will most likely try to build a coalition with the Greens: a new combination federally, but one that has been tested in the German states in recent years as preparation and that works.

    Maybe I do have one complaint about the German punditry: they keep talking about a coalition of Conservatives, classical liberals and Greens (CDU/CSU, FDP, Greens). I understand that the small parties like such speculation to keep themselves in the news, and having more options improves their negotiation position, but I do not see this coalition as a realistic option, although it is not fully impossible. The members of the Greens will have to vote on the coalition agreement and I see it as highly unlikely that they would approve such a right-wing government.

    Whether any of these options works will depend on the last few percent of votes, so we will have to wait for election night. The most likely result, which is always numerically possible but not a popular option, is a continuation of the current ruling coalition of Christian Democrats and Social Democrats. Both parties will probably lose votes, will not be keen to continue the coalition, and would likely lose again in four years.


    An election billboard of the racists of Alternative for Germany above a sign saying "liars have tall ladders". A bit unfair: racist parties are not particularly popular after what they did to Germany and the world, so they have to hang their posters up high lest they get vandalized. Parties are allowed to advertise on the streets for free to make money in politics less important. They also get free time on public television.

    Climate Change

    A main environmental group (BUND) has made a comparison of the party platforms on climate change. No German party denies climate change, except for the racist party. It makes sense that a party that is willing to shoot to kill refugees at the border is also willing to destroy their livelihoods and kill them at home. In their party platform they go full Trump, deny man-made climate change and call for higher CO2 concentrations. They are also a Trumpian party in the sense that they get a little help from foreign racists and Moscow in their quest against free and open societies. As is typical for this kind of party, the candidates are mostly incompetent and many have criminal records.

    The two big parties have deep ties with big industry and the last four years have seen a reduction in ambitions to fight climate change. As a consequence the CO2 emission goals for 2020 will be hard to reach for the next government.

    The classical liberal FDP rejects solutions to climate change beyond the European Emissions Trading System, which makes sense from their perspective; however, the trading system does not work and Germany alone cannot fix it, so in practice this easily amounts to doing nothing.

    The Greens are naturally best on climate change. After ending nuclear power, they now want to end coal power by 2030 (Kohleausstieg). Angela Merkel indicated her willingness to form a coalition by writing an end to lignite coal power (Braunkohleausstieg) into the Conservative platform.

    Bonn

    As an example of how the electoral system works, let's consider Bonn, where I live (although I am not allowed to vote because the German parliament may have to vote on whether to go to war with The Netherlands; as an EU citizen I can vote locally). If you, as a reader of this blog, care mostly about the environment, your best option for your direct (first) vote is the Social Democrat Ulrich Kelber. He is a strong pro-environment politician within the SPD, but the coal-friendly NRW SPD did not put him on the party list, so he has to win a direct mandate. For this reason Campact is campaigning for Kelber.

    In the last election Kelber won the direct vote, while the Christian Democrats got more of the party (second) votes. The Christian Democrat from Bonn, Claudia Lücking-Michel, still got elected via the list, something that is again likely as she is number 27 on the party list of North Rhine-Westphalia.

    Your second vote would then be for The Greens. The Green candidate Katja Dörner is third on the party list and will thus likely be elected via the list, although she has no chance of getting a direct mandate in Bonn. Thus if Kelber gets the direct mandate, Bonn would likely be represented by the same three members of parliament as now. Because of the party lists, many districts have more than one representative, but three is quite a lot.

    This electoral system also distributes the power. The local/district party members determine their direct candidates. The state party members determine the party lists. The federal party only determines the leading candidate.

    Related reading

    Carbon Brief: German election 2017: Where the parties stand on energy and climate change.

    If you are still undecided who to vote for, the Wahl-O-Mat can help you. (In German)

    Where the donor money goes: Parteispenden - Wer zahlt? Wie viel? An wen? (In German)

    Sonntagfrage Aktuell: Graph with the latest polls. (In German, but not many words)

    The Guardian: Angela Merkel races ahead in polls with six weeks to go.


    * Top photo Girls'Day-Auftaktveranstaltung am 26.04.2017 in Anwesenheit von Bundeskanzlerin Angela Merkel im Bundeskanzleramt, Berlin by Initiative D21 used under a Creative Commons Attribution-NoDerivs 2.0 Generic (CC BY-ND 2.0) license.

    Photo Bundestagswahl 2017 #btw2017 Die Grünen by Markus Spiske used under a Creative Commons Attribution 2.0 Generic (CC BY 2.0) license.


    Wednesday, 13 September 2017

    My EMS2017 highlights

    When I did my PhD, our professor wanted everyone to write short reports about conferences they had attended. It was a quick way for him to see what was happening, but it is also helpful to remember what you learned and often interesting to read yourself again some time later. Here is my short report on last week's Annual Meeting of the European Meteorological Society (EMS), the European Conference for Applied Meteorology and Climatology 2017, 4–8 September 2017, Dublin, Ireland.

    This post is by its nature a bit of loose sand, but there were some common themes: more accurate temperature measurements by estimating the radiation errors, eternal problems estimating various trends, collaborations between WEIRD and developing countries, and global stilling.

    Radiation errors

    Air temperature sounds so easy, but it is hard to measure. What we actually measure is the temperature of the sensor and because air is a good insulator, the temperature of the air and the sensor can easily be different. This can be due, for example, to self-heating of electrical resistance sensors or heat flows from the sensor holder, but the most important heat flow comes from radiation: the sun shining on the sensor, or the sensor losing heat via infra-red radiation to the cold atmosphere.



    In the [[metrology]] (not meteorology) session there was a talk and several posters on the beautiful work by the Korea Research Institute of Standards and Science to reduce the influence of radiation on temperature measurements. They used two thermometers, one dark and one light coloured, to estimate how large the radiation errors are and to be able to correct for them. This set-up was tested outside and in their amazing calibration laboratory.
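    The basic idea can be illustrated with a toy calculation: if the radiation error of each sensor is roughly proportional to its solar absorptivity, two sensors with different, known coatings let you extrapolate to a hypothetical sensor that absorbs nothing, i.e. the air temperature. This is only a sketch of the principle, not the Korean algorithm; the absorptivities and readings below are made up.

        def air_temperature(t_dark, t_light, a_dark=0.9, a_light=0.3):
            """Extrapolate two sensor readings (deg C) to zero solar absorptivity.

            a_dark and a_light are the assumed absorptivities of the two coatings.
            """
            error_per_absorptivity = (t_dark - t_light) / (a_dark - a_light)
            return t_light - error_per_absorptivity * a_light

        # Example: the dark sensor reads 21.2 degC, the light one 20.6 degC
        print(air_temperature(21.2, 20.6))  # about 20.3 degC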

    These were sensors to measure the vertical temperature profile, going up to 15 km high. Thus they needed to study the sensors over a huge range of temperatures (-80°C to 25°C); it is terribly cold at the tropopause. The dual sensor was also exposed to a large range of solar irradiances, from 0 to 1500 Watts per square meter; the sun is much stronger up there. The pressure ranged from 10 hPa to the 1000 hPa we typically have at the surface. The low pressure makes the air an even better insulator. The radiosondes drift with the wind, which reduces ventilation, so the wind only needed to be tested from 0 to 10 meters per second.

    I have seen this set-up used to study radiation errors for automatic weather stations; it would be great to also use it for operational stations to reduce radiation errors.

    The metrological organisation of the UK is working on a thermometer that does not have a radiation error because it directly measures the temperature of the air. Michael de Podesta does so by measuring the speed of sound very accurately. The irony is that it is hard to see how well this new sound thermometer works outside the lab, because the comparison thermometer has radiation errors.
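    The underlying physics is simple even if the instrument is not: for an ideal gas the speed of sound depends only on the temperature and the gas properties, so the temperature follows from the measured speed of sound. A rough dry-air sketch (the real instrument also accounts for humidity, CO2 and the acoustics of the resonator):

        GAMMA = 1.4        # ratio of specific heats for dry air
        R = 8.314462       # universal gas constant in J/(mol K)
        M = 0.0289647      # molar mass of dry air in kg/mol

        def temperature_from_sound_speed(c):
            """Dry-air temperature in deg C from the speed of sound c in m/s."""
            t_kelvin = c * c * M / (GAMMA * R)   # invert c = sqrt(gamma R T / M)
            return t_kelvin - 273.15

        print(temperature_from_sound_speed(343.0))  # roughly 20 degC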

    Michael de Podesta's live experiments with the most accurate thermometer in human history:



    To lighten up this post: I was asked to chair the metrology session because the organiser of the session (convener) gave a talk himself. The talks are supposed to be 12 minutes with 3 minutes for questions and changing to the next speaker. Because multiple sessions are running at the same time and people may switch it is important to stick to the time. Also people need some time between the time blocks to recharge.

    One speaker went over the 12 minutes and had his back towards me, so that I could not signal that his time was up. Thus I walked in front of him across the screen to the other side. This earned some praise on Twitter.

    If you speak a foreign language (and are nervous) it can be hard to deviate from the prepared talk.

    Satellite climate data

    There were several talks on trying to make stable datasets from satellite measurements to make them useful for climate change studies. Especially the early satellites were not intended for quantitative use, but only to look at the moving cloud systems. And also later satellites were mostly designed for meteorological use rather than for climate studies.

    Interesting was Ralf Quast's look at how the spectral response of the satellites deteriorated while in space. The sensitivity for visible light did not decline equally for all colours, but deteriorated faster for blues than for reds. This was studied by looking at several calibration targets expected to be stable: the Sahara desert, the dark oceans and the bright tops of tropical convective clouds. The estimates for the post-launch measurements were similar to the pre-launch calibrations in the lab.

    Gerrit Hall explained that there are 17 sources of uncertainty for visible satellite measurements, from the noise when looking at the Earth and when looking at space for partial calibration, to several calibration constants and comparisons to [[SI]] standards (the measurement units everyone but the USA uses).

    The noise levels also change over time, typically going up over the lifetime, but sometimes also going down for a period. The constant noise level in the design specification, often used for computations of uncertainties, is just a first estimate. When looking at space the channels (measuring different frequencies of light) should be uncorrelated, but they are not always.

    Global Surface Reference Network

    Peter Thorne gave a talk about a future global surface climate reference network. I wrote about this network for climate change studies before.

    A manuscript describing the main technical features of such a network is almost finished. The Global Climate Observing System of WMO is now setting up a group to study how we can make this vision a reality to make sure that future climatologists can study climate change with a much higher accuracy. The first meeting will be in November in Maynooth.

    Global stilling

    The 10-meter wind speed seems to be declining in much of the mid-latitudes, which is called "global stilling". It is especially prevalent in middle Europe (as the locals say; in my youth this was called east Europe). In the last decade there seems to be an uptick again; see the graph to the right from The State of the Climate 2016.

    Cesar Azorin-Molina presented the work of his EU project STILLING in a longer talk in the Climate monitoring session giving an overview of global stilling research. Stilling is also expected to be one of the reasons for the reduction in pan evaporation.

    The stilling could be due to forest growth and urbanization, which both make the surface rougher to the wind, but it could also be due to changes in the large-scale circulation. Looking at vertical wind profiles one can get an idea of the roughness of the surface and thus study whether that is the reason, but not much of such data is available over longer periods.
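    As a rough illustration of why profiles help: under neutral conditions the wind follows the logarithmic profile u(z) = (u*/κ)·ln(z/z0), so wind speeds at two heights constrain the roughness length z0. A toy Python calculation with made-up numbers (real profiles also depend on atmospheric stability, which is ignored here):

        import math

        def roughness_length(u1, z1, u2, z2):
            """Roughness length z0 in m from wind speeds u1, u2 at heights z1 < z2 (neutral log profile)."""
            r = u2 / u1
            return math.exp((r * math.log(z1) - math.log(z2)) / (r - 1.0))

        # Example: 4 m/s at 10 m and 5 m/s at 40 m gives z0 of roughly 0.04 m
        print(roughness_length(4.0, 10.0, 5.0, 40.0))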

    If you have such data, or know of such data, please contact Cesar. The same goes for normal wind data, which is hard to get, especially observations from developing countries. The next talk was about a European wind database and its quality control; this will hopefully improve the data situation in Europe.

    This was part of the climate monitoring session, which has a focus on data quality. Fittingly, Cesar also studied the influence of the ageing of the cup anemometers that measure the wind speed. Their ball bearings tend to wear out, producing lower observed wind speeds. By making parallel measurements with new equipment and instruments a few years old he quantified this problem, which is quite big.

    Because these anemometers are normally calibrated and replaced regularly, I would not expect this to produce problems for the long-term trend. Only if the wear is larger now than it was in the past would it create a trend bias. But it does create quite a lot of noise in the difference time series between one station and a neighbour, thus making relative homogenisation harder.
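    A small simulation makes the point: relative homogenisation looks for breaks in the difference between a candidate station and a well-correlated neighbour, and extra instrument scatter inflates the variance of that difference series, which hides the breaks. All numbers below are made up purely for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        regional = rng.normal(0.0, 1.0, 240)            # shared monthly wind anomalies

        candidate = regional + rng.normal(0.0, 0.2, 240)
        neighbour = regional + rng.normal(0.0, 0.2, 240)
        candidate[120:] += 0.5                          # a break we would like to detect

        wear_noise = rng.normal(0.0, 0.4, 240)          # assumed extra scatter from worn bearings

        clean_diff = candidate - neighbour
        noisy_diff = (candidate + wear_noise) - neighbour

        # the break stands out much less against the noisier difference series
        print(np.std(clean_diff), np.std(noisy_diff))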



    Marine humidity observations

    My ISTI colleague Kate Willett was the recipient of the WCRP/GCOS International Data Prize 2016. She leads the ISTI benchmarking group and is especially knowledgeable when it comes to humidity observations. The prize was a nice occasion to invite her to talk about the upcoming HadISD marine humidity dataset. It looks to become a beautiful dataset with carefully computed uncertainties.

    There has been a decline in the 2-meter relative humidity over land since about 2000 and it is thus interesting to see how this changes over the ocean. Preliminary results suggest that the relative humidity is also declining over the ocean. Both quality control of individual values and bias corrections are important.

    Developing countries

    There was a workshop on the exchange of information about European initiatives in developing countries. Saskia Willemse of Meteo Swiss organised it after her experiences from a sabbatical in Bhutan. As in the rest of science, a large problem is that funding is often only available for projects and equipment, while it takes a long time to lift an organisation to a higher level; people need to learn how to use the equipment in practice, and the equipment is often not interoperable.

    More collaboration could benefit both sides. Developing countries need information to adapt to climate change and improve weather predictions. To study the climate system, science needs high-quality observations from all over the world. For me it is, for example, hard to find out how measurements are made now and especially how they were made in the past. We have no parallel measurements in Africa and few in Asia. The Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) has far too few observations in developing countries. We will probably run into the same problem again with a global station reference network.

    At the next EMS (in Budapest) there will be a session on this topic to get a discussion going on how we can collaborate better. The organisers will reach out to groups already doing this kind of work in the WMO, UNEP and the World Bank. One idea was to build a blog to get an overview of what is already happening.

    I hope that it will be possible to have sustainable funding for weather services in poor countries, for capacity building and for making observations in return for opening up their data stores. That would be something the UN climate negotiations could do via the [[Green Climate Fund]]. Compared to the costs of reducing greenhouse gases and adapting our infrastructure, the costs of weather services are small, and we need to know what will happen for efficient planning.

    Somewhat related to this is the upcoming Data Management Workshop (DMW) in Peru, modelled after the European EUMETNET DMWs, but hopefully with more people from South and Central America. The Peru workshop is organised by Stefanie Gubler of the Swiss Climandes project and will be held from the 28th of May to the 1st of June 2018. More information will follow later.

    Wet bulb temperature

    For the heat stress of workers, the wet bulb temperature is important. This is the temperature of a well-ventilated thermometer covered in a wet piece of cloth. If there is some wind, the wet bulb temperature gives an indication of the thermal comfort of a sweating person.
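    You do not need a wetted thermometer to get a number: the wet bulb temperature can also be estimated from air temperature and relative humidity, for example with the empirical fit of Stull (2011, Journal of Applied Meteorology and Climatology), which is valid for typical near-surface conditions. A quick Python version:

        import math

        def wet_bulb_stull(t, rh):
            """Wet bulb temperature (deg C) from temperature t (deg C) and relative humidity rh (%)."""
            return (t * math.atan(0.151977 * math.sqrt(rh + 8.313659))
                    + math.atan(t + rh) - math.atan(rh - 1.676331)
                    + 0.00391838 * rh ** 1.5 * math.atan(0.023101 * rh)
                    - 4.686035)

        print(wet_bulb_stull(20.0, 50.0))  # about 13.7 degC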

    The fun fact I discovered is that the weather forecasts for the wet bulb temperature are more accurate than those for the temperature and the relative humidity individually. There is even some skill up to 3 weeks in advance. Skill here only means that the weather prediction is better than using the climatic value. Any skill can have economic value, but forecasts useful for the public would need to be much shorter-term.

    Plague

    The prize for the best Q&A goes to the talk on plague in the Middle Ages and its relationship with the weather in the preceding period (a somewhat cool previous summer, a somewhat warm previous winter and a warm summer: good rat weather).

    Question: why did you only study the plague in the Middle Ages?
    Answer: I am a mediaevalist.

    Other observational findings

    Ian Simpson studied different ways to compute the climate normals (the averages over 30 years). The main differences between temperature datasets were in China, due to a difference in how China itself computes the daily mean temperature (from synoptic fixed-hour measurements at 0, 6, 12 and 18 hours universal time) and how most climatological datasets do it (from the minimum and maximum temperature). Apart from that, the main differences were seen when data was incomplete, because datasets use different methods to handle this.
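    The two conventions are easy to compare on an idealised diurnal cycle; with real, asymmetric cycles the difference can amount to several tenths of a degree. The toy cycle below is made up purely for illustration:

        import numpy as np

        hours = np.arange(24)
        # toy diurnal cycle with an afternoon heat peak to make it asymmetric
        temperature = (15.0 + 5.0 * np.sin(2 * np.pi * (hours - 9) / 24)
                       + 2.0 * np.exp(-(hours - 15) ** 2 / 8.0))

        synoptic_mean = temperature[[0, 6, 12, 18]].mean()           # fixed synoptic hours
        minmax_mean = 0.5 * (temperature.min() + temperature.max())  # (Tmin + Tmax) / 2

        print(synoptic_mean, minmax_mean)  # about 15.3 versus 16.0 for this toy cycle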

    There was another example where the automatic mode (joint detection) of HOMER produced bad homogenisation results. The manual mode of HOMER is very similar to PRODIGE, which is a good method recommended by HOME, but the joint detection part is new and has not been studied well yet. I would advise against using it by itself.

    Lisa Hannak of the German weather service looked at inhomogeneities in parallel data: manual observations made next to automatic measurements. Because the two are so highly correlated, it is possible to see very small and quite frequent inhomogeneities. An interesting new field. Not directly related to EMS, but there will be a workshop on parallel data in November as part of the Spanish IMPACTRON project.

    The European daily climate dataset ECA&D, which is often used to study changes in extreme weather, will soon have a homogenised version. Some breaks in earlier periods were not corrected because there were no good reference stations in that period. I would suggest at least correcting the mean in such a case; that is better than doing nothing, and a large inhomogeneity in a dataset people expect to be homogenised is a problem.

    One of the things that seems to help us free meteorological and climate data is the trend towards open government. This means that, as much as possible, the data the government has gathered is made available to the public via an [[API]]. Finland is working on such an initiative and has also freed the data of its weather service. There are many people, and especially consultants, using such data. We can piggyback on this trend.

    One can also estimate humidity with GPS satellites. Such data naturally also need to be homogenised. Roeland Van Malderen works on a benchmark to study how well this homogenisation would work.

    The Austrian weather service ZAMG is working on an update for the HISTALP dataset with temperature and precipitation for the Greater Alpine Region. The new version will use HOMER. Two regions are ready.

    It was great to see that Mexico is working on the homogenisation of some of their data. Unfortunately the network is very sparse after the 90s, which makes homogenisation difficult and the uncertainty in the trends large.