
The great potential of citizen science

- November 12, 2014 in Featured, Guest Post, Research

This is a guest post by Benedikt Fecher of the Alexander von Humboldt Institute for Internet and Society (HIIG) and is re-posted from the HIIG blog.

Citizen science is nothing new

What do Benjamin Franklin, Johann Wolfgang von Goethe, and Francis Bacon have in common?

All were amateur scientists. Franklin invented the lightning rod, Goethe discovered the incisive bone and was moderately successful as an art theorist, and Bacon can be considered nothing less than the father of empiricism, or can he? Either way, the three shared a passion for discovering things in their spare time. None of them earned their pennies as professional scientists, if that profession even existed back then.

 

Discovery is a matter of thirst for adventure

Citizen science is in fact old hat. It existed long before scientific disciplines did and could be described as the rightful predecessor of all empirical science. It laid the foundations for what we know today as the scientific method: the rule-governed and verifiable analysis of the world around us. Still, amateurs have been increasingly marginalized in science over the past 150 years, as scientific disciplines emerged and being a scientist became a profession in its own right (read more here).

Citizen science’s second spring

Today, citizen science is experiencing a second spring, and it is no surprise that the internet has had a hand in it. In recent years, hundreds of citizen science projects have popped up, encouraging people to spend their time tagging, categorizing and counting in the name of science (see here and here). Some unfold proteins in an online game (Foldit), while others classify galaxies in telescope images (GalaxyZoo and here) or count wild boars in Berlin and deliver the numbers to an online platform (Wild boars in the city). Citizen science has moved online. And there are thousands of people in thousands of different places doing all kinds of curious things that could alter the face of science. The Internet is where they meet.

Berlin Wall; East Side Gallery

The logic of Internet-based citizen science: Large scale, low involvement

Citizen science today works differently from the citizen science of Goethe’s or Franklin’s time. The decentralised and voluntary character of today’s citizen science projects challenges the way research has long been done. It opens up science to a multitude of voluntary knowledge workers who work (more or less) collaboratively. In some respects, this new kind of citizen science draws on open innovation strategies developed in the private sector. In their recent Research Policy article, Franzoni and Sauermann refer to this type of amateur science as crowd science. The term is extremely effective at capturing the underlying mechanics of most citizen science projects: low-threshold, large-scale participation. Today, the participation of volunteers in science is scalable.

The advantages of citizen science

When it comes to data collection, social participation and science communication, citizen science is promising.

For scientists, it is an excellent way to collect data. If you visit one of the citizen science directories (for example here and here) and scroll through the projects, you will see that most of them involve some kind of documenting. These citizen scientists count rhinoceros beetles, wild boars, salamanders, neophytes, mountains and trees. There is nothing that cannot be quantified, and a life devoted solely to counting rhinoceros beetles in North America would indeed be mundane for an individual scientist, not to mention the travel expenses. Citizen scientists are great data sensors.

For citizen scientists, it is a way of partaking in the process of discovery and learning about fields that interest them. For example, in a German project run by the Naturschutzbund (German Society for the Conservation of Nature), sports divers are asked to count macrophytes in Northern German lakes. The data the divers collect help monitor the ‘state of health’ of their freshwater lakes. In follow-up sessions, the divers are informed about the results. The case illustrates how citizen science works: volunteers help scientists and in return receive first-hand information about the results. In this regard, citizen science can be an excellent communication and education tool.

Citizen science brings insight from without into the academic ivory tower and allows researchers and interested non-researchers to engage in a productive dialogue. This is a much-needed opportunity: for some time now, scholars and policy makers have been pointing out how challenging it is to open up science and involve citizens. Still, what makes the new kind of internet-enabled citizen science ‘science’ is the context volunteers work in rather than the tasks they perform.

The honey bee problem of citizen science

The old citizen scientists, like Franklin, Goethe or Bacon, asked questions, investigated them and eventually discovered something, as Goethe did with his incisive bone. In most citizen science projects today, however, amateurs perform rather mundane tasks like documenting things (see above), donating computing power (e.g. SETI@home) or playing games (e.g. Foldit). You can go to Scientific American’s citizen science webpage and search for the word ‘help’, and you will find that out of 15 featured projects, 13 are teasered with an invitation to help scientists do something. The division of roles between citizens and real scientists is evident: citizen scientists perform honey bee tasks, while the analytic capacity remains with the real researchers. Citizen science today is often a twofold euphemism.

That is not to say that collecting, documenting and counting are not a crucial part of research. In many ways, the limited task complexity even resembles the day-to-day business of in-person research teams. Citizen scientists, on the other hand, can work when they want to and on what they want to. That being said, citizen science is still a win-win in terms of data collection and citizen involvement.

An alternative way to think of citizen science: Small scale, high involvement

A second way of doing citizen science is not to think of volunteers as thousands of little helpers but as knowledge workers on a par with professional researchers. This small-scale type of citizen science is sometimes swept under the carpet, even though it is equally promising.

Timothy Gowers’s Polymath Project is a good case of the small-scale, high-involvement type of citizen science. In 2009, Gowers challenged the readers of his blog to find a new combinatorial proof of the density version of the Hales-Jewett theorem. One has to know that Gowers is a Fields Medallist in mathematics, and his readers apparently share the same passion. After seven weeks, he announced that the problem had been solved with the help of 40 volunteers, a number far too small to count as massively collaborative.
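For orientation, the density version of the Hales-Jewett theorem can be stated as follows; this is a standard textbook formulation added here for context, not a statement taken from Gowers’s post:

```latex
% Density Hales-Jewett theorem (standard formulation).
% [k]^n is the set of words of length n over the alphabet {1,...,k};
% a combinatorial line is obtained by fixing some coordinates and letting
% the remaining (wildcard) coordinates range together over 1,...,k.
\[
\forall k \ge 2,\ \forall \delta > 0,\ \exists n_0 \in \mathbb{N}:\quad
n \ge n_0,\ A \subseteq [k]^n,\ |A| \ge \delta k^n
\ \Longrightarrow\ A \text{ contains a combinatorial line.}
\]
```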

Nevertheless, Gowers’s approach was successful. And it demonstrated a form of citizen science in which a few volunteers commit themselves over a longer period to solving a problem. This form of citizen science is fascinating for its capacity to harness tacit expert knowledge that does not reside within the scientific profession. The participation is smaller in scale but higher in quality. It resembles Benkler’s commons-based peer production or the concept of collective invention from open innovation.

The core challenge for this kind of citizen science is to motivate and enable expert volunteers to make a long-term commitment to a scientific problem.

Both strategies, large-scale low-involvement participation as well as small-scale high-involvement participation, have the capacity to alter science. The second, however, would be a form of citizen science that lives up to its name. Or did you never want to discover your own incisive bone?

Goethe and the incisive bone

Pictures

  1. Franklin with kite: Franklin’s Experiment, June 1752
  2. Wall: Own picture
  3. Goethe dozing on stones: Goethe in the Roman Campagna (1786) by Johann Heinrich Wilhelm Tischbein.
  4. Incisive bone, Gray’s Anatomy, 1918

Thanks to Roisin Cronin, Julian Staben, Cornelius Puschmann, Sascha Friesike and Kaja Scheliga for their help.

Improving openness, transparency and reproducibility in scientific research

- October 24, 2014 in Featured, Guest Post, Reproducibility, Research, Tools

This is a guest post for Open Access Week by Sara Bowman of the Open Science Framework.

Understanding reproducibility in science

Reproducibility is fundamental to the advancement of science. Unless experiments and findings in the literature can be reproduced by others in the field, the improvement of scientific theory is hindered. Scholarly publications disseminate scientific findings, and the process of peer review ensures that methods and findings are scrutinized prior to publication. Yet, recent reports indicate that many published findings cannot be reproduced. Across domains, from organic chemistry (Trevor Laird, “Editorial: Reproducibility of Results,” Organic Process Research and Development) to drug discovery (Asher Mullard, “Reliability of New Drug Target Claims Called Into Question,” Nature Reviews Drug Discovery) to psychology (Meyer and Chabris, “Why Psychologists’ Food Fight Matters,” Slate), scientists are discovering difficulties in replicating published results.

Various groups have tried to uncover why results are unreliable or what characteristics make studies less reproducible (see John Ioannidis’s “Why Most Published Research Findings Are False,” PLoS, for example). Still others look for ways to incentivize practices that promote accuracy in scientific publishing (see Nosek, Spies, and Motyl, “Scientific Utopia II: Restructuring Incentives and Practices to Promote Truth Over Publishability,” Perspectives on Psychological Science). In all of these, the underlying theme is the need for transparency surrounding the research process – in order to learn more about what makes research reproducible, we must know more about how the research was conducted and how the analyses were performed.

Data, code, and materials sharing can shed light on research design and analysis decisions that lead to reproducibility. Enabling and incentivizing these practices is the goal of The Open Science Framework, a free, open source web application built by the Center for Open Science.


The right tools for the job

The Open Science Framework (OSF) helps researchers manage their research workflow and enables data and materials sharing both with collaborators and with the public. The philosophy behind the OSF is to meet researchers where they are, while providing an easy means for opening up their research if it’s desired or the time is right. Any project hosted on the OSF is private to collaborators by default, but making the materials open to the public is accomplished with a simple click of a button.

Here, the project page for the Reproducibility Project: Cancer Biology demonstrates the many features of the Open Science Framework (OSF). Managing contributors, uploading files, keeping track of progress and providing context on a wiki, and accessing view and download statistics are all available through the project page.

Features of the OSF facilitate transparency and good scientific practice with minimal burden on the researcher. The OSF logs all actions by contributors and maintains full version control. Every time a new version of a file is uploaded to the OSF, the previous versions are maintained so that a user can always go back to an old revision. The OSF performs logging and maintains version control without the researcher ever having to think about it – no added steps to the workflow, no extra record-keeping to deal with.

The OSF integrates with other services (e.g., GitHub, Dataverse, and Dropbox) so that researchers can continue to use the tools that are practical, helpful, and already part of their workflow, while gaining value from the other features the OSF offers. An added benefit is seeing materials from a variety of services next to each other – code on GitHub and files on Dropbox or Amazon S3 appear side by side on the OSF – streamlining research and analysis processes and improving workflows.

Each project, file, and user on the OSF has a persistent URL, making content citable. The project in this screenshot can be found at https://osf.io/tvyxz.

Other features of the OSF incentivize researchers to open up their data and materials. Each project, file, and user is given a globally unique identifier – making all materials citable and ensuring researchers get credit for their work. Once materials are publicly available, the authors can access statistics detailing the number of views and downloads of their materials, as well as geographic information about viewers. Additionally, the OSF applies the idea of “forks,” commonly used in open source software development, to scientific research. A user can create a fork of another project to indicate that the new work builds on the forked project or was inspired by it. A fork serves as a functional citation; as the network of forks grows, the interconnectedness of a body of research becomes apparent.
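To illustrate what those persistent identifiers make possible in practice, here is a minimal sketch that looks up the public metadata of a project by its GUID. It assumes the OSF’s public JSON API (version 2) at api.osf.io and a JSON:API-style response; treat the endpoint and field names as best-effort assumptions rather than quoted documentation.

```python
# Minimal sketch: fetch public OSF project metadata by GUID.
# Assumes the public OSF JSON API v2 (https://api.osf.io/v2/) returns a
# JSON:API document of the form {"data": {"attributes": {...}}}.
import requests

OSF_API = "https://api.osf.io/v2"


def fetch_node_attributes(guid: str) -> dict:
    """Return the attribute block of a public OSF node (project)."""
    response = requests.get(f"{OSF_API}/nodes/{guid}/", timeout=30)
    response.raise_for_status()
    return response.json()["data"]["attributes"]


if __name__ == "__main__":
    # "tvyxz" is the GUID from the screenshot caption above.
    attributes = fetch_node_attributes("tvyxz")
    print(attributes.get("title"), attributes.get("date_created"))
```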

Openness and transparency about the scientific process inform the development of best practices for reproducible research. The OSF seeks both to enable that transparency – by taking care of “behind the scenes” logging and versioning without added burden on the researcher – and to improve overall efficiency for researchers and their daily workflows. By providing tools for researchers to easily adopt more open practices, the Center for Open Science and the OSF seek to improve openness, transparency, and – ultimately – reproducibility in scientific research.

Building an archaeological project repository I: Open Science means Open Data

- February 27, 2014 in Guest Post, Research

This is a guest post by Anthony Beck, Honorary Fellow, and Dave Harrison, Research Fellow, at the University of Leeds School of Computing.

In 2010 we authored a series of blog posts for the Open Knowledge Foundation subtitled ‘How open approaches can empower archaeologists’. These discussed the DART project, which is on the cusp of concluding.

The DART project collected large amounts of data, and as part of the project, we created a purpose-built data repository to catalogue this and make it available, using CKAN, the Open Knowledge Foundation’s open-source data catalogue and repository. Here we revisit the need for Open Science in the light of the DART project. In a subsequent post we’ll look at why, with so many repositories of different kinds, we felt that to do Open Science successfully we needed to roll our own.
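To give a flavour of what a CKAN-backed repository enables, the sketch below searches a catalogue through CKAN’s standard Action API. The instance URL is a placeholder rather than the address of the DART repository; the package_search action and response shape follow CKAN’s API version 3, but the details should be read as an illustrative assumption.

```python
# Illustrative sketch: query a CKAN data catalogue via its Action API.
# CKAN exposes actions under /api/3/action/; package_search returns
# {"success": true, "result": {"count": N, "results": [...]}}.
import requests

CKAN_URL = "https://ckan.example.org"  # placeholder, not the actual DART portal address


def search_datasets(query: str, rows: int = 10) -> list:
    """Return dataset records whose metadata matches the query string."""
    response = requests.get(
        f"{CKAN_URL}/api/3/action/package_search",
        params={"q": query, "rows": rows},
        timeout=30,
    )
    response.raise_for_status()
    payload = response.json()
    return payload["result"]["results"] if payload.get("success") else []


if __name__ == "__main__":
    for dataset in search_datasets("geophysics"):
        print(dataset["name"], "-", dataset.get("title", ""))
```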

Open data can change science

Open inquiry is at the heart of the scientific enterprise. Publication
of scientific theories – and of the experimental and observational data
on which they are based – permits others to identify errors, to support,
reject or refine theories and to reuse data for further understanding
and knowledge. Science’s powerful capacity for self-correction comes from
this openness to scrutiny and challenge. (The Royal Society,
Science as an open enterprise, 2012)

The Royal Society’s report Science as an open enterprise
identifies how 21st century communication technologies are changing
the ways in which scientists conduct, and society engages with,
science. The report recognises that ‘open’ enquiry is pivotal for the
success of science, both in research and in society. This goes beyond
open access to publications (Open Access), to include access
to data and other research outputs (Open Data), and the
process by which data is turned into knowledge (Open
Science).

The underlying rationale of Open Data is this: unfettered access to large amounts of ‘raw’ data enables patterns of re-use and knowledge creation that were previously impossible. The creation of a rich, openly accessible corpus of data introduces a range of data-mining and visualisation challenges, which require multi-disciplinary collaboration across domains (within and outside academia) if their potential is to be realised. An important step towards this is creating frameworks which allow data to be effectively accessed and re-used. The prize for succeeding is improved knowledge-led policy and practice that transforms communities, practitioners, science and society.

The need for such frameworks will be most acute in disciplines with
large amounts of data, a range of approaches to analysing the data,
and broad cross-disciplinary links – so it was inevitable that they
would prove important for our project, Detection of Archaeological
residues using Remote sensing Techniques (DART).

DART: data-driven archaeology

DART aimed to develop analytical methods to differentiate archaeological sediments from non-archaeological strata on the basis of remotely detected phenomena (e.g. resistivity, apparent dielectric permittivity, crop growth, thermal properties, etc.). The data collected by DART are of relevance to a broad range of different communities. Open Science was adopted with two aims:

  • to maximise the research impact by placing the project data and
    the processing algorithms into the public sphere;
  • to build a community of researchers and other end-users around the
    data so that collaboration, and by extension research value, can be
    enhanced.

‘Contrast dynamics’, the type of data provided by DART, is critical for policy makers and curatorial managers to assess both the state and the rate of change in heritage landscapes, a need wrapped up in national commitments to the European Landscape Convention (ELC). Making the best use of the data, however, depends on openly accessible dynamic monitoring, along similar lines to that proposed by the European Space Agency for the Global Monitoring for Environment and Security (GMES) satellite constellations. What is required is an accessible framework which allows all this data to be integrated, processed and modelled in a timely manner. The approaches developed in DART to improve the understanding and enhance the modelling of heritage contrast detection dynamics feed directly into this long-term agenda.

Cross-disciplinary research and Open Science

Such approaches cannot be undertaken within a single domain of
expertise. This vision can only be built by openly collaborating with
other scientists and building on shared data, tools and techniques.
Important developments will come from the GMES community, particularly
from precision agriculture, soil science, and well documented data
processing frameworks and services. At the same time, the information
collected by projects like DART can be re-used easily by others. For
example, DART data has been exploited by the Royal Agricultural
University (RAU) for use in such applications as carbon sequestration
in hedges, soil management, soil compaction and community mapping.
Such openness also promotes collaboration: DART partners have been
involved in a number of international grant proposals and have
developed a longer term partnership with the RAU.

Open Science advocates opening access to data, and other scientific
objects, at a much earlier stage in the research life-cycle than
traditional approaches. Open Scientists argue that research synergy
and serendipity occur through openly collaborating with other
researchers (more eyes/minds looking at the problem). Of great
importance is the fact that the scientific process itself is
transparent and can be peer reviewed: as a result of exposing data and
the processes by which these data are transformed into information,
other researchers can replicate and validate the techniques. As a
consequence, we believe that collaboration is enhanced and the
boundaries between public, professional and amateur are blurred.

Challenges ahead for Open Science

Whilst DART has not achieved all its aims, it has made significant progress and has identified some barriers to achieving such open approaches. Key to this is the articulation of issues surrounding data access (accreditation), licensing and ethics. Who gets access to data, when, and under what conditions, is a serious ethical issue for the heritage sector. These are obviously issues that need co-ordination through organisations like Research Councils UK, with cross-cutting input from domain groups. The Arts and Humanities community produce data and outputs with pervasive social and ethical impact, and it is clearly important that they have a voice in these debates.

Open Scholar Foundation

- December 6, 2013 in Announcements, Guest Post, Reproducibility, Research, Tools

This is a guest post from Tobias Kuhn of the Open Scholar Foundation. Please comment below or contact him via the link above if you have any feedback on this initiative!


The goal of the Open Scholar Foundation is to improve the efficiency of scholarly communication by providing incentives for researchers to openly share their digital research artifacts, including manuscripts, data, protocols, source code, and lab notes.

The proposal of an “Open Scholar Foundation” was one of the winners of the 1K challenge of the Beyond the PDF conference. This was the task of the challenge:

What would you do with 1K that would significantly advance scholarly communication that does not involve building a new software tool?

The idea was to establish a committee that would certify researchers as “Open Scholars” according to given criteria. This was the original proposal:

I would set up a simple “Open Scholar Foundation” with a website, where researchers can submit proofs that they are “open scholars” by showing that they make their papers, data, metadata, protocols, source code, lab notes, etc. openly available. These requests are briefly reviewed, and if approved, the applicant officially becomes an “Open Scholar” and is entitled to show a banner “Certified Open Scholar 2013” on his/her website, presentation slides, etc. Additionally, there could be annual competitions to elect the “Open Scholar of the Year”.

An alternative approach (perhaps more practical and promising) would be to provide a scorecard for researchers to calculate their “Open Scholar Score” on their own. There is an incomplete draft of such a scorecard in the github repo here.
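To make the scorecard idea concrete, here is a small, purely hypothetical sketch of how such a self-assessed score might be computed. The criteria and weights are invented for illustration and are not taken from the draft scorecard in the repository.

```python
# Hypothetical sketch of an "Open Scholar Score" self-assessment.
# The criteria and weights below are illustrative only; the real draft
# scorecard in the project's GitHub repository may differ entirely.
from dataclasses import dataclass


@dataclass
class OpenPractices:
    open_access_papers: int   # papers available via OA venues or repositories
    total_papers: int
    shares_data: bool
    shares_code: bool
    shares_protocols: bool
    keeps_open_lab_notes: bool


def open_scholar_score(p: OpenPractices) -> float:
    """Return a 0-100 score; half from OA publishing, half from sharing habits."""
    oa_ratio = p.open_access_papers / p.total_papers if p.total_papers else 0.0
    sharing = [p.shares_data, p.shares_code, p.shares_protocols, p.keeps_open_lab_notes]
    sharing_ratio = sum(sharing) / len(sharing)
    return round(50 * oa_ratio + 50 * sharing_ratio, 1)


if __name__ == "__main__":
    example = OpenPractices(open_access_papers=6, total_papers=10,
                            shares_data=True, shares_code=True,
                            shares_protocols=False, keeps_open_lab_notes=False)
    print(open_scholar_score(example))  # 55.0
```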

In any case, this project should lead to an established and recognized foundation that motivates scholars to openly share their data and results. Being a certified Open Scholar should be something that increases one’s reputation and visibility, and should act as a counterweight to the possible benefits of keeping data and results secret. The criteria for Open Scholars should become more strict over time, as the number of “open-minded” scholars hopefully increases over the years. This should go on until, eventually, scholarly communication has fundamentally changed and does not require this special incentive anymore.

It is probably a good idea to use Mozilla Open Badges for these Open Scholar banners.

We are at the very beginning with this initiative. If you are interested in joining, get in touch with us! We are open to any kind of feedback and suggestions.

Open science & development goals: shaping research questions

- September 13, 2013 in Collaborations, events, External Meetings, Guest Post, Meetings, Research

This is cross-posted from the OpenUCT blog.

What do we include in our definition of open science? And what is meant by development? Two key questions when you’re discussing open science for development, as we were yesterday on day one of the IDRC OKFN-OpenUCT Open Science for Development workshop.

Participants from Africa, Asia, and Latin America and the Caribbean have gathered at the University of Cape Town in an attempt to map current open science activity in these regions, strengthen community linkages between actors and articulate a framework for a large-scale IDRC-funded research programme on open science. The scoping workshop aims to uncover research questions around how open approaches can contribute to development goals in different contexts in the global South. Contextualization of open approaches and the identification of their key similarities and differences is critical in helping us understand the needs and required frameworks of future research.

Several key themes, which generally provided more questions than answers, came up throughout a day packed with presentations, discussion and debate: strategic tensions, inequalities, global power dynamics, and the complexity of distilling common challenges (and opportunities) over large geographical areas. Some of the key strategic tensions identified include the balance between the “doing” of open science as opposed to researching it, as well as the tension between high-quality research and capacity building at an implementation level. Both tensions are centred on inextricably linked components which are important in their own right. This brings up the question: where should the focus be? Where is it most relevant and important?

The issue of inequality and inclusivity also featured strongly in the discussions, particularly around citizen science – by involving people in the research process, you empower them before they are affected. But this raises the questions: How open should citizen science be? Who takes the initiative and sets goals? Who is allowed to participate, and in what roles? With regard to knowledge, a small number of countries and corporate entities act as gatekeepers of the knowledge produced globally. How should this knowledge be made more accessible? Will open scientific approaches make dialogue and knowledge distribution more inclusive?

By the end of the first day’s discussion, the workshop had surfaced opportunities and challenges for each of the regions, but many questions still remain in terms of how to address the complex issues at hand and bring together the disparate components of open scientific activity. Day two of the workshop will be focused on the articulation of research problems, possible areas of activity and the structure of the envisioned research programme.

Join the discussion on Twitter via #OpenSciDev.

by SarahG (Pictures by Uvania Naidoo)

Open Access Button Hackday, 7-8 Sep, London, UK

- September 1, 2013 in External Meetings, Guest Post, Hackday, Tools

This is a guest post from Joe and David from the Open Access Button project.


Millions of people a day are denied access to the research they both need and have paid for because of paywalls. It doesn’t have to be like this, but we need your help. We’re two students from the UK making a tool to help change the system – it’s called the Open Access Button. The button is a browser-based tool which tracks every time someone is denied access to a paper. We then display this, along with the person’s location, profession and story, on a real-time, worldwide, interactive map of the problem.

It gets better though. While creating pressure to open up scholarly and scientific research, we help people work within the current broken system by helping them get access to the papers they need. We started building a prototype at the BMJ Hack Weekend and came third. But we’re not finished yet, and our launch is coming up fast! To help build it we’re hosting a hackathon on the 7-8th of September in London. If you’re a developer, have an eye for design, or both, we’d love to see you. Not in the UK? It doesn’t matter – you can join in from anywhere in the world – just sign up below.

If you want any more information about the project, email us at oabutton@gmail.com or read more here.

Announcing Open Science Finland

- March 25, 2013 in Announcements, Guest Post, Meetings

This is a guest post by Antti Poikola of Open Science Finland.

Open Science Finland Group kickstarted in Tuusula, Feb 2013.

The Open Science Finland working group was kickstarted at the OKF Finland Convention in Tuusula, 8-9.2.2013. The intense round-table session attracted 15 open science enthusiasts despite being the last session of the evening and competing against the parallel sauna session! The working group was subsequently officially established under the Open Knowledge Finland association (ry.).

Finland has already seen activity in promoting open access. The Finnish open access group FinnOA, for instance, promotes openness across the whole spectrum of scientific knowledge production. The publicly funded “Tutkimuksen tietoaineistot” (Research Data Resources) project supports a more open science policy and the building of the necessary open research data infrastructure.

A wider cultural change in the academic community is needed, however, to establish open practices as part of standard research practice. While many researchers favour open practices in science, truly coordinated efforts are still lacking at the national level. We believe that strengthening social networks through the open science network will facilitate the movement. Therefore, instead of focusing our limited resources on direct lobbying, we focus on community-driven activities and community building: we have established a (rather vibrant!) Facebook group for discussions and a Kippt link list for sharing Open Science related links, and the group has rapidly grown to connect nearly 100 people.

Some of our current key activities include preparation of Finnish teaching material on open research, research-oriented software libraries, and mapping of the scattered national activities around the topic. If you’re involved in research, working in Finland and/or just interested in the topic, don’t hesitate to join us!

Open Science Course Sprint: An Education Hackathon for Open Data Day

- February 11, 2013 in External Meetings, Guest Post, Hackday

A blog entry by Billy Meinke cross-posted from the Creative Commons blog.

An Education Sprint

The future of Open is a dynamic landscape, ripe with opportunities to increase civic engagement, literacy, and innovation. Towards this goal, the Science Program at Creative Commons is teaming up with the Open Knowledge Foundation and members of the Open Science Community to facilitate the building of an open online course, an Introduction to Open Science. The actual build will take place during a hackathon-style “sprint” event on Open Data Day on Saturday, February 23rd and will serve as a launch course for the School of Open during Open Education Week (Mar 11-15).


Want to help us build this?

The course will be open in its entirety, with the building process and content all available to be worked on, to help people learn about Open Science. Do you know a thing or two about Open Access? Are you a researcher who’s practicing Open Research? Do you have experience in instructional or visual design? This is an all-hands event and will be facilitated by representatives at CC, OKFN, and others in the Community. Open Science enthusiasts in the Bay Area are invited to the CC Headquarters in Mountain View for the live event. Remote participants will also be able to join and contribute online via Google Hangout.

The day will begin with coffee, refreshments and a check-in call with other Open Data Day Hackathons happening around the globe. The Open Science Community is strengthened by shared interests and connections between people, which we hope will grow stronger through networked events on Open Data Day. The Open Science course sprint at CC HQ will build upon open educational content, facilitate the design of challenges for exploration, and provide easy entry for learners into concepts of Open Access, Open Research, and Open Data. It will be done in a similar fashion to other “sprint-style” content-creation events, with lunch and refreshments provided for in-person participants. We’re literally going to be hacking on education. Sound like something you’d be interested in?

Join us.

For details about the ways you can participate, see the Eventbrite page here.
To see the draft (lightly framed) course site on Peer to Peer University, go here.
For information about other Open Data Day events, see the events wiki here.

Opendataday.org/map

Developers

We need you, too! Basic skills for working with open datasets are important, and can be difficult to grasp. Who better to develop great lessons about working with data than you? Similarly, for those interested in building upon apps and projects from other Open Data events, updated source code and repository information will be posted to a public feed (for now, follow the hashtags #ODHD13 and #opendataday on Twitter).

For other information, contact billy dot meinke at creative commons dot org or @billymeinke.

This event is being organized by the Science Program at Creative Commons with support from the Open Knowledge Foundation and members of the Open Science Community.

Making Open Science Possible – Global Young Academy statement on Open Science

- November 28, 2012 in Guest Post

The following is a statement by the Global Young Academy.

The Open Science movement – giving free Internet access to scientific results and data – is a revolutionary development in the way science is made public. It has profound implications for the way in which libraries, data centres, researchers, universities, publishers, and funding bodies operate and interact. Most significantly, it offers opportunities to foster collaboration between scientists in the developed and developing world, as well as between scientists and interested non-scientists. Recent examples can be seen in the ‘Galaxy Zoo’ project, where the public can help astrophysicists classify images from the Hubble telescope, or the ‘open source malaria drug discovery program’, a network of scientists openly sharing drug development data. With initiatives like these, Open Science may foster the transformation of scientific research from a primarily academic, First World activity to a truly global endeavour.

As the Open Science movement evolves, young scientists need to play an active role in shaping its future. Early career researchers are often on the frontline of knowledge creation, and involving them ensures they have a say in how and where the data is distributed. If the Open Science movement is to truly take hold, it will require young scientists to adopt new ways of disseminating the results of research, and to carry these forward as their careers mature.

Despite the promise of positive change, several obstacles stand in the way of realising Open Science, ranging from practical to institutional features of contemporary science practices. Chief among these are:

  1. financial stability: a new model for sharing research results must be one that is financially sustainable in the long term. Publishing houses, institutions and scientists must work together to develop a systemic, fair way to disseminate research which protects poorly funded research fields and groups as well as developing countries. It should not put the financial burden of publishing squarely on the shoulders of the authors.
  2. scientific sustainability: traditional criteria to evaluate scientific success do not recognise and reward scientific efforts to share data and publications through open access platforms. If we want open science to be possible, these criteria need to be revised so that all high-quality contributions to the development of scientific research are recognised and rewarded. At the same time, open science requires a publishing model that limits the overabundance of information and helps to avoid a data deluge. Too many, or unmanageable, publications and publication-supporting data make open science untenable.
  3. data sustainability: the creation of publicly accessible data archives presents problems of long-term storage. This is particularly urgent in the case of the high-volume, high-velocity, and/or high-variety datasets (‘big data’) obtained through recent technologies, which require new forms of processing to enable discovery. What digital formats should be used, and how should data be curated and organised so it can be accessed in the future? What happens if the commercial or government organisations tasked with maintaining such archives become defunct?
The Global Young Academy feels that the broad aims of the Open Science movement are in the best interest of young scientists, and in the best interest of science itself. Therefore we advocate:
  • That publishers and funding agencies work towards a publishing model that allows free and public access to the results of publicly funded research. This access should be extended, free of charge, to those working in developing countries. Involving young scientists in developing such a model is a key factor in ensuring its long-term success.
  • That funding bodies and research institutions adequately recognise work published in open access journals and online, as well as work involved in collecting, curating and sharing information (whether data or papers), rather than treating journal impact factors as a suitable proxy for scientific excellence.
  • That funding bodies recognise and encourage the development of innovative Open Science projects by allocating funding to projects which embrace the tenets of the Open Science movement. Grant applications should not be penalised if the proposed project outcome is a publicly accessible data set rather than a publication in a conventional journal; the publication of both data and claims produced by any one project should be supported and rewarded.
  • That a long-term strategy for data storage and the maintenance of data archives must be developed. As the Open Science movement grows, governments, academics and publishing houses are starting to develop strategies to ensure data is freely available for future generations. The planning of future data storage, such as the ELIXIR initiative launched by the European Union, needs to involve early career researchers as well as senior academics. Young researchers are likely to have valuable knowledge of which types of data need preserving in the long term, and how this is best realised, given (1) the high stakes that these issues have for the development of their own careers, (2) their recent experiences in data gathering, and (3) their exposure to digital means of data dissemination, which is likely to be more extensive than that of academics who spent most of their career without these technologies.
The statement was prepared by Arianna Betti (NL), Sabina Leonelli (UK), Michael Sutherland (UK) and Martin Dominik (UK), and approved by the GYA EC in November 2012.
About GYA
The Global Young Academy, founded in 2010, serves as the voice of young scientists around the world. Members

 

The Tamiflu story: Why we need access to all data from clinical trials

- November 20, 2012 in Guest Post

**The [BMJ Open Data Campaign](http://www.bmj.com/tamiflu) has been attracting [a lot of attention](https://www.google.com/news?ncl=dVg2lA2EQy3xZ4MtX-yZdTTIgTxTM&q=tamiflu&lr=English&hl=en). Here Dr Tom Jefferson, one of the people whose attempts to provide reliable information on the anti-flu drug Tamiflu kicked the campaign off, tells the story of how we got here.**

We started working on a Cochrane review of neuraminidase inhibitors in 1998. [Cochrane reviews](http://www.cochrane.org/cochrane-reviews) are studies summing up what is known of the effects of an intervention in healthcare. In this case the intervention was the class of drugs called neuraminidase inhibitors. At the time this comprised two anti-influenza compounds: zanamivir (sold as Relenza by GlaxoSmithKline) and oseltamivir (Tamiflu, by Roche).

The [Cochrane Collaboration](http://www.cochrane.org/) is a network of volunteers who do and update reviews. We don’t take money from pharma and we have to follow highly structured protocols which are posted publicly before we start work. Comments by any reader can be posted on the protocol or the full review at any time. We have to respond.

Rightly or wrongly, our reviews are considered the gold standard for evidence-based decision making. It’s hard work, as we have to update our reviews every two years or so.

In 2009, our review was in its third update, the world was in the throes of an influenza pandemic (or so WHO was telling us) and we received a letter from a Japanese paediatrician. He wanted to know how it was possible that in our 2005 update we had included 8 unpublished Tamiflu trials contained in extreme summary form within another review funded by Roche and carried out by Roche staff and consultants. How could we possibly have done that when we had not seen the original studies? We asked the two Roche consultants for the data. They told us to go and ask Roche. We did. **They asked us to sign a confidentiality agreement with a secrecy clause. We said no thank you.** Once the very powerful medical journal BMJ got involved with ITV Channel 4, they promised us full study reports, but gave us only the first chapter of the 10 trials. In the meantime we discovered many more trials (the list has grown from 26 to 123 – the vast majority Roche-sponsored). We asked for all the completed Roche trial reports. Roche gave us a variety of reasons why they would not share the data with us. You can read about those [here](http://bit.ly/HIbwqO).

At the end of 2010 the European regulator, the EMA, accepted a ruling by the European Ombudsman that trial data for drugs on which a regulatory decision had been made should be accessible. They opened their archives. We received incomplete reports for 16 Tamiflu trials – all they had.

We published half of this (and 2000 pages of FDA comments on Tamiflu) in the 2012 version of our Cochrane review. One consequence of our access to this bonanza of regulatory material has been a comparison between the details and broad message of the few published trials and their much more detailed regulatory reports. Apart from discrepancies in reporting harms and some less-than-detailed aspects of study design, we think **the mode of action of the drug is not what the manufacturer says, and (like the FDA) we could not find any evidence supporting a number of effects of the drug (including those for which it was stockpiled)**.

But we do not know for sure because **we do not have all the data.** The practical result of all this is our refusal to consider published trials (either on their own or as part of reviews) for inclusion in our reviews. There are signs that this distrust of the published word is spreading.

Meanwhile what started as a comment from a Japanese colleague has turned into a global campaign for access to data from trials. You can read about that [here](http://www.bmj.com/content/345/bmj.e7304). The BMJ set up a Tamiflu micro site on BMJ.com with lots of goodies including our correspondence with Roche, WHO and CDC: [http://www.bmj.com/tamiflu](http://www.bmj.com/tamiflu). WHO and CDC are the two biggest promoters of Tamiflu. If you have time, do read the correspondence. Your time will not be wasted, I promise.

And what about GlaxoSmithKline? Some of the recent hype suggested that, after its record fine in the US, GSK would open its archives to researchers, albeit with the intermediary of an academic committee scrutinizing the worthiness of the analysis plans in the application. Whether this is a genuine breakthrough or a clever piece of marketing remains to be seen. My group is yet to receive any data from them. Despite the plaudits, I remain unconvinced, as I refuse to receive data with any conditions attached to them – such as exclusivity or bans on sharing.

Trials are experiments conducted on human beings. Full reporting of their results (anonymised to prevent individuals being identified) should be a right, not a gift. Your doctor should be in possession of all the facts. Think about that next time he prescribes something for you.

Watch Tom tell the story: