Planet – OKF Open Science Working Group http://science.okfn.org

Where's Me Support?! http://senseopenness.com/wheres-me-support/ Tue, 17 Nov 2015 14:13:24 +0000

Over the past two-plus years, I have started many projects within the Open * communities that I'm a part of. Most of these projects were meant to be worked on by two or more people (including me, of course), but I never had luck getting anyone to work together with me. Okay, it succeeded once, and two or three other times it came close but still failed. The one time it succeeded was because I was on the Membership Board, where the members had to be committed.

Because many projects meant for collaboration failed, it means either that the communities don't have enough people willing to work with me (or on anything!), or able to make the time commitment, or that I have networking issues. The latter is within my control; the former is one of the problems that most of the Open * communities face.

The lack of support, and the feeling of not getting things done over these two-plus years, is making me lose motivation to volunteer within these communities. In fact, some of this has already affected four teams within the Ubuntu Community: Ubuntu Women, Ubuntu Ohio, the Ubuntu Leadership Team, and Ubuntu Scientists show no news or activity at all. As for the others, I'm close to removing myself from those communities, something that I don't want to do, and that is why I wrote this post. It's to answer my question: Where's my support?! ("me" in the title is just for the lightheartedness that this post needs.) I know of a few others who may be feeling this too.

A thought I had while writing this post: what if I worked on a site that could serve as a volunteer board for projects within the Open * communities? Something like "Find a Task", started by Mozilla (I think) and brought over to the Ubuntu Community by Ian W, but maybe as a Discourse forum or a Stack Exchange site. The only problem I would face is, again, support, from people willing to post and to read. I had issues getting Open Science groups/bloggers/people to add their blog's feed to Planet Open Science, hosted by OKFN's Open Science working group. But that might be different if almost all types of Open * movements were represented. Who knows.

Readers, please don't worry: although this post was written during the CC election in the Ubuntu Community, it will not affect my will to run for a seat. In fact, I think being on the CC could help me learn to deal with this issue, especially if others are facing it but are afraid to talk about it in public.

I really, really don't want to leave any of the Open * communities because of a lack of support, and I hope some of you can understand and help me. I would like your feedback/comments/advice on this one.

Thank you.

P.S. If this sounded like a rant, sorry, I had to get it out.

Open Science – Jetzt! https://okfn.at/2015/11/14/open-science-jetzt/ Sat, 14 Nov 2015 17:20:28 +0000

On 3 December 2015, an Open Science Lecture Series starts with a kick-off event at the University of Vienna. Daniel Mietchen will give a keynote before the subsequent panel discussion begins.

We invite you to the opening of the Open Science Lecture Series. Our guest, Daniel Mietchen, is one of the most internationally active figures in Open Science. He will talk about his current work at the US National Institutes of Health (NIH) on transparency in scientific peer review, as well as his numerous activities in Wikipedia.

Panelists:

  • Daniel Mietchen: US National Institutes of Health (NIH)
  • Katja Mayer: University of Vienna, Open Knowledge Austria
  • Lucia Malfent: Open Innovation in Science (Ludwig Boltzmann Gesellschaft)
  • Peter Purgathofer: researcher, university professor, and designer at the Institut für Gestaltungs- und Wirkungsforschung, and coordinator of the Media Informatics master's programme

When: 3 December 2015, starting at 7 pm
Where: Sky Lounge of the University of Vienna, Oskar-Morgenstern-Platz 1 (top floor), 1090 Vienna
Admission is free; registration is required

The Open Science lecture series, consisting of five sessions in total, is a cooperation between WTZ Ost, openscienceASAP, and Open Knowledge Austria.

Open * Communities Mindmap http://senseopenness.com/open-communities-mindmap/ Tue, 20 Oct 2015 17:59:19 +0000

As a brainstorm today (and also for my research), I created an insanely large, almost impossible to read/follow mindmap of what is out there in the Open * communities and, hopefully, of what should or could be focused on when developing communities:

[Image: Open_CommunitiesMindMap]

I broke up the sub-items within each major item by Open Source and Non-Open Source. To me, there is some difference between those two kinds of communities in how things are done and where the focus lies.

There are two things that I forgot to put on this map:

  • Meta Documentation (under tools for both Open Source and Non-Open Source)
  • Barrier to Entry (under problems for both)
Next Open Science MeetUp 'Open Science for a Better Collaboration' https://okfn.at/2015/10/05/next-open-science-meetup-open-science-for-a-better-collaboration/ Mon, 05 Oct 2015 21:09:21 +0000

We decided to organise our next Open Science MeetUp in the context of the upcoming Open Access Week 2015. We are proud to announce our special guest, who will join us at the MeetUp: Puneet Kishor (Creative Commons) will give a short talk and give us the opportunity to exchange ideas with him about Open Science and Citizen Science.

We plan to have a rather informal community meeting, with additional lightning talks on current activities, projects, and events by the Open Science working group as well as other interested people from the Austrian Open Science community. We kindly invite you to submit your idea for a lightning talk or any other contribution. The more the merrier! :) If you are interested in contributing to the meeting, please contact us.

At the end of the meeting you will have the opportunity to network and exchange ideas with the community. We are looking forward to the MeetUp and to a large group of attendees!

The meeting will take place on Monday, 19 October 2015, from 18:00 CET at Raum D, Museumsquartier, Museumsplatz 1, Vienna. See you there!

We also have a MeetUp page, and it would be nice if you registered there.

Save the Date: WissKomm Hackathon, 21 November, TU Wien https://okfn.at/2015/09/28/save-the-date-wisskomm/ Mon, 28 Sep 2015 11:40:26 +0000

Under the motto "communicating science in new ways", school students and young university students from different disciplines come together for one day to develop new, open ways of presenting science.

>> When: Saturday, 21 November 2015, 9 am – 8 pm

>> Where: TU Wien, Festsaal and Boecklsaal

>> Who: School students aged 17 and up, and university students who are interested in science, communication, and media.

>> How: Participation is free; food and drinks are provided (plenty of Mate!)

>> More info at wisskomm.at

The project is a cooperation between the Federal Ministry of Science, Research and Economy (BMWFW) and the HCI Group of the Institut für Gestaltungs- und Wirkungsforschung; Open Knowledge Austria is on board as a cooperation partner and supports the event with (wo)man power in organisation and communication.

>> Mentors wanted!

We are still looking for experts from the fields of computer science, media, design, and communication who will support the participants and help with designing and implementing science communication concepts, in return for food, drink, and a small honorarium.

If you are interested, feel free to send a no-obligation e-mail to sonja.fischbauer (at) okfn.at for more information.

We are already looking forward to seeing you!

[Image: LOGO_wisskomm_quadratisch]

photo credit: andy prokh

Starting Research: Looking at Building A Successful Non-Technical Open * Community http://senseopenness.com/starting-research-looking-at-building-a-successful-non-technical-open-community/ Mon, 14 Sep 2015 14:41:49 +0000

After a bunch of unsuccessful attempts to get some sort of project going within an Open Science community, I decided to start researching how to build a successful non-technical Open * community. I'm aware that it could just be a matter of time commitment, but I still think it would be worth learning how to build one.

I started a public project on the Open Science Framework. Most of my work (so far) is in the project's wiki. Right now, this is the plan that I will follow. At the moment, it looks like I will be focusing on the things that I learned/used/experienced in the Ubuntu Community, but it may expand into other topics.

I’m also planning to use Open Undergrad Research Foundation (OpenURF) to set up a experiment to see which tools are needed and how to use them.  But that will be later as the sever guy haven’t e-mail me back.

I will be using my blog for updates.

Afterthought: I really think it may just be a matter of time commitment, or not enough drivers. If that is the case, then I will start new research on how to fix that, if possible.

17000 Volunteers Contribute to a PhD http://daniellombrana.es/blog/2015/08/12/17k-users.html Wed, 12 Aug 2015 00:00:00 +0000

Doing a PhD is laborious, hard, demanding, exhausting... Your thesis is usually the result of blood, sweat and tears. And you are usually alone. Well, what would you say if I told you that a researcher got help from more than 17 thousand volunteers?

Yes, you've read that right: more than 17 thousand people helped Alejandro Sánchez do his research; as a result, he published his thesis and received the best possible mark: cum laude. Amazing, right?

But how did this happen? How did he manage to involve such a big crowd? I mean, most people think science is boring, tedious, difficult, add your own adjective here... Yet this guy managed to get 17 thousand people from all over the world to help him.

The best part? They did it because they wanted to help. No money involved! Just pure kindness.

In other words, the unexpected happened: by sharing his work and asking for help with his research (studying light pollution in cities), he managed to achieve the inconceivable: involving more than 17 thousand people in scientific research.

How did this start? Well, let's start from the beginning.

The beginning: laying down the ideas

This adventure started in 2014 in London, UK. I was participating in the Citizen Cyberscience Summit, and Alejandro was there because someone had told him to learn more about Crowdcrafting.

At the summit there was a workshop where scientists and hackers joined forces to create new citizen science projects. Wait, let me first explain what citizen science is, so we can enjoy the trip later on (like this kid, I promise).

Citizen science is the active contribution of people who are not professional scientists to science. It provides volunteers with the opportunity to contribute intellectually to the research of others, to share resources or tools at their disposal, or even to start their own research projects. Volunteers provide real value to ongoing research while they themselves acquire a better understanding of the scientific method.

In other words, citizen science opens the doors of laboratories and makes science accessible to all. It facilitates a direct conversation between scientists and enthusiasts who wish to contribute to scientific endeavor.

Now, with this idea in mind, let's get back to Alejandro's research.

At this workshop, Alejandro told me that he was studying light pollution in cities. He and his team had realized that the astronauts on the International Space Station take pictures of the Earth with a regular camera. Those pictures are then saved in a big archive. However, there are some issues:

  • The pictures could be of cities at night or during the day.
  • They take selfies too (who doesn't?).
  • The moon, stars, and Aurora Borealis are also pretty, so they photograph them too.
  • The archive has no ordering or filtering; everything is mixed together in there.

In summary, he needs night-time pictures of cities (sharp and without clouds), but the archive is a mess. It contains so many different photos and possible scenarios that algorithms cannot help him classify them (or, at a later stage, geolocate them). However, you and I are pretty good at identifying cities at night at a glance, so we decided to create a prototype on Crowdcrafting.

The first project was Dark Skies. We had the first prototype up in a few hours, and we basically asked people to help us classify the pictures into different categories (a sketch of how such tasks can be created through the API follows the list):

  • City at night
  • Aurora Borealis
  • Stars
  • None of these
  • Black
  • Astronaut
  • I don't know
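
For the curious: setting up this kind of classification project on Crowdcrafting boils down to creating one task per ISS photo through the PyBossa API. Below is a minimal sketch using the pybossa-client Python package; the endpoint is real, but the API key, project short name, photo URLs, and the task info keys are hypothetical placeholders, not the actual Dark Skies configuration.

import pbclient  # pip install pybossa-client

# Point the client at the Crowdcrafting server (the API key is a placeholder).
pbclient.set('endpoint', 'http://crowdcrafting.org')
pbclient.set('api_key', 'YOUR-API-KEY')

# Look up the project by its short name (hypothetical here).
project = pbclient.find_app(short_name='darkskies')[0]

# Create one task per ISS photo; volunteers then pick one of the
# categories listed above for each picture.
photo_urls = ['http://example.org/iss/photo-001.jpg',
              'http://example.org/iss/photo-002.jpg']
for url in photo_urls:
    pbclient.create_task(project.id, info={'url': url})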

The project was simple and fun. I remember really enjoying classifying beautiful pictures from the ISS. It made me feel like I was an astronaut, and I loved that feeling, so we shared it with our friends and colleagues.

We really believed in the project, especially Alejandro, so he invited me to meet his PhD advisor and his colleagues. We met and studied how we could improve it. As a result, two new projects were born in the following months: Lost at Night and Night Cities ISS.

The small announcement that became huge

After a lot of work, Alejandro thought that the projects were good enough to send to NASA and ESA. He wrote a press release and shared with them what we were doing.

In the beginning we thought they would ignore us, but something happened. It started like a tremor. With a tweet:

#Citizenscience at work RT @teleyinex: @esa thanks to your help on Twitter @cities4tnight has 3000 tasks classified in @crowdcrafting

— ESA (@esa) July 10, 2014

Then, almost one month later NASA wrote a full article about the project and tweeted about it:

Space station sharper images of Earth at night crowdsourced for science: http://t.co/bHBiLwvZSv #ISS pic.twitter.com/bL9LymQ6cq

— NASA (@NASA) August 14, 2014

That was the spark: from that moment on, everything exploded! The project was covered internationally by the press. Media like Fox TV, Gizmodo, CNN, and others shared the project and invited people to help.

Thanks to this coverage, in just one month we were able to classify more than 100 thousand images. One day the Crowdcrafting servers stored more than 1.5 answers per second!

The calm after the storm

As with any press coverage, after a few weeks everything went back to normal. However, lots of people kept coming and helping Alejandro's projects.

For over a year we kept fixing bugs, adding new tasks, answering questions from volunteers, sharing progress, etc. In July, Alejandro defended his thesis based on all this work. Amazing!

For my part, I'm very happy and proud about it, for two reasons. First, while the thesis has been presented, the projects keep going.

At the time of writing, the Dark Skies project has classified almost 700 images in the last 15 days. Amazing!

The other two projects have less activity, as they are more complicated. Lost at Night has located more than 200 photos on a map, and Night Cities ISS has geo-referenced almost 25 pictures.

Second, because this is the very first thesis that uses PyBossa and Crowdcrafting to do open research. I'm impressed, and I think this is just the beginning, with many more researchers doing their research in the open and inviting society to take part in it.

The future? Well, Alejandro has launched a Kickstarter campaign to get financial support to keep the research going. If he gets it, more data will be analyzed, new results will be produced, and it will help keep Crowdcrafting and PyBossa running. So, if you like the project, help Alejandro build the most beautiful atlas of the Earth at night!

The Art of Graceful Reloading http://daniellombrana.es/blog/2015/07/01/the-art-of-graceful-reloading.html Wed, 01 Jul 2015 00:00:00 +0000

The holy grail for web developers is doing deployments without interrupting your users. In this blog post I explain how we have achieved this using uWSGI's Zerg mode on our Crowdcrafting servers.

In a previous post I already said that I love uWSGI. The main reason? You can do lots of nice tricks in your stack without having to add other layers to it, for example: graceful reloading.

The uWSGI documentation is really great, and it covers most graceful-reloading scenarios; however, due to our current stack and our auto-deployments solution, we needed something that integrated well with the so-called Zerg dance.

Zerg Mode

Zerg mode is a nice uWSGI feature that allows you to run your web application by passing file descriptors over Unix sockets. As stated in the official docs:

Zerg mode works by making use of the venerable “fd passing over Unix sockets” technique.

Basically, an external process (the zerg server/pool) binds to the various sockets required by your app. Your uWSGI instance, instead of binding by itself, asks the zerg server/pool to pass it the file descriptor. This means multiple unrelated instances can ask for the same file descriptors and work together.

This is really great, as you only need to enable a Zerg server and then you are ready to use it.

As we use Supervisor, configuring uWSGI to run as a Zerg server is really simple:

[uwsgi]
master = true
zerg-pool = /tmp/zerg_pool_1:/tmp/zerg_master.sock
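
Since Supervisor manages the processes, the Zerg pool can run as its own long-lived program. Here is a minimal sketch of the supervisord stanza, assuming the pool config above lives in its own file; the paths and program name are illustrative, not our exact production setup:

[program:zergpool]
; A bare uWSGI master whose only job is to hold the shared sockets
; and hand their file descriptors to any instance that attaches.
command = uwsgi --ini /etc/uwsgi/zerg_pool.ini
autostart = true
autorestart = true
stopsignal = QUIT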

Then, you configure your web application to use the zerg server:

[uwsgi]
zerg = /tmp/zerg_master.sock

And you are done! That will configure your server to run in Zerg mode. However, we can configure it to handle reloading in a more useful way: keeping a binary copy of the previously running instance, pausing it, and deploying the new code on a new Zerg. This is known as the Zerg dance, so let's dance!

Zerg Dance

With the Zerg dance we can do deployments while users keep using the web application, as the Zerg server will always handle their requests properly.

The neat trick is that uWSGI handles those requests by pausing them, so the user just thinks the site is getting slower while the new deployment takes place. As soon as the new deployment is running, it moves the "paused requests" over to the new code and keeps the old copy in case you broke something. Nice, right?

To achieve this, all you have to do is use three different FIFOs in uWSGI. Why? Because uWSGI can have as many master FIFOs as you want, allowing you to pause Zerg instances and move between them. This feature lets us keep a binary copy of the previously deployed code on the server, which you can pause/resume and fall back to when something goes wrong.

This is really fast. The only issue is that you'll need more memory on your server, but I think it's worth it, as you'll be able to roll back a deployment with just two commands (we'll see that in a moment).

Configuring the 3 FIFOs

The documentation has a really good example. All you have to do is add three FIFOs to your web application's uWSGI config file:

[uwsgi]
; fifo '0'
master-fifo = /var/run/new.fifo
; fifo '1'
master-fifo = /var/run/running.fifo
; fifo '2'
master-fifo = /var/run/sleeping.fifo
; attach to zerg
zerg = /var/run/pool1
; other options ...

; hooks

; destroy the currently sleeping instance
if-exists = /var/run/sleeping.fifo
  hook-accepting1-once = writefifo:/var/run/sleeping.fifo Q
endif =
; force the currently running instance to become sleeping (slot 2) and place it in pause mode
if-exists = /var/run/running.fifo
  hook-accepting1-once = writefifo:/var/run/running.fifo 2p
endif =
; force this instance to become the running one (slot 1)
hook-accepting1-once = writefifo:/var/run/new.fifo 1

After the FIFOs there is a section where we declare some hooks. These hooks automatically handle which FIFO has to be used when a server is started again.

The usual workflow is the following:

  • You start the server.
  • There is no sleeping or running FIFO, so those conditions fail.
  • Therefore, once the server is ready to accept requests (thanks to hook-accepting1-once), it moves itself from new.fifo to running.fifo.

Right now you have a server running as before. Now imagine you have to change something in the config, or you have a new deployment. You make the changes and start a new server with the same uWSGI config file. This is what happens:

  • You start the second server.
  • There is no sleeping FIFO, so this condition fails.
  • There is a running FIFO, so this condition is met. Thus, the previous server is moved to the sleeping FIFO and is paused once the new server is ready to accept requests.
  • Finally, once the server is ready to accept requests, it moves itself from new.fifo to running.fifo.

At this moment we have two servers: one running (the new one, with your new code or config changes) and the old one, which is paused and only consuming some memory.

Now imagine you realize that you have a bug in your newly deployed code. How do you recover from this situation? Simple!

You just pause the new server and unpause the previous one. How do you do it? Like this:

echo 1p > /var/run/running.fifo
echo 2p > /var/run/sleeping.fifo

Our setup

With our auto-deployments solution, we needed to find a simple way to integrate this feature with Supervisor. In the previous example you do the deployment manually, but we want everything automated.

How have we achieved this? Simple! By using two PyBossa servers within Supervisor.

We have the default PyBossa server, and another one named pybossabak in Supervisor.

When a new deployment happens, the auto-deployments solution boots the PyBossa backup server just to keep a copy of the running state of the server. Then it pulls in all the new changes, applies patches, etc., and restarts the default server (a sketch of the equivalent commands follows the list below). This procedure triggers the following:

  • Start the backup server: this moves the currently running PyBossa server to the pause FIFO, so we have a copy of it.
  • The backup server accepts the requests, so users don't notice anything wrong.
  • Auto-deployments applies the changes to the source code, updates libraries, etc.
  • Then, it restarts the default PyBossa server (note: as far as Supervisor is concerned, the paused PyBossa server is still running).
  • This restart moves the previous backup server to the pause FIFO (it holds the old code), and boots the new code into production.
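
As mentioned above, here is a minimal sketch of that automated sequence as shell commands, assuming the Supervisor program names pybossa and pybossabak described earlier; the repository path and update steps are placeholders:

# 1. Boot the backup server; through the FIFO hooks this pauses the
#    currently running instance and keeps it as a binary copy.
supervisorctl start pybossabak

# 2. Apply the new code (placeholder path and steps).
cd /srv/pybossa && git pull && pip install -r requirements.txt

# 3. Restart the default server; the hooks park the backup on the
#    sleeping FIFO and put the new code into production.
supervisorctl restart pybossa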

If something goes wrong with the new changes, all we have to do is pause the current server and resume the previous one.

This is done by hand, as we want to have control over this specific step, but overall we are always covered when doing deployments automatically. We only have to click the Merge button on GitHub to do a deployment, and we know a binary backup copy is held in memory in case we make a mistake.

Moreover, the whole process of having uWSGI move users' requests from one server to another is great!

We've seen some users get a 502, but that's because their request arrives while the file descriptor is being moved to the new server. Obviously, this is not 100% bulletproof, but it's much better than showing all your users a maintenance page while you do the upgrade.

We've been using this new workflow for a few weeks now, and all our production deployments are done automatically. Since we adopted this approach we haven't had any issues, and we are more focused on developing code. We spend less time handling deployments, which is great!

In summary: if you are using uWSGI, use the Zerg Dance, and enjoy the dance!

Building A Non-Technical Community Around the OSF and the Goals http://senseopenness.com/building-a-non-technical-community-around-the-osf-and-the-goals/ Thu, 11 Jun 2015 15:02:26 +0000

This is an old, unpublished post that I never got around to publishing for some reason…

In the last week of March, I started to think about how the Open Science Framework (OSF) could foster a non-technical community. At first, I thought only of advocacy and of teaching the scientific process. But after the response from Brian Nosek (reply #2) about how scientists don't know how to get on board with using it, the idea of community-generated use cases/case studies came to light. That thread can be found here.

I wrote a mission statement and started a project around undergrad researchers' (and their PIs') usage of the OSF.

Anyone can join in to help, either through the threads or through the framework project itself.
