Sune Vuorela: Debconf 2015 – 1

Planet Debian - Sat, 15/08/2015 - 07:59

When greeted by Clint with one single word: “kamelåså“, one has arrived at DebConf.

Categories: Elsewhere

Drupal core announcements: This Month in Drupal Documentation - August 2015

Planet Drupal - Sat, 15/08/2015 - 06:51

It's been a busy month in Drupal documentation land. Here's an update from the Documentation Working Group (DocWG) on what has been happening in Drupal Documentation in the last month or so. Sorry... because this is posted in the Core group as well as Documentation, comments are disabled.

If you have comments or suggestions, please see the DocWG home page for how to contact us. Thanks!

Notable Documentation Updates

(pages or sections that have been worked on recently, see notes below)

See the DocWG home page for how to contact us, if you'd like to be listed here in our next post!

Thanks for contributing!

(list of how many people have made updates in the last month, and the top few contributors, see notes below)

Since July 15th (our previous TMIDD post), 234 contributors have made 705 total Drupal.org documentation page revisions, including 4 people that made more than 30 edits -- thanks everyone!

Extra big shout-out to these contributors.

  • lolandese (111 revisions)
  • Wim Leers (49 revisions)
  • jhodgdon (41 revisions)
  • Francewhoa (30 revisions)

In the core issue queue there's been a lot of movement to improve in-line documentation as we continue to get closer to a release of Drupal 8. This patch that improved the TypedData documentation is a great example of the kind of work that's being done. https://www.drupal.org/node/2548279

In addition, there were many many commits to Drupal Core and contributed projects that improved documentation -- these are hard to count, because many commits combine code and documentation -- but they are greatly appreciated too!

Documentation Priorities

The Current documentation priorities page is always a good place to look to figure out what to work on, and has been updated recently.

Work on the Drupal 8 User Guide is moving along splendidly. We had two IRC meetings in the last month and the level of involvement has been great. Helping with this documentation is a great way to get started with documentation and to learn a bit about Drupal 8 while you're at it. The focus right now is on writing a first draft of each of the topics in the guide, and work is also underway to figure out a final home for the new guide in https://www.drupal.org/node/2522024. Follow https://groups.drupal.org/documentation for announcements and join us for our next IRC meeting.

If you're new to contributing to documentation, these projects may seem a bit overwhelming -- so why not try out a New contributor task to get started?

The Drupal Association staff have recently updated their 2015 roadmap, and it currently includes a couple of big wins for documentation, including work to convert sections of the community documentation into a more maintainable format. The issue here https://www.drupal.org/node/2533684 doesn't have a lot of information yet, but keep an eye on it. And/or watch this recording of the presentation from DrupalCon LA about the work being done on content strategy for Drupal.org to get an idea of what's coming. https://events.drupal.org/losangeles2015/sessions/content-strategy-drupa...

Upcoming Events
  • DrupalCon Barcelona, Spain, 21-25 September, with a session Let's talk about documentation and a documentation sprint on Drupal 8 documentation and the D8 User Guide. Please sign up for the sprint! Members of the DocsWG will be in attendance at DrupalCon and would love to chat with you about your ideas for improving Drupal's documentation or to help you find ways to get involved, so come say hello anytime during the week.

If you're attending or helping to organize a Drupal event that will feature a documentation-related sprint or sessions, let us know and we'll get it added to the list.

Report from the Working Group

We just recently had our regular monthly meeting, though it had actually been over a month since the last time we met. We didn't have a whole lot to discuss in that period, and had been putting a lot of time and effort into getting the Drupal 8 User Guide project underway. At our last meeting the big thing that came up was the need to develop a clear set of guidelines for when it is appropriate to delete a comment from either the community documentation or the api.drupal.org documentation. https://www.drupal.org/node/2515002. We've jotted down some ideas and plan to discuss this further at our next meeting in September. After which we'll post the ideas we've come up with for consideration before making anything official. Let us know in the issue if you've got any thoughts about what this should look like.

Categories: Elsewhere

Norbert Preining: Kobo GloHD firmware 3.17.0 mega update (KSM, nickel patch, ssh, fonts)

Planet Debian - Sat, 15/08/2015 - 03:15

As with all the previous versions, I have prepared a mega update for the Kobo firmware 3.17.0. Big warning for a start – this is for Mark6 hardware, thus GloHD only. The update includes all my favorite patches and features. Please see details (although for Kobo Glo) for 3.16.0, 3.15.0, 3.13.1 and 3.12.1. As before, the following addons are included: Kobo Start Menu, koreader, coolreader, pbchess, ssh access, custom dictionaries, and some side-loaded fonts. What I dropped this time is most of the kobohack part, as the libraries seem to be getting outdated. But I included the dropbear ssh server from the kobohack package.

So what are all these items:

  • firmware (thread): the basic software of the device, shipped by Kobo company
  • Metazoa firmware patches (thread): fix some layout options and functionalities, see below for details.
  • Kobo Start Menu (V07, update 4 thread): a menu that pops up before the reading software (nickel) starts, allowing you to start alternative readers (like koreader) etc.
  • KOreader (koreader-nightly-20150808, thread): an alternative document reader that supports epub, azw, pdf, djvu and many more
  • pbchess and CoolReader (2015.7, thread): a chess program and another alternative reader, bundled together with several other games
  • kobohack (web site): I only use the ssh server
  • ssh access (old post): makes a full computer from your device by allowing you to log into it via ssh
  • custom dictionaries (thread): this fix updates dictionaries from the folder customdicts to the Kobo dictionary folder. For creating your own Japanese-English dictionary, see this blog entry
  • side-loaded fonts: GentiumBasic and GentiumBookBasic, Verdana, DroidSerif, and Charter-eInk

Install procedure Latest firmware

Warning: Sideloading or crossloading the incorrect firmware can break/brick your device. The link below is for Kobo GloHD ONLY.

The first step is to update the Kobo to the latest firmware. This can easily be done by just getting the latest firmware (thread, direct link for Kobo 3.17.0 for GloHD) and unpacking the zip file into the .kobo directory on your device. Eject and enjoy the updating procedure.

Mega update

Get the combined Kobo-3.17.0-combined/KoboRoot.tgz and put it into the .kobo directory, then eject and enjoy the updating procedure again.

After this the device should reboot and you will be kicked into KSM, from where, after some waiting, Nickel will be started. If you consider the fonts too small, select Configure, then General, then add item, then select kobomenuFontsize=55 and save.

Remarks to some of the items included

The full list of included items is above; here are some notes about the specifics of what I have done.

  • Metazoa firmware patches

    I have included the experimental sickel patch to extend the waiting time until sickel reboots your device.

    The patches I have enabled in this build are:

    • libadobe.so.patch: none
    • librmsdk.so.1.0.0.patch: default (‘Fix page breaks bug’, ‘Default ePub monospace font (Courier)’)
    • libnickel.so.1.0.0.patch: ‘Custom reading footer style’, ‘My 15 line spacing values’, ‘Custom left & right margins’, ‘Brightness fine control’, ‘Search in Library by default’, ‘Disable pinch-to-zoom font resizing’
    • sickel.patch: default (‘SickelService::Ping() timer’, ‘SickelService::Resume() timer’, ‘SickelService::SickelService() timer’)

    If you need any other patches, you need to do the patching yourself.

  • kobohack-h

    Kobohack (latest version 20150110) originally provided updated libraries and optimizations, but unfortunately it is now completely outdated, and using its library part is not recommended. I only include the ssh server (dropbear) so that connections to the Kobo via ssh are possible.

  • ssh fixes

    See the detailed instructions here; the necessary files are already included in the mega update. It updates /etc/inittab to also run /etc/init.d/rcS2, which in turn starts the inetd server and runs user-supplied commands from /mnt/onboard/run.sh, which is where your documents are.

  • Custom dictionaries

    The necessary directories and scripts are already included in the above combined KoboRoot.tgz, so nothing needs to be done other than dropping updated, fixed, or changed dictionaries into the directory customdict in your Kobo root. After this you need to reboot to get the dictionaries actually updated. See this thread for more information. The adaptations and script mentioned in this post are included in the mega update.


After installing this patch, you need to fix the password for root and disable telnet. This is an important step, here are the steps you have to take (taken from this old post):

  1. Turn on Wifi on the Kobo and find IP address
    Go to Settings – Connect and after this is done, go to Settings – Device Information where you will see something like
    IP Address: 192.168.1.NN
    (numbers change!)
  2. telnet into your device
    telnet 192.168.1.NN
    it will ask you the user name, enter “root” (without the quotes) and no password
  3. (ON THE GLO) change home directory of root
    edit /etc/passwd with vi and change the entry for root by changing the 6th field from “/” to “/root” (without the quotes).
    Don’t forget to save the file.
  4. (ON THE GLO) create ssh keys for dropbear
    [root@(none) ~]# mkdir /etc/dropbear
    [root@(none) ~]# cd /etc/dropbear
    [root@(none) ~]# dropbearkey -t dss -f dropbear_dss_host_key
    [root@(none) ~]# dropbearkey -t rsa -f dropbear_rsa_host_key
  5. (ON YOUR PERSONAL COMPUTER) check that you can log in with ssh
    ssh root@192.168.1.NN
    You should get dropped into your device again
  6. (ON THE GLO) log out of the telnet session (the first one you did)
    [root@(none) ~]# exit
  7. (ON THE GLO) in your ssh session, change the password of root
    [root@(none) ~]# passwd
    you will have to enter the new password two times. Remember it well, you will not be easily able to recover it without opening your device.
  8. (ON THE GLO) disable telnet login
    edit the file /etc/inetd.conf.local on the GLO (using vi) and remove the telnet line (the line starting with 23).
  9. restart your device

The combined KoboRoot.tgz is provided without warranty. If you need to reset your device, don’t blame me!

Categories: Elsewhere

ActiveLAMP: Drupal 8 - First Experiences

Planet Drupal - Sat, 15/08/2015 - 01:00

I recently had time to install and take a look at Drupal 8. I am going to share my first take on Drupal 8 and some of the hang-ups that I came across. I read a few other blog posts that mentioned not to rely too heavily on one source for D8 documentation; with the rapid pace of change in D8, information has become outdated rather quickly.

Categories: Elsewhere

Ben Hutchings: DebCamp 2015

Planet Debian - Fri, 14/08/2015 - 21:56

I've now spent nearly a week in Heidelberg, attending DebCamp and working on 'kernel stuff'. It's been very sunny and warm here, sometimes uncomfortably so. I've been sitting in an outdoor hacklab - covered but otherwise open to the elements. Right now it is mild and raining, and quite comfortable here.

Linux 3.2.71

Hot on the heels of Linux 3.2.70, I prepared another stable update for 3.2 which was announced today. I've now caught up with fixes in Linus's tree up to 4.2-rc5, though there are plenty of others still to consider.

Kernel packaging in git

I finally completed the scripts and configuration needed to convert the kernel team's Subversion repository to git, with most of the history of the relevant packages accurately represented. (Several packages are obsolete or had been separately moved to git.) Thanks to Subversion's flexibility with directory layout, this was quite challenging. There were a number of changes of layout in 2005-2006, tags that were copied from the wrong directory level, and merges that somehow didn't get recorded as such.

I pushed the new git repositories onto Alioth on Tuesday and disabled writes to the packaging in Subversion. So far no-one's complained about breakage, so I think I got it all right.

Autopkgtest for Linux

I've been worried for a while that I have so little ability to test for regressions in the kernel. I am mostly reliant on getting bug reports on versions in experimental and unstable to prevent these from getting to testing and stable. However this doesn't help with stable updates for security or general bug fixes.

The autopkgtest framework provides a way to (mostly) automatically test Debian packages, and I've been exploring how this can be used for the linux package. It already allows for tests that need a whole (virtual) machine to themselves and that may need to reboot the machine, so installing and running new kernel packages is fine. I don't want to have to reconfigure a boot loader, and there is no way to specify which boot loader should be used, so at least initially I'm using kexec to boot into the test kernel.

The linux source tree includes a growing number of self-test programs which roughly follow a single spec, although the requirements for some of them (kernel and machine configuration) are not all clearly documented. So far, I have most of the tests passing on 4.2-rc6 under autopkgtest, a few failures that are expected given the kernel configuration we use, and several more failures that I don't yet understand.

Once I have these self-tests integrated, I'd like to move on and add some separate test suites such as xfstests and LTP. Or I may add to the self-tests myself.


I reviewed the 2014 version of "What's new in the Linux kernel" in order to write the follow-up section at the start of this year's talk. Last year I mentioned liblockdep, a port of lockdep to userland that tests use of the pthreads locking APIs, which wasn't packaged. As this is part of the linux source tree, I probably should have worked on it myself.

To avoid embarrassment right at the start of my talk, I worked to add liblockdep binary packages to the linux-tools source package. These are now in the binary-NEW queue, headed for experimental. As lockdep is sadly lacking in documentation and liblockdep seems to be missing the ability to explicitly define lock classes, I'm not sure how useful it will be. Packaging it should make it easier for people to try out, though.

What's new in the Linux kernel

I'm about half way through writing the 2015 version of this talk, which I'm due to give on Monday afternoon. This year I think I've avoided having to spend most of the preceding day preparing it, which was why I skipped the day trip last year.

Categories: Elsewhere

Wouter Verhelst: Multi-pass transcoding to WebM with normalisation

Planet Debian - Fri, 14/08/2015 - 21:33

Transcoding video from one format to another seems to be a bit of a black art. There are many tools that allow doing this kind of stuff, but one issue that most seem to have is that they're not very well documented.

I ran against this a few years ago, when I was first doing video work for FOSDEM and did not yet have proper tools to do the review and transcoding workflow.

At the time, I just used mplayer to look at the .dv files, and wrote a text file with a simple structure to remember exactly what to do with it. That file was then fed to a perl script which wrote out a shell script that would use the avconv command to combine and extract the "interesting" data from the source DV files into a single DV file per talk, and which would then call a shell script which used gst-launch and sox to do a multi-pass transcode of those intermediate DV files into a WebM file.

While all that worked properly, it was a rather ugly hack, never cleaned up, and therefore I never really documented it properly either. Recently, however, someone asked me to do so anyway, so here goes. Before you want to complain about how this ate the videos of your firstborn child, however, note the above.

The perl script spent a somewhat large amount of code reading out the text file and parsing it into an array of hashes. I'm not going to reproduce that, since the actual format of the file isn't all that important anyway. However, here's the interesting bits:

foreach my $pfile (keys %parts) {
    my @files = @{$parts{$pfile}};
    say "#" x (length($pfile) + 4);
    say "# " . $pfile . " #";
    say "#" x (length($pfile) + 4);
    foreach my $file (@files) {
        my $start = "";
        my $stop = "";
        if (defined($file->{start})) {
            $start = "-ss " . $file->{start};
        }
        if (defined($file->{stop})) {
            $stop = "-t " . $file->{stop};
        }
        if (defined($file->{start}) && defined($file->{stop})) {
            my @itime = split /:/, $file->{start};
            my @otime = split /:/, $file->{stop};
            $otime[0] -= $itime[0];
            $otime[1] -= $itime[1];
            if ($otime[1] < 0) {
                $otime[0] -= 1;
                $otime[1] += 60;
            }
            $otime[2] -= $itime[2];
            if ($otime[2] < 0) {
                $otime[1] -= 1;
                $otime[2] += 60;
            }
            $stop = "-t " . $otime[0] . ":" . $otime[1] . ":" . $otime[2];
        }
        if (defined($file->{start}) || defined($file->{stop})) {
            say "ln " . $file->{name} . ".dv part-pre.dv";
            say "avconv -i part-pre.dv $start $stop -y -acodec copy -vcodec copy part.dv";
            say "rm -f part-pre.dv";
        } else {
            say "ln " . $file->{name} . ".dv part.dv";
        }
        say "cat part.dv >> /tmp/" . $pfile . ".dv";
        say "rm -f part.dv";
    }
    say "dv2webm /tmp/" . $pfile . ".dv";
    say "rm -f /tmp/" . $pfile . ".dv";
    say "scp /tmp/" . $pfile . ".webm video.fosdem.org:$uploadpath || true";
    say "mv /tmp/" . $pfile . ".webm .";
}

That script uses avconv to read one or more .dv files and transcode them into a single .dv file with all the start- or end-junk removed. It uses /tmp rather than the working directory, since the working directory was somewhere on the network, and if you're going to write several gigabytes of data to an intermediate file, it's usually a good idea to write them to a local filesystem rather than to a networked one.
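The hand-rolled H:M:S borrow arithmetic in the Perl snippet above can be expressed more compactly by round-tripping through seconds. Here is a sketch of the equivalent computation (in Python, purely for illustration; the generated script itself stays in Perl):

```python
def duration(start, stop):
    """Compute stop - start for H:M:S timestamps, i.e. the -t value
    the generated script derives via digit-by-digit borrowing."""
    def to_secs(ts):
        h, m, s = (int(x) for x in ts.split(":"))
        return h * 3600 + m * 60 + s

    total = to_secs(stop) - to_secs(start)
    return "%d:%02d:%02d" % (total // 3600, (total % 3600) // 60, total % 60)
```

For example, duration("0:58:30", "1:02:15") yields "0:03:45", the same value the borrow logic produces for the -t option.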

Pretty boring.

It finally calls dv2webm on the resulting .dv file. That script looks like this:

#!/bin/bash
set -e

newfile=$(basename $1 .dv).webm
wavfile=$(basename $1 .dv).wav
wavfile=$(readlink -f $wavfile)
normalfile=$(basename $1 .dv)-normal.wav
normalfile=$(readlink -f $normalfile)
oldfile=$(readlink -f $1)

echo -e "\033]0;Pass 1: $newfile\007"
gst-launch-0.10 webmmux name=mux ! fakesink \
  uridecodebin uri=file://$oldfile name=demux \
  demux. ! ffmpegcolorspace ! deinterlace ! vp8enc multipass-cache-file=/tmp/vp8-multipass multipass-mode=1 threads=2 ! queue ! mux.video_0 \
  demux. ! progressreport ! audioconvert ! audiorate ! tee name=t ! queue ! vorbisenc ! queue ! mux.audio_0 \
  t. ! queue ! wavenc ! filesink location=$wavfile

echo -e "\033]0;Audio normalize: $newfile\007"
sox --norm $wavfile $normalfile

echo -e "\033]0;Pass 2: $newfile\007"
gst-launch-0.10 webmmux name=mux ! filesink location=$newfile \
  uridecodebin uri=file://$oldfile name=video \
  uridecodebin uri=file://$normalfile name=audio \
  video. ! ffmpegcolorspace ! deinterlace ! vp8enc multipass-cache-file=/tmp/vp8-multipass multipass-mode=2 threads=2 ! queue ! mux.video_0 \
  audio. ! progressreport ! audioconvert ! audiorate ! vorbisenc ! queue ! mux.audio_0

rm $wavfile $normalfile

... and is a bit more involved.

Multi-pass encoding of video means that we ask the encoder to first encode the file but store some statistics into a temporary file (/tmp/vp8-multipass, in our script), which the second pass can then reuse to optimize the transcoding. Since DV uses different ways of encoding things than does VP8, we also need to do a color space conversion (ffmpegcolorspace) and deinterlacing (deinterlace), but beyond that the video line in the first gstreamer pipeline isn't very complicated.

Since we're going over the file anyway and we need the audio data for sox, we add a tee plugin at an appropriate place in the audio line of the first gstreamer pipeline, so that we can later pick up that same audio data and write it to a wav file containing linear PCM data. Beyond the tee, we go on and do a vorbis encoding, as is needed for the WebM format. This is not actually required for a first pass, but ah well. There are some more conversion plugins in the pipeline (specifically, audioconvert and audiorate), but those are not very important.

We next run sox --norm on the .wav file, which does a fully automated audio normalisation on the input. Audio normalisation is the process of adjusting volume levels so that the audio is not too loud, but also not too quiet. Sox has pretty good support for this; the default settings of its --norm parameter make it adjust the volume levels so that the highest peak will just about reach the highest value that the output format can express. As such, you have no clipping anywhere in the file, but also have an audio level that is actually useful.
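As a rough illustration of what peak normalisation does (a toy sketch, not sox's actual implementation), scaling every sample by target/peak brings the loudest sample exactly to full scale without clipping anything:

```python
def normalise(samples, target=1.0):
    """Toy peak normalisation, as sox --norm does by default: scale all
    samples so the loudest peak just reaches the target level."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)  # silence: nothing to scale
    gain = target / peak
    return [s * gain for s in samples]
```

With samples [0.1, -0.25, 0.2], the peak of 0.25 yields a gain of 4, so the output peak is exactly 1.0 and every other sample is scaled proportionally.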

Next, we run a second-pass encoding on the input file. This second pass uses the statistics gathered in the first pass to decide where to put its I- and P-frames so that they are placed at the most optimal position. In addition, rather than reading the audio from the original file, we now read the audio from the .wav file containing the normalized audio which we produced with sox, ensuring the audio can be understood.

Finally, we remove the intermediate audio files we created; and the shell script which was generated by perl also contained an rm command for the intermediate .dv file.

Some of this is pretty horrid, and I never managed to clean it up enough so it would be pretty (and now is not really the time). However, it Just Works(tm), and I am happy to report that it continues to work with gstreamer 1.0, provided you replace the ffmpegcolorspace by an equally simple videoconvert, which performs what ffmpegcolorspace used to perform in gstreamer 0.10.

Categories: Elsewhere

Dirk Eddelbuettel: Rblpapi: Connecting R to Bloomberg

Planet Debian - Fri, 14/08/2015 - 20:29

Whit, John and I are thrilled to announce Rblpapi, a new CRAN package which connects R to the Bloomberg backends.

Rebuilt from scratch using only the Bloomberg C++ API and the Rcpp and BH packages, it offers efficient and direct access from R to a truly vast number of financial data series, pricing tools and more. The package has been tested on Windows, OS X and Linux. As is standard for CRAN packages, binaries for Windows and OS X are provided (or will be once the builders catch up). Needless to say, a working Bloomberg installation is required to use the package.

Please see the Rblpapi package page for more details, including a large part of the introductory vignette document. As a teaser, here are just a few of the core functions:

## Bloomberg Data Point Query
bdp(c("ESA Index", "SPY US Equity"), c("PX_LAST", "VOLUME"))
## Bloomberg Data Set Query
bds("GOOG US Equity", "TOP_20_HOLDERS_PUBLIC_FILINGS")
## Bloomberg Data History Query
bdh("SPY US Equity", c("PX_LAST", "VOLUME"), start.date=Sys.Date()-31)
## Get OHLCV bars (by default hourly and just six of them)
getBars("ES1 Index")
## Get Tick Data (by default last hour)
getTicks("ES1 Index")

Source code for the package is at the Rblpapi GitHub repo where issue tickets can be filed as well. The sibling blp GitHub repo contains the Bloomberg code required to build and link the package (which is automated during the build of the CRAN package). Last but not least, the Rblpapi package page has more details about the package.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: Elsewhere

Tiago Bortoletto Vaz: c3video for debconf #5

Planet Debian - Fri, 14/08/2015 - 17:13

This is a follow-up to my previous post related to the DebConf videoteam using a new software stack for the next conferences: http://acaia.ca/~tiago/posts/c3video-for-dc-take-4/.

This is about the encoding step from C3TT, mostly done by the script named D-encoding.

We can have many different encoding templates in the system. They're XSLT files which will generate the XML needed to create the local encoding commands. We can have more than one encoding template assigned for a conference.

XSLT encoding templates

A general comment: each meta ticket (that is, the original ticket with meta info about the talk) will generate two child tickets over time, a recording one and an encoding one, with their own states. If things go wrong, an ingest ticket is created. More details can be checked here.

Children tickets

So I've got the proper encoded files in my Processing.Path.Output directory. The ticket is marked as encoded by the script. There's also a postencoding step, executed by E-postencoding. As far as I understand, it's intended to be a general post-processing hook for encoded files. For instance, it can produce an audio file and make it available on the Auphonic service. As we won't use that, we may want to set the property Processing.Auphonic.Enable to no.

The next step starts from the operator side. Just going to the Releasing tab in the web interface, choosing the ticket to check and doing a quick review in the encoded file:

Releasing tab

Then, if everything looks fine, click Everything's fine:

Check encoded file

From this point the ticket will be marked as checked and the next script (F-postprocessing) will take care of pushing the video(s)/audio(s) to the target place, as defined by Publishing.UploadTarget. I had to set the encoding template property EncodingProfile.Extension myself. We can optionally set Publishing.UploadOptions (keep that in mind; it seems not to be documented elsewhere).

So I now have the first DebCamp encoded video file uploaded to an external host, entirely processed using the C3TT software stack. We may also want to use a very last script to release the videos (eg. as torrents, to different mirrors and other online services) if needed. This is script-G-release.pl, which, unlike the others, won't be run by the screen UI in the sequence. It has some parameters hardcoded in it, although the code is very clear and ready to be hacked. It'll also mark the ticket as Released.


That's all for now! In summary: I've been able to install and set C3TT up during a few days in DebCamp, and will be playing with it during DebConf. In case things go well we'll probably be using this system as our video processing environment for the next events.

We can access most CCC VoC software from https://github.com/voc. Having a look at what they're developing, I feel that we (DebConf and CCC) share quite the same technical needs. And, most importantly, the same spirit of community work for bringing part of the conference to those unable to attend.

DebCamp was warm!

Categories: Elsewhere

Sooper Drupal Themes: Module Spotlight #2: Environment Indicator

Planet Drupal - Fri, 14/08/2015 - 16:56
Configuration > System > Backup and Migrate > Restore database..... Did I just load my database export on the live website? oops

If the stress of pressing the wrong button on a live website is familiar to you, you need the Environment Indicator module! On large Drupal projects you will ...

Categories: Elsewhere

Drupal core announcements: Drupal core security release window on Wednesday, August 19

Planet Drupal - Fri, 14/08/2015 - 16:14
Start: 2015-08-19 (All day) America/New_York
Online meeting (eg. IRC meeting)
Organizers: David_Rothstein

The monthly security release window for Drupal 6 and Drupal 7 core will take place on Wednesday, August 19.

This does not mean that a Drupal core security release will necessarily take place on that date for either the Drupal 6 or Drupal 7 branches, only that you should prepare to look out for one (and be ready to update your Drupal sites in the event that the Drupal security team decides to make a release).

There will be no bug fix/feature release on this date; the next window for a Drupal core bug fix/feature release is Wednesday, September 2.

For more information on Drupal core release windows, see the documentation on release timing and security releases, and the discussion that led to this policy being implemented.

Categories: Elsewhere

Lullabot: Pausing the Drupalize.Me Podcast

Planet Drupal - Fri, 14/08/2015 - 14:00
This week instead of a regular podcast episode, this is a very short update about the podcast itself. We will not be publishing any more episodes for the rest of 2015. We’ve had a great run over the last three years, after reviving the old Lullabot podcast as the new Drupalize.Me podcast. We’re taking a pause now so that we can take a step back and rethink the podcast. We’d like to reboot it as a more focused and useful tool for our listeners. If you have ideas or thoughts about what would be an amazing podcast that we can provide, please do let us know! You can look for the new podcast to return to the scene in early 2016.
Categories: Elsewhere

Drupal core announcements: Recording from August 14th 2015 Drupal 8 critical issues discussion

Planet Drupal - Fri, 14/08/2015 - 13:46

We met again today to discuss critical issues blocking Drupal 8's release (candidate). (See all prior recordings). Here is the recording of the meeting video and chat from today in the hope that it helps more than just those who were on the meeting:

If you also have significant time to work on critical issues in Drupal 8 and we did not include you, let me know as soon as possible.

The meeting log is as follows (all times are CEST real time at the meeting):

[11:07am] alexpott: https://www.drupal.org/node/2501931
[11:07am] Druplicon: https://www.drupal.org/node/2501931 => Remove SafeMarkup::set in twig_render_template() and ThemeManager and FieldPluginBase:advancedRender [#2501931] => 117 comments, 34 IRC mentions
[11:08am] alexpott: https://www.drupal.org/node/2549943
[11:08am] Druplicon: https://www.drupal.org/node/2549943 => [plan] Remove as much of the SafeMarkup class's methods as possible [#2549943] => 16 comments, 6 IRC mentions
[11:14am] plach: https://www.drupal.org/node/2542748
[11:14am] Druplicon: https://www.drupal.org/node/2542748 => EntityDefinitionUpdateManager::applyUpdates() can fail when there's existing content, leaving the site's schema in an unpredictable state, so should not be called during update.php [#2542748] => 100 comments, 24 IRC mentions
[11:56am] catch: https://www.drupal.org/node/2551341
[11:56am] Druplicon: https://www.drupal.org/node/2551341 => Update test database dump should be based on beta 12 and contain content [#2551341] => 0 comments, 1 IRC mention
[11:58am] jibran: https://www.drupal.org/node/2464427
[11:58am] Druplicon: https://www.drupal.org/node/2464427 => Replace CacheablePluginInterface with CacheableDependencyInterface [#2464427] => 157 comments, 22 IRC mentions
[12:01pm] jibran: plach: is it bells time on the call? :P
[12:02pm] plach: jibran: yeah :)
[12:08pm] jibran: https://www.drupal.org/node/2349819
[12:08pm] Druplicon: https://www.drupal.org/node/2349819 => String field type doesn't consider empty string as empty value [#2349819] => 89 comments, 8 IRC mentions

Categories: Elsewhere

Alberto García: I/O limits for disk groups in QEMU 2.4

Planet Debian - Fri, 14/08/2015 - 12:22

QEMU 2.4.0 has just been released, and among many other things it comes with some of the stuff I have been working on lately. In this blog post I am going to talk about disk I/O limits and the new feature to group several disks together.

Disk I/O limits

Disk I/O limits allow us to control the amount of I/O that a guest can perform. This is useful for example if we have several VMs in the same host and we want to reduce the impact they have on each other if the disk usage is very high.

The I/O limits can be set using the QMP command block_set_io_throttle, or on the command line using the throttling.* options of the -drive parameter (shown in brackets in the examples below). Both the throughput and the number of I/O operations can be limited. For more fine-grained control, each limit can be set separately for read operations, write operations, or the combination of both:

  • bps (throttling.bps-total): Total throughput limit (in bytes/second).
  • bps_rd (throttling.bps-read): Read throughput limit.
  • bps_wr (throttling.bps-write): Write throughput limit.
  • iops (throttling.iops-total): Total I/O operations per second.
  • iops_rd (throttling.iops-read): Read I/O operations per second.
  • iops_wr (throttling.iops-write): Write I/O operations per second.


-drive if=virtio,file=hd1.qcow2,throttling.bps-write=52428800,throttling.iops-total=6000

In addition to that, it is also possible to configure the maximum burst size, which defines a pool of I/O that the guest can perform without being limited:

  • bps_max (throttling.bps-total-max): Total maximum (in bytes).
  • bps_rd_max (throttling.bps-read-max): Read maximum.
  • bps_wr_max (throttling.bps-write-max): Write maximum.
  • iops_max (throttling.iops-total-max): Total maximum of I/O operations.
  • iops_rd_max (throttling.iops-read-max): Read I/O operations.
  • iops_wr_max (throttling.iops-write-max): Write I/O operations.
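For example, building on the -drive syntax above (hd1.qcow2 is a placeholder image name), this would cap sustained writes at 50 MB/s while allowing bursts of up to 100 MB on top of that rate:

```
-drive if=virtio,file=hd1.qcow2,throttling.bps-write=52428800,throttling.bps-write-max=104857600
```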

One additional parameter named iops_size allows us to deal with the case where big I/O operations can be used to bypass the limits we have set. In this case, if a particular I/O operation is bigger than iops_size then it is counted several times when it comes to calculating the I/O limits. So a 128KB request will be counted as 4 requests if iops_size is 32KB.

  • iops_size (throttling.iops-size): Size of an I/O request (in bytes).
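As a sketch combining the two options (again with a placeholder image name): with iops-total=100 and iops-size=32768, a single 128KB request consumes 4 of the 100 allowed operations per second:

```
-drive if=virtio,file=hd1.qcow2,throttling.iops-total=100,throttling.iops-size=32768
```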

Group throttling

All of these parameters I’ve just described operate on individual disk drives and have been available for a while. Since QEMU 2.4 however, it is also possible to have several drives share the same limits. This is configured using the new group parameter.

The way it works is that each disk with I/O limits is member of a throttle group, and the limits apply to the combined I/O of all group members using a round-robin algorithm. The way to put several disks together is just to use the group parameter with all of them using the same group name. Once the group is set, there’s no need to pass the parameter to block_set_io_throttle anymore unless we want to move the drive to a different group. Since the I/O limits apply to all group members, it is enough to use block_set_io_throttle in just one of them.

Here’s an example of how to set groups using the command line:

-drive if=virtio,file=hd1.qcow2,throttling.iops-total=6000,throttling.group=foo \
-drive if=virtio,file=hd2.qcow2,throttling.iops-total=6000,throttling.group=foo \
-drive if=virtio,file=hd3.qcow2,throttling.iops-total=3000,throttling.group=bar \
-drive if=virtio,file=hd4.qcow2,throttling.iops-total=6000,throttling.group=foo \
-drive if=virtio,file=hd5.qcow2,throttling.iops-total=3000,throttling.group=bar \
-drive if=virtio,file=hd6.qcow2,throttling.iops-total=5000

In this example, hd1, hd2 and hd4 are all members of a group named foo with a combined IOPS limit of 6000, and hd3 and hd5 are members of bar. hd6 is left alone (technically it is part of a 1-member group).

Next steps

I am currently working on providing more I/O statistics for disk drives, including latencies and average queue depth on a user-defined interval. The code is almost ready. Next week I will be in Seattle for the KVM Forum where I will hopefully be able to finish the remaining bits.

I will also attend LinuxCon North America. Igalia is sponsoring the event and we have a booth there. Come if you want to talk to us or see our latest demos with WebKit for Wayland.

See you in Seattle!

Categories: Elsewhere

Enrico Zini: expectations-needs

Planet Debian - Fri, 14/08/2015 - 12:13
Expectations and needs

All people ever say is: "thank you" (a celebration of life) and "please" (an opportunity to make life more wonderful). (Marshall Rosenberg)

Sometimes, when I see the word "expectation" I try to read it as "need" and see how things change.

I noticed that this tends to reframe situations in a way that makes me feel more comfortable.

I noticed that I tend to instinctively perceive "expectations" as "do this or there will be consequences", and I tend to instinctively perceive "needs" as "do this if you want to see me happy".

I noticed that my motivation to care for someone's expectations tend to be something close to fear, and my motivation to care for someone's needs tends to be something close to love.

This might give me a bit more hints on The art of asking: I will not expect you to do something for me, I'll just allow myself to be loved, liked or helped by you, and I'll try to be open about what I need.

I smile realising that since a long time, on the professional side of my life, I learnt to lead interaction with my customers along the same lines: "let's talk about what you need, not about what you expect of me".

Categories: Elsewhere

Microserve: Git Basic Principles & Concepts (for new developers)

Planet Drupal - Fri, 14/08/2015 - 11:57

One of the core tools used in many software development circles is Git - a 'version control system' which enables individuals and groups to have a complete record of all the code changes throughout the life-cycle of a project. As you can imagine, starting to use version control to save progress, rather than just the usual Ctrl + S, takes a bit of getting used to!

As a new developer, learning all of this stuff is pretty intimidating along with all of the other new knowledge. So I wrote the following guide, which is the document I would like to have been given when I started.


Have no fear, Git is here to help. Nothing is lost once committed to a repository. And there are many ways to organise your work, although this flexibility does mean it can be more complicated to store/locate your work. Commits are the core concept though, so get used to them.

It is useful to append the idea of 'always commit your work' to the paradigm 'always save your work'.

Git stores different "versions of reality" and you can switch between them like a multiverse! If you want to branch off and do something weird and crazy this is perfectly fine - everything in the other universes (branches) is unharmed.

Here are a couple of good tutorial links to get you started:


Commits

A commit is a change to a set of files. It can consist of new files, altered files, deleted files and directories, etc. In the real world a commit represents a logical/intuitive point to package some work you've done: after a certain piece of functionality has been completed, when everything required for a ticket has been done, or just at the end of the day (although this isn't considered good practice by all, as it results in incomplete work in the repository). Every commit is given its own message, so you have the opportunity to show why this is a logical or intuitive place to be committing.

There can be a very large number of commits in a repository, which are connected in a variety of ways. One thing that is important to get your head around is that you can shift the whole filesystem to be the way it was at any commit. This is referred to as a "checkout" of a certain place in the structure (although the command is used slightly differently elsewhere). The process of checking out moves the "HEAD" to a different place in the repository and therefore looks at a version of the files from a different point in history.
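As a small sketch, from inside any repository with at least two commits you can move HEAD around like this (HEAD~1 means "one commit before HEAD"):

```shell
git checkout HEAD~1   # detach HEAD at the previous commit; the work tree now matches it
git checkout -        # return to the branch you came from
```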

Staging Areas

The mechanism for creating commits is placing the required changes, piece by piece if required, into the staging area. Git is aware of things in the 'work tree' (the filesystem you are looking at) that differ from the repository. It allows you to choose from these things by executing commands like these:

Put one file into the staging area:

~$ git add path/to/altered.file

Stage any changes within this directory:

~$ git add path/to/directory

This stages AND performs the next step - committing - all in one:

~$ git commit -am "useful message"

This is quicker but less flexible. Be sure to know which files are going into the commit. Also, this only adds files that are already in the repo. If you need to add a new file for the first time, you have to use 'git add' first.

After adding some files, you can do 'git status' to display everything staged for a commit in green. If you are happy with what is prepared (you can use 'git diff --cached' to see the details of what has been staged) you can go ahead and run:

~$ git commit -m "useful message"

That commit is now locked down in the repository. You will always be able to go back to it, even after future commits.


Branches

Branching in Git is the action of deviating a sequence of commits from any pre-existing sequence. This is like taking your code to another dimension - where it can't mess up anything in another one. The ability to branch gives lots of flexibility for different workflows and gives Git its characteristic look in diagrams.

A typical workflow may consist of using a branch for working on a particular ticket or developing a piece of functionality. This "feature branch" will most likely deviate from a main development branch. Once created this branch will contain any number of commits working towards a finalised unit of work. After the work is ready, this branch can be merged back into the dev branch and this becomes the new start point for future feature branches.

Branchy Commands

Create a branch (relative to the current branch):

~$ git branch name-of-branch

This branch will deviate from wherever HEAD is in the repo, though the new branch will not be checked out at this stage.

List all available branches:

~$ git branch


Create a new branch and check it out in one go:

~$ git checkout -b name-of-branch

Repositories

Most workflows will consist of interactions with different repositories. Typically you will use a hosted central repository (BitBucket, GitHub etc.) which multiple collaborators can send their work to. It is in the central repository that the main workflow (feature branching and managing merges) occurs.

It is good practice to keep your local repo similar in structure to the central remote one, although this does not strictly need to be the case. In principle you can perform the commands below between any arbitrary repos, and Git will do its best to handle any differences it encounters. For example, two repos may not have the same branch structure, i.e. branches with the same content may have different parent commits. This is not necessarily a problem, depending on the nature of the code changes, but it may explain issues with merges later on down the line.

Commands and concepts for moving changes between repositories.

fetch - Gets commits from a remote repository so they exist on branches parallel to the local counterparts. So for example 'git fetch origin dev' gets a local copy of the branch that is called "dev" in the remote repo (which is usually referred to as "origin"), but locally is called "origin/dev". This sounds confusing, but the idea of the "parallel" branches on fetching is illustrated in this interactive tool - just type "git fetch" in the command box!

pull - Does the above, except it also attempts to bring the local version of the "dev" branch up to date by merging in any new commits. The merge happens locally. The 'git pull' command acts on the current branch and is quite automatic. Here's a good summary of the differences between fetch and pull.

push - Attempts to send changes from local to remote. You may need to pull first to have the necessary information about the structure of the remote repo. Push requires Git to know what branch it's pushing to. You can specify that each time by executing 'git push origin branch-name', or you can set an "upstream branch" using 'git push -u origin branch-name' which lets Git know that it should always push between this pair of branches. Thereafter you can just do 'git push' and it knows where to go.
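A sketch of the three commands side by side (the branch name is illustrative):

```shell
git fetch origin              # update the origin/* copies; local branches untouched
git pull                      # fetch, then merge into the current branch
git push -u origin my-branch  # push and remember the upstream pairing
git push                      # from then on, no arguments needed
```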


Stashing

Sometimes you are asked to "stash" when switching branches. Stashing takes uncommitted changes and puts them somewhere safe for you to re-apply later.

If you are asked to do this, simply do a 'git stash' - to put the changes in the stash. Then go to the new branch with 'git checkout new-branch-name'. Finally 'git stash apply' puts those uncommitted files onto the new branch.

The stash never goes away. You can always access older stashed items even if new ones have been created.
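A sketch of the full round trip (the branch name is illustrative):

```shell
git stash                    # put uncommitted changes somewhere safe
git checkout other-branch    # switch with a clean work tree
git stash list               # entries look like: stash@{0}: WIP on master: ...
git stash apply 'stash@{0}'  # re-apply a specific entry (newest is stash@{0})
```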

Why does this happen? If switching branches would overwrite files that have uncommitted changes in your work tree, Git refuses to switch until those changes are committed or stashed - so a stash becomes necessary.

Merging (and conflicts)

Merging happens in various guises at various points in the workflow. Generally speaking, it involves making two branches into one so that the information in both is maximally represented. Most of the time the information can be matched up well, but sometimes it is ambiguous which branch's change should be used. That is, two branches may contain different changes to the same line(s) of code. When this happens Git will present the two options to you by writing them both into the file, using a certain convention to show which branch each is from. This is called a merge conflict. Fixing one consists of editing the files, keeping the content you want, and deleting the bits (including the markers) you don't need.
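The convention looks like this (a sketch with made-up content; HEAD's version comes first, the merged branch's second):

```
<<<<<<< HEAD
the line as it looks on the current branch
=======
the line as it looks on target-branch-name
>>>>>>> target-branch-name
```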

The basic command for merging is:

~$ git merge target-branch-name

This will bring any differences from the target branch into HEAD (current branch). If this is successful (usually) a new commit will be created which has two parent commits, although you can also create a fast-forward merge in some cases.

Hopefully these notes and basic principles will help you get to grips with Git!

Ashley George
Categories: Elsewhere

Neil Williams: The problem of SD mux

Planet Debian - Fri, 14/08/2015 - 09:19

I keep being asked about automated bootloader testing and the phrase which crops up is “SD mux” – hardware to multiplex SD card access (typically microSD). Each time, the questioner comes up with a simple solution which can be built over a weekend, so I’ve decided to write out the actual objective, requirements and constraints to hopefully illustrate that this is not a simple problem and the solution needs to be designed to a fully scalable, reliable and maintainable standard.

The objective

Support bootloader testing by allowing fully automated tests to write a custom, patched, bootloader to the principal boot media of a test device, hard reset the board and automatically recover if the bootloader fails to boot the device by switching the media from the test device to a known working support device with full write access to overwrite everything on the card and write a known working bootloader.

The environment

100 test devices, one SD mux each (potentially), in a single lab with support for all or any to be switched simultaneously and repeatedly (maybe a hundred times a day to and fro) with 99.99% reliability.

The history

The first attempt was a simplistic solution which failed to operate reliably. The next attempt was a complex solution (LMP) which failed to operate as designed in a production environment (partially due to a reliance on USB) and has since suffered from a lack of maintenance. The most recent attempt was another simplistic solution which delivered three devices for test with only one usable, and even that became unreliable in testing.

The requirements

(None of these are negotiable and all are born from real bugs or real failures of previous solutions in the above environment.)

  1. Ethernet – yes, really. Physical, cat5/6 RJ45 big, bulky, ugly gigabit ethernet port. No wifi. This is not about design elegance, this is about scalability, maintenance and reliability. Must have a fully working TCP/IP stack with stable and reliable DHCP client. Stable, predictable, unique MAC addresses for every single board – guaranteed. No dynamic MAC addresses, no hard coded MAC addresses which cannot be modified. Once modified, retain permanence of the required MAC address across reboots.
  2. No USB involvement – yes, really. The server writing to the media to recover a bricked device usually has only 2 USB ports but supports 20 devices. Powered hubs are not sufficiently reliable.
  3. Removable media only – eMMC sounds useful but these are prototype development boards and some are already known to intermittently fry SD card controller chips causing permanent and irreversible damage to the SD card. If that happened to eMMC, the entire device would have to be discarded.
  4. Cable connections to the test device. This is a solved problem, the cables already exist due to the second attempt at a fix for this problem which resulted in a serviceable design for just the required cables. Do not consider any fixed connection, the height of the connector will never match all test device requirements and will be a constant source of errors when devices are moved around within the rack.
  5. Guaranteed unique, permanent and stable serial numbers for every device. With 100 devices in a lab, it is absolutely necessary that every single one is uniquely addressable.
  6. Interrogation – there must be an interface for the control device to query the status of the SD mux and be assured that the results reflect reality at all times. The device must allow the control device to read and write to the media without requiring the test device to acknowledge the switch or even be powered on.
  7. No feature creep. There is no need to make this be able to switch ethernet or HDMI or GPIO as well as SD. Follow the software principle of pick one job and do it properly.
  8. Design for scalability – this is not a hobbyist project, this is a serious task requiring genuine design. The problem is not simple, it is not acceptable to make a simple solution.
  9. Power – the device must boot directly from power-on without requiring manual intervention of any kind and boot into a default safe mode where the media is only accessible to the control device. 5V power with a barrel connector is preferable – definitely not power over USB. The device must raise the TCP/IP control interface automatically and be prepared to react to commands as soon as the interface is available.
  10. Software: some logic to prevent queued requests from causing repeated switching without any interval in between, e.g. if the device had to be power cycled.
  11. Ongoing support and maintenance of hardware, firmware and software. Test devices continue to develop and will require further changes or fixes as time goes on.
  12. Mounting holes – sounds obvious but the board needs to be mounted in a sensible manner. Dangling off the end of a cat5 cable is not acceptable.

If any of those seem insurmountable or awkward or unappealing, please go back to the drawing board or leave well alone.

Beyond the absolutes, there are other elements. The device is likely to need some kind of CPU and something ARM would be preferable, Cortex-M or Cortex-A if relevant, but creating a cape for a beaglebone-black is likely to be overkill. The available cables are short and the device will need to sit quite close to the test device. Test devices never put the SD card slot in the same place twice or in any location which is particularly accessible. Wherever possible, the components on the device should be commodity parts, replaceable and serviceable. The device would be best not densely populated – there is no need for the device to be any particular size and overly small boards tend to be awkward to position correctly once cables are connected. There are limits, of course, so boards which end up bigger than typical test devices would seem excessive.

So these are the reasons why we don’t have automated bootloader testing and won’t have it any time soon. If you’ve got this far, maybe there is a design which meets all the criteria so contact me and let’s see if this is a fixable problem after all.

Categories: Elsewhere

Norbert Preining: QMapShack – GPS and Maps on Linux

Planet Debian - Fri, 14/08/2015 - 07:11

In one of the comments on my last post on Windows 10, a friendly reader from my home country pointed me at QMapShack as a replacement for QuoVadis. I have been using QuoVadis for many many years, and it contains my mountaineering history all the way back: all the guiding work, all the tracks. QuoVadis is a great program, able to work with various commercial digital maps as well as GPS receivers. It only has one disadvantage: it doesn’t work on Linux. QMapShack is very similar in its target audience, but very different in usage. The one big difference is of course the set of maps one can use with QMapShack. Fortunately, Garmin maps are supported, and since all my maps of Japan are Garmin maps, I am now in the process of converting to QMapShack, at least for my Japan GPS data.

Let us start with the program I have been using for ages, and still am using: QuoVadis. QuoVadis is a great program; there is nothing to complain about. It is extremely feature-rich, and above all it can work with many different maps. While working as a mountain guide in Europe it was an essential tool for me. I could use the digital raster maps of Austria, Switzerland (SwissTOPO), Germany, France, and Italy, as well as Garmin vector maps. For serious work as a professional it was indispensable.

Unfortunately, it never worked on Linux. For some time I had a VirtualBox installation of Windows, but that was more a crutch than anything else. So for most of the GPS work I had to reboot my laptop into Windows, do my stuff there, and then switch back to Linux.

QMapShack is far from as feature-rich as QuoVadis, but as far as I can see now, it has enough features for me and my mountaineering in Japan. Dealing with GPS units is not a problem; this is the easy part. The difficult part is the digital maps. Fortunately, QMapShack supports Garmin vector maps, the only maps that are (as far as I know) available digitally for Japan.

Getting used to QMapShack is a bit of a challenge coming from QuoVadis, but the Wiki help pages allowed a quick start. Within a short time I had maps set up. The next step was to download data from my last weekend trip to Yarigadake, Kitakama ridge (unfortunately we had to turn back due to rain). Connecting and mounting the unit was already enough for it to show up in QMapShack, and allowed me to copy the tracks to my local database.

Next I switched to Windows and exported some tracks from my Japan folder to a gpx file. After rebooting to Linux, I loaded the gpx file into QMapShack and without failure all the tracks and routes showed up, as it can be seen in the above screenshot (click on it to get a bigger version). Also the display of the Garmin map worked perfectly.

Update: Please see the comment section for an explanation of the necessary functions. QMapShack does provide similar functionality. Thanks to Oliver for pointing me at it! After some time of working with it, there are a few functions from QuoVadis that I am missing for now in QMapShack:

  • Track Processor: it allows you to smoothen a track, reduce the number of points, etc. A very useful tool which I use on each track before uploading it to my blog. For a normal day-trip I usually boil a track down to about 300 points, which completely suffices to show the actual track.
  • Visual track editor: In QuoVadis one can edit a track with a visual tool that allows you to delete, move, and shift points and line segments on the fly. This, too, is extremely useful for clearing out wrong GPS points (due to bad reception etc.).
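In the meantime, point reduction of the kind the Track Processor does can also be done outside either program, for example with gpsbabel's simplify filter (assuming gpsbabel is installed; file names are placeholders):

```
gpsbabel -i gpx -f track.gpx -x simplify,count=300 -o gpx -F reduced.gpx
```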

Although these two functionalities are missing, all in all I am very happy for now with QMapShack. My next steps are:

  • Import all the Japan-related data from QuoVadis to QMapShack – unfortunately there is no script or tool to do this automatically. Since I have a few years of history here, I need to create a new project for each trip and copy the tracks/waypoints/routes for that trip into the project. This is a bit painful, but I didn’t expect that it would work without some manual tweaking.
  • Next I also want to help QMapShack, first by trying to provide a translation into Japanese, and then see what kind of features I am missing or want to have.

Finally, above all, I see a future where I do not need to reboot into Windows to do my GPS work. For now I will import all the data into both applications, but if QMapShack lives up to the promise of its first day of use, I am confident that this will not be necessary in the future.


Categories: Elsewhere

Valuebound: Configuring & Debugging XDebug with PHPStorm For Drupal 7 on Mac os X yosemite

Planet Drupal - Thu, 13/08/2015 - 20:11

I am running my machine with nginx and PHP 5.6. First, make sure that you have already installed Xdebug. We can check this with the PHP version command:

$ php -v

If you see ‘Xdebug v’ in the output, that means Xdebug is installed. If you do not, install Xdebug using PECL.
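A sketch of the PECL route (extension paths and php.ini locations vary by system, so treat these lines as illustrative):

```
$ pecl install xdebug
# then enable the built extension in php.ini, e.g.:
#   zend_extension="xdebug.so"
$ php -v   # should now mention Xdebug
```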

Categories: Elsewhere

Julien Danjou: Reading LWN.net with Pocket

Planet Debian - Thu, 13/08/2015 - 18:39

I started using Pocket a few months ago to store my backlog of things to read. It's especially useful as I can use it to read content offline, since we still don't have any Internet access in places such as airplanes or the Paris metro. It's only 2015 after all.

I have also been an LWN.net subscriber for years now, and I really like the articles in their weekly edition. Unfortunately, as access is restricted to subscribers, you need to log in: that makes it impossible to add these articles to Pocket directly. Sad.

Yesterday, I thought about that and decided to start hacking on it. LWN provides a feature called "Subscriber Link" that allows you to share an article with a friend. I managed to use that feature to share the articles with my friend… Pocket!

As doing that every week is tedious, I wrote a small Python program called lwn2pocket that I published on GitHub. Feel free to use it, hack it and send pull requests.
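The core of such a script presumably boils down to POSTing each Subscriber Link to Pocket's v3 "add" endpoint; a hedged sketch with placeholder credentials and URL:

```
curl -s https://getpocket.com/v3/add \
  -d consumer_key=YOUR_CONSUMER_KEY \
  -d access_token=YOUR_ACCESS_TOKEN \
  -d 'url=https://lwn.net/SubscriberLink/EXAMPLE/'
```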

Categories: Elsewhere

ThinkShout: Peer-To-Peer Fundraising With Drupal - RedHen Raiser

Planet Drupal - Thu, 13/08/2015 - 17:00
The Origin of RedHen Raiser

Last summer, the Capital Area Food Bank of Washington, DC came to us with a great idea: The Food Bank wanted to increase its investment in team fundraising initiatives, but they didn’t necessarily want to continue to invest in a "pay as you go" service like StayClassy, TeamRaiser, or Razoo. Rather, they wanted to explore whether or not they could invest in the development of an open source alternative that would eliminate licensing costs, that they could customize more easily, and that they could then give back to the broader nonprofit community.

We’ve spoken a lot in the community regarding various strategies for how nonprofits can contribute to the development of open source tools. Most of the time, we recommend that nonprofits start off by solving their own organizational problems first, and then abstracting components of the resulting solutions at the end of the project.

However, in the case of the Capital Area Food Bank’s team fundraising needs, a competitive analysis of existing solutions provided us with a well-defined roadmap for how to meet the Food Bank’s goals, while simultaneously working on a suite of contributed tools that could be given back to the broader community. So, in coming up with their solution, we started out by developing RedHen Raiser as a stand-alone Drupal Distribution, and then implemented a customized instance of the Distribution to meet the Food Bank’s more specific design needs.

Architectural Planning for the RedHen Raiser Drupal Distribution

As mentioned above, we began our feature design process for this peer-to-peer fundraising platform by doing a competitive analysis of existing tools. The table below shows our recommended feature set ("Our MVP") compared to the functionality available as of June 2014 on eight of the leading platforms:

As a quick aside, in comparing these products it's interesting that there is no industry consensus on how to refer to these sorts of fundraising tools. We often hear the terms "peer-to-peer fundraising," “team fundraising,” “viral fundraising,” and even “crowd fundraising” used interchangeably. With crowd fundraising being all the rage in the for-profit world of Kickstarter and the like, we feel that that descriptor could be a little misleading, but the other three terms all speak to different highlights and features of these tools.

With a target feature set identified, we began wireframing our proposed solution. Again, peer-to-peer fundraising has become somewhat of a well-trodden space. While we came up with a few small enhancements to the user experiences we’ve seen in the marketplace, we recognized that donors are beginning to anticipate certain design patterns when visiting these fundraising sites:

With the wireframes in place, we turned to technical architecture. Given the need to collect a broad range of constituent data, not just user account information, it was clear to us that this Drupal distribution should be built on top of a CRM platform native to Drupal. For obvious reasons, we chose to go with RedHen CRM.

We also saw many advantages to leveraging RedHen Donation, a flexible tool for building single-page donation forms. With RedHen Donation, a "Donation field" can be attached to an entity, and in so doing, it attaches a form to that entity for processing donations. The RedHen-specific piece to this module is that it is configurable, so that incoming donation data is used to “upsert” (create or update) RedHen contacts. RedHen Donation also integrates with Drupal Commerce to handle order processing and payment handling. Integrating with the Commerce Card on File and Commerce Recurring modules, RedHen Donation can be configured to support recurring gifts as well.

Missing from the underlying set of building blocks needed for building a peer-to-peer fundraising platform with Drupal was a tool for managing and tracking fundraising against campaign targets and goals. In the case of peer-to-peer fundraising, each team fundraising page needs to be connected to a larger campaign. To address this requirement, we released a module called RedHen Campaign. This module, coupled with RedHen Donation, is at the core of RedHen Raiser, allowing us to create a hierarchy between fundraising campaigns, teams, and individual donors.

If you are interested in giving RedHen Raiser a spin, you can install the distribution using your own Drupal Make workflow, you can check out our own RedHen Raiser development workflow, or you can just click a button to spin up a free instance on Pantheon’s hosting platform. (If you’re not a Drupal developer, this last approach is incredibly quick and easy. The distribution even ships with an example fundraising campaign to help you get started.)

Customizing RedHen Raiser to Meet the Food Bank’s Needs

With a strong base in place, customizing RedHen Raiser to meet the Capital Area Food Bank’s requirements was straightforward and comparably inexpensive. It was largely a matter of adding Drupal Commerce configuration to work with their payment gateway, developing a custom theme that matched the Food Bank’s overall brand, and then training their team to start building out their first peer-to-peer fundraising campaign:

The Success of the Food Bank’s First Peer-to-Peer Fundraising Campaign

The Capital Area Food Bank launched this tool around its May 2015 "Food From The Bar" campaign, targeting law firms in the D.C. Metro area. In just 30 days, the Food Bank raised close to $150,000 on RedHen Raiser.

The Food Bank’s Chief Digital Officer, Chris von Spiegelfield, had this to say about the project:

"The Capital Area Food Bank has been no stranger to peer-to-peer fundraising tools. For years, it has relied on third-party sites such as Causes.com, Razoo, Crowdrise, among others. However, these tools often came with considerable branding and messaging limitations as well as pretty stiff transactional fees. Users got confused about where their money was going and complained after learning a considerable portion of their donation didn’t make its way to the Food Bank. We wanted to provide greater unity of purpose beyond a donation form without all the hassle, which is how we decided to invest in our own crowdfunding platform.

"After kicking the tires at a few SaaS options, we decided the best way forward was to build a customized website. Out of all the different frameworks proposed, the open-source Drupal and RedHen Raiser combo impressed us the most. We wouldn’t just be buying a website. We would be leveraging a vast network of programmers and community-minded architects who could start us off in a very good place, not to mention help our platform be secure and grow for the foreseeable future.

"We launched the website this year and couldn’t be happier with what ThinkShout built for us. We’re already seeing it pay dividends across several campaigns. We continue to add new features and hope our site might be a benchmark that other nonprofits could benefit from and contribute to as well."

What’s Next for RedHen Raiser?

The obvious answer to this question is a port to Drupal 8. But that will take a little time, as many complex pieces, such as Drupal Commerce and RedHen CRM, will need to be ported before we can migrate these higher-level fundraising features. A RedHen CRM port, however, is on our short(er) term horizon. And frankly, the idea of being able to use RedHen as a "headless CRM" is incredibly exciting.

In the meantime, we are looking forward to collaborating with the community to make RedHen Raiser an even stronger open source competitor to the pay-as-you-go alternatives that are currently out there. So, please, give RedHen Raiser a test drive and send us your feedback!
