Craig Small: Be careful with errno

Planet Debian - Sat, 08/08/2015 - 12:54

I’m getting close to releasing version 3.3.11 of procps. When it gets near that time, I generally browse the Debian Bug Tracker again for procps bugs. Bug number #733758 caught my eye. With the free command, if you used the s option before the c option, the s option failed with “seconds argument ‘N’ failed”, where N was the number you typed in. That error should only appear when you type letters instead of a number of seconds. It seemed reasonably simple to test and simple to fix.

Take me to the code

The relevant code looks like this:

case 's':
    flags |= FREE_REPEAT;
    args.repeat_interval = (1000000 * strtof(optarg, &endptr));
    if (errno || optarg == endptr || (endptr && *endptr))
        xerrx(EXIT_FAILURE, _("seconds argument `%s' failed"), optarg);

Seems a pretty stock-standard sort of option handler: use strtof() to convert the string into a float.

You need to check both errno AND optarg == endptr because:

  • A valid but too-large float sets errno to ERANGE
  • An invalid float (e.g. “FOO”) leaves optarg == endptr

At first I thought the logic was wrong, but tracing through it, it was fine. I then compiled free from the upstream git source; the program worked fine with the s flag and no c flag. Doing a diff between the upstream HEAD and Debian’s 3.3.10 source showed nothing obvious.

I then moved the upstream git checkout to 3.3.10 too and re-compiled. The Debian build failed; the upstream build parsed the s flag fine. I ran diff: no change. I ran md5sum: the hashes matched. What was going on here?

I’ll set when I want

The man page says that in the case of under/overflow “ERANGE is stored in errno”. What this means is that if there isn’t an under- or overflow, then errno is NOT set to 0; it’s just not set at all. This is quite useful when you have a chain of functions and you just want to know whether something failed, but don’t care what.

Most of the time you would have a “Have I failed?” test and then check errno for why. A typical example is socket calls, where anything less than 0 means failure: you check the return value first and then errno. strtof() is one of those funny ones where most people check errno directly; it’s simpler than checking for +/- HUGE_VAL. You can see, though, that there are traps.
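As a minimal sketch of that check-the-return-value-first pattern (plain standard sockets, nothing procps-specific):

#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        /* The return value says the call failed; only now is errno
           guaranteed to be meaningful. perror() prints it. */
        perror("socket");
        return 1;
    }
    /* ... use fd ... */
    return 0;
}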

What’s the difference?

OK, so a simple errno = 0 above the call fixes it. But why would the Debian source tree have this failure and the upstream not, even with the same code? The difference is how they are compiled.

The upstream compiles free like this:

gcc -std=gnu99 -DHAVE_CONFIG_H -I. -include ./config.h -I./include -DLOCALEDIR=\"/usr/local/share/locale\" -Iproc -g -O2 -MT free.o -MD -MP -MF .deps/free.Tpo -c -o free.o free.c
mv -f .deps/free.Tpo .deps/free.Po
/bin/bash ./libtool --tag=CC --mode=link gcc -std=gnu99 -Iproc -g -O2 ./proc/libprocps.la -o free free.o strutils.o fileutils.o -ldl
libtool: link: gcc -std=gnu99 -Iproc -g -O2 -o .libs/free free.o strutils.o fileutils.o ./proc/.libs/libprocps.so -ldl


While Debian has some hardening flags:

gcc -std=gnu99 -DHAVE_CONFIG_H -I. -include ./config.h -I./include -DLOCALEDIR=\"/usr/share/locale\" -D_FORTIFY_SOURCE=2 -Iproc -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -MT free.o -MD -MP -MF .deps/free.Tpo -c -o free.o free.c
mv -f .deps/free.Tpo .deps/free.Po
/bin/bash ./libtool --tag=CC --mode=link gcc -std=gnu99 -Iproc -g -O2 -fstack-protector-strong -Wformat -Werror=format-security ./proc/libprocps.la -Wl,-z,relro -o free free.o strutils.o fileutils.o -ldl
libtool: link: gcc -std=gnu99 -Iproc -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -Wl,-z -Wl,relro -o .libs/free free.o strutils.o fileutils.o ./proc/.libs/libprocps.so -ldl

It’s not the compiling of free itself that is doing it, but the library. Most likely something that is called before the strtof() is setting errno, which this code then trips over. In fact, if you run the upstream free linked against the Debian procps library, it fails.

The moral of the story is to set errno to 0 before the call if you are going to depend on it to check whether the function succeeded.
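In code, the moral looks something like this minimal, self-contained sketch (parse_seconds is a hypothetical helper for illustration, not the actual procps patch):

#include <err.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

/* Parse a seconds argument, trusting errno only after clearing it. */
static float parse_seconds(const char *arg)
{
    char *endptr;
    float val;

    errno = 0;   /* clear any error left over from earlier library calls */
    val = strtof(arg, &endptr);
    if (errno || arg == endptr || *endptr)
        errx(EXIT_FAILURE, "seconds argument `%s' failed", arg);
    return val;
}

int main(int argc, char **argv)
{
    if (argc > 1)
        printf("interval: %f seconds\n", parse_seconds(argv[1]));
    return 0;
}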


Categories: Elsewhere

Wouter Verhelst: Backing up with tar

Planet Debian - Sat, 08/08/2015 - 10:45

The tape archiver, better known as tar, is one of the older backup programs in existence.

It's not very good at automated incremental backups (for which bacula is a good choice), but it can be useful for "let's take a quick snapshot of the current system" type of situations.

As I'm preparing to head off to DebConf tomorrow, I'm taking a backup of my n-1 laptop (which still contains some data that I don't want to lose) so it can be reinstalled and used by the DebConf video team. While I could use a "proper" backup system, running tar to a large hard disk is much easier.

By default, however, tar won't preserve everything, so it is usually a good idea to add some extra options. This is what I'm currently running:

sudo tar cvpaSf player.local:carillon.tgz --rmt-command=/usr/sbin/rmt --one-file-system /

which breaks down to:

  • c: create a tar archive
  • v: verbose output
  • p: preserve permissions
  • a: automatically determine compression based on the file extension
  • S: handle sparse files efficiently
  • f player.local:carillon.tgz: write to a file on a remote host
  • --rmt-command=/usr/sbin/rmt: use /usr/sbin/rmt as the rmt program
  • --one-file-system: don't descend into separate filesystems (since I don't want /proc and /sys etc to be backed up)
  • /: back up my root partition

Since I don't believe there's any value to separate file systems on a laptop, this will back up the entire contents of my n-1 laptop to the carillon.tgz in my home directory on player.local.

Categories: Elsewhere

Norbert Preining: Japanese-English dictionary for Kobo

Planet Debian - Sat, 08/08/2015 - 01:58

The Kobo firmware has always allowed downloading a bunch of dictionaries, most of which I don’t need. As I am fluent in most languages I read and write, the only dictionary I really want is a Japanese-English one (I don’t dare ask for Japanese-German). Unfortunately, Kobo never shipped one. OTOH, starting with firmware 3.16.10 they ship two different English-Japanese dictionaries and one excellent Japanese-Japanese dictionary, but no Japanese-English. So I took the liberty of writing a script that allows everyone to enrich the shipped Japanese-Japanese dictionary with English definitions.

Till now I have used Tshering’s excellent Japanese dictionary, which was enriched with English definitions. It was based on the older Japanese-Japanese dictionary shipped with firmwares before 3.16.10. I didn’t want to lose the more complete dictionary from the new firmware, so here is a script that updates the dictionary for you…


The script is neither fool-proof nor does it do everything by itself. Furthermore, it requires a set of programs. In detail:

  • Unix/Linux computer: I haven’t tried any of this on a Windows machine, but I am happy about feedback. I am working on a version that does not depend on external programs, and thus might be much more portable.
  • Dictionaries: a copy of the Edict dictionary; see below for details on dictionaries.
  • 7z: a standard zip/unzip program that also takes the locale into account when unpacking (unlike the unzip I have access to).
  • Perl modules: various Perl modules that should be standard on most installations: Getopt::Long, File::Temp, File::Basename, and Cwd.
Supported dictionaries

At the moment the program can use the following two dictionaries as sources: Edict2 and Japanese3.


The Edict dictionary is a free dictionary which is the base of most other dictionaries. Created by the JMdict/EDICT Project, it provides a very complete Japanese-English dictionary.

To use the dict with the current program, one needs to download the edict2.gz file and unpack it with gunzip edict2.gz. If you put it into the same directory as the Perl script, nothing else needs to be done; otherwise one can use the command line option --edict PATH-TO-EDICT to specify the location of the edict2 file.


Japanese3 is a dictionary application for iOS which provides another very complete Japanese-English dictionary. My feeling is that it is 90% or more based on Edict, so adding it will not buy you much, but still a bit (see below for how much!).

If you have purchased this dictionary/application, and you manage to get access to your iOS device (via jailbreaking or some other tools), then you need to get the file Japanese3.db from the application folder, and then generate a file via sqlite3 as follows:

$ sqlite3 Japanese3.db
sqlite> .output japanese3-data
sqlite> select Entry, Furigana, Summary from entries;

If you save the generated file japanese3-data into the same directory as the Perl script, nothing else needs to be done, otherwise one can use the command line option --japanese3 PATH-TO-J3-DATA to specify the location of the Japanese3 data file.

Command line options

The program supports the following command line options:

  • -h, --help Print this message and exit.
  • -i, --input location of the original Kobo GloHD dict, defaults to dicthtml-jaxxdjs.zip. This can of course point to one of the old dictionaries named dicthtml-ja.zip to enhance those.
  • -o, --output name of the output file, defaults to dicthtml-jaxxdjs-TIMESTAMP.zip
  • --dicts dictionaries to be used, can be given multiple times, possible values are edict2 and japanese3. The order determines the priority of the dictionary.
  • -e, --edict location of the edict2 file, defaults to edict2
  • -j, --japanese3 location of the japanese3 file, defaults to japanese3-data
  • --keep-input keep the unpacked directory
  • --keep-output keep the updated output directory
  • -u, --unpacked location of an already unpacked original dictionary

Note that in case you pass in the option --unpacked, the files should be properly named (encodings are a problem!). Furthermore, note that the unpacked directory contains lots of .gif and .html files that are actually gzip-compressed files, but lacking the .gz extension.

If you want to unpack the dictionary, be advised that the file names in the Zip directory are already encoded in UTF-8, but normal programs (unzip, 7z) assume that they are encoded in some other encoding. Thus, if you are using a UTF-8 locale, it is necessary to set LC_CTYPE to C to make sure the file names are taken as-is, as in

LC_CTYPE=C 7z x ...

Typical run

In the following example we use both the Edict2 and Japanese3 dictionaries, and prefer Edict2:

$ perl enhance-dictionary.pl --dicts edict2 --dicts japanese3
Using the following dictionaries as source for translations: edict2 japanese3
loading edict2 ... done
loading Japanese3 data ... done
unpacking original dictionary ... done
loading dict files ... done
searching for words and updating ... done
total words 805521, matches: 130563 (edict: 123388, japanese3: 7175)
creating output html ... done
creating update dictionary in dicthtml-jaxxdjs-201508072201.zip ... done
$

Installing the dictionary

After having created the enhanced dictionary, one can install the generated file as KOBO/.kobo/dicts/dicthtml-jaxxdjs.zip (where KOBO is the mount point of the eReader). The dictionary should be picked up automatically, and the next lookup should show English definitions alongside the Japanese ones.

There is only one caveat: syncing with Kobo will re-download the original dictionary and overwrite the enhanced one. There is at least one solution, which I am employing; see this post on MobileRead.

Download and development place

I am using the GitHub repository kobo-ja-dict-enhance to develop the program. Please report bugs and feature requests there.

The file for the last released version can be downloaded from here: enhance-dictionary.pl

Future plans

I am planning to get rid of the 7z dependency and use Archive::Zip for all the unpacking and packing. This would also allow doing everything in memory and thus make it faster. There is a branch on GitHub with ongoing work on this matter.

I am also thinking about including translations for Hiragana words by providing the translations of all possible Kanji readings.


Even if Kobo does not provide us with a decent Japanese-English dictionary, adding at least a huge amount of translations to the current dictionary is now easily possible. For serious students of Japanese who are reading – or starting to read – Japanese eBooks, this will be of great help.

Enjoy, and don’t forget to give feedback, suggestions, and improvements either here or better via the Github page.

Categories: Elsewhere

Drupal Watchdog: Drupal Should Use an External Front End Framework

Planet Drupal - Fri, 07/08/2015 - 22:15

This blog post is intended to be a comprehensive analysis of, and response to,
#2289619: [meta] Add a new framework base theme to Drupal core, and to encapsulate how I believe core should move forward with a default theme. For brevity, the term "framework" in this post will always refer to "front end framework".

The Problem

Core has always provided its own themes; however, the benefit of sustaining this model is becoming increasingly difficult to justify.

Fast Moving Front End Technologies

In regards to the front end, core has historically been more reactive than proactive. This isn’t all that surprising, nor a bad approach, when you take into account what core originally had to deal with.

Now consider for a moment, all the different technologies and techniques that are used on the front end today and compare that with when Bartik was created ~6 years ago. Things have changed rather drastically with the advent of Responsive Web Design, HTML5 and CSS preprocessors like SASS and LESS.

Web Components are possibly the next major milestone in front end development, with an impact potentially just as large, if not larger, than that of HTML5 and Responsive Web Design. While the concept isn't necessarily new (frameworks have had "components" for years), it is definitely being pushed to become a "web standard", with Google leading the way. Web Components are an encapsulation of multiple front end technologies supported directly by the browser. This could also be the future of what we now consider "frameworks": a consolidation of web components. Perhaps the web as a whole; only time can tell.

2015 DrupalCon Los Angeles Session: Drupal 9 Components Library: The next theme system

The 1% of the 1%

Generally known as the 1% rule, only ~1% of any given internet community is responsible for creation. In regards to Drupal, this figure is actually even more drastic, with only about 0.02% being core developers.

This fact is what makes core developers a "rare commodity". I would, however, like to take this a step further. Of that 0.02%, how many of those core developers are front end developers? I’m sure no one knows the exact ratio, but it is commonly accepted that core front end developers are an even "rarer commodity".

Back end developers are the majority. They have little to no interest in current front end technologies, techniques or "Best Practices™". I'm not discounting the fact that there are back end developers that cross over, or vice versa, but there are just a handful of such "unicorns".

Without enough of a front end ratio in core development, it is impractical and unfair for everyone involved to expect any sort of "solid theme" from core. It is because of this fact that not having an external framework is actually hurting us from a community, development and maintainability standpoint.

"Getting off the Island"

Core has already adopted the "Proudly Found Elsewhere" mentality for a lot of its back end architecture. The benefits that this approach has brought to our community has proven predominantly fruitful. The front end should be no different.

Core really hasn't accepted this direction for front end themes, but doing so would allow core development to focus solely on the integration of an external framework. This would reduce a lot of the technical debt required to sustain the design and implementation of CSS and JS in a core theme.

Keynote: Angela Byron — Drupal 8: A Story of Growing Up and Getting Off the Island — php[world] 2014

Automated testing

While the automated testing infrastructure (DrupalCI) is definitely improving, it is still primarily focused on the back end side of things.

Core has wonderful unit testing. These unit tests ensure that a theme can implement the necessary hooks available to it via the theme system APIs. They are also great at ensuring that a theme's markup is correctly generated.

However, that is where the benefits of automated testing of core themes end. Any patch that affects CSS or JS must undergo a very manual process which requires contributors to physically apply patches and "test" changes in multiple browsers. This often results in bucket loads of before and after screenshots on an issue. This is hardly ideal.

The reason many front end oriented projects live on GitHub is their ability to integrate amazing automated tests through tools like Travis CI and Sauce Labs. Being on GitHub allows front end projects to rigorously test the specific areas their technologies implement, which leads to the stability of their codebases.

Perhaps one day the core and contrib themes could leverage the same type of abilities on drupal.org, perhaps not.

Regardless, testing is just as paramount to the front end as it is for the back end.

Unless the testing infrastructure is willing to entertain the possibility of theme-based CSS and JS testing, an external framework is really our only alternative for stable front end code. At the very least, implementing an external framework allows us to defer this decision.

Popular Drupal-grown base themes

There will always be use cases for the popular Drupal-grown base themes. It really just depends on the project and capabilities of a front end developer. There's nothing wrong with them and I have used them all. They are without a doubt a very powerful and necessary part of our ecosystem.

There is often a lot of misunderstanding around what these base themes actually are, though. Many of them started out simply as a way to "reset" core. Over time, many have added structural components (grid systems), useful UI toggles and other tools. However, the foundation for many of them is simply to provide a starting point to create a completely custom sub-theme. Intended or not, they are essentially by front end Drupal developers for front end Drupal developers.

The priority for these base themes is not "out-of-the-box" visual consumption, but rather providing a blank canvas supported by a well groomed toolset. It is because of this "bare nature" that they can actually become more of an obstacle than a benefit for most. They essentially require an individual to possess knowledge of CSS or more, to get even the most basic of themes up and running.

Their target audience is not the other 99.9998%, so they are not viable for core.

The Solution

Due to the complexity of the theme system, the work with Twig has been a daunting task. This alone has pushed a lot of what we thought we would "get to" into even later stages. I propose that the next stage for a default core theme is to think long term: adoption of an external framework.

Proudly Found Elsewhere

Core cannot continue to support in-house theme development, at least until there is a higher ratio of core front end developers. External frameworks live outside of Drupal and help ensure fewer "Drupalisms" are added to our codebase. They also allow Drupal to engage with the front end community on a level it never has before.

Vast and Thriving Ecosystems

Because frameworks typically live outside of larger projects, they usually have a vast and thriving ecosystem of their own. These often produce a flurry of additional and invaluable resources like: documentation, blog posts, in-depth how-to articles, specific Q&A sites, support sites, template sites, forums, custom plugins/enhancements. Drupal would instantly benefit from these existing resources and allow our community to offset some of the learning curves.

Back end developer friendly

These resources also allow a back end developer to focus more on how to implement existing patterns than worrying about having to create new ones. This would allow core developers to focus solely on the theme system itself and providing the necessary APIs for theme integration, rather than the more complicated front end CSS or JS implementations.

Why Bootstrap?

There are many frameworks out there and, quite frankly, attempting to find the one that is "better" than the other is futile; frameworks are simply opinionated "standards". You may agree with one’s opinion or you may not. It does not change the fact that they all work.

The question that remains is: which framework do we put in core?

I strongly believe that it should be Bootstrap. A lot of individuals, including myself, have already put in a great deal of time and effort in contrib to solve this exact issue: how to use an external framework with Drupal.

Another advantage of using Bootstrap is that it is already backed by a massive external community.

"The main strength of Bootstrap is its huge popularity. Technically, it’s not necessarily better than the others in the list, but it offers many more resources (articles and tutorials, third-party plug-ins and extensions, theme builders, and so on) than the other four frameworks combined. In short, Bootstrap is everywhere. And this is the main reason people continue to choose it." (Ivaylo Gerchev)

In just two and a half years, the Drupal Bootstrap base theme has grown exponentially, a whopping 2596.67% (based on 7.x installs: from 1,070 on January 6, 2013 to 70,531 on July 12, 2015*), and has become the third most installed Drupal theme on drupal.org.
*Note: I have chosen to exclude the past two weeks of statistics as I believe they are in error due to #2509574: Project usage stats have probably gone bad (again).

While I cannot attest to the exact reason this rapid adoption has occurred, here is an educated guess: it's what the people want. I purposefully made something that was easy to install and worked right "out-of-the-box", ensuring that the focus of the project was on the other 99.9998%.

No other Drupal project that implements an external framework can claim this or even come close to it.

Ease of use is paramount and often overlooked by developers. This "philosophy" is what has allowed sites like Dreditor to be born and this one, Drupal Watchdog, to be redesigned given some rather severe time constraints.

Conclusion: Drupal 8, 9, 10...

Adopting an external framework is just the logical next step in core's "Proudly Found Elsewhere" mission on the front end. Regardless of which framework is ultimately chosen, I think it is more important to see why Drupal needs an external framework.

We already have too many issues and tempers flaring around even the smallest of details on the front end. By outsourcing a theme's design (CSS & JS), we would allow our community to instead focus on the integrations of themes, like the future of components and much larger issues.

While this issue isn't about trying to add a framework to core just yet, I think it is very important to have this discussion early on. I do think that ultimately, a framework based theme in core should replace Bartik, but that won't and should not happen until D9.

Since adding an external framework base theme would be purely an API addition, there isn't anything that would prevent us from adding it in an 8.Y.0 release (solely opt-in, of course). In fact, I would strongly recommend that we add one before D9 so we can smooth out any remaining details before tackling something as gigantic as #1843798: [meta] Refactor Render API to be OO.

I have a feeling that D9 will be "the year(s) of the front end". While yes, Twig is awesome, the fact remains that the underlying theme system (and default theme) itself hasn't changed all that much and needs some serious tough love.

I believe integrating an external framework is an excellent way for us to not only reduce our technical debt and maintenance burden, but also focus how we should be developing our theme system. We have an opportunity to transform the first visual impression of Drupal.

Let's do it for the 99.9998%.

Tags:  Drupal 8 Drupal 9 Proudly Found Elsewhere Themes Theming Community Front End Framework Bootstrap
Categories: Elsewhere

Guido Günther: Debian work in July 2015

Planet Debian - Fri, 07/08/2015 - 19:00

July was the third month I contributed to Debian LTS under the Freexian umbrella. In total I spent eight hours working on:

  • lighttpd: Fixed CVE-2014-3566 by adding an option to disable SSLv3 (ssl.use-sslv3), compatible with the option added to newer versions. This resulted in DLA-282-1.

  • nss: research on CVE-2015-2730 and CVE-2015-2721. Work on the former is still ongoing since the bug #1125025 referenced in the Mozilla advisory has restricted access and I did not manage to get it opened yet. I found commits in upstream's Mercurial that reference the bug though, and the committer was nice enough to answer questions.

    The backported changes for CVE-2015-2721 involve lots of changes to the internal state machine when accepting SSL connections, so I'm currently looking into backporting the test suite for that on non-LTS time.

Besides that I did CVE triaging of 11 CVEs to check if and how they affect oldoldstable security as part of my LTS front desk work.

Categories: Elsewhere

Christoph Egger: Systemd pitfalls

Planet Debian - Fri, 07/08/2015 - 18:24
logind hangs

If you have just updated systemd and ssh connections to that host seem to hang, that's just a known bug (Debian Bug #770135). Don't panic. Wait for the logind timeout and restart logind.

restart and stop;start

One thing that has confused me several times and still confuses people is that systemctl restart does more than systemctl stop ; systemctl start. You will notice the difference once you have a failed service: a restart will try to start the service again, while both stop and start will just ignore it. Rumor has it this has changed post-jessie, however.

sysvinit-wrapped services and stop

While there are certainly bugs with sysvinit services in general (I found myself several times without a local resolver as unbound failed to be started; I haven't managed to debug this further), the stop behaviour of wrapped services is just broken. systemctl stop will block until the sysv initscript finishes. It will even note the result of the action in its state. However, systemctl will return with exit code 0 and not output anything on stdout/stderr. This has been reported as Debian Bug #792045.

zsh helper

I found the following zshrc snippet quite helpful in dealing with non-reported systemctl failures. On root shells it will display a list of failed services as part of the prompt. This will give proper feedback on whether your systemctl stop failed, whether you still have type=simple services, and whether a sysv-init script or wrapper is broken.

precmd () {
    if [[ $UID == 0 && $+commands[systemctl] != 0 ]]
    then
        use_systemd=true
        systemd_failed="`systemctl --state=failed | grep failed | cut -d \  -f 2 | tr '\n' ' '`"
    fi
}

if [[ $UID == 0 && $+commands[systemctl] != 0 ]]
then
    PROMPT=$'%{$fg[red]>> $systemd_failed$reset_color%}\n'
else
    PROMPT=""
fi
PROMPT+=whateveryourpromptis

zsh completion

Speaking of zsh, there's one problem that bothers me a lot and for which I don't have any solution. Tab-completing the service name for service is blazingly fast; tab-completing the service name for systemctl restart takes ages. People have traced it down to truckloads of D-Bus communication during the completion, but no further fix is known (to me).

type=simple services

As described at length by Lucas Nussbaum, type=simple services are actively harmful. Proper type=forking daemons are strictly superior (they provide feedback of finished initialization and its success) and type=notify services are so simple there's no excuse for not using them, even for private one-off hacks. Even if your language doesn't provide libsystemd-daemon bindings:

(defun sd-notify (event-string)
  (let ((socket (make-instance 'sb-bsd-sockets:local-socket :type :datagram))
        (name (posix-getenv "NOTIFY_SOCKET"))
        (bufferlen (length event-string)))
    (when name
      (sb-bsd-sockets:socket-connect socket name)
      (sb-bsd-sockets:socket-send socket event-string bufferlen))))

This is a stable API guaranteed to not break in the future and implemented in less than ten lines of code with just basic socket functions. And if your language has support it becomes actually trivial:

try:
    import systemd.daemon
    systemd.daemon.notify("READY=1")
except ImportError:
    pass

Note that in both cases there is no drawback at all on systemd-free setups. It has the overhead of checking the process' environment for NOTIFY_SOCKET or for the systemd package and behaves like a simple service otherwise.
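For completeness, the same few lines work in plain C too. A hedged sketch using only basic socket calls (it assumes a filesystem-path NOTIFY_SOCKET; a full implementation would also handle the abstract-namespace variant whose name starts with '@'):

#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static void sd_notify_ready(void)
{
    const char *path = getenv("NOTIFY_SOCKET");
    struct sockaddr_un addr;
    int fd;

    /* Not started by systemd, or an abstract socket we don't handle. */
    if (!path || path[0] == '@')
        return;

    fd = socket(AF_UNIX, SOCK_DGRAM, 0);
    if (fd < 0)
        return;

    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    /* Tell systemd that initialization finished successfully. */
    sendto(fd, "READY=1", strlen("READY=1"), 0,
           (struct sockaddr *)&addr, sizeof(addr));
    close(fd);
}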

Actually, the idea of separating the technical aspect (daemonizing) from the semantic aspect of signalling "initialization finished, everything's fine" is a pretty good one, and it hopefully has the potential to reduce the number of services that signal "everything's fine" too early. It could even be ported to non-systemd init systems easily, given the API.

Categories: Elsewhere

Neil McGovern: Forty five hours

Planet Debian - Fri, 07/08/2015 - 14:53

As some may know, since October 2013, I’ve been studying to gain my Private Pilot Licence, and I finally achieved this goal. It’s actually taken quite a bit more than 45 hours – a total of around 60, but that does include a day trip to France (Le Touquet) and getting my night rating as well.

This basically means I can fly single engine piston aeroplanes on my own, with passengers, as soon as my paperwork gets processed by the Civil Aviation Authority anyway.

I’ve been flying Cessna 172s from Cambridge Aero Club, where they have four of them, G-SHWK, G-UFCB, G-HERC and G-MEGS, as well as an Extra 200, G-GLOC. It’s a great club, with fantastically maintained planes and great instructors, and Cambridge Airport has a full ATC service as well, so it’s been useful to get that experience, especially as the UK’s airspace is fairly contended, with a lot of controlled and military airspace which needs permission to enter.

As for what next, I need to work that out. When you get your licence, it’s often described as a licence to learn, so that’s what I intend to do. Apart from popping over to France for lunch every now and again, I’m probably going to have a go at aerobatics and farm strip flying, then probably look at my IMC rating.

So, if it’s a nice day, and you’re around in Cambridge, let me know if you want a trip up in the skies!

Categories: Elsewhere

Mike Gabriel: New plugin for GOsa²: gosa-plugin-mailaddress

Planet Debian - Fri, 07/08/2015 - 14:31

During last week, I hacked a new plugin together for GOsa².

Simply quoting parts from debian/control here to inform you on its functionality:

Package: gosa-plugin-mailaddress
Architecture: all
Depends: gosa (>= 2.7),
Description: Simple plugin to manage user mail addresses in GOsa²
 This plugin is a very light-weighted version of the GOsa² mail plugin.
 Whereas gosa-plugin-mail can be used to manage a complete mail server
 farm, this tiny plugin only provides means to modify the user's mail
 address via a text field.
 This plugin is useful for people that need to maintain users' email
 addresses via GOsa², but do not run their own mailserver(s).
 GOsa² is a combination of system-administrator and end-user web
 interface, designed to handle LDAP based setups.

Mike (aka sunweaver)

Categories: Elsewhere

Mike Gabriel: My FLOSS activities in July 2015

Planet Debian - Fri, 07/08/2015 - 14:28

July 2015 has been mainly dedicated to these five fields of endeavour:

  • Debian Edu rollout at a grammar school (Gymnasium) in Lübeck, Germany
  • GOsa² and Debian Edu testing and fixing
  • Upgrading a Debian Edu squeeze main server to jessie
  • Packaging work started to get ILIAS into Debian
  • Work on Debian and Debian LTS
Debian Edu rollout at a grammar school (Gymnasium) in Lübeck, Germany

In spring 2015, we got contacted by the IT coordinator of a grammar school (Gymnasium) in Lübeck, Germany. He asked for some consultancy on the existing school network based on Debian and Linux Mint. The school has been running on Linux all-over for the past 5 years (at least, IIRC).

After several phone calls and a personal meeting, the decision was reached to switch over the educational segment of their school IT completely to Debian Edu / Skolelinux.

GOsa² and Debian Edu testing and fixing

This new customer gave us the opportunity to test Debian Edu jessie intensively. Diverging from previous rollouts, we dropped LibVirt as the virtualization technology and switched over to Ganeti. The Debian Edu machines all run in KVM virtual machines.

Our Diskless Workstations and diskfull workstations have been running on Debian Edu jessie plus MATE desktop environment for a while already. But the main servers at other customers' are still on Debian Edu squeeze.


Categories: Elsewhere

Ben Hutchings: Debian LTS work, July 2015

Planet Debian - Fri, 07/08/2015 - 13:52

This was my eighth month working on Debian LTS. I was assigned 14.75 hours of work by Freexian's Debian LTS initiative.


I didn't upload any new version of the kernel this month, but I did send all the recent security fixes to Willy Tarreau who maintains the 2.6.32.y branch at kernel.org. I also spent more time working on a fix for bug #770492 aka CVE-2015-1350, which is not yet fixed upstream. I now have a candidate patch for 2.6.32.y/squeeze, and automated tests covering many of the affected filesystems.

Front desk

The LTS 'front desk' role is now assigned on a rota, and I had my first turn in the third week of July. I investigated which new CVEs affected LTS-supported packages in squeeze, recorded this in the secure-testing repository, and mailed the package maintainers to give them a chance to handle the updates.


Groovy had a single issue (CVE-2015-3253) with a simple fix that I could apply to the version in squeeze. Unfortunately the previous version in squeeze had not been properly updated during the squeeze release cycle and could no longer be built from source. I eventually worked out what the build-dependencies should be, uploaded the fix and issued DLA-274-1.


Ruby 1.9.1 also had a single issue (CVE-2014-6438), though the fixes were more complicated and hard to find. (The original bug report for this is still not public in the upstream bug tracker.) I also had to find an earlier upstream change that they depended on. As I've mentioned before, Ruby has an extensive test suite so I could be quite confident in my backported changes. I uploaded and issued DLA-275-1.


The GNU library for Internationalized Domain Names, libidn, required applications to pass only valid UTF-8 strings as domain names. The Jabber instant messaging server turned out not to be validating untrusted domain names, leading to a security issue there (CVE-2015-2059). As there are likely to be other applications with similar bugs, this was resolved by adding UTF-8 validation to libidn.

The fix for this involved importing additional functions from the GNU portability library, gnulib, and there my difficulties began. Confusingly, libidn has two separate sets of functions imported from gnulib, and due to interdependencies it turned out that I would have to update both of these wholesale rather than just importing the new functions that were wanted. This resulted in a 35,000 line patch. Following that I needed to autoreconf the package (and debug that process when it failed), ending up with another 26,000 line patch. Finally, it turned out that the new gnulib code needed gperf to build a header file for Solaris (when building for Linux? huh?). I ended up adding that with another patch instead.

libidn has a decent test suite, so I could at least be confident in the result of my changes. I uploaded and issued DLA-277-1.

Dear upstream developers, please use sane libraries instead of gnulib.

Categories: Elsewhere

