Planet Debian


Kevin Avignon: Become better, learn better, do code katas !

Sun, 19/06/2016 - 18:46
Hey guys! Today, I’m here talking about how to better ourselves as developers. We’re always learning while working and that’s bad. We should instead learn in our own time and practice before trying to use our tools to resolve the bugs in the software and the feature requests from the clients. Learning on the job … Continue reading Become better, learn better, do code katas ! →
Categories: Elsewhere

Paul Tagliamonte: Go Debian!

Sun, 19/06/2016 - 18:30

As some of the world knows full well by now, I've been noodling with Go for a few years, working through its pros, its cons, and thinking a lot about how humans use code to express thoughts and ideas. Go's got a lot of neat use cases, suited to particular problems, and used in the right place, you can see some clear massive wins.

Some of the things Go is great at:

  • Writing a server
  • Dealing with asynchronous communication
  • Backend and front-end in the same binary
  • Fast and memory safe

Things Go is bad at:

  • Having to rebuild everything for a CVE
  • Having if `err != nil` everywhere
  • "Better than C" being the excuse for bad semantics
  • No generics; cgo (enough said)

I've started writing Debian tooling in Go, because it's a pretty natural fit. Go's fairly tight, and overhead shouldn't be taken up by your operating system. After a while, I wound up hitting the usual blockers, and started to build up abstractions. They became pretty darn useful, so this blog post is announcing a (still incomplete, year-old, and perhaps API-changing) Debian package for Go. It contains a lot of utilities for dealing with Debian packages, and will become an edited-down "toolbelt" for working with or on Debian packages.

Module Overview

Currently, the package contains five major sub-packages: a changelog parser, a control file parser, a deb file format parser, a dependency parser and a version parser. Together, these are a set of powerful building blocks which can be combined to create higher-order systems with a reliable understanding of the world.


The first (and perhaps most incomplete and least tested) is the changelog file parser. It lets the programmer pull out the suite targeted by each changelog entry, when each upload happened, and the version of each. For example, let's look at how to pull out when all the uploads of Docker to sid took place:

func main() {
    resp, err := http.Get("")
    if err != nil {
        panic(err)
    }
    allEntries, err := changelog.Parse(resp.Body)
    if err != nil {
        panic(err)
    }
    for _, entry := range allEntries {
        fmt.Printf("Version %s was uploaded on %s\n", entry.Version, entry.When)
    }
}

The output of which looks like:

Version 1.8.3~ds1-2 was uploaded on 2015-11-04 00:09:02 -0800 -0800
Version 1.8.3~ds1-1 was uploaded on 2015-10-29 19:40:51 -0700 -0700
Version 1.8.2~ds1-2 was uploaded on 2015-10-29 07:23:10 -0700 -0700
Version 1.8.2~ds1-1 was uploaded on 2015-10-28 14:21:00 -0700 -0700
Version 1.7.1~dfsg1-1 was uploaded on 2015-08-26 10:13:48 -0700 -0700
Version 1.6.2~dfsg1-2 was uploaded on 2015-07-01 07:45:19 -0600 -0600
Version 1.6.2~dfsg1-1 was uploaded on 2015-05-21 00:47:43 -0600 -0600
Version 1.6.1+dfsg1-2 was uploaded on 2015-05-10 13:02:54 -0400 EDT
Version 1.6.1+dfsg1-1 was uploaded on 2015-05-08 17:57:10 -0600 -0600
Version 1.6.0+dfsg1-1 was uploaded on 2015-05-05 15:10:49 -0600 -0600
Version 1.6.0+dfsg1-1~exp1 was uploaded on 2015-04-16 18:00:21 -0600 -0600
Version 1.6.0~rc7~dfsg1-1~exp1 was uploaded on 2015-04-15 19:35:46 -0600 -0600
Version 1.6.0~rc4~dfsg1-1 was uploaded on 2015-04-06 17:11:33 -0600 -0600
Version 1.5.0~dfsg1-1 was uploaded on 2015-03-10 22:58:49 -0600 -0600
Version 1.3.3~dfsg1-2 was uploaded on 2015-01-03 00:11:47 -0700 -0700
Version 1.3.3~dfsg1-1 was uploaded on 2014-12-18 21:54:12 -0700 -0700
Version 1.3.2~dfsg1-1 was uploaded on 2014-11-24 19:14:28 -0500 EST
Version 1.3.1~dfsg1-2 was uploaded on 2014-11-07 13:11:34 -0700 -0700
Version 1.3.1~dfsg1-1 was uploaded on 2014-11-03 08:26:29 -0700 -0700
Version 1.3.0~dfsg1-1 was uploaded on 2014-10-17 00:56:07 -0600 -0600
Version 1.2.0~dfsg1-2 was uploaded on 2014-10-09 00:08:11 +0000 +0000
Version 1.2.0~dfsg1-1 was uploaded on 2014-09-13 11:43:17 -0600 -0600
Version 1.0.0~dfsg1-1 was uploaded on 2014-06-13 21:04:53 -0400 EDT
Version 0.11.1~dfsg1-1 was uploaded on 2014-05-09 17:30:45 -0400 EDT
Version 0.9.1~dfsg1-2 was uploaded on 2014-04-08 23:19:08 -0400 EDT
Version 0.9.1~dfsg1-1 was uploaded on 2014-04-03 21:38:30 -0400 EDT
Version 0.9.0+dfsg1-1 was uploaded on 2014-03-11 22:24:31 -0400 EDT
Version 0.8.1+dfsg1-1 was uploaded on 2014-02-25 20:56:31 -0500 EST
Version 0.8.0+dfsg1-2 was uploaded on 2014-02-15 17:51:58 -0500 EST
Version 0.8.0+dfsg1-1 was uploaded on 2014-02-10 20:41:10 -0500 EST
Version 0.7.6+dfsg1-1 was uploaded on 2014-01-22 22:50:47 -0500 EST
Version 0.7.1+dfsg1-1 was uploaded on 2014-01-15 20:22:34 -0500 EST
Version 0.6.7+dfsg1-3 was uploaded on 2014-01-09 20:10:20 -0500 EST
Version 0.6.7+dfsg1-2 was uploaded on 2014-01-08 19:14:02 -0500 EST
Version 0.6.7+dfsg1-1 was uploaded on 2014-01-07 21:06:10 -0500 EST

control

Next is one of the most complex, and one of the oldest parts of go-debian, which is the control file parser (otherwise sometimes known as deb822). This module was inspired by the way that the json module works in Go, allowing for files to be defined in code with a struct. This tends to be a bit more declarative, but also winds up putting logic into struct tags, which can be a nasty anti-pattern if used too much.

The first primitive in this module is the concept of a Paragraph, a struct containing two values, the order of keys seen, and a map of string to string. All higher order functions dealing with control files will go through this type, which is a helpful interchange format to be aware of. All parsing of meaning from the Control file happens when the Paragraph is unpacked into a struct using reflection.

The idea behind this strategy is that you define your struct and let the Control parser handle unpacking the data from the IO into your container. This lets you maintain type safety: you never have to read and cast, since the conversion handles this, returning an Unmarshaling error in the event of failure.

I'm starting to think parsing and defining the control structs are two different tasks and should be split apart -- or the common structs ought to be removed entirely. More on this later.

Additionally, structs that define an anonymous member of control.Paragraph will have the raw Paragraph struct of the underlying file, allowing the programmer to handle dynamic tags (such as X-Foo), or at least, letting them survive the round-trip through Go.

The default decoder takes an argument giving it the ability to verify the input control file using an OpenPGP keyring; the signer is exposed to the programmer through the (*Decoder).Signer() function. If the passed argument is nil, it will not check the input file's signature (at all!), and if a keyring has been passed, signed data must be found or an error will fall out of the NewDecoder call. On the way out, the opposite happens: the struct is introspected, turned into a control.Paragraph, and then written out to the io.Writer.

Here's a quick (and VERY dirty) example showing the basics of reading and writing Debian Control files with go-debian.

package main

import (
    "fmt"
    "io"
    "net/http"
    "strings"

    ""
)

type AllowedPackage struct {
    Package     string
    Fingerprint string
}

func (a *AllowedPackage) UnmarshalControl(in string) error {
    in = strings.TrimSpace(in)
    chunks := strings.SplitN(in, " ", 2)
    if len(chunks) != 2 {
        return fmt.Errorf("Syntax sucks: '%s'", in)
    }
    a.Package = chunks[0]
    a.Fingerprint = chunks[1][1 : len(chunks[1])-1]
    return nil
}

type DMUA struct {
    Fingerprint     string
    Uid             string
    AllowedPackages []AllowedPackage `control:"Allow" delim:","`
}

func main() {
    resp, err := http.Get("")
    if err != nil {
        panic(err)
    }
    decoder, err := control.NewDecoder(resp.Body, nil)
    if err != nil {
        panic(err)
    }
    for {
        dmua := DMUA{}
        if err := decoder.Decode(&dmua); err != nil {
            if err == io.EOF {
                break
            }
            panic(err)
        }
        fmt.Printf("The DM %s is allowed to upload:\n", dmua.Uid)
        for _, allowedPackage := range dmua.AllowedPackages {
            fmt.Printf("  %s [granted by %s]\n", allowedPackage.Package, allowedPackage.Fingerprint)
        }
    }
}

Output (truncated!) looks a bit like:

...
The DM Allison Randal <> is allowed to upload:
  parrot [granted by A4F455C3414B10563FCC9244AFA51BD6CDE573CB]
...
The DM Benjamin Barenblat <> is allowed to upload:
  boogie [granted by 3224C4469D7DF8F3D6F41A02BBC756DDBE595F6B]
  dafny [granted by 3224C4469D7DF8F3D6F41A02BBC756DDBE595F6B]
  transmission-remote-gtk [granted by 3224C4469D7DF8F3D6F41A02BBC756DDBE595F6B]
  urweb [granted by 3224C4469D7DF8F3D6F41A02BBC756DDBE595F6B]
...
The DM أحمد المحمودي <> is allowed to upload:
  covered [granted by 41352A3B4726ACC590940097F0A98A4C4CD6E3D2]
  dico [granted by 6ADD5093AC6D1072C9129000B1CCD97290267086]
  drawtiming [granted by 41352A3B4726ACC590940097F0A98A4C4CD6E3D2]
  fonts-hosny-amiri [granted by BD838A2BAAF9E3408BD9646833BE1A0A8C2ED8FF]
...
...

deb

Next up, we've got the deb module. This contains code to handle reading Debian 2.0 .deb files. It contains a wrapper that will parse the control member, and provide the data member through the archive/tar interface.

Here's an example of how to read a .deb file, access some metadata, iterate over the tar archive, and print the filenames of each of the entries.

func main() {
    path := "/tmp/fluxbox_1.3.5-2+b1_amd64.deb"
    fd, err := os.Open(path)
    if err != nil {
        panic(err)
    }
    defer fd.Close()

    debFile, err := deb.Load(fd, path)
    if err != nil {
        panic(err)
    }

    version := debFile.Control.Version
    fmt.Printf(
        "Epoch: %d, Version: %s, Revision: %s\n",
        version.Epoch, version.Version, version.Revision,
    )

    for {
        hdr, err := debFile.Data.Next()
        if err == io.EOF {
            break
        }
        if err != nil {
            panic(err)
        }
        fmt.Printf("  -> %s\n", hdr.Name)
    }
}

Boringly, the output looks like:

Epoch: 0, Version: 1.3.5, Revision: 2+b1
  -> ./
  -> ./etc/
  -> ./etc/menu-methods/
  -> ./etc/menu-methods/fluxbox
  -> ./etc/X11/
  -> ./etc/X11/fluxbox/
  -> ./etc/X11/fluxbox/
  -> ./etc/X11/fluxbox/
  -> ./etc/X11/fluxbox/keys
  -> ./etc/X11/fluxbox/init
  -> ./etc/X11/fluxbox/system.fluxbox-menu
  -> ./etc/X11/fluxbox/overlay
  -> ./etc/X11/fluxbox/apps
  -> ./usr/
  -> ./usr/share/
  -> ./usr/share/man/
  -> ./usr/share/man/man5/
  -> ./usr/share/man/man5/fluxbox-style.5.gz
  -> ./usr/share/man/man5/fluxbox-menu.5.gz
  -> ./usr/share/man/man5/fluxbox-apps.5.gz
  -> ./usr/share/man/man5/fluxbox-keys.5.gz
  -> ./usr/share/man/man1/
  -> ./usr/share/man/man1/startfluxbox.1.gz
...

dependency

The dependency package provides an interface to parse and compute dependencies. This package is a bit odd in that, well, there's no other library that does this. The issue is that there are actually two different parsers that compute our Dependency lines, one in Perl (as part of dpkg-dev) and another in C (in dpkg).

I have yet to track it down, but it's shockingly likely that `apt` has another in C++, and maybe another in `aptitude`. I don't know this for a fact, so I'll assume nothing.

To date, this has resulted in me filing three different bugs. I also found a broken package in the archive, which actually resulted in another bug being (totally accidentally) already fixed. I hope to continue to run the archive through my parser in hopes of finding more bugs! This package is a bit complex, but it basically just returns what amounts to an AST for our Dependency lines. I'm positive there are bugs, so file them!

func main() {
    dep, err := dependency.Parse("foo | bar, baz, foobar [amd64] | bazfoo [!sparc], fnord:armhf [gnu-linux-sparc]")
    if err != nil {
        panic(err)
    }
    anySparc, err := dependency.ParseArch("sparc")
    if err != nil {
        panic(err)
    }
    for _, possi := range dep.GetPossibilities(*anySparc) {
        fmt.Printf("%s (%s)\n", possi.Name, possi.Arch)
    }
}

Gives the output:

foo (<nil>)
baz (<nil>)
fnord (armhf)

version

Right off the bat, I'd like to thank Michael Stapelberg for letting me graft this out of dcs and into the go-debian package. This was nearly entirely his work (with a one or two line function I added later), and was amazingly helpful to have. Thank you!

This module implements Debian version parsing and comparison, allowing for sorting in lists, checking whether a version is native or not, and letting the programmer implement smart(er!) logic based on upstream (or Debian) version numbers.

This module is extremely easy to use and very straightforward, and not worth writing an example for.

Final thoughts

This is more of a "Yeah, OK, this has been useful enough to me at this point that I'm going to support this" post rather than a "It's stable!" or even "It's alive!" post. Hopefully folks can report bugs and help iterate on this module until we have some really clean building blocks to build solid higher-level systems on top of. Being able to have multiple libraries interoperate by relying on go-debian will make things massively easier. I'm in need of more documentation, and need to finalize some parts of the older sub-package APIs, but I'm hoping to be at a "1.0" real soon now.

Categories: Elsewhere

Lars Wirzenius: New APT signing key for

Sun, 19/06/2016 - 12:51

For those who use my APT repository, please be advised that I've today replaced the signing key for the repository. The new key has the following fingerprint:

8072 BAD4 F68F 6BE8 5F01 9843 F060 2201 12B6 1C1F

I've signed the key with my primary key and sent the new key with signature to the key servers. You can also download it at

Categories: Elsewhere

Elena 'valhalla' Grandi: StickerConstructorSpec compliant swirl

Sun, 19/06/2016 - 10:15
StickerConstructorSpec compliant swirl

This evening I've played around a bit with the Sticker Constructor Specification, and this is the result:

Now I just have to:

* find somebody in Europe who prints good stickers and doesn't require illustrator (or other proprietary software) to submit files for non-rectangular shapes
* find out which Debian team I should contact to submit the files so that they can be used by everybody interested.

But neither will happen today, nor probably tomorrow, because lazy O:-)

Edit: now that I'm awake I realized I forgot to thank Enrico Zini and MadameZou for their help in combining my two proposals into a better design.

Source svg
Categories: Elsewhere

Debian Java Packaging Team: Wheezy LTS and the switch to OpenJDK 7

Sun, 19/06/2016 - 00:00

Wheezy's LTS period started a few weeks ago and the LTS team had to make an early support decision concerning the Java eco-system, since Wheezy ships two Java runtime environments: OpenJDK 6 and OpenJDK 7. (To be fair, there are actually three, but gcj was superseded by OpenJDK a long time ago and the latter should be preferred whenever possible.)

OpenJDK 6 is currently maintained by Red Hat and we mostly rely on their upstream work as well as on package updates from Debian's maintainer Matthias Klose and Tiago Stürmer Daitx from Ubuntu. We already knew that both intend to support OpenJDK 6 until April 2017 when Ubuntu 12.04 will reach its end-of-life. Thus we had basically two options, supporting OpenJDK 6 for another twelve months or dropping support right from the start. One of my first steps was to ask for feedback and advice on debian-java since supporting only one JDK seemed to be the more reasonable solution. We agreed on warning users via various channels about the intended change, especially about possible incompatibilities with OpenJDK 7. Even Andrew Haley, OpenJDK 6 project lead, participated in the discussion and confirmed that, while still supported, OpenJDK 6 security releases are "always the last in the queue when there is urgent work to be done".

I informed debian-lts about my findings and issued a call for tests later.

Eventually we decided to concentrate our efforts on OpenJDK 7 because we are confident that, for the majority of our users, one Java implementation is sufficient during a stable release cycle. An immediate positive effect of making OpenJDK 7 the default is that resources can be relocated to more pressing issues. On the other hand we were also forced to make compromises. The switch to a newer default implementation usually triggers a major transition with dozens of FTBFS bugs, and the OpenJDK 7 transition was no exception. I pondered the usefulness of fixing all these bugs for Wheezy LTS versus focusing on runtime issues instead, and finally decided that the latter was both more reasonable and more economical.

Unlike regular default-Java changes, users will still be able to use OpenJDK 6 to compile their packages, and the security impact for development systems is in general negligible. More important was to avoid runtime installations of OpenJDK 6. I identified eighteen packages that strictly depended on the now-obsolete JRE and fixed those issues on 4 May 2016, together with an update of java-common, and announced the switch to OpenJDK 7 with a Debian NEWS file.

If you are not a regular reader of Debian news and also not subscribed to debian-lts, debian-lts-announce or debian-java, remember 26 June 2016 is the day when OpenJDK 7 will be made the default Java implementation in Wheezy LTS. Of course there is no need to wait. You can switch right now:

sudo update-alternatives --config java
Categories: Elsewhere

Dominique Dumont: An improved Perl API for cme and Config::Model

Sat, 18/06/2016 - 19:38


While hacking on a script to update build dependencies on a Debian package, it occurred to me that using Config::Model in a Perl program should be no more complicated than using cme from a shell script. That was an itch that I scratched immediately.

Fast forward a few days: Config::Model now features new cme() and modify() functions that behave like the cme modify command.

For instance, the following program is enough to update popcon’s configuration file:

use strict; # let's not forget best practices ;-)
use warnings;
use Config::Model qw(cme); # cme function must be imported
cme('popcon')->modify("PARTICIPATE=yes");

The object returned by cme() is a Config::Model::Instance. All its methods are available for finer control. For instance:

my $instance = cme('popcon');
$instance->load("PARTICIPATE=yes");
$instance->apply_fixes;
$instance->say_changes;
$instance->save;

When run as root, the script above shows:

Changes applied to popcon configuration:
- PARTICIPATE: 'no' -> 'yes'

If need be, you can also retrieve the root node of the configuration tree to use Config::Model::Node methods:

my $root_node = cme('popcon')->config_root;
say "is popcon active ? ", $root_node->fetch_element_value('PARTICIPATE');

In summary, using cme in a Perl program is now as easy as using cme from a shell script.

To provide feedback, comments, ideas, patches or to report problems, please follow the instructions on the CONTRIBUTING page on GitHub.

All the best

Tagged: config-model, configuration, Perl
Categories: Elsewhere

Manuel A. Fernandez Montecelo: More work on aptitude

Sat, 18/06/2016 - 18:54

The last few months have been a bit of a crazy period of ups and downs, with a tempest of events beneath the apparent and deceptively calm surface waters of being unemployed (still at it).

The daily grind

Chief activities are, of course, those related to the daily grind of job-hunting, sending applications, and preparing and attending interviews.

It is demoralising when one searches for many days or weeks without seeing anything suitable for one's skills or interests, or other more general life expectations. And it takes a lot of time and effort to put one's best in the applications for positions that one is really, really, interested in. And even for the ones which are meh, for a variety of reasons (e.g. one is not very suitable for what the offer demands).

After that, not being invited to interviews (or doing very badly at them) is bad, of course, but quick and not very painful. A swift, merciful end to the process.

But it's all the more draining when waiting for many weeks ─when not a few months─ with the uncertainty of not knowing if one is going to be lucky enough to be summoned for an interview; harbouring some hope ─one has to appear enthusiastic in the interviews, after all─, while trying to keep it contained ─lest it grows too much─; then in the interview hearing good words and some praises, and feeling the impression that one will fit in, that one did nicely and that chances are good ─letting the hope grow again─; start to think about life changes that the job will require ─to make a quick decision should the offer finally arrives─; perhaps make some choices and compromises based on the uncertain result; then wait for a week or two after the interview to know the result...

... only to end up being unsuccessful.

All the effort and hopes finally get squashed with a cold, short email or automatic response, or more often than not, complete radio silence from prospective employers, as an end to a multi-month-long process. An emotional roller coaster [1], which happened to me several times in the last few months.

All in a day's work

The months of preparing and waiting for a new job often imply an impasse that puts many other things that one cares about on hold, and one makes plans that will never come to pass.

All in a day's (half-year's?) work of an unemployed poor soul.

But not all is bad.

This period was also a busy time doing some plans about life, mid- and long-term; the usual ─and some really unusual!─ family events; visits to and from friends, old and new; attending nice little local Debian gatherings or the bigger gathering of Debian SunCamp2016, and other work for side projects or for other events that will happen soon...

And amidst all that, I managed to get some work done on aptitude.

Two pictures worth (less than) a thousand bugs

To be precise, worth 709 bugs ─ 488 bugs in the first graph, plus 221 in the second.

In 2015-11-15 (link to the post Work on aptitude):

In 2016-06-18:


The BTS numbers for aptitude right now are:

  • 221 (259 if counting all merged bugs independently)
  • 1 Release Critical (but it is an artificial bug to keep it from migrating to testing)
  • 43 (55 unmerged) with severity Important or Normal
  • 160 (182 unmerged) with severity Minor or Wishlist
  • 17 (21 unmerged) marked as Forwarded or Pending

Beyond graphs and stats, I am especially happy about two achievements in the last year:

  1. To have aptitude working today, first and foremost

    Apart from the neglect it suffered in previous years, I mean specifically the critical step of getting it through the troubles of last summer, with the GCC-5/C++11 transition in parallel with a transition of the Boost library (explained in more detail in Work on aptitude).

    Without that, possibly aptitude would not have survived until today.

  2. Improvements to the suggestions of the resolver

    In version 0.8, there were a lot of changes related to improving the order of the suggestions from the resolver when it finds conflicts or other problems with the planned actions.

    Historically, but specially in the last few years, there have been many complaints about the nonsensical or dangerous suggestions from the resolver. The first solution offered by the resolver was very often regarded as highly undesirable (for example, removal of many packages), and preferable solutions like upgrades of one or only a handful of packages being offered only after many removals; and “keeps” only offered as last resort.

Perhaps these changes don't get a lot of attention, given that in the first case it's just to keep working (with few people realising that it could have collapsed on the spot, if left unattended), and the second can probably go unnoticed because “it just works” or “it started to work more smoothly” doesn't get as much immediate attention as “it suddenly broke!”.

Still, I wanted to mention them, because I am quite proud of those.


Even if I put a lot of work on aptitude in the last year, the results of the graph and numbers have not been solely achieved by me.

Special thanks go to Axel Beckert (abe / XTaran) and the apt team, David Kalnischkies and Julian Andres Klode ─ who, despite the claim on that page, no longer works mostly on python-apt... but also on the main tools.

They help by fixing some of the issues directly, changing things in apt that benefit aptitude, testing changes, triaging bugs or commenting on them, patiently explaining to me why something in libapt doesn't do what I think it does, and being good company in general.

Not the least, for holding impromptu BTS group therapy / support meetings, for those cases when prolonged exposure to BTS activity starts to induce very bad feelings.

Thanks also to people who sent their translation updates, notified me about corrections, sent or tested patches, submitted bugs, or tried to help in other ways. See the changelogs for details.


[1] ^ It's even an example in the Cambridge Dictionaries Online website, for the entry of roller coaster:

He was on an emotional roller coaster for a while when he lost his job.

Categories: Elsewhere

Kevin Avignon: GSOC 2015 : From NRefactory 6 to RefactoringEssentials

Sat, 18/06/2016 - 18:20
Hey guys, in the spirit of my withdrawal from the Google Summer of Code program this summer, I thought I’d do a piece on the project I successfully completed last summer. So what brought me to the program last year? I had spent a few weeks working on a new thing in .NET called Roslyn. … Continue reading GSOC 2015 : From NRefactory 6 to RefactoringEssentials →
Categories: Elsewhere

Sune Vuorela: R is for Randa

Sat, 18/06/2016 - 11:23

This week I have been gathered with 38 KDE people in Randa, Switzerland. Randa is a place in a valley in the middle of the Alps, close to various peaks like the Matterhorn. It has been a week of intense hacking, bugfixing, brainstorming and a bit of enjoying nature.

R is for Reproducible builds

I spent the first couple of days trying to get the Qt documentation generation tool to generate documentation reproducibly. Some of the fixes were of the usual ‘put data in a randomized data structure, then iterate over it and create output’ kind, where the fix is equally well known: sort the data structure first. Others were more severe bugs that led the documentation to shuffle around the ‘obsolete’ bit and the inheritance chains. Most of these fixes have been reviewed and submitted to the Qt 5.6 branch; one is still pending review, but that hopefully gets fixed soon. Then most of Qt (except things containing copies of (parts of) webkit and derivatives) should be reproducible.

R is for Roaming around in the mountains

Sleeping, hacking and dining in the same building sometimes leads to an enormous desire for fresh air. Luckily, in the middle of the Alps, it is readily available, and at least once a day many people went for a walk. To say hi to a sheep. Or to just go uphill until tired and then back down. Or to find a circular route around. For this area, OpenStreetMap seems to have better maps than Google. We also went on a nice group trip to Zermatt and surroundings, sponsored by our friends at Edeltech.

R is for Releasing

One of the tasks I set for myself was to get my barcode generation library (prison. you know. being behind bars.) ready for release. A bit of API cleanup, including some future-proofing, was done, and all users adapted. Hopefully it will be released as part of the next KDE Frameworks release.

R is for Reviewing code

When signing up for the sprint, one has to declare a couple of tasks to work on. One of the things I put myself up to was reviewing David Faure’s code changes. First, he is very productive, and second, he often gets into creating patches in code areas where many other contributors are scared to look. So someone has to do it, and code never scared me.

R is for Running

I planned on going running along the river on Monday, Wednesday and Friday. Fortunately that happened, but since Switzerland has a few more ups and downs than flat Denmark, it didn’t go that fast.

R is for Random bugfixing

When in the hacking mood surrounded by great developers, it is very easy to just fix minor bugs when you encounter them. There is likely someone around who knows the code in question. Or you are just in the mood to actually fix it, rather than living with a missing clock applet or a corner case crash.

R is for Rubber ducking

I am a brilliant person-sized rubber duck. And I did get the opportunity to show off my skills a couple of times, as well as using some of the other people for that.

R is for Raising money

These sprints in Randa are only possible because of all the nice donations from people and companies around the world. The fundraiser is still running, and can be found at

Help us keep going, at this and many other sprints, clickety-click! :D
Categories: Elsewhere

John Goerzen: Mud, Airplanes, Arduino, and Fun

Thu, 16/06/2016 - 06:00

The last few weeks have been pretty hectic in their way, but I’ve also had the chance to take some time off work to spend with family, which has been nice.

Memorial Day: breakfast and mud

For Memorial Day, I decided it would be nice to have a cookout for breakfast rather than for dinner. So we all went out to the fire ring. Jacob and Oliver helped gather kindling for the fire, while Laura chopped up some vegetables. Once we got a good fire going, I cooked some scrambled eggs in a cast iron skillet, mixed with meat and veggies. Mmm, that was tasty.

Then we all just lingered outside. Jacob and Oliver enjoyed playing with the cats, and the swingset, and then…. water. They put the hose over the slide and made a “water slide” (more mud slide maybe).

Then we got out the water balloon fillers they had gotten recently, and they loved filling up water balloons. All in all, we all just enjoyed the outdoors for hours.

Flying to Petit Jean, Arkansas

Somehow, neither Laura nor I have ever really been to Arkansas. We figured it was about time. I had heard wonderful things about Petit Jean State Park from other pilots: it’s rather unique in that it has a small airport right in the park, a feature left over from when Winthrop Rockefeller owned much of the mountain.

And what a beautiful place it was! Dense forests with wonderful hiking trails, dotted with small streams, bubbling springs, and waterfalls all over; a nice lake, and a beautiful lodge to boot. Here was our view down into the valley at breakfast in the lodge one morning:

And here’s a view of one of the trails:

The sunset views were pretty nice, too:

And finally, the plane we flew out in, parked all by itself on the ramp:

It was truly a relaxing, peaceful, re-invigorating place.

Flying to Atchison

Last weekend, Laura and I decided to fly to Atchison, KS. Atchison is one of the oldest cities in Kansas, and has quite a bit of history to show off. It was fun landing at the Amelia Earhart Memorial Airport in a little Cessna, and then going to three museums and finding lunch too.

Of course, there is the Amelia Earhart Birthplace Museum, which is a beautifully-maintained old house along the banks of the Missouri River.

I was amused to find this hanging in the county historical society museum:

One fascinating find was a Regina Music Box, popular in the late 1800s and early 1900s. It operates under the same principles as the cylindrical ones you might have seen. But I am particularly impressed with the effort that would go into developing these discs in the pre-computer era, as of course the holes at the outer edge of the disc move faster than the inner ones. It would certainly take a lot of careful calculation to produce one of these. I found this one in the Cray House Museum:

An Arduino Project with Jacob

One day, Jacob and I got going with an Arduino project. He wanted flashing blue lights for his “police station”, so we disassembled our previous Arduino project, put a few things on the breadboard, I wrote some code, and there we go. Then he noticed an LCD in my Arduino kit. I hadn’t ever gotten around to using it yet, and of course he wanted it immediately. So I looked up how to connect it, found an API reference, and dusted off my C skills (that was fun!) to program a scrolling message on it. Here is Jacob showing it off:

Categories: Elsewhere

Reproducible builds folks: Reproducible builds: week 59 in Stretch cycle

Thu, 16/06/2016 - 01:27

What happened in the Reproducible Builds effort between June 5th and June 11th 2016:

Media coverage

Ed Maste gave a talk at BSDCan 2016 on reproducible builds (slides, video).

GSoC and Outreachy updates

Weekly reports by our participants:

  • Scarlett Clark worked on making some packages reproducible, focusing on KDE backend and utility programs.
  • Ceridwen published an initial design for the interface for reprotest, including a discussion on different types of build variations and the difficulties of specifying certain types of variations.
  • Valerie Young improved documentation for building our tests website, began migrating Debian-specific pages into a new namespace, and planned future work around its navigation.
Documentation update

- Ximin Luo proposed a modification to our SOURCE_DATE_EPOCH spec explaining FORCE_SOURCE_DATE.

Some upstream build tools (e.g. TeX, see below) have expressed a desire to control which cases of embedded timestamps should obey SOURCE_DATE_EPOCH. They were not convinced by our arguments on why this is a bad idea, so we agreed on an environment variable FORCE_SOURCE_DATE for them to implement their desired behaviour - named generically, so that at least we can set it centrally. For more details, see the text just linked. However, we strongly urge most build tools not to use this, and instead obey SOURCE_DATE_EPOCH unconditionally in all cases.
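The intended contract can be sketched in a few lines (a hypothetical illustration, not code from the spec): a build tool embeds SOURCE_DATE_EPOCH when it is set, and only falls back to the current time otherwise.

```python
import os
import time

def embedded_timestamp():
    """Return the timestamp a build tool should embed: SOURCE_DATE_EPOCH
    when set (for reproducible builds), the current time otherwise."""
    sde = os.environ.get("SOURCE_DATE_EPOCH")
    epoch = int(sde) if sde is not None else int(time.time())
    return time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(epoch))
```

FORCE_SOURCE_DATE would add a second opt-in switch in front of this logic, for the tools (like TeX) that insist on one.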

Toolchain fixes
  • TeX Live 2016 released with SOURCE_DATE_EPOCH support for all engines except LuaTeX and original TeX.
  • Continued discussion (alternative archive) with TeX upstream, about SOURCE_DATE_EPOCH corner cases, eventually resulting in the FORCE_SOURCE_DATE proposal from above.
  • gcc-5/5.4.0-4 by Matthias Klose now avoids storing -fdebug-prefix-map in DW_AT_producer, thanks to original patch by Daniel Kahn Gillmor.
  • sphinx/1.4.3-1 by Dmitry Shachnev now drops Debian-specific patches relating to SOURCE_DATE_EPOCH applied upstream, original patch by Alexis Bienvenüe.
  • asciidoctor/1.5.4-2 by Cédric Boutillier now supports SOURCE_DATE_EPOCH, thanks to original patch by Alexis Bienvenüe.
  • dh-python/1.5.4-2 by Piotr Ożarowski now behaves better in some cases, thanks to original patch by Chris Lamb.
Packages fixed

The following 16 packages have become reproducible due to changes in their build-dependencies: apertium-dan-nor apertium-swe-nor asterisk-prompt-fr-armelle blktrace canl-c code-saturne coinor-symphony dsc-statistics frobby libphp-jpgraph proxycheck pybit spip tircd xbs

The following 5 packages are new in Debian and appear to be reproducible so far: golang-github-bowery-prompt golang-github-pkg-errors golang-gopkg-dancannon-gorethink.v2 libtask-kensho-perl sspace

The following packages had older versions which were reproducible, and their latest versions are now reproducible again after being fixed:

The following packages have become reproducible after being fixed:

Some uploads have fixed some reproducibility issues, but not all of them:

Patches submitted that have not made their way to the archive yet:

  • #806331 against xz-utils by Ximin Luo: make the selected POSIX shell stable across build environments
  • #806494 against gnupg by intrigeri: Make man pages not embed a build-time dependent timestamp
  • #806945 against bash by Reiner Herrmann and Ximin Luo: Use the system man2html, and set PGRP_PIPE unconditionally.
  • #825857 against python-setuptools by Anton Gladky: sort libs in native_libs.txt
  • #826408 against brainparty by Reiner Herrmann: Sort object files for deterministic linking order
  • #826416 against blockout2 by Reiner Herrmann: Sort the list of source files
  • #826418 against xgalaga++ by Reiner Herrmann: Sort source files to get a deterministic linking order
  • #826423 against kraptor by Reiner Herrmann: Sort source files for deterministic linking order
  • #826431 against traceroute by Reiner Herrmann: Sort lists of libraries/source/object files
  • #826544 against doc-debian by intrigeri: make the created files stable regardless of the locale
  • #826676 against python-openstackclient by Chris Lamb: make the build reproducible
  • #826677 against cadencii by Chris Lamb: make the build reproducible
  • #826760 against dctrl-tools by Reiner Herrmann: Sort object files for deterministic linking order
  • #826951 against slicot by Alexis Bienvenüe: please make the build reproducible (fileordering)
  • #826982 against hoichess by Reiner Herrmann: Sort object files for deterministic linking order
Package reviews

68 reviews have been added, 19 have been updated and 28 have been removed in this week. New and updated issues:

26 FTBFS bugs have been reported by Chris Lamb, 1 by Santiago Vila and 1 by Sascha Steinbiss.

diffoscope development
  • Mattia Rizzolo uploaded diffoscope/54 to jessie-backports.
strip-nondeterminism development
  • Mattia uploaded strip-nondeterminism/0.018-1 to jessie-backports, to support a debhelper backport.
  • Andrew Ayer uploaded strip-nondeterminism/0.018-2 fixing #826700, a packaging improvement for Multi-Arch to ease cross-build situations.
  • 2 days later Andrew released strip-nondeterminism/0.019; now strip-nondeterminism is able to:
    • recursively normalize JAR files embedded within JAR files (#823917)
    • clamp the timestamp, the same way tar >=1.28-2.2 can (for now available only for gzip archives)
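Timestamp "clamping", as mentioned above, caps timestamps at SOURCE_DATE_EPOCH rather than overwriting them all; a minimal sketch (hypothetical function name):

```python
def clamp_mtime(mtime, source_date_epoch):
    """Clamp a file timestamp: anything newer than SOURCE_DATE_EPOCH
    (i.e. generated during the build) is capped at it; genuinely old
    timestamps are left untouched."""
    return min(mtime, source_date_epoch)
```

This keeps meaningful pre-existing mtimes while making build-generated ones deterministic.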
disorderfs development
  • Andrew Ayer released disorderfs/0.4.3, fixing an issue with umask handling (#826891)
  • Valerie Young moved the Debian-specific pages into the /debian/ namespace, with redirects set up for the previous URLs.
  • Holger Levsen improved the reliability of build jobs: the availability of both build nodes (for a given build) is now tested when a build job is started, to better cope when one of the 25 build nodes goes down for some reason.
  • Ximin Luo improved the index of identified issues to include the total popcon scores of each issue, which is now also used for sorting that page.

Steven Chamberlain submitted a patch to FreeBSD's makefs to allow reproducible builds of the kfreebsd installer.

Ed Maste committed a patch to FreeBSD's binutils to enable deterministic archives by default in GNU ar.

Helmut Grohne experimented with cross+native reproductions of dash with some success, using rebootstrap.

This week's edition was written by Ximin Luo, Chris Lamb, Holger Levsen, Mattia Rizzolo and reviewed by a bunch of Reproducible builds folks on IRC.

Categories: Elsewhere

Enrico Zini: Verifying gpg keys

Wed, 15/06/2016 - 21:47

Suppose you have a gpg keyid like 9F6C6333 that corresponds to both key 1AE0322EB8F74717BDEABF1D44BB1BA79F6C6333 and 88BB08F633073D7129383EE71EA37A0C9F6C6333, and you don't know which of the two to use.

You go to and find out that the site uses short key IDs, so the two keys are indistinguishable.
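The collision is easy to see: a short key ID is just the low 32 bits, i.e. the last 8 hex digits, of the fingerprint. A quick sketch using the two fingerprints above:

```python
def short_keyid(fingerprint):
    """The 'short' 32-bit key ID is the last 8 hex digits of the full
    160-bit fingerprint, so unrelated keys can share it."""
    return fingerprint[-8:]

a = "1AE0322EB8F74717BDEABF1D44BB1BA79F6C6333"
b = "88BB08F633073D7129383EE71EA37A0C9F6C6333"
print(short_keyid(a), short_keyid(b))  # prints: 9F6C6333 9F6C6333
```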

Building on Clint's hopenpgp-tools, I made a script that screenscrapes for trust paths, downloads all the potentially connecting keys in a temporary keyring, and runs hkt findpaths on it:

$ ./verify-trust-paths 1793D6AB75663E6BF104953A634F4BD1E7AD5568 1AE0322EB8F74717BDEABF1D44BB1BA79F6C6333
hkt (hopenpgp-tools) 0.18
Copyright (C) 2012-2016 Clint Adams
hkt comes with ABSOLUTELY NO WARRANTY. This is free software, and you are welcome to redistribute it under certain conditions.
(4,[1,4,3,6])
(1,1793D6AB75663E6BF104953A634F4BD1E7AD5568)
(3,F8921D3A7404C86E11352215C7197699B29B232A)
(4,C331BA3F75FB723B5873785B06EAA066E397832F)
(6,1AE0322EB8F74717BDEABF1D44BB1BA79F6C6333)

$ ./verify-trust-paths 1793D6AB75663E6BF104953A634F4BD1E7AD5568 88BB08F633073D7129383EE71EA37A0C9F6C6333
hkt (hopenpgp-tools) 0.18
Copyright (C) 2012-2016 Clint Adams
hkt comes with ABSOLUTELY NO WARRANTY. This is free software, and you are welcome to redistribute it under certain conditions.
(0,[])

This is a start: it could look in the local keyring for all ultimately trusted key fingerprints and use those as starting points. It could also take a short keyid as an argument and automatically check all matching fingerprints.
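Picking those starting points could work roughly like this (a sketch; it assumes the `fingerprint:value:` line format of `gpg --export-ownertrust`, where trust value 6 means "ultimate"):

```python
def ultimately_trusted(ownertrust_output):
    """Extract fingerprints with ownertrust value 6 ('ultimate') from
    the output of `gpg --export-ownertrust`."""
    fingerprints = []
    for line in ownertrust_output.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        fields = line.split(":")
        if len(fields) >= 2 and fields[1] == "6":
            fingerprints.append(fields[0])
    return fingerprints
```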

I'm currently quite busy, and at the moment verify-trust-paths scratches enough of my itch that I can move on to other things.

Please send patches, or take it over: I'd like to see this grow.

Categories: Elsewhere

Steve Kemp: So I should document the purple server a little more

Wed, 15/06/2016 - 18:32

I should probably document the purple server I hacked together in Perl and mentioned in my last post. In short it allows you to centralise notifications. Send "alerts" to it, and when they are triggered they will be routed from that central location. There is only a primitive notifier included, which sends data to the console, but there are sample stubs for sending by email/pushover, and escalation.

In brief you create alerts by sending a JSON object via HTTP-POST. These objects contain a bunch of fields, but the two most important are:

  • id
    • A human-name for the alert. e.g. "disk-space", "heartbeat", or "unread-mail".
  • raise
    • When to raise the alert. e.g. "now", "+5m", "1466006086".

When an update is received any existing alert has its values updated, which makes heartbeat alerts trivial. Send a message with:

{ "id": "heartbeat", "raise": "+5m", .. }

The existing alert will be updated each time such a new event is submitted, which means that the time at which that alert will raise will be pushed back by five minutes. If you send this every 60 seconds then you'll get informed of an outage five minutes after your server explodes (because the "+5m" will have been turned into an absolute time, and that time will eventually become in the past - triggering a notification).
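The "+5m"-to-absolute conversion is the heart of the heartbeat trick; here is a sketch of how it might work (a hypothetical helper, not the server's actual Perl code):

```python
import re
import time

def absolute_raise(spec, now=None):
    """Convert a 'raise' value ('now', a '+5m'-style offset, or a literal
    Unix timestamp) into an absolute Unix timestamp."""
    now = int(time.time()) if now is None else int(now)
    if spec == "now":
        return now
    match = re.fullmatch(r"\+(\d+)([smhd])", spec)
    if match:
        seconds = {"s": 1, "m": 60, "h": 3600, "d": 86400}[match.group(2)]
        return now + int(match.group(1)) * seconds
    return int(spec)  # already absolute, e.g. "1466006086"
```

Each resubmission recomputes this against the current time, pushing the deadline back.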

Alerts are keyed on the source IP which sent the submission and the id field, meaning you can send the same update from multiple hosts without causing any problems.
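That keying scheme means the alert store is effectively a map from (source IP, id) to the latest submission; a sketch (hypothetical, not the actual implementation):

```python
alerts = {}

def submit(source_ip, payload):
    """Store or update an alert, keyed on (source IP, id): the same id
    from two hosts stays distinct, while a resubmission from the same
    host simply replaces the previous entry."""
    alerts[(source_ip, payload["id"])] = payload

submit("10.0.0.1", {"id": "heartbeat", "raise": "+5m"})
submit("10.0.0.2", {"id": "heartbeat", "raise": "+5m"})  # distinct host, distinct alert
submit("10.0.0.1", {"id": "heartbeat", "raise": "+5m"})  # same host, just an update
```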

Notifications can be viewed in a reasonably pretty web UI, so you can clear raised alerts, see pending ones, and suppress further notifications for something that has been raised. (By default notifications are issued every sixty seconds until the alert is cleared. There is also support for raising an alert only once, which is useful for delivery services that repeat notifications themselves, such as Pushover.)

Anyway this is a fun project, which is a significantly simplified and less scalable version of a project which is open-sourced already and used at Bytemark.

Categories: Elsewhere

Andrew Shadura: Migrate to systemd without a reboot

Wed, 15/06/2016 - 13:51

Yesterday I was fixing an issue with one of the servers: the hook intended to propagate pushes from Our Own Kallithea to Bitbucket had stopped working. Until yesterday, that server was using Debian’s flavour of System V init and djb’s dæmontools to keep things running. To make the hook asynchronous, I wrote a service to be managed by dæmontools, so that concurrency issues would be solved by it. However, I didn’t implement any timeouts, so when wget froze last week while pulling Weblate’s hook, there was nothing to interrupt it, and the hook stopped working, since dæmontools thought it was already running and wouldn’t re-trigger it. Killing wget helped, but I decided I needed to do something to prevent the situation from happening again in the future.

I’ve been using systemd at work for the last year, so I am now confident I’m happier with systemd than with dæmontools, and I decided to switch the server to systemd. Not surprisingly, I prepared unit files in about 5 minutes without having to look into the manuals again, while with dæmontools I had to check things every time I needed to change something. The tricky part was the switch itself. It is a virtual server, presumably running in Xen, and I don’t have access to the console, so if I break something, I need to summon Bradley Kuhn or someone from Conservancy, who’s kindly donated the server to the project. In any case, I decided to attempt to upgrade without a reboot, so that I would have more options to roll back my changes in case things went wrong.
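For comparison, a dæmontools run script translates into a unit along these lines, with the timeout that was missing before (a hypothetical sketch; the service name and paths are invented):

```ini
# /etc/systemd/system/push-hook.service (hypothetical name and paths)
[Unit]
Description=Propagate pushes from Kallithea to Bitbucket

[Service]
Type=oneshot
ExecStart=/usr/local/bin/push-hook
# The safety net that was missing before: give up if the job wedges,
# e.g. on a frozen wget
TimeoutStartSec=5min
```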

After studying the manpages of both systemd’s init and sysvinit’s init, I realised I could install systemd as /sbin/init and ask the already running System V init to re-exec. However, systemd’s init can’t talk to System V init, so before installing systemd I made a backup of it. It’s also important to stop all running services (except probably ssh) to make sure systemd doesn’t start second instances of each. And then: /tmp/init u — and we’re running systemd! A couple of additional checks, and it’s safe to reboot.

Only when I had done all that did I realise that if systemd didn’t work, I’d probably not be able to undo my changes if my connection were interrupted. So, even though it worked in the end, it’s probably not a good idea to perform such manipulations when you don’t have an alternative way to connect to the server :)

Categories: Elsewhere

Enrico Zini: On discomfort and new groups

Wed, 15/06/2016 - 10:14

I recently wrote:

When you get involved in a new community, such as Debian, find out early where, if that happens, you can find support, understanding, and help to make it stop.

Last night I asked a group of friends what they do if they start feeling uncomfortable when alone in a group of people they don't know. Here are the replies:

  • Wait outside the group until I figure the group out.
  • Find someone to talk for a while until you get comfortable.
  • If a person is making things uncomfortable for you, let them know, and leave if nobody cares.
  • Sit there in silence.
  • Work around unwelcome people by bearing them for a bit while trying to integrate with others.
  • Some people are easy to bribe into friendship, just bring cake.
  • While you don't know what is going on, you try to replicate what others are doing.
  • Spend time trying to get a feeling of what are the consequences of taking actions.
  • Purposefully disagree with people in a new environment to figure out if having a different opinion is accepted.
  • Once I was new and I was asked to be the person that invites everyone for lunch, that forced me to talk to everyone, and integrate.
  • When you are the first one to point something out, you'll probably soon find out you're not alone.
  • The reaction on the first time something is exposed, influences how often similar cases will be reported.

I think a lot of these points are good food for thought about barriers to entry, and about the safety nets that a group has or might want to have.

Categories: Elsewhere

Russ Allbery: Review: Matter

Wed, 15/06/2016 - 05:37

Review: Matter, by Iain M. Banks

Publisher: Orbit Copyright: February 2008 ISBN: 0-316-00536-3 Format: Hardcover Pages: 593

Sursamen is an Arithmetic, Mottled, Disputed, Multiply Inhabited, Multi-million Year Safe, and Godded Shellworld. It's a constructed world with multiple inhabitable levels, each lit by thermonuclear "suns" on tracks, each level supported above the last by giant pillars. Before the recorded history of the current Involved Species, a culture called the Veil created the shellworlds with still-obscure technology for some unknown purpose, and then disappeared. Now, they're inhabited by various transplants and watched over by a hierarchy of mentor and client species. In the case of Sursamen, both the Aultridia and the Oct claim jurisdiction (hence "Disputed"), and are forced into an uneasy truce by the Nariscene, a more sophisticated species that oversees them both.

On Sursamen, on level eight to be precise, are the Sarl, a culture with an early industrial level of technology in the middle of a war of conquest to unite their level (and, they hope, the next level down). Their mentors are the Oct, who claim descent from the mysterious Veil. The Deldeyn, the next level down, are mentored by the Aultridia, a species that evolved from a parasite on Xinthian Tensile Aranothaurs. Since a Xinthian, treated by the Sarl as a god, lives in the heart of Sursamen (hence "Godded"), tensions between the Sarl and the Aultridians run understandably high.

The ruler of the Sarl had three sons and a daughter. The oldest was killed by the people he is conquering as Matter starts. The middle son is a womanizer and a fop who, as the book opens, watches a betrayal that he's entirely unprepared to deal with. The youngest is a thoughtful, bookish youth pressed into a position that he also is not well-prepared for.

His daughter left the Sarl, and Sursamen itself, fifteen years previously. Now, she's a Special Circumstances agent for the Culture.

Matter is the eighth Culture novel, although (like most of the series) there's little need to read the books in any particular order. The introduction to the Culture here is a bit scanty, so you'll have more background and understanding if you've read the previous novels, but it doesn't matter a great deal for the story.

Sharp differences in technology levels have turned up in previous Culture novels (although the most notable example is a minor spoiler), but this is the first Culture novel I recall where those technological differences were given a structure. Usually, Culture novels have Special Circumstances meddling in, from their perspective, "inferior" cultures. But Sursamen is not in Culture space or directly the Culture's business. The Involved Species that governs Sursamen space is the Morthanveld: an aquatic species roughly on a technology level with the Culture themselves. The Nariscene are their client species; the Oct and Aultridia are, in turn, client species (well, mostly) of the Nariscene, while meddling with the Sarl and Deldeyn.

That part of this book reminded me of Brin's Uplift universe. Banks's Involved Species aren't the obnoxious tyrants of Brin's universe, and mentoring doesn't involve the slavery of the Uplift universe. But some of the politics are a bit similar. And, as with Uplift, all the characters are aware, at least vaguely, of the larger shape of galactic politics. Even the Sarl, who themselves have no more than early industrial technology. When Ferbin flees the betrayal to try to get help, he ascends out of the shellworld to try to get assistance from an Involved species, or perhaps his sister (which turns out to be the same thing). Banks spends some time here, mostly through Ferbin and his servant (who is one of the better characters in this book), trying to imagine what it would be like to live in a society that just invented railroads while being aware of interstellar powers that can do practically anything.

The plot, like the world on which it's set, proceeds on multiple levels. There is court intrigue within the Sarl, war on their level and the level below, and Ferbin's search for support and then justice. But the Sarl live in an artifact with some very mysterious places, including the best set piece in the book: an enormous waterfall that's gradually uncovering a lost city on the level below the Sarl, and an archaeological dig that proceeds under the Deldeyn and Sarl alike. Djan Seriy decides to return home when she learns of events in Sarl, originally for reasons of family loyalty and obligation, but she's a bit more in touch with the broader affairs of the galaxy, including the fact that the Oct are acting very strangely. There's something much greater at stake on Sursamen than tedious infighting between non-Involved cultures.

As always with Banks, the set pieces and world building are amazing, the scenery is jaw-dropping, and I have some trouble warming to the characters. Dramatic flights across tower-studded landscapes seeking access to forbidden world-spanning towers largely, but not entirely, make up for not caring about most of the characters for most of the book. This did change, though: although I never particularly warmed to Ferbin, I started to like his younger brother, and I really liked his sister and his servant by the end of the book.

Unfortunately, the end of Matter is, if not awful, at least exceedingly abrupt. As is typical of Banks, we get a lot of sense of wonder but not much actual explanation, and the denouement is essentially nonexistent. (There is a coy epilogue hiding after the appendices, but it mostly annoyed me and provides only material for extrapolation about the characters.) Another SF author would have written a book about the Xinthian, the Veil, the purpose of the shellworlds, and the deep history of the galaxy. I should have known going in that Banks isn't that sort of SF author, but it was still frustrating.

Still, Banks is an excellent writer and this is a meaty, complex, enjoyable story with some amazing moments of wonder and awe. If you like Culture novels in general, you will like this. If you like set-piece-heavy SF on a grand scale, such as Alastair Reynolds or Kim Stanley Robinson, you probably like this. Recommended.

Rating: 8 out of 10

Categories: Elsewhere

Joey Hess: second system

Tue, 14/06/2016 - 23:15

Five years ago I built this, and it's worked well, but is old and falling down now.

mark I outdoor shower

The replacement is more minimalist and like any second system tries to improve on the design of the first. No wood to rot away, fully adjustable height. It's basically a shower swing, suspended from a tree branch.

mark II outdoor shower

Probably will turn out to have its own new problems, as second systems do.

Categories: Elsewhere

Olivier Grégoire: Fourth week at GSoC: let's integrate this new window!

Tue, 14/06/2016 - 18:57

I began by trying to integrate my window as a new view. That was not a good idea, because my UI is actually integrated into the call view. So I tried to integrate it like the buttons present on the view. That was not a good idea either; I had some issues with Clutter doing strange things. Finally, I settled on a separate window. I think that's a good approach, because when you want to debug Ring, it's easier with a separate window.

I ran into some conflicts between my clean installation from the official website and my test installation. To deal with this issue, I began installing my test build in a different folder.

Categories: Elsewhere

Simon McVittie: GTK versioning and distributions

Tue, 14/06/2016 - 03:56

Allison Lortie has provoked a lot of comment with her blog post on a new proposal for how GTK is versioned. Here's some more context from the discussion at the GTK hackfest that prompted that proposal: there's actually quite a close analogy in how new Debian versions are developed.

The problem we're trying to address here is the two sides of a trade-off:

  • Without new development, a library (or an OS) can't go anywhere new
  • New development sometimes breaks existing applications

Historically, GTK has aimed to keep compatible within a major version, where major versions are rather far apart (GTK 1 in 1998, GTK 2 in 2002, GTK 3 in 2011, GTK 4 somewhere in the future). Meanwhile, fixing bugs, improving performance and introducing new features sometimes results in major changes behind the scenes. In an ideal world, these behind-the-scenes changes would never break applications; however, the world isn't ideal. (The Debian analogy here is that as much as we aspire to having the upgrade from one stable release to the next not break anything at all, I don't think we've ever achieved that in practice - we still ask users to read the release notes, even though ideally that wouldn't be necessary.)

In particular, the perceived cost of doing a proper ABI break (a fully parallel-installable GTK 4) means there's a strong temptation to make changes that don't actually remove or change C symbols, but are clearly an ABI break, in the sense that an application that previously worked and was considered correct no longer works. A prominent recent example is the theming changes in GTK 3.20: the ABI in terms of functions available didn't change, but what happens when you call those functions changed in an incompatible way. This makes GTK hard to rely on for applications outside the GNOME release cycle, which is a problem that needs to be fixed (without stopping development from continuing).

The goal of the plan we discussed today is to decouple the latest branch of development, which moves fast and sometimes breaks API, from the API-stable branches, which only get bug fixes. This model should look quite familiar to Debian contributors, because it's a lot like the way we release Debian and Ubuntu.

In Debian, at any given time we have a development branch (testing/unstable) - currently "stretch", the future Debian 9. We also have some stable branches, of which the most recent are Debian 8 "jessie" and Debian 7 "wheezy". Different users of Debian have different trade-offs that lead them to choose one or the other of these. Users who value stability and want to avoid unexpected changes, even at a cost in terms of features and fixes for non-critical bugs, choose to use a stable release, preferably the most recent; they only need to change what they run on top of Debian for OS API changes (for instance webapps, local scripts, or the way they interact with the GUI) approximately every 2 years, or perhaps less often than that with the Debian-LTS project supporting non-current stable releases. Meanwhile, users who value the latest versions and are willing to work with a "moving target" as a result choose to use testing/unstable.

The GTK analogy here is really quite close. In the new versioning model, library users who value stability over new things would prefer to use a stable-branch, ideally the latest; library users who want the latest features, the latest bug-fixes and the latest new bugs would use the branch that's the current focus of development. In practice we expect that the latter would be mostly GNOME projects. There's been some discussion at the hackfest about how often we'd have a new stable-branch: the fastest rate that's been considered is a stable-branch every 2 years, similar to Ubuntu LTS and Debian, but there's no consensus yet on whether they will be that frequent in practice.

How many stable versions of GTK would end up shipped in Debian depends on how rapidly projects move from "old-stable" to "new-stable" upstream, how much those projects' Debian maintainers are willing to patch them to move between branches, and how many versions the release team will tolerate. Once we reach a steady state, I'd hope that we might have 1 or 2 stable-branched versions active at a time, packaged as separate parallel-installable source packages (a lot like how we handle Qt). GTK 2 might well stay around as an additional active version just from historical inertia. The stable versions are intended to be fully parallel-installable, just like the situation with GTK 1.2, GTK 2 and GTK 3 or with the major versions of Qt.

For the "current development" version, I'd anticipate that we'd probably only ship one source package, and do ABI transitions for one version active at a time, a lot like how we deal with libgnome-desktop and the evolution-data-server family of libraries. Those versions would have parallel-installable runtime libraries but non-parallel-installable development files, again similar to libgnome-desktop.

At the risk of stretching the Debian/Ubuntu analogy too far, the intermediate "current development" GTK releases that would accompany a GNOME release are like Ubuntu's non-LTS suites: they're more up to date than the fully stable releases (Ubuntu LTS, which has a release schedule similar to Debian stable), but less stable and not supported for as long.

Hopefully this plan can meet both of its goals: minimize breakage for applications, while not holding back the development of new APIs.

Categories: Elsewhere

Reproducible builds folks: First alpha release of reprotest

Tue, 14/06/2016 - 00:15

Author: ceridwen

The first, very-alpha release of reprotest is now out at PyPi. It should hit Debian experimental later this week. While it only builds on an existing system (as I'm still working on support for virtualization), it can now check its own reproducibility, which it does in its own tests, both using setuptools and debuild. Unfortunately, setuptools seems to generate file-order-dependent binaries, meaning python bdist creates unreproducible binaries. With debuild, reprotest probably would be reproducible with the modified packages from the Reproducible Builds project, though I haven't tested that yet. It tests 'captures_environment', 'fileordering' (renamed from 'filesystem'), 'home', 'kernel', 'locales', 'path', 'time', 'timezone', and 'umask'. The other variations require superuser privileges and modifications that would be unsafe to make to a running system, so they will only be enabled in the containers.

The next major part of the project is integrating autopkgtest's container management system into reprotest. For the curious, autopkgtest is composed of a main program, adt-run, which calls other command-line programs — adt-virt-chroot, adt-virt-lxd, adt-virt-null, adt-virt-schroot, and adt-virt-qemu — that communicate with the containers. (The autopkgtest maintainer has since renamed the programs, but the underlying structure remains the same.) I think this is a bit of an odd design, but it works well for my purposes, since the container programs already have existing CLIs that reprotest can use.

Categories: Elsewhere