Planet Debian


Francois Marier: Using DNSSEC and DNSCrypt in Debian

Tue, 26/04/2016 - 05:00

While there is real progress being made towards eliminating insecure HTTP traffic, DNS is a fundamental Internet service that still usually relies on unauthenticated cleartext. There are however a few efforts to try and fix this problem. Here is the setup I use on my Debian laptop to make use of both DNSSEC and DNSCrypt.


DNSCrypt was created to enable end-users to encrypt the traffic between themselves and their chosen DNS resolver.

To switch away from your ISP's default DNS resolver to a DNSCrypt resolver, simply install the dnscrypt-proxy package and then set it as the default resolver either in /etc/resolv.conf:


if you are using a static network configuration or in /etc/dhcp/dhclient.conf:

supersede domain-name-servers;

if you rely on dynamic network configuration via DHCP.
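For concreteness, a sketch of the two variants. Note that 127.0.2.1 is only an assumed local listen address for dnscrypt-proxy (not necessarily yours); check your own /etc/default/dnscrypt-proxy for the actual one:

```conf
# /etc/resolv.conf (static configuration); 127.0.2.1 is an assumed
# dnscrypt-proxy listen address, adjust to your setup
nameserver 127.0.2.1

# /etc/dhcp/dhclient.conf (dynamic configuration via DHCP)
supersede domain-name-servers 127.0.2.1;
```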

There are two things you might want to keep in mind when choosing your DNSCrypt resolver:

  • whether or not they keep any logs of the DNS traffic
  • whether or not they support DNSSEC

I have personally selected a resolver located in Iceland by setting the following in /etc/default/dnscrypt-proxy:

DNSSEC

While DNSCrypt protects the confidentiality of our DNS queries, it doesn't give us any assurance that the results of such queries are the right ones. In order to authenticate results in that way and prevent DNS poisoning, a hierarchical cryptographic system was created: DNSSEC.

In order to enable it, I have setup a local unbound DNSSEC resolver on my machine and pointed /etc/resolv.conf (or /etc/dhcp/dhclient.conf) to my unbound installation at

Then I put the following in /etc/unbound/unbound.conf.d/dnscrypt.conf:

server:
    # Remove localhost from the do-not-query list
    do-not-query-localhost: no
forward-zone:
    name: "."
    forward-addr:

to stop unbound from resolving DNS directly and to instead go through the encrypted DNSCrypt proxy.
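One way to confirm that validation is actually happening is to look for the "ad" (Authenticated Data) flag that a validating resolver such as unbound sets in its response headers. A minimal sketch (the helper name and the sample header lines are mine; the flag semantics are standard dig output):

```shell
# has_ad_flag: succeed if a dig header line carries the "ad" flag,
# i.e. the resolver performed DNSSEC validation on the answer.
# In practice, feed it the flags line from: dig +dnssec debian.org SOA
has_ad_flag() {
    printf '%s\n' "$1" | grep -Eq 'flags:[^;]* ad[ ;]'
}

# Sample header lines in the format dig prints:
has_ad_flag ';; flags: qr rd ra ad; QUERY: 1, ANSWER: 1' && echo "validated"
has_ad_flag ';; flags: qr rd ra; QUERY: 1, ANSWER: 1' || echo "not validated"
```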


In my experience, unbound and dnscrypt-proxy are fairly reliable but they eventually get confused (presumably) by network changes and start returning errors.

The ugly but dependable work-around I have found is to create a cronjob at /etc/cron.d/restart-dns.conf that restarts both services once a day:

0 3 * * * root /usr/sbin/service dnscrypt-proxy restart
1 3 * * * root /usr/sbin/service unbound restart

Captive portals

The one remaining problem I need to solve has to do with captive portals. This can be quite annoying when travelling because it requires me to use the portal's DNS resolver in order to connect to the splash screen that unlocks the wifi connection.

The dnssec-trigger package looked promising but when I tried it on my jessie laptop, it wasn't particularly reliable.

My temporary work-around is to comment out this line in /etc/dhcp/dhclient.conf whenever I need to connect to such annoying wifi networks:

#supersede domain-name-servers;

If you've found a better solution to this problem, please leave a comment!

Categories: Elsewhere

Dirk Eddelbuettel: RcppMsgPack 0.1.0

Tue, 26/04/2016 - 04:08

Over the last few months, I have been working casually on a new package to integrate MessagePack with R. What is MessagePack, you ask? To quote its website, "It's like JSON, but fast and small."

Or in more extended terms:

MessagePack is an efficient binary serialization format. It lets you exchange data among multiple languages like JSON. But it's faster and smaller. Small integers are encoded into a single byte, and typical short strings require only one extra byte in addition to the strings themselves.

Now, serialization formats are a dime a dozen: JSON, BSON, Protocol Buffers, Cap'n Proto, FlatBuffers. The list goes on and on. So why another? In a nutshell: "software ecosystems".

I happen to like working with Redis, and within the world of Redis, MessagePack is a first-class citizen supported by things close to the core like the embedded Lua interpreter, as well as fancy external add-ons such as the Redis Desktop Manager GUI. So nothing overly fundamentalist here, but a fairly pragmatic choice based on what happens to fit my needs. Plus, having worked on and off with Protocol Buffers for close to a decade, the chance of working with something not requiring a friggin' schema compiler seemed appealing for a change.

So far, we have been encoding a bunch of data streams at work via MessagePack into Redis (and of course back). It works really well---header-only C++11 libraries for the win. I'll provide an updated RcppRedis which uses this (if present) in due course.

For now and the foreseeable future, this RcppMsgPack package will live only on the ghrr drat repository. To make RcppMsgPack work, I currently have to include the MessagePack 1.4.0 headers. A matching package for this version of the headers is in Debian but so far only in experimental. Once this hits the mainline repository I can depend on it, and upload a (lighter, smaller) RcppMsgPack to CRAN.

Until then, please just do

## install drat if not present
if (!require(drat)) install.packages("drat")
## use drat to select ghrr repo
drat::addRepo("ghrr")
## install RcppMsgPack
install.packages("RcppMsgPack")

More details, issue tickets etc are at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Gunnar Wolf: Passover / Pesaj, a secular viewpoint, a different viewpoint... And slowly becoming history!

Mon, 25/04/2016 - 18:51

As many of you know (where "you" is "people reading this who actually know who I am"), I come from a secular Jewish family. Although we have some religious (even very religious) relatives, neither my parents nor my grandparents were ever religious. Not that spirituality wasn't important to them — my grandparents both went deep into understanding, by and for themselves, the different spiritual issues that came to their minds, and that's one of the traits I remember most about them from when I was growing up. But formal, organized religion was never much welcome in the family; again, each of us had their own ways to reconcile our needs and fears with what we thought, read and understood.

This week is the Jewish celebration of Passover, or Pesaj as we call it (for which Passover is a direct translation, as Pesaj refers to the act of the angel of death passing over the houses of the sons of Israel during the tenth plague in Egypt; in Spanish, the name would be Pascua, which rather refers to the ritual sacrifice of a lamb that was done in the days of the great temple)... Anyway, I like giving context to what I write, but it always takes me off the main topic I want to share. Back to my family.

I am a third-generation member of the Hashomer Hatzair zionist socialist youth movement; my grandmother was among the early Hashomer Hatzair members in Poland in the 1920s, both my parents were active in the Mexico ken in the 1950s-1960s (in fact, they met and first interacted there), and I was a member from 1984 until 1996. It was also thanks to Hashomer that my wife and I met, and if my children get to have any kind of Jewish contact in their lives, I hope it will be through Hashomer as well.

Hashomer is a secular, nationalist movement. A youth movement with over a century of history might seem like a contradiction. Over the years, of course, it has changed many details, but as far as I know, the essence is still there, and I hope it will continue to be so for good: Helping shape integral people, with an identification with Judaism as a nation and not as a religion; keeping our cultural traits, but interpreting them liberally, and aligned with a view towards the common good — Socialism, no matter how passé the concept seems nowadays. Collectivism. Inclusion. Peaceful coexistence with our neighbours. Acceptance of the different. I could write pages on how I learnt about each of them during my years in Hashomer, and how such concepts struck me as completely different from what the broader Jewish community I grew up in understood and related to them... But again, I am steering off the topic I want to pursue.

Every year, we used to have a third Seder (that is, a third Passover ceremony) at Hashomer. A third one because, as tradition mandates two ceremonies to be held outside Israel, and in a movement comprised of people aged between 7 and 21 a seder competing with the family one would not be too successful, we held a celebration on a following day. But it would never be the same as the "formal" Pesaj: For the Seder, the Jewish tradition mandates following the Hagadá — The Seder always follows a predetermined order (literally, Seder means order), and the Hagadá (which means both legend and a story that is spoken; you can find full Hagadot online if you want to see what rites are followed; I found a seemingly well done, modern, Hebrew and English version, a more traditional one, in Hebrew and Spanish, and Wikipedia has a description including its parts and rites) is, quite understandably, full of religious words, praises for God, and... Well, many things that are not in line with Hashomer's values. How could we be a secular movement and have a big celebration full of praises for God? How could we, yearning for life in the kibbutz, distance ourselves from the true agricultural meaning of the celebration?

The members of Hashomer Hatzair repeatedly took on the task (or, as many would see it, the heresy) of adapting the Hagadá to follow their worldview, updating it for the twentieth century, making it more palatable to our peculiarities. Yesterday, when we had our Seder, I saw that my father still has –together with the other, more traditional Hagadot we use– two copies of the Hagadá he used at Hashomer Hatzair's third Seder. And they are not only beautiful works showing what they, as very young activists, thought and made solemn; over time, they are becoming historic items in themselves (one from when my parents were still young janijim, in 1956, and one from when they were starting to take on responsibilities as non-formal teachers or path-showers, madrijim, in 1959). He also had a copy of the Hagadá we used in the 1980s when I was at Hashomer; this last one was (sadly?) not done by us as members of Hashomer, but prepared by a larger group between Hashomer Hatzair and the Mexican friends of the Israeli left-wing party Mapam. I don't know which year this last one was prepared and published, but I remember following it in our ceremony.

So, I asked him to lend me the three little books, almost leaflets, and scanned them to put them online. Of course, there is no formal licensing information in them, much less explicit authorship information, but they are meant to be shared — So I took the liberty of uploading them to the Internet Archive, tagging them as CC-0 licensed. And if you are interested in them, flowing over and back between Spanish and Hebrew, with many beautiful texts adapted for them from various sources, illustrated by our own with the usual heroic, socialist-inspired style, and lovingly hand-reproduced using the adequate technology for their day... Here they are:

I really enjoyed the time I took scanning and laying them out, reading some passages, imagining my parents and ourselves as youngsters, remembering the beautiful work we did at such a great organization. I hope this brings others the same joy it brought me.

פעם שומר, תמיד שומר. Once shomer, always shomer.


Reproducible builds folks: This is just a test. Please ignore.

Mon, 25/04/2016 - 17:14

Test, please ignore.


Ricardo Mones: Maximum number of clients reached Error: Can't open display: :0

Mon, 25/04/2016 - 10:20
Today it happened again: you try to open some program and nothing happens. You go to an open terminal, try again, and it answers with the above message. On previous occasions I used to restart the session, but that's something I don't think should really be necessary.

My first thought was that X had gone mad, but this one seems pretty well behaved:

$ lsof -p `pidof Xorg` | wc -l
5
Then I noticed I had a long-running chromium process (a jQuery page monitoring a remote service), so I tried this one as well:

$ for a in `pidof chromium`; do echo "$a "`lsof -p $a | wc -l`; done
27914 5
26462 5
25350 5
24693 5
23378 5
22723 5
22165 5
21476 222
21474 1176
21443 5
21441 204
21435 546
11644 5
11626 5
11587 5
11461 5
11361 5
9833 5
9726 5
Wow, I'd bet you can guess the next command ;-)

$ kill -9 21435 21441 21474 21476
This of course wiped out all chromium processes, but it also fixed the problem. Suggestions for selective chromium killing are welcome! But I'd really like to know why those files are not properly closed. Just relaunching chromium to write this post yields:

$ for a in `pidof chromium`; do echo "$a "`lsof -p $a | wc -l`; done
11919 5
11848 222
11841 432
11815 5
11813 204
11807 398
Which looks a bit exaggerated to me :-(
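On the selective-killing question: a sketch that only kills the chromium processes holding an unusually large number of file descriptors, leaving the well-behaved ones alone. It counts /proc/<pid>/fd entries directly rather than lsof output lines (so the numbers come out slightly lower than in the listings above), and the 150 threshold is an arbitrary choice of mine:

```shell
# fd_count: number of open file descriptors of a process,
# read directly from /proc (Linux-specific).
fd_count() {
    ls "/proc/$1/fd" 2>/dev/null | wc -l
}

# Kill only the chromium processes that hoard descriptors.
FD_LIMIT=150
for pid in $(pidof chromium); do
    if [ "$(fd_count "$pid")" -gt "$FD_LIMIT" ]; then
        echo "killing $pid ($(fd_count "$pid") open fds)"
        kill -9 "$pid"
    fi
done
```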

Norbert Preining: Gödel and Daemons – an excursion into literature

Mon, 25/04/2016 - 02:31

Explaining Gödel’s theorems to students is a pain. Period. How can those poor creatures crank their minds around a Completeness and an Incompleteness Proof… I understand that. But then, there are brave souls using Gödel’s theorems to explain the world of demons to writers, in particular to answer the question:

You can control a Demon by knowing its True Name, but why?

Very impressive.

Found at, pointed out to me by a good friend. I dare to quote author Cort Ammon (nothing more is known) in full, to preserve this masterpiece. Thanks!!!!

Use of their name forces them to be aware of the one truth they can never know.

Tl/Dr: If demons seek permanent power but trust no one, they put themselves in a strange position where mathematical truisms paint them into a corner which leaves their soul small and frail holding all the strings. Use of their name suggests you might know how to tug at those strings and unravel them wholesale, from the inside out!

Being a demon is tough work. If you think facing down a 4000lb Glabrezu without their name is difficult, try keeping that much muscle in shape in the gym! Never mind how many manicurists you go through keeping the claws in shape!

I don’t know how creative such demons truly are, but the easy route towards the perfect French tip that can withstand the rigors of going to the gym and benching ten thousand pounds is magic. Such a demon might learn a manicure spell from the nearby resident succubi. However, such spells are often temporary. No demon worth their salt is going to admit in front of a hero that they need a moment to refresh their mani before they can fight. The hero would just laugh at them. No, if a demon is going to do something, they’re going to do it right, and permanently. Not just nice french tips with a clear lacquer over the top, but razor sharp claws that resharpen themselves if they are blunted and can extend or retract at will!

In fact, come to think of it, why even go to the gym to maintain one’s physique? Why not just cast a magic spell which permanently makes you into the glorious Hanz (or Franz) that the trainer keeps telling you is inside you, just waiting to break free. Just get the spell right once, and think of the savings you could have on gym memberships.

Demons that wish to become more powerful, permanently, must be careful. If fairy tales have anything to teach us, it’s that one of the most dangerous things you can do is wish for something forever, and have it granted. Forever is a very long time, and every spell has its price. The demon is going to have to make sure the price is not greater than the perks. It would be a real waste to have a manicure spell create the perfect claws, only to find that they come with a peculiar penchant to curve towards one’s own heart in an attempt to free themselves from the demon that cast them.

So we need proofs. We need proofs that each spell is a good idea, before we cast it. Then, once we cast it, we need proof that the spell actually worked as intended. Otherwise, who knows if the next spell will layer on top perfectly or not. Mathematics to the rescue! The world of First Order Logic (FOL, or hereafter simply “logic”) is designed to offer these guarantees. With a few strokes of a pen, pencil, or even brush, it can write down a set of symbols which prove, without a shadow of a doubt, that not only will the spell work as intended, but that the side effects are manageable. How? So long as the demon can prove that they can cast a negation spell to undo their previous spell, the permanency can be reverted by the demon. With a few more fancy symbols, the demon can also prove that nobody else outside of the demon can undo their permanency. It’s a simple thing for mathematics really. Mathematics has an amazing spell called reductio ad infinitum which does unbelievable things.

However, there is a catch. There is always a catch with magic, even when that magic is being done through mathematics. In 1931, Kurt Gödel published his Incompleteness Theorems. These are two fascinating works of mathematical art which invoke the true names of First Order Logic and Set Theory. Gödel was able to prove that any system which is powerful enough to prove out all of algebra (1 + 1 = 2, 2 + 1 = 3, 3 * 5 = 15, etc.), could not prove its own validity. The self referential nature of proving itself crossed a line that First Order Logic simply could not return from. He proved that any system which tries must pick up one of these five traits:

  • Incomplete – they missed a detail when trying to prove everything
  • Incorrect – They got everything, but at least one point is wrong
  • Unprovable – They might be right, but they can never prove it
  • Intractable – If you’re willing to sit down and write down a proof that takes longer than eternity, you can prove a lot. Proofs that fit into eternity have limits.
  • Illogical – Throw logic to the wind, and you can prove anything!

If the demon wants itself to be able to cancel the spell, his proof is going to have to include his own abilities, creating just the kind of self referential effects needed to invoke Gödel’s incompleteness theorems. After a few thousand years, the demon may realize that this is folly.

A fascinating solution the demon might choose is to explore the “incomplete” solution to Gödel’s challenge. What if the demon permits the spell to change itself slightly, but in an unpredictable way? If the demon were a harddrive, perhaps he lets a single byte get changed by the spell in a way he cannot expect. This is actually enough to sidestep Gödel’s work, by introducing incompleteness. However, now we have to deal with pesky laws of physics and magic. We can’t just create something out of nothing, so if we’re going to let the spell change a single byte of us, there must be a single byte of information, its dual, that is unleashed into the world. Trying to break such conservation laws opens up a whole can of worms. Better to let that little bit go free into the world.

Well, almost. If you repeat this process a whole bunch of times, layering spells like a Matryoshka doll, you’re eventually left with a “soul” that is nothing but the leftover bits of your spells that you simply don’t know enough about to use. If someone were collecting those bits and pieces, they might have the undoing of your entire self. You can’t prove it, of course, but it’s possible that those pieces that you sent out into the world have the keys to undo your many layers of armor, and then you know they are the bits that can nullify your soul if they get there. So what do you do? You hide them. You cast your spells only on the darkest of nights, deep in a cave where no one can see you. If you need assistants, you make sure to ritualistically slaughter them all, lest one of them know your secret and whisper it to a bundle of reeds, “The king has horns,” if you are familiar with the old fairy tale. Make it as hard as possible for the secret to escape, and hope that it withers away to nothingness before someone discovers it, leaving you invincible.

Now we come back to the name. The demon is going to have a name it uses to describe its whole self, including all of the layers of spellcraft it has acquired. This will be a great name like Abraxis, the Unbegotten Father or “Satan, lord of the underworld.” However, they also need to keep track of their smaller self, their soul. Failure to keep track of this might leave them open to an attack if they had missed a detail when casting their spells, and someone uncovered something to destroy them. This would be their true name, potentially something less pompous, like Gaylord Focker or Slartybartfarst. They would never use this name in company. Why draw attention to the only part of them that has the potential to be weak?

So when the hero calls out for Slartybartfarst, the demon truly must pay attention. If they know the name the demon has given over the remains of their tattered soul, might they know how to undo the demon entirely? Fear would grip their inner self, like a child, having to once again consider that they might be mortal. Surely they would wish to destroy the hero that spoke the name, but any attempt runs the risk of falling into a trap and exposing a weakness (surely their mind is racing, trying to enumerate all possible weaknesses they have). It is surely better for them to play along with you, once you use their true name, until they understand you well enough to confidently destroy you without destroying themselves.

So you ask for answers which are plausible. This one needs no magic at all. None of the rules are invalid in our world today. Granted finding a spell of perfect manicures might be difficult (believe me, some women have spent their whole life searching), but the rules are simply those of math. We can see this math in non-demonic parts of society as well. Consider encryption. An AES-256 key is so hard to brute force that it is currently believed it is impossible to break it without consuming 3/4 of the energy in the Milky Way Galaxy (no joke!). However, know the key, and decryption is easy. Worse, early implementations of AES took shortcuts. They actually left the signature of the path they took through the encryption in their accesses to memory. The caches on the CPU were like the reeds from the old fable. Merely observing how long it took to read data was sufficient to gather those reeds, make a flute, and play a song that unveils the encryption key (which is clearly either “The king has horns” or “1-2-3-4-5” depending on how secure you think your luggage combination is). Observing the true inner self of the AES encryption implementations was enough to completely dismantle them. Of course, not every implementation fell victim to this. You had to know the name of the implementation to determine which vulnerabilities it had, and how to strike at them.

Or, more literally, consider the work of Alfred Whitehead, Principia Mathematica. Principia Mathematica was to be a proof that you could prove all of the truths in arithmetic using purely procedural means. In Principia Mathematica, there was no manipulation based on semantics, everything he did was based on syntax — manipulating the actual symbols on the paper. Gödel’s Incompleteness Theorem caught Principia Mathematica by the tail, proving that its own rules were sufficient to demonstrate that it could never accomplish its goals. Principia Mathematica went down as the greatest Tower of Babel of modern mathematical history. Whitehead is no longer remembered for his mathematical work. He actually left the field of mathematics shortly afterwards, and became a philosopher and peace advocate, making a new name for himself there.

(by Cort Ammon)


Bits from Debian: Debian welcomes its 2016 summer interns

Sun, 24/04/2016 - 21:00

We're excited to announce that Debian has selected 29 interns to work with us this summer: 4 in Outreachy, and 25 in the Google Summer of Code.

Here is the list of projects and the interns who will work on them:

Android SDK tools in Debian:

APT - dpkg communications rework:

Continuous Integration for Debian-Med packages:

Extending the Debian Developer Horizon:

Improving and extending AppRecommender:

Improving the debsources frontend:

Improving voice, video and chat communication with Free Software:

MIPS and MIPSEL ports improvements:

Reproducible Builds for Debian and Free Software:

Support for KLEE in Debile:

The Google Summer of Code and Outreachy programs are possible in Debian thanks to the effort of Debian developers and contributors that dedicate part of their free time to mentor students and outreach tasks.

Join us and help extend Debian! You can follow the students weekly reports on the debian-outreach mailing-list, chat with us on our IRC channel or on each project's team mailing lists.

Congratulations to all of them!


Dominique Dumont: Automount usb devices with systemd

Sun, 24/04/2016 - 19:34


Ever since udisks-glue was obsoleted along with udisks (the first generation), I’ve been struggling to find a solution to automatically mount a USB drive when such a device is connected to a Kodi-based home cinema PC. I wanted to avoid writing dedicated scripts or udev rules. Systemd is quite powerful, and I thought that a simple solution should be possible using systemd configuration.

Actually, the notion of auto-mounting covers two scenarios:

  1. A device is mounted after being plugged in
  2. An already available device is mounted when a process accesses its mount point

The first case is the one needed with Kodi. The second may be useful, so it is also documented in this post.

For the first case, add a line like the following in /etc/fstab:

/dev/sr0 /mnt/br auto defaults,noatime,auto,nofail 0 2

and reload systemd configuration:

sudo systemctl daemon-reload

The important parameters are “auto” and “nofail”: with “auto”, systemd mounts the filesystem as soon as the device is available. This behavior is different from sysvinit where “auto” is taken into account only when “mount -a” is run by init scripts. “nofail” ensures that boot does not fail when the device is not available.

The second case is handled by a line like the following one (even if the line is split here to improve readability):

/dev/sr0 /mnt/br auto defaults,x-systemd.automount,\ x-systemd.device-timeout=5,noatime,noauto 0 2

With the line above in /etc/fstab, the file system is mounted when the user runs, for instance, “ls /mnt/br” (actually, the first “ls” fails and triggers the mount; a second “ls” gives the expected result. There’s probably a way to improve this behavior, but I’ve not found it…)

“x-systemd.*” parameters are documented in systemd.mount(5).

Last but not least, using a plain device file (like /dev/sr0) works fine to automount optical devices. But it is difficult to predict the name of a device file created for a usb drive, so a LABEL or a UUID should be used in /etc/fstab instead of a plain device file. I.e. something like:

LABEL=my_usb_drive /mnt/my-drive auto defaults,auto,nofail 0 2
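To find a usable LABEL or UUID in the first place, lsblk (from util-linux) can list them. A small helper sketch; the function name and fallback messages are mine:

```shell
# list_disk_ids: show candidate stable identifiers for /etc/fstab entries.
list_disk_ids() {
    if command -v lsblk >/dev/null 2>&1; then
        lsblk -o NAME,LABEL,UUID,FSTYPE 2>/dev/null || echo "lsblk failed"
    else
        echo "lsblk not found (install util-linux)"
    fi
}
list_disk_ids
```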

All the best


Tagged: kodi, systemd

Dirk Eddelbuettel: Brad Mehldau at the CSO, again

Sun, 24/04/2016 - 19:23

Almost seven years since the last time we saw him here, Brad Mehldau returned to the CSO for a concert on Friday eve in his standard trio setup with Larry Grenadier on bass and Jeff Ballard on drums.

The material was mostly (all?) new and drawn from the upcoming album Blues and Ballads. On the morning of the concert---which happened to be the final one of their tour---he retweeted a bit from this review in the Boston Globe:

[Brad Mehldau] flashed facets of his renowned pianism: crystalline touch, deep lyricism, harmonic sophistication, adroit use of space, and the otherworldly independence of his right and left hands.

I cannot really describe his style any better than this. If you get a chance to see him, go!


Norbert Preining: Armenia and Turkey – Erdoğan did it again

Sun, 24/04/2016 - 13:19

It is 101 years to the day since Turkey started the first genocide of the 20th century, the Armenian Genocide. And Recep Tayyip Erdoğan, the populist and seemingly maniacal president of Turkey, does not pass up any chance to continue the shame of Turkey.

After having sued a German comedian for making fun of him – followed promptly by an equally shameful kowtow from Merkel, who allowed the judiciary to start prosecuting Jan Böhmermann – he is continuing to sue other journalists, and above all putting pressure on the European Community not to support a concert tour by the Dresdner Sinfoniker in memory of the genocide.

European values have disappeared, and politicians pay stupid tribute to a dictator-like Erdoğan who is destroying free speech and free media, not only in his country but all around the world. He must be a good friend of Abe; both are installing anti-freedom laws.

Shame on Europe for this. And Turkey, either vote Erdoğan out of office, or you should not (and hopefully will never) be allowed into the EC, because you don’t belong there.


Daniel Pocock: LinuxWochen, MiniDebConf Vienna and Linux Presentation Day

Sun, 24/04/2016 - 08:23

Over the coming week, there are a vast number of free software events taking place around the world.

I'll be at the LinuxWochen Vienna and MiniDebConf Vienna, the events run over four days from Thursday, 28 April to Sunday, 1 May.

At MiniDebConf Vienna, I'll be giving a talk on Saturday (schedule not finalized yet) about our progress with free Real-Time Communications (RTC) and welcoming 13 new GSoC students (and their mentors) working on this topic under the Debian umbrella.

On Sunday, Iain Learmonth and I will be collaborating on a workshop/demonstration on Software Defined Radio from the perspective of ham radio and the Debian Ham Radio Pure Blend. If you want to be an active participant, an easy way to get involved is to bring an RTL-SDR dongle. It is highly recommended that instead of buying any cheap generic dongle, you buy one with a high quality temperature compensated crystal oscillator (TXCO), such as those promoted by

Saturday, 30 April is also Linux Presentation Day in many places. There is an event in Switzerland organized by the local FSFE group in Basel.

DebConf16 is only a couple of months away now. Registration is still open, and the team is keenly looking for additional sponsors. Sponsors are a vital part of such a large event; if your employer or any other organization you know benefits from Debian, please encourage them to contribute.


Scott Kitterman: Computer System Security Policy Debate (Follow-up)

Sun, 24/04/2016 - 00:12

As a follow-up to my recent post on the debate in the US over new encryption restrictions, I thought a short addition might be relevant. The debate continues.

There was a recent Congressional hearing on the topic that featured mostly what you would expect. Police always want access to any possible source of evidence, and the tech industry tries to explain that the risks associated with mandates to do so are excessive, with grandstanding legislators sprinkled throughout. What I found interesting (and I use that word with some trepidation, as it is still a multi-hour video of a Congressional hearing) is that there was rather less grandstanding and less absolutism from some parties than I was expecting.

There is overwhelming consensus that these requirements [for exceptional access] are incompatible with good security engineering practice

Dr. Matthew Blaze

The challenge is that political people see everything as a political/policy issue, but this isn’t that kind of issue.  I get particularly frustrated when I read ignorant ramblings like this that dismiss the overwhelming consensus of the people that actually understand what needs to be done as emotional, hysterical obstructionism.  Contrary to what seems to be that author’s point, constructive dialogue and understanding values does nothing to change the technical risks of mandating exceptional access.  Of course the opponents of Feinstein-Burr decry it as technologically illiterate: it is technologically illiterate.

This doesn’t quite rise to the level of that time the Indiana state legislature considered legislating a new value (or in fact multiple values) for the mathematical constant Pi, but it is in the same legislative domain.

Categories: Elsewhere

Gergely Nagy: ErgoDox: Day 0

Fri, 22/04/2016 - 21:30

Today my ErgoDox EZ arrived, I flashed a Dvorak firmware a couple of times, and am typing this on the new keyboard. It's slow and painful, but the possibilities are going to be worth it in the end.

That is all. Writing even this much took ages.

Categories: Elsewhere

Matthew Garrett: Circumventing Ubuntu Snap confinement

Fri, 22/04/2016 - 03:51
Ubuntu 16.04 was released today, with one of the highlights being the new Snap package format. Snaps are intended to make it easier to distribute applications for Ubuntu - they include their dependencies rather than relying on the archive, they can be updated on a schedule that's separate from the distribution itself and they're confined by a strong security policy that makes it impossible for an app to steal your data.

At least, that's what Canonical assert. It's true in a sense - if you're using Snap packages on Mir (ie, Ubuntu mobile) then there's a genuine improvement in security. But if you're using X11 (ie, Ubuntu desktop) it's horribly, awfully misleading. Any Snap package you install is completely capable of copying all your private data to wherever it wants with very little difficulty.

The problem here is the X11 windowing system. X has no real concept of different levels of application trust. Any application can register to receive keystrokes from any other application. Any application can inject fake key events into the input stream. An application that is otherwise confined by strong security policies can simply type into another window. An application that has no access to any of your private data can wait until your session is idle, open an unconfined terminal and then use curl to send your data to a remote site. As long as Ubuntu desktop still uses X11, the Snap format provides you with very little meaningful security. Mir and Wayland both fix this, which is why Wayland is a prerequisite for the sandboxed xdg-app design.

I've produced a quick proof of concept of this. Grab XEvilTeddy from git, install Snapcraft (it's in 16.04), run snapcraft snap, then sudo snap install xevilteddy*.snap and /snap/bin/xevilteddy.xteddy. An adorable teddy bear! How cute. Now open Firefox and start typing, then check back in your terminal window. Oh no! All my secrets. Open another terminal window and give it focus. Oh no! An injected command that could instead have been a curl session that uploaded your private SSH keys to somewhere that's not going to respect your privacy.

The Snap format provides a lot of underlying technology that is a great step towards being able to protect systems against untrustworthy third-party applications, and once Ubuntu shifts to using Mir by default it'll be much better than the status quo. But right now the protections it provides are easily circumvented, and it's disingenuous to claim that it currently gives desktop users any real security.

Categories: Elsewhere

John Goerzen: Count me as a systemd convert

Thu, 21/04/2016 - 15:45

Back in 2014, I wrote about some negative first impressions of systemd. I also had a plea to debian-project to end all the flaming, pointing out that “jessie will still boot”, noting that my preference was for sysvinit but things are what they are and it wasn’t that big of a deal.

Although I still have serious misgivings about the systemd upstream’s attitude, I’ve got to say I find the system rather refreshing and useful in practice.

Here’s an example. I was debugging the boot on a server recently. It mounts a bunch of NFS filesystems and runs a third-party daemon that is started from an old-style /etc/init.d script.

We had a situation where the NFS filesystems the daemon required didn’t mount on boot. The daemon then was started, and unfortunately it basically does a mkdir -p on startup. So it started running and processing requests with negative results.

So there were two questions: why did the NFS filesystems fail to start, and how could we make sure the daemon wouldn’t start without them mounted? For the first, journalctl -xb was immensely helpful. It logged the status of each individual mount, and it turned out that it looked like a modprobe or kernel race condition when a bunch of NFS mounts were kicked off in parallel and all tried to load the nfsv4 module at the same time. That was easy enough to work around by adding nfsv4 to /etc/modules. Now for the other question: refusing to start the daemon if the filesystems weren’t there.
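The module workaround described above can be sketched as follows; a temporary file stands in for /etc/modules here so the sketch can run without root:

```shell
# Ensure nfsv4 is listed in /etc/modules so it is loaded once at boot
# instead of being raced by several parallel NFS mounts.
modules=$(mktemp)        # on a real system: modules=/etc/modules
grep -qx nfsv4 "$modules" || echo nfsv4 >> "$modules"
cat "$modules"
```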

With systemd, this was actually trivial. I created /etc/systemd/system/mydaemon.service.requires (I'll call the service “mydaemon” here), and in it I created a symlink to /lib/systemd/system/ Then systemctl daemon-reload, and boom, done. systemctl list-dependencies mydaemon will even show the dependency tree, color-coded status of each item on it, and will actually show every single filesystem that remote-fs requires and the status of it in one command. Super handy.

In a non-systemd environment, I’d probably be modifying the init script and doing a bunch of manual scripting to check the filesystems. Here, one symlink and one command did it, and I get tools to inspect the status of the mydaemon prerequisites for free.

I’ve got to say, as someone that has occasionally had to troubleshoot boot ordering and update-rc.d symlink hell, troubleshooting this stuff in systemd is considerably easier and the toolset is more powerful. Yes, it has its set of poorly-documented complexity, but then so did sysvinit.

I never thought the “world is falling” folks were right, but by now I can be counted among those that feels like systemd has matured to the point where it truly is superior to sysvinit. Yes, in 2014 it had some bugs, but by here in 2016 it looks pretty darn good and I feel like Debian’s decision has been validated through my actual experience with it.

Categories: Elsewhere

Alessio Treglia: Corporate Culture in the Transformative Enterprise

Thu, 21/04/2016 - 10:40


The “accelerated” world of the Western or “Westernized” countries seems to be fed by an insidious food, which generates a kind of psychological dependence: anxiety. The economy of global markets cannot help it, it has a structural need of it to feed their iron logic of survival. The anxiety generated in the masses of consumers and in market competitors is crucial for Companies fighting each other and now they can only live if men are projected to objective targets continuously moving forward, without ever allowing them to achieve a stable destination.

The consumer is thus constantly maintained in a state of perpetual breathlessness, always looking for the fresh air of liberation that could eventually reduce his tension. It is a state of anxiety caused by false needs generated by advertising campaigns whose primary purpose is to create a need, to interpret to their advantage a still confused psychological demand leading to the destination decided by the market…

<Read More…[by Fabio Marzocca]>

Categories: Elsewhere

Mario Lang: Scraping the web with Python and XQuery

Thu, 21/04/2016 - 10:30

During a JAWS for Windows training, I was introduced to the Research It feature of that screen reader. Research It is a quick way to utilize web scraping to make working with complex web pages easier. It is about extracting specific information from a website that does not offer an API. For instance, look up a word in an online dictionary, or quickly check the status of a delivery. Strictly speaking, this feature does not belong in a screen reader, but it is a very helpful tool to have at your fingertips.

Research It uses XQuery (actually, XQilla) to do all the heavy lifting. This also means that the Research It Rulesets are theoretically also useable on other platforms. I was immediately hooked, because I always had a love for XPath. Looking at XQuery code is totally self-explanatory for me. I just like the syntax and semantics.

So I immediately checked out XQilla on Debian, and found #821329 and #821330, which were promptly fixed by Tommi Vainikainen, thanks to him for the really quick response!

Unfortunately, making xqilla:parse-html available and upgrading to the latest upstream version is not enough to use XQilla on Linux with the typical webpages out there. Xerces-C++, which is what XQilla uses to fetch web resources, does not support HTTPS URLs at the moment. I filed #821380 to ask for HTTPS support in Xerces-C to be enabled by default.

And even with HTTPS support enabled in Xerces-C, the xqilla:parse-html function (which is based on HTML Tidy) fails for a lot of real-world webpages I tried. Manually upgrading the six year old version of HTML Tidy in Debian to the latest from GitHub (tidy-html5, #810951) did not help a lot either.

Python to the rescue

XQuery is still a very nice language for extracting information from markup documents. XQilla just has a bit of a hard time dealing with the typical HTML documents out there. After all, it was designed to deal with well-formed XML documents.

So I decided to build myself a little wrapper around XQilla which fetches the web resources with the Python Requests package, and cleans the HTML document with BeautifulSoup (which uses lxml to do HTML parsing). The output of BeautifulSoup can apparently be passed to XQilla as the context document. This is a fairly crazy hack, but it works quite reliably so far.

Here is what one of my web scraping rules looks like:

from click import argument, group

@group()
def xq():
    """Web scraping for command-line users."""
    pass'')
def github():
    """Quick access to"""
    pass

@github.command('code_search')
@argument('language')
@argument('query')
def github_code_search(language, query):
    """Search for source code."""
    scrape(get='',
           params={'l': language, 'q': query, 'type': 'code'})

The function scrape automatically determines the XQuery filename according to the caller's function name. Here is what github_code_search.xq looks like:

declare function local:source-lines($table as node()*) as xs:string* {
  for $tr in $table/tr
  return normalize-space(data($tr))

let $results := html//div[@id="code_search_results"]/div[@class="code-list"]
for $div in $results/div
let $repo := data($div/p/a[1])
let $file := data($div/p/a[2])
let $link := resolve-uri(data($div/p/a[2]/@href))
return (concat($repo, ": ", $file),
        $link,
        local:source-lines($div//table),
        "---------------------------------------------------------------")

That is all I need to implement a custom web scraping rule. A few lines of Python to specify how and where to fetch the website from. And a XQuery file that specifies how to mangle the document content.

And thanks to the Python click package, the various entry points of my web scraping script can easily be called from the command-line.

Here is a sample invocation:

fx:~/xq% ./
Usage: [OPTIONS] COMMAND [ARGS]...

  Quick access to

  --help  Show this message and exit.

  code_search  Search for source code.

fx:~/xq% ./ code_search Pascal '"debian/rules"'
prof7bit/LazPackager: frmlazpackageroptionsdeb.pas
230 procedure TFDebianOptions.BtnPreviewRulesClick(Sender: TObject);
231 begin
232 ShowPreview('debian/rules', EdRules.Text);
233 end;
234
235 procedure TFDebianOptions.BtnPreviewChangelogClick(Sender: TObject);
---------------------------------------------------------------
prof7bit/LazPackager: lazpackagerdebian.pas
205 + 'mv ../rules debian/' + LF
206 + 'chmod +x debian/rules' + LF
207 + 'mv ../changelog debian/' + LF
208 + 'mv ../copyright debian/' + LF
---------------------------------------------------------------

For the impatient, here is the implementation of scrape:

from bs4 import BeautifulSoup
from bs4.element import Doctype, ResultSet
from inspect import currentframe
from itertools import chain
from os import path
from os.path import abspath, dirname
from subprocess import PIPE, run
from tempfile import NamedTemporaryFile

import requests


def scrape(get=None, post=None, find_all=None,
           xquery_name=None, xquery_vars={}, **kwargs):
    """Execute a XQuery file.

    When either get or post is specified, fetch the resource and run it
    through BeautifulSoup, passing it as context to the XQuery.
    If find_all is given, wrap the result of executing find_all on the
    BeautifulSoup in an artificial HTML body.

    If xquery_name is not specified, the callers function name is used.
    xquery_name combined with extension ".xq" is searched in the directory
    where this Python script resides and executed with XQilla.

    kwargs are passed to get or post calls.  Typical extra keywords would be:
    params -- To pass extra parameters to the URL.
    data -- For HTTP POST.
    """
    response = None
    url = None
    context = None
    if get is not None:
        response = requests.get(get, **kwargs)
    elif post is not None:
        response =, **kwargs)
    if response is not None:
        context = BeautifulSoup(response.text, 'lxml')
        dtd = next(context.descendants)
        if type(dtd) is Doctype:
        if find_all is not None:
            context = context.find_all(find_all)
        url = response.url
    if xquery_name is None:
        xquery_name = currentframe().f_back.f_code.co_name
    cmd = ['xqilla']
    if context is not None:
        if type(context) is BeautifulSoup:
            soup = context
            context = NamedTemporaryFile(mode='w')
            print(soup, file=context)
        elif isinstance(context, list) or isinstance(context, ResultSet):
            tags = context
            context = NamedTemporaryFile(mode='w')
            print('<html><body>', file=context)
            for item in tags:
                print(item, file=context)
            print('</body></html>', file=context)
    cmd.extend(chain.from_iterable(['-v', k, v]
                                   for k, v in xquery_vars.items()))
    if url is not None:
        cmd.extend(['-b', url])
    cmd.append(abspath(path.join(dirname(__file__), xquery_name + ".xq")))
    output = run(cmd, stdout=PIPE).stdout.decode('utf-8')
    if type(context) is NamedTemporaryFile:
    print(output, end='')

The full source for xq can be found on GitHub. The project is just two days old, so I have only implemented three scraping rules as of now. However, adding new rules has been made deliberately easy, so that I can just write up a few lines of code whenever I find something on the web which I'd like to scrape on the command-line. If you find this "framework" useful, make sure to share your insights with me. And if you implement your own scraping rules for a public service, consider sharing those as well.

If you have any comments or questions, send me mail. Oh, and by the way, I am now also on Twitter as @blindbird23.

Categories: Elsewhere

Jonathan Dowland: mount-on-demand backups

Wed, 20/04/2016 - 22:49

Last week, someone posted a request for help on the popular Server Fault Q&A site: they had apparently accidentally deleted their entire web hosting business, and all their backups. The post (now itself deleted) was a reasonably obvious fake, but mainstream media reported on it anyway, and then life imitated art and 123-reg went and did actually delete all their hosted VMs, and their backups.

I was chatting to some friends from $job-2 and we had a brief smug moment that we had never done anything this bad, before moving on to incredulity that we had never done anything this bad in the 5 years or so we were running the University web servers. Some time later I realised that my personal backups were at risk from something like this because I have a permanently mounted /backup partition on my home NAS. I decided to fix it.

I already use Systemd to manage mounting the /backup partition (via a backup.mount file) and its dependencies. I'll skip the finer details of that for now.

I planned to define some new Systemd units for each backup job which was previously scheduled via Cron in order that I could mark them as depending on the /backup mount. I needed to adjust that mount definition by adding StopWhenUnneeded=true. This ensures that /backup will be unmounted when it is not in use by another job, and not at risk of a stray rm -rf.
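For illustration, a backup.mount along those lines might look like this; the device path and description are assumptions, since the post doesn't show the actual unit:

```ini
Description=Backup partition

# Unmount /backup whenever no backup job is actively using it.

What=/dev/disk/by-label/backup
```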

The backup jobs are all simple shell scripts that convert quite easily into services. An example:


Requires=backup.mount


To schedule this, I also need to create a timer:


OnCalendar=*-*-* 04:01:00


To enable the timer, you have to both enable and start it:

systemctl enable backup-home.timer
systemctl start backup-home.timer

I created service and timer units for each of my cron jobs.

The other big difference to driving these from Cron is that by default I won't get any emails if the jobs generate output - in particular, if they fail. I definitely do want mail if things fail. The Arch Wiki has an interesting proposed solution to this which I took a look at. It's a bit clunky, and my initial experiments with a derivation from this (using mail(1) not sendmail(1)) have not yet generated any mail.

Pros and Cons

The Systemd timespec is more intuitive than Cron's. It's a shame you need a minimum of three more lines of boilerplate for the simplest of timers. I think should probably be an implicit default for all .timer type units. Here I think clarity suffers in the name of consistency.

With timers, start doesn't kick off the job; it really means "enable" in the context of timers, which is clumsy considering the existing enable verb, which seems almost superfluous but is necessary for consistency, since Systemd units need to be enabled before they can be started. As Simon points out in the comments, this is not true. Rather, "enable" is needed for the timer to be active upon subsequent boots, but won't enable it in the current boot. "Start" will enable it for the current boot, but not for subsequent ones.

Since I need a .service and a .timer file for each active line in my crontab, that's a lot of small files (twice as many as the number of jobs being defined), and they're all stored in a system-wide folder because of the dependency on the necessarily system-level units defining the mount.

It's easy to forget the After= line for the backup services. On the one hand, it's a shame that After= doesn't imply Requires=, so you need both; a convenience option that did both would help. On the other hand, there are already too many Systemd options, and adding more conjoined ones would just make things even more complicated.

It's a shame I couldn't use user-level units to achieve this, but they could not depend on the system-level ones, nor activate /backup. This is a sensible default, since you don't want any user to be able to start any service on-demand, but some way of enabling it for these situations would be good. I ruled out systemd.automount because a stray rm -rf would trigger the mount which defeats the whole exercise. Apparently this might be something you solve with Polkit, as the Arch Wiki explains, which looks like it has XML disease.

I need to get mail-on-error working reliably.

Categories: Elsewhere

Ben Hutchings: Experiments with signed kernels and modules in Debian

Wed, 20/04/2016 - 20:53

I've lately been working on support for Secure Boot in Debian, mostly in the packages maintained by the kernel team.

My instructions for setting up UEFI Secure Boot are based on OVMF running on KVM/QEMU. All 'Designed for Windows' PCs should allow reconfiguration of SB, but it may not be easy to do so. They also assume that the firmware includes an EFI shell.

Updated: Robert Edmonds pointed out that the 'Designed for Windows' requirements changed with Windows 10:

@benhutchingsuk "Hardware can be Designed for Windows 10 and can offer no way to opt out of the Secure Boot"

— Robert Edmonds (@rsedmonds) April 20, 2016

The ability to reconfigure SB is indeed now optional for devices which are designed to always boot with a specific Secure Boot configuration. I also noticed that the requirements say that OEMs should not sign an EFI shell binary. Therefore I've revised the instructions to use efibootmgr instead.


UEFI Secure Boot, when configured and enabled (which it is on most new PCs) requires that whatever it loads is signed with a trusted key. The one common trusted key for PCs is held by Microsoft, and while they will sign other people's code for a nominal fee, they require that it also validates the code it loads, i.e. the kernel or next stage boot loader. The kernel in turn is responsible for validating any code that could compromise its integrity (kernel modules, kexec images).

Currently there are no such signed boot loaders in Debian, though the shim and grub-signed packages included in many other distributions should be usable. However it's possible to load an appropriately configured Linux kernel directly from the UEFI firmware (typically through the shell) which is what I'm doing at the moment.

Packaging signed kernels

Signing keys obviously need to be protected against disclosure; the private keys can't be included in a source package. We also won't install them on buildds separately, and generating signatures at build time would of course be unreproducible. So I've created a new source package, linux-signed, which contains detached signatures prepared offline.

Currently the binary packages built from linux-signed also contain only detached signatures, which are applied as necessary at installation time. The signed kernel image (only on x86 for now) is named /boot/vmlinuz-kversion.efi.signed. However, since packages must not modify files owned by another package and I didn't want to dpkg-divert thousands of modules, the module signatures remain detached. Detached module signatures are a new invention of mine, and require changes in kmod and various other packages to support them. (An alternative might be to put signed modules under a different directory and drop a configuration file in /lib/depmod.d to make them higher priority. But then we would end up with two copies of every module installed, which can be a substantial waste of space.)
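That alternative could be sketched as a depmod.d drop-in; the directory name and file path here are invented for illustration, not taken from the post:

```
# /etc/depmod.d/signed.conf (hypothetical)
# Prefer modules under /lib/modules/<version>/signed over the others.
search signed updates built-in
```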


The packages you need to repeat the experiment:

  • linux-image-4.5.0-1-flavour version 4.5.1-1 from unstable (only 686, 686-pae or amd64 flavours have signed kernels; most flavours have signed modules)
  • linux-image-4.5.0-1-flavour-signed version 1~exp3 from experimental
  • initramfs-tools version 0.125 from unstable
  • kmod and libkmod2 unofficial version 22-1.2 from

For Secure Boot, you'll then need to copy the signed kernel and the initrd onto the EFI system partition, normally mounted at /boot/efi.

SB requires a Platform Key (PK) which will already be installed on a real PC. You can replace it but you don't need to. If you're using OVMF, there are no persistent keys so you do need to generate your own:

openssl req -new -x509 -newkey rsa:2048 -keyout pk.key -out pk.crt \
    -outform der -nodes

You'll also need to install the certificate for my kernel image signing key, which is under debian/certs in the linux-signed package. OVMF requires this in DER format:

openssl x509 -in linux-signed-1~exp3/debian/certs/ \
    -out linux.crt -outform der

You'll need to copy the certificate(s) to a FAT-formatted partition such as the EFI system partition, so that the firmware can read it.

Use efibootmgr to add a boot entry for the kernel, for example:

efibootmgr -c -d /dev/sda -L linux-signed -l '\vmlinuz.efi' -u 'initrd=initrd.img root=/dev/sda2 ro quiet'

You should use the same kernel parameters as usual, except that you also need to specify the initrd filename using the initrd= parameter. The EFI stub code at the beginning of the kernel will load the initrd using EFI boot services.

Enabling Secure Boot
  1. Reboot the system and enter UEFI setup
  2. Find the menu entry for Secure Boot customisation (in OVMF, it's under 'Device Manager' for some reason)
  3. In OVMF, enrol the PK from pk.crt
  4. Add linux.crt to the DB (whitelist database)
  5. Ensure that Secure Boot is enabled and in 'User Mode'
Booting the kernel in Secure Boot

If all went well, Linux will boot as normal. You can confirm that Secure Boot was enabled by reading /sys/kernel/security/securelevel, which will contain 1 if it was.

Module signature validation

Module signatures are now always checked and unsigned modules will be given the 'E' taint flag. If Secure Boot is used or you add the kernel parameter module.sig_enforce=1, unsigned modules will be rejected. You can also turn on signature enforcement and turn off various other methods of modifying kernel code (such as kexec) by writing 1 to /sys/kernel/security/securelevel.

Categories: Elsewhere

Reproducible builds folks: Reproducible builds: week 51 in Stretch cycle

Wed, 20/04/2016 - 20:47

What happened in the reproducible builds effort between April 10th and April 16th 2016:

Toolchain fixes
  • Roland Rosenfeld uploaded transfig/1:3.2.5.e-6 which honors SOURCE_DATE_EPOCH. Original patch by Alexis Bienvenüe.
  • Bill Allombert uploaded gap/4r8p3-2 which makes honor SOURCE_DATE_EPOCH. Original patch by Jerome Benoit, duplicate patch by Dhole.
  • Emmanuel Bourg uploaded ant/1.9.7-1 which makes the Javadoc task use UTF-8 as the default encoding if none was specified and SOURCE_DATE_EPOCH is set.

Antoine Beaupré suggested that gitpkg stop recording timestamps when creating upstream archives. Antoine Beaupré also pointed out that git-buildpackage diverges from the default gzip settings, which is a problem for reproducibly recreating released tarballs that were made using the defaults.

Alexis Bienvenüe submitted a patch extending sphinx SOURCE_DATE_EPOCH support to copyright year.

Packages fixed

The following packages have become reproducible due to changes in their build dependencies: atinject-jsr330, avis, brailleutils, charactermanaj, classycle, commons-io, commons-javaflow, commons-jci, gap-radiroot, jebl2, jetty, libcommons-el-java, libcommons-jxpath-java, libjackson-json-java, libjogl2-java, libmicroba-java, libproxool-java, libregexp-java, mobile-atlas-creator, octave-econometrics, octave-linear-algebra, octave-odepkg, octave-optiminterp, rapidsvn, remotetea, ruby-rinku, tachyon, xhtmlrenderer.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues, but not all of them:

Patches submitted which have not made their way to the archive yet:

  • #820603 on viking by Alexis Bienvenüe: fix icon headers inclusion order.
  • #820661 on nullmailer by Alexis Bienvenüe: fix the order in which files are included in the static archive.
  • #820668 on sawfish by Alexis Bienvenüe: fix file ordering in theme archives, strip hostname and username from the config.h file, and honour SOURCE_DATE_EPOCH when creating the config.h file.
  • #820740 on bless by Alexis Bienvenüe: always use /bin/sh as shell.
  • #820742 on gmic by Alexis Bienvenüe: strip the build date from help messages.
  • #820809 on wsdl4j by Alexis Bienvenüe: use a plain text representation of the copyright character.
  • #820815 on freefem++ by Alexis Bienvenüe: fix the order in which files are included in the .edp files, and honour SOURCE_DATE_EPOCH when using the build date.
  • #820869 on pyexiv2 by Alexis Bienvenüe: honour the SOURCE_DATE_EPOCH environment variable through the ustrftime function, to get a reproducible copyright year.
  • #820932 on fim by Alexis Bienvenüe: fix the order in which files are joined in header files, strip the build date from the fim binary, make the embedded vim2html script honour the SOURCE_DATE_EPOCH variable when building the documentation, and force the language to English when using bison to make a grammar that is going to be parsed using English keywords.
  • #820990 on grib-api by Santiago Vila: always call dh-buildinfo.
diffoscope development

Zbigniew Jędrzejewski-Szmek noted in #820631 that diffoscope doesn't work properly when a file contains several cpio archives.

Package reviews

21 reviews have been added, 14 updated and 22 removed in this week.

New issue found: timestamps_in_htm_by_gap.

Chris Lamb reported 10 new FTBFS issues.


The video and the slides from the talk "Reproducible builds ecosystem" at LibrePlanet 2016 have been published now.

This week's edition was written by Lunar and Holger Levsen. h01ger automated the maintenance and publishing of this weekly newsletter via git.

Categories: Elsewhere