Planet Debian


Bits from Debian: Debian is participating in the next round of Outreachy!

Sun, 09/10/2016 - 19:50

Following the success of the last round of Outreachy, we are glad to announce that Debian will take part in the program for the next round, with internships lasting from the 6th of December 2016 to the 6th of March 2017.

From the official website: Outreachy helps people from groups underrepresented in free and open source software get involved. We provide a supportive community for beginning to contribute any time throughout the year and offer focused internship opportunities twice a year with a number of free software organizations.

Currently, internships are open internationally to women (cis and trans), trans men, and genderqueer people. Additionally, they are open to residents and nationals of the United States of any gender who are Black/African American, Hispanic/Latin@, American Indian, Alaska Native, Native Hawaiian, or Pacific Islander.

If you want to apply to an internship in Debian, you should take a look at the wiki page, and contact the mentors for the projects listed, or seek more information on the (public) debian-outreach mailing-list. You can also contact the Outreach Team directly. If you have a project idea and are willing to mentor an intern, you can submit a project idea on the Outreachy wiki page.

Here are a few words on what the interns from the last round achieved within Outreachy:

  • Tatiana Malygina worked on Continuous Integration for Bioinformatics applications; she has pushed more than a hundred commits to the Debian Med SVN repository over the last few months, and has been sponsored for more than 20 package uploads.

  • Valerie Young worked on Reproducible Builds infrastructure, driving a complete overhaul of the database and software behind the website. Her blog contains regular updates throughout the program.

  • ceridwen worked on creating reprotest, an all-in-one tool allowing anyone to check whether a build is reproducible or not, replacing the string of ad-hoc scripts the reproducible builds team used so far. She posted regular updates on the Reproducible Builds team blog.

  • While Scarlett Clark did not complete the internship (as she found a full-time job by the mid-term evaluation!), she spent the four weeks she participated in the program providing patches for reproducible builds in Debian and KDE upstream.

Debian would not be able to participate in Outreachy without the help of the Software Freedom Conservancy, who provides administrative support for Outreachy, as well as the continued support of Debian's donors, who provide funding for the internships. If you want to donate, please get in touch with one of our trusted organizations.

Debian is looking forward to welcoming new interns for the next few months. Come join us!


Guido Günther: Debian Fun in September 2016

Sun, 09/10/2016 - 16:59
Debian LTS

September marked the seventeenth month I contributed to Debian LTS under the Freexian umbrella. I spent 6 hours (out of 7) working on

  • updating Icedove to 45.3 resulting in DLA-640-1
  • finishing my work on bringing rails into shape security-wise, resulting in DLA-641-1 for ruby-activesupport-3.2 and DLA-642-1 for ruby-activerecord-3.2.
  • enhancing the autopkgtests for qemu a bit
Other Debian stuff
  • Uploaded libvirt 2.3.0~rc1 to experimental
  • Uploaded whatmaps 0.0.12 to unstable.
  • Uploaded git-buildpackage 0.8.4 to unstable.
Other Free Software activities
  • Ansible: got the foreman callback plugin needed for foreman_ansible merged upstream.
  • Made several improvements to foreman_ansible_inventory (an Ansible dynamic inventory querying Foreman): fixing an endless loop when Foreman would miscalculate the number of hosts to process, flake8 cleanliness, and some work on Python 3 support
  • ansible-module-foreman:
    • unbreak defining subnets by setting the default boot mode.
    • add support for configuring realms
  • Foreman: add some robustness to the nice rebuild host feature when DNS entries are already there
  • Released whatmaps 0.0.12.
    • Errors related to a single package no longer abort the whole program; the package is skipped instead.
    • Systemd user sessions are filtered out
    • The codebase is now checked with flake8.

Ben Armstrong: Annual Hike with Ryan: Salt Marsh Trail, 2016

Sun, 09/10/2016 - 14:20

Once again, Ryan Neily and I met last month for our annual hike. This year, to give our aging knees a break, we visited the Salt Marsh Trail for the first time. For an added level of challenge and to access the trail by public transit, we started with the Shearwater Flyer Trail and finished with the Heritage Trail. It was a perfect day both for hiking and photography: cool with cloud cover and a refreshing coastal breeze. The entire hike was over 25 km and took the better part of the day to complete. Good times, great conversations, and I look forward to visiting these beautiful trails again!

Salt Marsh trail hike, 2016 (photo slideshow). The captions trace the day: the start on the Shearwater Flyer trail, an unidentified bush with spectacular berries, a rail bridge converted to a foot bridge, cranberries, the Salt Marsh trail map, salt-marsh grasses and rocks laid out in stripes, a lunch stop, the causeway bridge and its swift current, a distant heron, ducks at the head of the Atlantic View trail where we rested and turned back, nodding ladies’ tresses, a lonely wild rose, and a short breather on the Heritage Trail.

Here’s the Strava record of our hike:


Norbert Preining: Reload: Android 7.0 Nougat – Root – Pokemon Go

Sun, 09/10/2016 - 13:01

Ok, it turned out that a combination of updates has broken my previous guide on playing Pokemon GO on a rooted Android device. What has happened is that the October security update of Android Nougat changed the SafetyNet check that is used to detect rooted devices, and at the same time the Magisk rooting system catapulted itself (hopefully temporarily) into complete irrelevance by removing the working version and providing an “improved” version that has neither SuperSU installed nor the ability to hide root – well done, congratulations.

But there is a way around, and I am now back at the latest security patch level, rooted, and playing Pokemon GO (not very often, no time, though!).

My previous guide used Magisk version 6 to root and hide root. But the recent security update of Android Nougat (October 2016) has rendered Magisk-v6 non-working. I first thought that Magisk-v7 could solve the problem, but I was badly disappointed: after reflashing my device to a pristine state and installing Magisk-v7, I was suddenly left with no SuperSU (which means X-plore, Titanium Backup etc. do not work anymore) and no ability to hide root from Pokemon GO or banking apps. Great update.

Thus, I have decided to remove Magisk completely and make a clean start with SuperSU and suhide (and a GUI for suhide). It turned out to be more convenient and more standard than Magisk – may it rest in peace (until they get their act back together).

In the following I assume you have a clean-slate Android Nougat device; if not, please see one of the previous guides for hints on how to flash back without losing your user data.


One needs the following few items:


Unzip the download and either use the included programs (such as root-windows.bat) to root your device, or simply connect your device to your computer and run (assuming you have adb and fastboot installed):

adb reboot bootloader
sleep 10
fastboot boot image/CF-Auto-Root-angler-angler-nexus6p.img

After that your device will reboot a few times, and you will finally land in your normal Android screen with a new program, SuperSU, available. At this stage you will not be able to play Pokemon GO anymore.

Updating SuperSU

The version of SuperSU packaged with CF-Auto-Root is unfortunately too old, so one needs to update using the SuperSU zip file (2.78-SR1 or later). There are two options: either use the TWRP recovery system, or install FlashFire (main page, app store page), which allows you to flash zip/ota files directly from your Android screen. This time I tried the FlashFire method for the first time, and it worked without any problem.

Just press the “+” button in FlashFire, then the “Flash zip/ota” button, select the SuperSU zip, click yes twice, and then, after a short wait and a few black screens (don’t do anything!), you will be back in your Nougat environment. Opening the SuperSU app should show you, on the Settings tab, that the version has been updated to 2.78-SR1.

Installing suhide

As with the SuperSU update, install the suhide zip file – same procedure, nothing special.

After this you will be able to add an application (like Pokemon GO) from the command line (shell), but this is not very convenient. Better to install the suhide GUI from the app store, start it, scroll to Pokemon GO, add a tick, and you are set.

After that you are free to play Pokemon GO again – at least until the next security update brings new problems. In the long run this is a losing game anyway. Enjoy it while you can.


Craig Sanders: Converting to a ZFS rootfs

Sun, 09/10/2016 - 07:57

My main desktop/server machine (running Debian sid) at home has been running XFS on mdadm raid-1 on a pair of SSDs for the last few years. A few days ago, one of the SSDs died.

I’ve been planning to switch to ZFS as the root filesystem for a while now, so instead of just replacing the failed drive, I took the opportunity to convert it.

NOTE: at this point in time, ZFS On Linux does NOT support TRIM for either datasets or zvols on SSD. There’s a patch almost ready (TRIM/Discard support from Nexenta #3656), so I’m betting on that getting merged before it becomes an issue for me.

Here’s the procedure I came up with:

1. Buy new disks, shutdown machine, install new disks, reboot.

The details of this stage are unimportant, and the only thing to note is that I’m switching from mdadm RAID-1 with two SSDs to ZFS with two mirrored pairs (RAID-10) on four SSDs (Crucial MX300 275G – at around $100 AUD each, they’re hard to resist). Buying four 275G SSDs is slightly more expensive than buying two of the 525G models, but will perform a lot better.

When installed in the machine, they ended up as /dev/sdp, /dev/sdq, /dev/sdr, and /dev/sds. I’ll be using the symlinks in /dev/disk/by-id/ for the zpool, but for partition and setup, it’s easiest to use the /dev/sd? device nodes.

2. Partition the disks identically with gpt partition tables, using gdisk and sgdisk.

I need:

  • A small partition (type EF02, 1MB) for grub to install itself in. Needed on gpt.
  • A small partition (type EF00, 1MB) for EFI System. I’m not currently booting with UEFI but I want the option to move to it later.
  • A small partition (type 8300, 2GB) for /boot.

    I want /boot on a separate partition to make it easier to recover from problems that might occur with future upgrades. 2GB might seem excessive, but as this is my tftp & dhcp server I can’t rely on network boot for rescues, so I want to be able to put rescue ISO images in there and boot them with grub and memdisk (see the grub entry sketch just after this list).

    This will be mdadm RAID-1, with 4 copies.

  • A larger partition (type 8200, 4GB) for swap. With 4 identically partitioned SSDs, I’ll end up with 16GB swap (using zswap for block-device backed compressed RAM swap)

  • A large partition (type bf07, 210GB) for my rootfs

  • A small partition (type bf08, 2GB) to provide ZIL for my HDD zpools

  • A larger partition (type bf09, 32GB) to provide L2ARC for my HDD zpools
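
As promised in the /boot item above, here is a rough sketch of what such a grub menuentry could look like, chain-loading syslinux’s memdisk to boot a rescue ISO (file names and paths are hypothetical; memdisk itself comes from the syslinux packages):

    # hypothetical entry in grub.cfg; paths are relative to the /boot partition
    menuentry "Rescue ISO via memdisk" {
        linux16 /memdisk iso
        initrd16 /rescue.iso
    }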

ZFS On Linux uses partition type bf07 (“Solaris Reserved 1”) natively, but doesn’t seem to care what the partition types are for ZIL and L2ARC. I arbitrarily used bf08 (“Solaris Reserved 2”) and bf09 (“Solaris Reserved 3”) for easy identification. I’ll set these up later, once I’ve got the system booted – I don’t want to risk breaking my existing zpools by taking away their ZIL and L2ARC (and forgetting to zpool remove them, which I might possibly have done once) if I have to repartition.
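If a type code needs adjusting after the fact, sgdisk can change it in place without touching the partition’s data; a minimal sketch (disk and partition number are illustrative):

    # mark partition 6 on /dev/sdp as bf08 ("Solaris Reserved 2")
    sgdisk --typecode=6:bf08 /dev/sdp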

I used gdisk to interactively set up the partitions:

# gdisk -l /dev/sdp
GPT fdisk (gdisk) version 1.0.1

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdp: 537234768 sectors, 256.2 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 4234FE49-FCF0-48AE-828B-3C52448E8CBD
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 537234734
Partitions will be aligned on 8-sector boundaries
Total free space is 6 sectors (3.0 KiB)

Number  Start (sector)  End (sector)  Size        Code  Name
   1                40          2047  1004.0 KiB  EF02  BIOS boot partition
   2              2048       2099199  1024.0 MiB  EF00  EFI System
   3           2099200       6293503  2.0 GiB     8300  Linux filesystem
   4           6293504      14682111  4.0 GiB     8200  Linux swap
   5          14682112     455084031  210.0 GiB   BF07  Solaris Reserved 1
   6         455084032     459278335  2.0 GiB     BF08  Solaris Reserved 2
   7         459278336     537234734  37.2 GiB    BF09  Solaris Reserved 3

I then cloned the partition table to the other three SSDs with this little script:

#! /bin/bash

src='sdp'
targets=( 'sdq' 'sdr' 'sds' )

for tgt in "${targets[@]}"; do
    sgdisk --replicate="/dev/$tgt" /dev/"$src"
    sgdisk --randomize-guids "/dev/$tgt"
done

3. Create the mdadm array for /boot, the zpool, and the root filesystem.

Most rootfs-on-ZFS guides that I’ve seen say to call the pool rpool, then create a dataset called "$(hostname)-1", and then create a ROOT dataset under that. So on my machine, that would be rpool/ganesh-1/ROOT. Some reverse the order of hostname and the rootfs dataset, for rpool/ROOT/ganesh-1.

There might be uses for this naming scheme in other environments, but not in mine. And, to me, it looks ugly. So I’ll use just $(hostname)/root for the rootfs, i.e. ganesh/root.

I wrote a script to automate it, figuring I’d probably have to do it several times in order to optimise performance. Also, I wanted to document the procedure for future reference, and have scripts that would be trivial to modify for other machines.

#! /bin/bash

exec &> ./create.log

hn="$(hostname -s)"
base='ata-Crucial_CT275MX300SSD1_'
md='/dev/md0'
md_part=3
md_parts=( $(/bin/ls -1 /dev/disk/by-id/${base}*-part${md_part}) )

zfs_part=5

# 4 disks, so use the top half and bottom half for the two mirrors.
zmirror1=( $(/bin/ls -1 /dev/disk/by-id/${base}*-part${zfs_part} | head -n 2) )
zmirror2=( $(/bin/ls -1 /dev/disk/by-id/${base}*-part${zfs_part} | tail -n 2) )

# create /boot raid array
mdadm --create "$md" \
    --bitmap=internal \
    --raid-devices=4 \
    --level 1 \
    --metadata=0.90 \
    "${md_parts[@]}"

mkfs.ext4 "$md"

# create zpool
zpool create -o ashift=12 "$hn" \
    mirror "${zmirror1[@]}" \
    mirror "${zmirror2[@]}"

# create zfs rootfs
zfs set compression=on "$hn"
zfs set atime=off "$hn"
zfs create "$hn/root"
zpool set bootfs="$hn/root" "$hn"

# mount the new /boot under the zfs root
mkdir -p "/$hn/root/boot"
mount "$md" "/$hn/root/boot"

If you want or need other ZFS datasets (e.g. for /home, /var etc) then create them here in this script. Or you can do that later after you’ve got the system up and running on ZFS.

If you run mysql or postgresql, read the various tuning guides for how to get best performance for databases on ZFS (they both need their own datasets with particular recordsize and other settings). If you download Linux ISOs or anything with bit-torrent, avoid COW fragmentation by setting up a dataset to download into with recordsize=16K and configure your BT client to move the downloads to another directory on completion.

I did this after I got my system booted on ZFS. For my db, I stopped the postgres service, renamed /var/lib/postgresql to /var/lib/p, created the new datasets with:

zfs create -o recordsize=8K -o logbias=throughput -o mountpoint=/var/lib/postgresql \
    -o primarycache=metadata ganesh/postgres

zfs create -o recordsize=128k -o logbias=latency -o mountpoint=/var/lib/postgresql/9.6/main/pg_xlog \
    -o primarycache=metadata ganesh/pg-xlog

followed by rsync and then started postgres again.
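
For the bit-torrent case mentioned above, a minimal sketch (dataset name and mountpoint are hypothetical):

    # 16K recordsize limits COW fragmentation from the client's random writes;
    # configure the client to move completed downloads elsewhere
    zfs create -o recordsize=16K -o mountpoint=/srv/torrents ganesh/torrents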

4. rsync my current system to it.

Log out all user sessions, and shut down all services that write to the disk (postfix, postgresql, mysql, apache, asterisk, docker, etc). If you haven’t booted into recovery/rescue/single-user mode, then you should be as close to it as possible – everything non-essential should be stopped. I chose not to boot to single-user in case I needed access to the web to look things up while I did all this (this machine is my internet gateway).


hn="$(hostname -s)"
time rsync -avxHAXS -h -h --progress --stats --delete / /boot/ "/$hn/root/"

After the rsync, my 130GB of data from XFS was compressed to 91GB on ZFS with transparent lz4 compression.

Run the rsync again if (as I did) you realise you forgot to shut down postfix (causing newly arrived mail to not be on the new setup) or something.

You can do a (very quick & dirty) performance test now by running zpool scrub "$hn", then watch zpool status "$hn". As there should be no errors to correct, you should get scrub speeds approximating the combined sequential read speed of all vdevs in the pool. In my case, I got around 500-600M/s – I was kind of expecting closer to 800M/s, but that’s good enough. The Crucial MX300s aren’t the fastest drives available (but they’re great for the price), and ZFS is optimised for reliability more than speed. The scrub took about 3 minutes to scan all 91GB. My HDD zpools get around 150 to 250M/s, depending on whether they have mirror or RAID-Z vdevs and on what kind of drives they have.

For real benchmarking, use bonnie++ or fio.

5. Prepare the new rootfs for chroot, chroot into it, edit /etc/fstab and /etc/default/grub.

This script bind mounts /proc, /sys, /dev, and /dev/pts before chrooting:

#! /bin/sh

hn="$(hostname -s)"

for i in proc sys dev dev/pts ; do
    mount -o bind "/$i" "/${hn}/root/$i"
done

chroot "/${hn}/root"

Change /etc/fstab (on the new zfs root to) have the zfs root and ext4 on raid-1 /boot:

/ganesh/root  /      zfs   defaults                                         0  0
/dev/md0      /boot  ext4  defaults,relatime,nodiratime,errors=remount-ro  0  2

I haven’t bothered with setting up the swap at this point. That’s trivial, and I can do it after I’ve got the system rebooted with its new ZFS rootfs (which reminds me, I still haven’t done that :).
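For reference, a sketch of what that swap setup might look like, untested here (zswap aside, it is just mkswap/swapon on each -part4; equal priority lets the kernel stripe across all four):

    for dev in /dev/disk/by-id/ata-Crucial_CT275MX300SSD1_*-part4 ; do
        mkswap "$dev"
        swapon --priority 1 "$dev"
    done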

Add boot=zfs to the GRUB_CMDLINE_LINUX variable in /etc/default/grub. On my system, that’s:

GRUB_CMDLINE_LINUX="iommu=noagp usbhid.quirks=0x1B1C:0x1B20:0x408 boot=zfs"

NOTE: If you end up needing to run rsync again as in step 4 above, copy /etc/fstab and /etc/default/grub to the old root filesystem first. I suggest copying them to /etc/fstab.zfs and /etc/default/grub.zfs.

6. Install grub

Here’s where things get a little complicated. Running grub-install on /dev/sd[pqrs] is fine; we created the type ef02 partition for it to install itself into.

But running update-grub to generate the new /boot/grub/grub.cfg will fail with an error like this:

/usr/sbin/grub-probe: error: failed to get canonical path of `/dev/ata-Crucial_CT275MX300SSD1_163313AADD8A-part5'.

IMO, that’s a bug in grub-probe – it should look in /dev/disk/by-id/ if it can’t find what it’s looking for in /dev/.

I fixed that problem with this script:

#! /bin/sh

cd /dev
ln -s /dev/disk/by-id/ata-Crucial* .

After that, update-grub works fine.

NOTE: you will have to add udev rules to create these symlinks, or run this script on every boot; otherwise you’ll get that error every time you run update-grub in future.
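One possible way to do the latter automatically is a small systemd oneshot unit; a sketch, assuming the script above is saved as /usr/local/sbin/grub-by-id-links (all names here are hypothetical):

    [Unit]
    Description=Symlink /dev/disk/by-id SSD names into /dev for grub-probe

    [Service]
    Type=oneshot
    ExecStart=/usr/local/sbin/grub-by-id-links

    [Install]
    WantedBy=multi-user.target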

7. Prepare to reboot

Unmount proc, sys, dev/pts, dev, the new raid /boot, and the new zfs filesystems. Set the mount point for the new rootfs to /:

#! /bin/sh

hn="$(hostname -s)"
md="/dev/md0"

for i in dev/pts dev sys proc ; do
    umount "/${hn}/root/$i"
done

umount "$md"

zfs umount "${hn}/root"
zfs umount "${hn}"
zfs set mountpoint=/ "${hn}/root"
zfs set canmount=off "${hn}"

8. Reboot

Remember to configure the BIOS to boot from your new disks.

The system should boot up with the new rootfs, no rescue disk required as in some other guides – the rsync and chroot stuff has already been done.

9. Other notes
  • If you’re adding partition(s) to a zpool for ZIL, remember that ashift is per-vdev, not per-zpool, so specify ashift=12 when adding them. e.g.

    zpool add -o ashift=12 export log \
        mirror ata-Crucial_CT275MX300SSD1_163313AAEE5F-part6 \
        ata-Crucial_CT275MX300SSD1_163313AB002C-part6

    Check that all vdevs in all pools have the correct ashift value with:

    zdb | grep -E 'ashift|vdev|type' | grep -v disk
10. Useful references

Reading these made it much easier to come up with my own method. Highly recommended.

Converting to a ZFS rootfs is a post from: Errata


Nathan Handler: Ohio Linux Fest

Sun, 09/10/2016 - 02:00

This weekend, I traveled to Columbus, Ohio to attend Ohio Linux Fest. I departed San Francisco early on Thursday. It was interesting getting to experience the luxurious side of flying as I enjoyed a mimosa in the American Express Centurion lounge for the first time. I even happened to cross paths with Corey Quinn, who was on his way to DevOpsDays Boise.

While connecting in Houston, I met up with the always awesome José Antonio Rey, who was to be my travel companion for this trip. The long day of travel took its toll on us, so we had a lazy Friday morning before checking in for the conference around lunch time. I was not that interested in the afternoon sessions, so I spent the majority of the first day helping out at the Ubuntu booth and catching up with friends and colleagues. The day ended with a nice Happy Hour sponsored by Oracle.

Saturday was the main day for the conference. Ethan Galstad, Founder and CEO of Nagios, started the day with a keynote about Becoming the Next Tech Entrepreneur. Next up was Elizabeth K. Joseph with A Tour of OpenStack Deployment Scenarios. While I’ve read plenty about OpenStack, I’ve never actually used it before. As a result, this demo and introduction was great to watch. It was entertaining to watch her log in to CirrOS with the default password of cubswin:), as the Chicago Cubs are currently playing the San Francisco Giants in the National League Division Series (and winning). Unfortunately, I was not able to win a copy of her new Common OpenStack Deployments book, but it was great getting to watch her sign copies for other attendees after all of the hard work that went into writing the book.

For lunch, José, Elizabeth, and Svetlana Belkin all gathered together for an informal Ubuntu lunch.

Finally, it was time for me to give my talk. This was the same talk I gave at FOSSCON, but this time, I had a significantly larger audience. Practice definitely makes perfect, as my delivery was a lot better the second time giving this talk. Afterwards, I had a number of people come up to me to let me know that they really enjoyed the presentation. Pro Tip: If you ever attend a talk, the speaker will really appreciate any feedback you send their way. Even if it is a simple, “Thank You”, it really means a lot. One of the people who came up to me after the talk was Unit193. We have known each other through Ubuntu for years, but there has never been an opportunity to meet in person. I am proud to be able to say with 99% confidence that he is not a robot, and is in fact a real person.

Next up was a lesson about the /proc filesystem. While I’ve explored it a bit on my own before, I still learned a few tips and tricks about information that can be gained from the files in this magical directory.

Following this was a talk about Leading When You’re Not the Boss. It was even partially taught by a dummy (the speaker was a ventriloquist). The last regular talk of the day was one of the more interesting ones I attended. It was a talk by Patrick Shuff from Facebook about how they have built a load balancer that can handle a billion users. The slide deck was well-made with very clear diagrams. The speaker was also very knowledgeable and dealt with the plethora of questions he received.

Prior to the closing keynote was a series of lightning talks. These served as a great means to get people laughing after a long day of talks. The closing keynote was given by father and daughter Joe and Lily Born about The Democratization of Invention. Both of them had very interesting stories, and Lily was quite impressive given her age.

We skipped the Nagios After Party in favor of a more casual pizza dinner.

Overall, it was a great conference, and I am very glad to have had the opportunity to attend. A big thanks to Canonical and the Ubuntu Community for funding my travel through the Ubuntu Community Fund and to the Ohio Linux Fest staff for allowing me the opportunity to speak at such a great conference.


Norbert Tretkowski: Gajim plugins packaged for Debian

Sun, 09/10/2016 - 00:00

Wolfgang Borgert started to package some of the available Gajim plugins for Debian. At the time of writing, the OMEMO, HTTP Upload and URL Image Preview plugins are available in testing and unstable.

More plugins will follow.


Joachim Breitner: T430s → T460s

Sat, 08/10/2016 - 23:22

Earlier this week, I finally got my new machine that came with my new position at the University of Pennsylvania: a shiny Thinkpad T460s that now replaces my T430s. (Yes, there is a pattern. It continues with T400 and T41p.) I decided to re-install my Debian system from scratch and copy over only the home directory – a bit of purification does not hurt. This blog post contains some random notes that might be useful to someone, plus some places where I hope someone can tell me how to fix and improve things.


The installation (using debian-installer from a USB drive) went mostly smooth, including LVM on an encrypted partition. Unfortunately, it did not set up grub correctly for the UEFI system to boot, so I had to jump through some hoops (using the grub on the USB drive to manually boot into the installed system, and installing grub-efi from there) until the system actually came up.

High-resolution display

This laptop has a 2560×1440 high resolution display. Modern desktop environments like GNOME supposedly handle that quite nicely, but for reasons explained in an earlier post, I do not use a desktop environment but have a minimalistic setup based on Xmonad. I managed to get a decent setup now, by turning lots of manual knobs:

  • For the linux console, setting

    FONTFACE="Terminus"
    FONTSIZE="12x24"

    in /etc/default/console-setup yielded good results.

  • For the few GTK-2 applications that I am still running, I set

    gtk-font-name="Sans 16"

    in ~/.gtkrc-2.0. Similarly, for GTK-3 I have

    [Settings]
    gtk-font-name = Sans 16

    in ~/.config/gtk-3.0/settings.ini.

  • Programs like gnome-terminal, Evolution and hexchat refer to the “System default document font” and “System default monospace font”. I remember that it was possible to configure these in the GNOME control center, but I could not find any way of configuring these using command line tools, so I resorted to manually setting the font for these. With help from Alexandre Franke I figured out that the magic incantation here is:

    gsettings set org.gnome.desktop.interface monospace-font-name 'Monospace 16'
    gsettings set org.gnome.desktop.interface document-font-name 'Serif 16'
    gsettings set org.gnome.desktop.interface font-name 'Sans 16'
  • Firefox seemed to have picked up these settings for the UI, so that was good. To make web pages readable, I set layout.css.devPixelsPerPx to 1.5 in about:config.

  • GVim has set guifont=Monospace\ 16 in ~/.vimrc. The toolbar is tiny, but I hardly use it anyways.

  • Setting the font of Xmonad prompts requires the syntax

    , font = "xft:Sans:size=16"

    Speaking about Xmonad prompts: Check out the XMonad.Prompt.Unicode module that I have been using for years and recently submitted upstream.

  • I launch Chromium (or rather the desktop applications that I use that happen to be Chrome apps) with the parameter --force-device-scale-factor=1.5.

  • Libreoffice seems to be best configured by running xrandr --dpi 194 beforehand. This seems also to be read by Firefox, doubling the effect of the font size in the gtk settings, which is annoying. Luckily I do not work with Libreoffice often, so for now I’ll just set that manually when needed (see the sketch below).
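
A tiny wrapper script (entirely hypothetical) would do for the manual case:

    #! /bin/sh
    # raise the X server's reported DPI, then start LibreOffice
    xrandr --dpi 194
    exec libreoffice "$@"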

I am not quite satisfied. I have the impression that the 16 point size font, e.g. in Evolution, is not really pretty, so I am happy to take suggestions here.

I found the ArchWiki page on HiDPI very useful here.

Trackpoint and Touchpad

One reason for me to stick with Thinkpads is their trackpoint, which I use exclusively. In previous models, I disabled the touchpad in the BIOS, but this did not seem to have an effect here, so I added the following section to /etc/X11/xorg.conf.d/30-touchpad.conf:

Section "InputClass"
    Identifier "SynPS/2 Synaptics TouchPad"
    MatchProduct "SynPS/2 Synaptics TouchPad"
    Option "ignore" "on"
EndSection

At one point I left out the MatchProduct line, disabling all input in the X server. I had to boot into recovery mode to fix that.

Unfortunately, there is something wrong with the trackpoint and the buttons: when I am moving the trackpoint (and maybe if there is actual load on the machine), mouse button press and release events sometimes get lost. This is quite annoying – I try to open a folder in Evolution and accidentally move it.

I installed the latest Kernel from Debian experimental (4.8.0-rc8), but it did not help.

I filed a bug report against libinput although I am not fully sure that that’s the culprit.

Update: According to Benjamin Tissoires it is a known firmware bug and the appropriate people are working on a work-around. Until then I am advised to keep my palm off the touchpad.

Also, I found the trackpoint too slow. I am not sure if it is simply because of the large resolution of the screen, or because some movement events are also swallowed. For now, I simply changed the speed by writing

SUBSYSTEM=="serio", DRIVERS=="psmouse", ATTRS{speed}="120"

to /etc/udev/rules.d/10-trackpoint.rules.

Brightness control

The system would not automatically react to pressing Fn-F5 and Fn-F6, which are the keys to adjust the brightness. I am unsure about how and by what software component it “should” be handled, but the solution that I found was to set

Section "Device"
    Identifier "card0"
    Driver "intel"
    Option "Backlight" "intel_backlight"
    BusID "PCI:0:2:0"
EndSection

so that the command line tool xbacklight would work, and then use Xmonad keybinds to perform the action, just as I already do for sound control:

, ((0, xF86XK_Sleep), spawn "dbus-send --system --print-reply --dest=org.freedesktop.UPower /org/freedesktop/UPower org.freedesktop.UPower.Suspend")
, ((0, xF86XK_AudioMute), spawn "ponymix toggle")
, ((0, 0x1008ffb2 {- xF86XK_AudioMicMute -}), spawn "ponymix --source toggle")
, ((0, xF86XK_AudioRaiseVolume), spawn "ponymix increase 5")
, ((0, xF86XK_AudioLowerVolume), spawn "ponymix decrease 5")
, ((shiftMask, xF86XK_AudioRaiseVolume), spawn "ponymix increase 5 --max-volume 200")
, ((shiftMask, xF86XK_AudioLowerVolume), spawn "ponymix decrease 5")
, ((0, xF86XK_MonBrightnessUp), spawn "xbacklight +10")
, ((0, xF86XK_MonBrightnessDown), spawn "xbacklight -10")

The T460s does not actually have a sleep button, that line is a reminiscence from my T430s. I suspend the machine by pressing the power button now, thanks to HandlePowerKey=suspend in /etc/systemd/logind.conf.

Profile Weirdness

Something strange happened to my environment variables after the move. It is clearly not hardware related, but I simply cannot explain what has changed: all relevant files in /etc look similar enough.

I use ~/.profile to extend the PATH and set some other variables. Previously, these settings were in effect in my whole X session, which is started by lightdm with auto-login, followed by xmonad-session. I could find no better way to fix that than stating . ~/.profile early in my ~/.xmonad/xmonad-session-rc. Very strange.


Charles Plessy: I just finished reading the Imperial Radch trilogy

Sat, 08/10/2016 - 17:29

I liked it a lot. There are already many comments on the Internet (thanks, Russ, for making me discover these novels), so I will not go into details. And it is hard to summarise without spoiling. In brief:

The first volume, Ancillary Justice, takes us on a visit to various worlds and cultures, and gives us an impression of what it feels like to be a demigod. The main culture does not distinguish between the two sexes, and the grammar of its language does not have genders. This gives an original taste to the story; for instance, when the hero speaks a foreign language, he has difficulty addressing people correctly without risking offending them. Unfortunately the English language itself does not use gender very much, so the literary effect is a bit weakened. Perhaps the French translation (which I have not read) is more interesting in that respect?

The second volume, Ancillary Sword, shows us how one can communicate things in a surveillance society without privacy, through subtle variations in how tea is served. Gallons of tea are drunk in this volume, whose main interest is the relations between the characters and their conversations.

The third volume, Ancillary Mercy, asks the question of what makes us human. Among the most interesting characters there is a kind of synthetic human who acts as ambassador for an alien race. At first, he indeed behaves completely alien, but in the end he is not very different from a newborn who happens by miracle to know how to speak: in the beginning the world makes no sense, but step by step, and by experimenting, he deduces how it works. This is how this character ends up understanding that what is called "war" is a complex phenomenon, one of whose consequences is a shortage of fish sauce.

I was a bit surprised that no book led us to the heart of the Radch empire, but I just saw on Wikipedia that one more novel is in preparation... One can speculate that central Radch resembles a future dystopian West, in which surveillance of everybody is total and constant, but where people think they are happy, and where peace and internal well-being are kept possible thanks to military operations outside, mostly performed by killer robots controlled by artificial intelligences. A not-so-distant future?

It goes without saying that there does not seem to be any Free Software in the Radch empire. That reminds me that I did not contribute much to Debian while I was reading...


Norbert Preining: Debian/TeX update October 2016: all of TeX Live and Biber 2.6

Sat, 08/10/2016 - 06:45

Finally, a new update of many TeX-related packages: all the texlive-* packages, including the binary ones, and biber have been updated to the latest release. This upload was delayed by my travels around the world, as well as by the need to package a new Perl module (libdatetime-calendar-julian-perl) required by the new Biber. Also, my new job leaves me only the weekends for packaging. Anyway, the packages are now uploaded and should appear soon on your friendly local server.

There are several highlights: the binaries have been patched with several upstream fixes (tex4ht and XeTeX compatibility, as well as various Japanese TeX engine fixes), biber and biblatex have been updated, and as usual there are loads of new and updated packages.

Last but not least I want to thank one particular author: his package was removed from TeX Live due to the addition of a rather unusual clause in the license. Instead of simply uploading new packages to Debian with this rather important package removed, I contacted the author and asked for clarification. And to my great pleasure, he immediately answered with an update of the package with a fixed license.

All of us users of these many packages should be grateful to the authors of the packages, who invest loads of their free time into supporting our community. Thanks!

Enough now, here as usual the list of new and updated packages with links to their respective CTAN pages. Enjoy.

New packages

addfont, apalike-german, autoaligne, baekmuk, beamerswitch, beamertheme-cuerna, beuron, biblatex-claves, biolett-bst, cooking-units, cstypo, emf, eulerpx, filecontentsdef, frederika2016, grant, latexgit, listofitems, overlays, phonenumbers, pst-arrow, quicktype, revquantum, richtext, semantic-markup, spalign, texproposal, tikz-page, unfonts-core, unfonts-extra, uspace.

Updated packages

achemso, acmart, acro, adobemapping, alegreya, allrunes, animate, arabluatex, archaeologie, asymptote, attachfile, babel-greek, bangorcsthesis, beebe, biblatex, biblatex-anonymous, biblatex-apa, biblatex-bookinother, biblatex-chem, biblatex-fiwi, biblatex-gost, biblatex-ieee, biblatex-manuscripts-philology, biblatex-morenames, biblatex-nature, biblatex-opcit-booktitle, biblatex-phys, biblatex-realauthor, biblatex-science, biblatex-true-citepages-omit, bibleref, bidi, chemformula, circuitikz, cochineal, colorspace, comment, covington, cquthesis, ctex, drawmatrix, ejpecp, erewhon, etoc, exsheets, fancyhdr, fei, fithesis, footnotehyper, fvextra, geschichtsfrkl, gnuplottex, gost, gregoriotex, hausarbeit-jura, ijsra, ipaex, jfontmaps, jsclasses, jslectureplanner, latexdiff, leadsheets, libertinust1math, luatexja, markdown, mcf2graph, minutes, multirow, mynsfc, nameauth, newpx, newtxsf, notespages, optidef, pas-cours, platex, prftree, pst-bezier, pst-circ, pst-eucl, pst-optic, pstricks, pstricks-add, refenums, reledmac, rsc, shdoc, siunitx, stackengine, tabstackengine, tagpair, tetex, texlive-es, texlive-scripts, ticket, translation-biblatex-de, tudscr, turabian-formatting, updmap-map, uplatex, xebaposter, xecjk, xepersian, xpinyin.



Dirk Eddelbuettel: tint 0.0.2: Tint Is Not Tufte

Fri, 07/10/2016 - 14:38

The tint package is now on CRAN. Its name stands for Tint Is Not Tufte and it offers a fresh take on the excellent Tufte-style html (and now also pdf) presentations.

As a little teaser, here is what the html variant looks like:

and the full underlying document is available too.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Petter Reinholdtsen: Isenkram, Appstream and udev make life as a LEGO builder easier

Fri, 07/10/2016 - 09:50

The Isenkram system provides a practical and easy way to figure out which packages support the hardware in a given machine. The command line tool isenkram-lookup and the tasksel options provide a convenient way to list and install packages relevant for the current hardware during system installation, both user-space packages and firmware packages. The GUI background daemon, on the other hand, provides a pop-up proposing to install packages when a new dongle is inserted while using the computer. For example, if you plug in a smart card reader, the system will ask if you want to install pcscd if that package isn't already installed, and if you plug in a USB video camera the system will ask if you want to install cheese if cheese is currently missing. This already works just fine.
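For instance, running the lookup tool prints one candidate package per line; the output below is purely illustrative and depends on the hardware present:

    $ isenkram-lookup
    cheese
    pcscd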

But Isenkram depends on a database mapping hardware IDs to package names. When I started, no such database existed in Debian, so I made my own data set, included it with the isenkram package, and made isenkram fetch the latest version of this database from git using http. This way the isenkram users would get updated package proposals as soon as I learned more about hardware-related packages.

The hardware is identified using modalias strings. The modalias design comes from the Linux kernel, where most hardware descriptors are made available as strings that can be matched using filename-style globbing. It handles USB, PCI, DMI and a lot of other hardware-related identifiers.
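To illustrate (the exact values are illustrative), a USB device exports a modalias encoding its vendor and product IDs, and a database entry can match it with a glob:

    # modalias as exported by the kernel for one USB device
    usb:v0694p0002d0110dc00dsc00dp00ic08isc06ip50in00
    # glob matching any product from vendor 0694
    usb:v0694p*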

The downside to the Isenkram-specific database is that there is no information about the relevant distribution / Debian version, making isenkram propose obsolete packages too. But along came AppStream, a cross-distribution mechanism to store and collect metadata about software packages. When I heard about the proposal, I contacted the people involved and suggested adding a hardware matching rule using modalias strings to the specification, to be able to use AppStream for mapping hardware to packages. This idea was accepted and AppStream is now a great way for a package to announce the hardware it supports in a distribution-neutral way. I wrote a recipe on how to add such meta-information in a blog post last December. If you have a hardware-related package in Debian, please announce the relevant hardware IDs using AppStream.
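A minimal sketch of such a metainfo entry, with a made-up component ID, name and summary (only the provides/modalias element is the point here):

    <?xml version="1.0" encoding="UTF-8"?>
    <component>
      <id>example.hwtool</id>
      <name>Example hardware tool</name>
      <summary>Talks to an example USB device</summary>
      <provides>
        <!-- propose this package when hardware matching this glob is present -->
        <modalias>usb:v0694p*</modalias>
      </provides>
    </component>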

In Debian, almost all packages that can talk to a LEGO Mindstorms RCX or NXT unit announce this support using AppStream. The effect is that when you insert such a LEGO robot controller into your Debian machine, Isenkram will propose to install the packages needed to get it working. The intention is that this should allow the local user to start programming the robot controller right away without having to guess what packages to use or which permissions to fix.

But when I sat down with my son the other day to program our NXT unit using his Debian Stretch computer, I discovered something annoying. The local console user (i.e. my son) did not get access to the USB device for programming the unit. This used to work, but no longer does in Jessie and Stretch. After some investigation and asking around on #debian-devel, I discovered that this was because udev had changed the mechanism used to grant access to local devices. The ConsoleKit mechanism from /lib/udev/rules.d/70-udev-acl.rules no longer applied, because LDAP users were no longer added to the plugdev group during login. Michael Biebl told me that this method was obsolete and the new method used ACLs instead. This was good news, as the plugdev mechanism is a mess when using a remote user directory like LDAP. Using ACLs makes sure a user loses device access when she logs out, even if the user left behind a background process which would have retained the plugdev membership under the ConsoleKit setup. Armed with this knowledge I moved on to fix the access problem for the LEGO Mindstorms related packages.

The new system uses a udev tag, 'uaccess'. It can either be applied directly for a device, or is applied in /lib/udev/rules.d/70-uaccess.rules for classes of devices. As the LEGO Mindstorms udev rules did not have a class, I decided to add the tag directly in the udev rules files included in the packages. Here is one example. For the nqc C compiler for the RCX, the /lib/udev/rules.d/60-nqc.rules file now look like this:

SUBSYSTEM=="usb", ACTION=="add", ATTR{idVendor}=="0694", ATTR{idProduct}=="0001", \ SYMLINK+="rcx-%k", TAG+="uaccess"

The key part is the 'TAG+="uaccess"' at the end. I suspect all packages using plugdev in their /lib/udev/rules.d/ files should be changed to use this tag (either directly or indirectly via 70-uaccess.rules). Perhaps a lintian check should be created to detect this?

I've been unable to find good documentation on the uaccess feature. It is unclear to me if the uaccess tag is an internal implementation detail, like the udev-acl tag used by /lib/udev/rules.d/70-udev-acl.rules. If it is, I guess the indirect method is the preferred way. Michael asked for more documentation from the systemd project, and I hope it will make this clearer. For now I use the generic classes when they exist and are already handled by 70-uaccess.rules, and add the tag directly if no such class exists.

To learn more about the isenkram system, please check out my blog posts tagged isenkram.

To help out making life for LEGO constructors in Debian easier, please join us on our IRC channel #debian-lego and join the Debian LEGO team in the Alioth project we created yesterday. A mailing list is not yet created, but we are working on it. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.


Joey Hess: keysafe with local shares

Fri, 07/10/2016 - 00:37

If your gpg key is too valuable for you to feel comfortable with backing it up to the cloud using keysafe, here's an alternative that might appeal more.

Keysafe can now back up some shares of the key to local media, and other shares to the cloud. You can arrange things so that the key can't be restored without access to some of the local media and some of the cloud servers, as well as your password.

For example, I have 3 USB sticks, and there are 3 keysafe servers. So let's make 6 shares total of my gpg secret key and require any 4 of them to restore it.

I plug in all 3 USB sticks and look at mount output to get the paths to them. Then I run keysafe to back up the key, spread among all 6 locations.

keysafe --backup --totalshares 6 --neededshares 4 \
    --add-storage-directory /media/sdc1 \
    --add-storage-directory /media/sdd1 \
    --add-storage-directory /media/sde1

Once it's done, I can remove the USB sticks, and distribute them to secure places.

To restore, I need at least one of the USB sticks. (If some of the servers are down, more USB sticks will be needed.) Again I tell keysafe the paths where USB stick(s) are mounted.

keysafe --restore --totalshares 6 --neededshares 4 \
    --add-storage-directory /media/sdb1

Using keysafe this way, physical access to the USB sticks is the first level of defense, and hopefully you'll know if that's breached. The keysafe password is the second level of defense, and cracking that will take a lot of work. Leaving plenty of time to revoke your key, etc, if it comes to that.

I feel this is better than the methods I've been using before to back up my most important gpg keys. With paperkey, physical access to the printout immediately exposes the key. With Shamir Secret Sharing and manual distribution of shares, the only second line of defense is the much easier to crack gpg passphrase. Using OpenPGP smartcards is still a more secure option, but you'd need 3 smartcards to reach the same level of redundancy, and it's easier to get your hands on 3 USB sticks than 3 smartcards.

There's another benefit to using keysafe this way. It means that sometimes, the data stored on the keysafe servers is not sufficient to crack a key. There's no way to tell, so an attacker risks doing a lot of futile work.

If you're not using an OpenPGP smartcard, I encourage you to back up your gpg key with keysafe as described above.

Two of the three necessary keysafe servers are now in operation, and I hope to have a full complement of servers soon.

(This was sponsored by Thomas Hochstein on Patreon.)


Nathan Handler: FOSSCON

Thu, 06/10/2016 - 23:31

This post is long past due, but I figured it is better late than never. At the start of the year, I set a goal to get more involved with attending and speaking at conferences. Through work, I was able to attend the Southern California Linux Expo (SCALE) in Pasadena, CA in January. I also got to give a talk at O'Reilly's Open Source Convention (OSCON) in Austin, TX in May. However, I really wanted to give a talk about my experience contributing in the Ubuntu community.

José Antonio Rey encouraged me to submit the talk to FOSSCON. While I've been aware of FOSSCON for years thanks to my involvement with the freenode IRC network (which has had a reference to FOSSCON in the /motd for years), I had never actually attended it before. I also wasn't quite sure how I would handle traveling from San Francisco, CA to Philadelphia, PA. Regardless, I decided to go ahead and apply.

Fast forward a few weeks, and imagine my surprise when I woke up to an email saying that my talk proposal was accepted. People were actually interested in me and what I had to say. I immediately began researching flights. While they weren't crazy expensive, they were still more money than I was comfortable spending. Luckily, José had a solution to this problem as well; he suggested applying for funding through the Ubuntu Community Donations fund. While I've been an Ubuntu Member for over 8 years, I've never used this resource before. However, I was happy when I received a very quick approval.

The conference itself was smaller than I was expecting. However, it was packed with lots of friendly and familiar faces of people I've interacted with online and in person over the years at various Open Source events.

I started off the day by learning from José how to use Juju to quickly set up applications in the cloud. While Juju has definitely come a long way over the last couple of years, and it appears to be quite easy to learn and use, it still appears to be lacking some of the features needed to take full control over how the underlying applications interact with each other. However, I look forward to continuing to watch it grow and mature.

Next up, we had a lunch break. There was no catered lunch at this conference, so we decided to get some cheesesteaks at Abner's (is any trip to Philadelphia complete without a cheesesteak?).

Following lunch, I took some time to make a few last minute changes to my presentation and rehearse a bit. Finally, it was time. I got up in front of the audience and gave my presentation. Overall, I was quite pleased. It was not perfect, but for the first time giving the talk, I thought it went pretty well. I will work hard to make it even better for next time.

Following my talk was a series of brief lightning talks prior to the closing keynote. Another long time friend of mine, Elizabeth Krumbach Joseph, was giving the keynote about listening to the needs of your global open source community. While I have seen her speak on several other occasions, I really enjoyed this particular talk. It was full of great examples and anecdotes that were easy for the audience to relate to and start applying to their own communities.

After the conference, a few of us went off and played tourist, paying the Liberty Bell a visit before concluding our trip in Philadelphia.

Overall, I had a great time at FOSSCON. It was great being re-united with so many friends. A big thank you to José for his constant support and encouragement, and to Canonical and the Ubuntu Community for helping to make it possible for me to attend this conference. Finally, thanks to the terrific FOSSCON staff for volunteering so much time to put on this great event.


Ben Hutchings: Debian LTS work, September 2016

Thu, 06/10/2016 - 19:39

I was assigned 12.3 hours of work by Freexian's Debian LTS initiative and carried over 1.45 from last month. I was unwell for much of this month and only worked 6 hours on LTS. I returned 7 hours to the pool and carry over 0.75 hours.

I wrote and sent the DLA for linux 3.2.81-2, and I discussed the handling of various issues on the debian-lts mailing list. Most of my time was spent working on the long backlog of security issues in imagemagick. I hope to complete this and upload a fixed version this month.


Arturo Borrero González: About Pacemaker HA stack in Debian Jessie

Thu, 06/10/2016 - 16:30

People keep ignoring the status of the Pacemaker HA stack in Debian Jessie. Most people think that they should stick to Debian Wheezy.

Why does this happen? Perhaps because of little or no publicity about the situation.

For some time now, Debian has contained a Pacemaker stack which is ready to use in both Debian Jessie and Debian Stretch.

Anyway, let’s see what we have so far:

  1. The pacemaker stack was updated in Debian unstable around Feb 2016.
  2. They migrated to Debian testing by that time as well.
  3. Most of the key packages were backported to jessie-backports (if not all).
  4. Therefore, Stretch is ready for the HA stack, and so is Jessie (using backports).

The packages I'm referring to are those which I consider the key components of the stack; at the time of this blogpost, the versions are:

package    jessie-backports  stretch  sid     upstream
corosync   2.3.6             2.3.6    2.3.6   2.4.1
pacemaker  1.1.14            1.1.15   1.1.15  1.1.15
crmsh      2.2.0             2.2.1    2.2.1   2.4.1
libqb      1.0               1.0      1.0     1.0

How can you check this by yourself? Here some pointers:

  • Debian HA packaging team overview: link
  • Package tracker for corosync: link
  • Package tracker for pacemaker: link
  • Package tracker for crmsh: link
  • Package tracker for libqb: link

I'm sure we even have the chance to improve the packages a bit before the release of stretch. Some packages are a bit behind the upstream version.

In any case: Yes! you can move from Debian Wheezy to Debian Jessie!
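
A minimal sketch of the Jessie route, assuming jessie-backports is already enabled in sources.list:

    apt-get update
    apt-get install -t jessie-backports pacemaker corosync crmsh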


Reproducible builds folks: Reproducible Builds: week 75 in Stretch cycle

Thu, 06/10/2016 - 16:24

What happened in the Reproducible Builds effort between Sunday September 25 and Saturday October 1 2016:


For the first time, we reached 91% reproducible packages in our testing framework on testing/amd64 using a deterministic build path. (This is what we recommend to make packages in Stretch reproducible.) For unstable/amd64, where we additionally test for reproducibility across different build paths, we are at almost 76% again.

IRC meetings

We have a poll to set a time for a new regular IRC meeting. If you would like to attend, please input your available times and we will try to accommodate you.

There was a trial IRC meeting on Friday, 2016-09-30, at 18:00 UTC. Unfortunately, we did not activate meetbot. Despite this, participants consider the meeting a success, as several topics were discussed (e.g. changes to IRC notifications of tests.r-b.o) and the meeting stayed within one hour in length.

Upcoming events

Reproduce and Verify Filesystems - Vincent Batts, Red Hat - Berlin (Germany), 5th October, 14:30 - 15:20 @ LinuxCon + ContainerCon Europe 2016.

From Reproducible Debian builds to Reproducible OpenWrt, LEDE & coreboot - Holger "h01ger" Levsen and Alexander "lynxis" Couzens - Berlin (Germany), 13th October, 11:00 - 11:25 @ OpenWrt Summit 2016.

Introduction to Reproducible Builds - Vagrant Cascadian will be presenting at the Conference In Seattle (USA), November 11th-12th, 2016.

Previous events

GHC Determinism - Bartosz Nitka, Facebook - Nara (Japan), 24th September, ICFP 2016.

Toolchain development and fixes

Michael Meskes uploaded bsdmainutils/9.0.11 to unstable with a fix for #830259 based on Reiner Herrmann's patch. This fixed the locale_dependent_symbol_order_by_lorder issue in the affected packages (freebsd-libs, mmh).

devscripts/2.16.8 was uploaded to unstable. It includes a debrepro script by Antonio Terceiro which is similar in purpose to reprotest but more lightweight; specific to Debian packages and without support for virtual servers or configurable variations.

Packages reviewed and fixed, and bugs filed

The following updated packages have become reproducible in our testing framework after being fixed:

The following updated packages appear to be reproducible now for reasons we were not able to figure out. (Relevant changelogs did not mention reproducible builds.)

  • gkrellm/2.3.8-1 by Sandro Tosi
  • glassfish/1:2.1.1-b31g+dfsg1-4 by Emmanuel Bourg

Some uploads have addressed some reproducibility issues, but not all of them:

Patches submitted that have not made their way to the archive yet:

Reviews of unreproducible packages

77 package reviews have been added, 178 have been updated and 80 have been removed this week, adding to our knowledge about identified issues.

6 issue types have been updated:

Weekly QA work

As part of reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (3)
  • Chris Lamb (12)
  • Lucas Nussbaum (3)
  • Sebastian Reichel (1)
diffoscope development

A new version of diffoscope 61 was uploaded to unstable by Chris Lamb. It included contributions from:

  • Ximin Luo:
    • Improve the CLI --help text and add an --output-empty option.
  • Chris Lamb:
    • Add a progress bar and show it if stdout is a TTY. You can read more about it here. It can also be read by higher-level programs via the --status-fd CLI option; a sketch of consuming that option follows this list.
  • Maria Glukhova:
    • Behaviour improvements in the case of OS-level errors.
  • Mattia Rizzolo:
    • Testing and packaging improvements.
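
With --status-fd, a higher-level program can watch diffoscope's progress while it runs. Purely as an illustration (the file names are hypothetical, and the format of the status lines is not documented here, so the sketch just relays them verbatim), a wrapper might look like this in Python:

    import os
    import subprocess

    # Create a pipe; diffoscope writes status lines to the write end.
    read_end, write_end = os.pipe()
    proc = subprocess.Popen(
        ["diffoscope", "--status-fd", str(write_end), "old.deb", "new.deb"],
        pass_fds=(write_end,),  # keep the write end open in the child
    )
    os.close(write_end)  # keep only diffoscope's copy of the write end open

    # Relay whatever diffoscope reports until it closes the descriptor.
    with os.fdopen(read_end) as status:
        for line in status:
            print("progress:", line.strip())

    proc.wait()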

Post-release there were further contributions from:

  • Chris Lamb:
    • Code architecture improvements.
  • Maria Glukhova:
    • Testing improvements.

reprotest development

reprotest 0.3.2 was uploaded to unstable by Ximin Luo. It included contributions from:

  • Ximin Luo:
    • Add a --diffoscope-arg CLI option to pass extra args to diffoscope (example below).
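
For example, one could forward diffoscope's new --output-empty flag through reprotest with something like the following (a hypothetical invocation: the build command and artifact pattern are assumptions, not taken from the reprotest documentation):

    $ reprotest --diffoscope-arg=--output-empty 'debuild -b -us -uc' '../*.deb'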

Post-release there were further contributions from:

  • Chris Lamb:
    • Code quality improvements.

tests.reproducible-builds.org

  • Hans-Christoph Steiner continued work on setting up reproducible tests for F-Droid.
  • Holger cleaned up the script that generates the page showing breakages, so that it now also cleans up some of the breakage it finds.
  • IRC notifications about diffoscope crashes and artifacts available for investigation have been dropped; instead, the breakages page has a permanent pointer. (h01ger)
  • IRC notifications from the automatic package scheduler and about package status changes have been moved -- as a temporary trial -- to the #debian-reproducible-changes channel. (Mattia)

This week's edition was written by Ximin Luo, Holger Levsen & Chris Lamb and reviewed by a bunch of Reproducible Builds folks on IRC.

Categories: Elsewhere

Clint Adams: Drawers

Thu, 06/10/2016 - 07:24

Ria has the sprue. She keeps her cœliac disease a secret, though, because she works in food service, and customers knowing about her little gluten-sensitive enterology problem would, she feels, damage her credibility.

“The fried chicken is delicious,” she coos. There is nothing gluten-free on the menu, so she does not have first-hand knowledge of this. Instead she is proxying the amalgamated judgments of others.

Categories: Elsewhere

Joey Hess: battery bank refresh

Thu, 06/10/2016 - 00:12

My house entered full power saving mode with fall. Lantern light, and all devices shut down at bedtime.

But it felt early to need to do this. Comparing with my logbook from last year, the batteries were indeed doing much worse.

I had added a couple of new batteries to the bank last winter, and they seemed to have helped at the time, although it's difficult to tell when you have a couple of good batteries among a dozen failing ones.

The bank was set up like this:

    +---- house ----
    |              |
    +( 6v )-+( 6v )-
    |              |
    +( 6v )-+( 6v )-
    |              |
    +( 6v )-+( 6v )-
    |              |
    +( 6v )-+( 6v )-
    |              |
    +( 6v )-+( 6v )-
    |              |
    +( new 12v )-
    |              |
    +( new 12v )-

As an experiment, I tried disconnecting all the bridges between the old 6v battery pairs. I expected this would mean only the new 12v ones would be in the circuit, so I could see how well they powered the house. Instead, making this change left the house without any power at all!

On a hunch, I then reconnected one bridge, like this -- and power was restored.

    +---- house ----
    |              |
    +( 6v )-+( 6v )-
    |              |
    +( 6v ) ( 6v )-
    |              |
    +( 6v ) ( 6v )-
    |              |
    +( 6v ) ( 6v )-
    |              |
    +( 6v ) ( 6v )-
    |              |
    +( new 12v )-
    |              |
    +( new 12v )-

My best guess of what's going on is that the wires forming the positive and negative rails are not making good connections (due to corrosion, rust, broken wires, etc.), and so batteries further down the chain provide less and less power. The new 12v ones may not have been able to push power up to the house at all.

(Or, perhaps having partially dead batteries hanging half-connected off the circuit has some effect that my meager electronics knowledge can't account for.)

So I got longer cables to connect the new batteries directly to the house, bypassing all the old wiring. That's working great -- house power never dropped below 11.9v last night, vs 11.1v the night before.

The old battery bank might still be able to provide another day or so of power in a pinch, so I am going to keep them in there for now, but if I don't use them at all this winter I'll be recycling them. Astounding that those batteries were in use for 20 years.

Categories: Elsewhere

Gustavo Noronha Silva: Web Engines Hackfest 2016!

Wed, 05/10/2016 - 14:23

I had a great time last week at the web engines hackfest! It was the 7th web hackfest hosted by Igalia and the 7th hackfest I attended. I’m almost a local Galician already. Brazilian Portuguese being so close to Galician certainly helps! Collabora co-sponsored the event and it was great that two colleagues of mine managed to join me in attendance.

It had great talks that will eventually end up as videos uploaded to the web site. We were amazed at the progress being made on Servo, including some performance results that blew our minds. We also discussed the next steps for WebKitGTK+, WebKit for Wayland (or WPE), our own Clutter wrapper for WebKitGTK+ which is used for the Apertis project, and much more.

Zan giving his talk on WPE (former WebKitForWayland)

One thing that drew my attention was how many Dell laptops there were. Many collaborans (myself included) and igalians are now using Dells, it seems. Sure, there were thinkpads and macbooks, but there were plenty of inspirons and xpses as well. It’s interesting how the brand make-up has shifted over the years since 2009, when the hackfest could easily have been mistaken for a thinkpad shop.

Back to the actual hackfest: with the recent release of Gnome 3.22 (and Fedora 25 nearing release), my main focus was on dealing with regressions users experienced after a change in how the final rendering, composited by the nested Wayland compositor we have inside WebKitGTK+, is put onto the GTK+ widget so it is shown on the screen.

One of the main problems people reported was applications that use WebKitGTK+ not showing anything where the content was supposed to appear. It turned out the problem was caused by GTK+ not being able to create a GL context. If the system were simply unable to use GL at all, there would be no problem: WebKit would then just disable accelerated compositing and things would work, albeit more slowly.

The problem was WebKit being able to use an older GL version than the minimum required by GTK+. We fixed it by testing whether GTK+ is able to create GL contexts before using the fast path, falling back to the slow glReadPixels code path if it is not. This way we keep accelerated compositing working inside WebKit, which gives us nice 3D transforms and less repainting, but take the performance hit in the final “blit”.
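
WebKit's actual fix is C code inside WebKitGTK+; purely to illustrate the probing idea, here is a rough PyGObject sketch (the function name and the fallback wiring are my own assumptions, not WebKit's code):

    import gi
    gi.require_version("Gdk", "3.0")
    from gi.repository import Gdk, GLib

    def can_use_fast_gl_path(gdk_window):
        # Try to create and realize a GL context the way GTK+ would.
        # If either step fails, callers should fall back to the slow
        # glReadPixels path instead of the fast GL path.
        try:
            context = gdk_window.create_gl_context()
            context.realize()
        except GLib.Error:
            return False
        return True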

Introducing “WebKitClutterGTK+”

Another issue we hit was GTK+ not properly updating its knowledge of the window’s opaque region when painting a frame with GL, which led to some really interesting issues, like a shadow appearing when you tried to shrink the window. There was also a likely related issue where the window would not use all of the screen when fullscreened. Both were fixed.

André Magalhães also worked on a couple of patches we wrote for customer projects and are now pushing upstream. One enables the use of more than one frontend to connect to a remote web inspector server at once. This can be used to, for instance, show the regular web inspector on a browser window and also use IDE integration for setting breakpoints and so on.

The other patch was cooked by Philip Withnall and helped us deal with some performance bottlenecks we were hitting. It improves the performance of painting scrollbars. WebKitGTK+ does its own painting of scrollbars (we do not use the GTK+ widgets, for various reasons), and it turns out painting them can be quite a hit when the page is being scrolled fast, if it is not done efficiently.

Emanuele Aina had a great time learning more about meson to figure out a build issue we had when a more recent GStreamer was added to our jhbuild environment. He came out of the experience rather sane, which makes me think meson might indeed be much better than autotools.

Igalia 15 years cake

It was a great hackfest, great seeing everyone face to face. We were happy to celebrate Igalia’s 15 years with them. Hope to see everyone again next year =)

Categories: Elsewhere