Planet Debian

Planet Debian - http://planet.debian.org/

Enrico Zini: Monthly link collections with staticsite

Sat, 09/07/2016 - 19:23

A year ago, I wrote:

Instead of keeping substantial tabs open until I have read all of them, or losing them in the jungle of browser bookmarks, I have written a script that collects them into a file per month, and turns them into markdown files for my blog.

That script turned out to be quirky and overengineered, so much so that I stopped using it myself.

I've now rethought my approach, and downscaled it: instead of saving a copy of each page locally, I can blog a reference to https://archive.org or https://archive.is. I do not need to autogenerate a description from the site itself.

The result has been a nicely minimal set of changes to staticsite that resulted in a new version where adding a link to a monthly collection is as easy as typing ssite new -a links.

As long as I'll remember to rebuild the site 3 weeks from now, a new post should automagically appear in my blog.

Categories: Elsewhere

Charles Plessy: Congratulations, Marga!

Sat, 09/07/2016 - 14:43

For the first time in our history, a woman joins the Technical Committee. Congratulations, Marga, and thanks for volunteering.

Categories: Elsewhere

Joey Hess: twenty years of free software -- part 11 concurrent-output

Fri, 08/07/2016 - 19:06

concurrent-output is a more meaty Haskell library than the ones I've covered so far. Its interface is simple, but there's a lot of complexity under the hood: optimised console updates, ANSI escape sequence parsing, and transparent paging of buffers to disk.

It developed out of needing to display multiple progress bars on the console in git-annex, and it also turned out to be useful in propellor. And since it solves a general problem, other Haskell programs are moving toward using it, like shake and stack.

Next: twenty years of free software -- part 12 propellor

Categories: Elsewhere

Reproducible builds folks: Managing container and environment state

Fri, 08/07/2016 - 15:58

Author: ceridwen

With some more help from Martin Pitt, it became clear to me that my previous mental model of how autopkgtest worked was very different from how it actually works. I'll illustrate by borrowing my previous example. I know that schroot has the following behavior:

The default behaviour is as follows (all directory paths are inside the chroot). A login shell is run in the current working directory. If this is not available, it will try $HOME (when --preserve-environment is used), then the user's home directory, and / inside the chroot in turn. A command is always run in the current working directory inside the chroot. If none of the directories are available, schroot will exit with an error status.

I was naively thinking that autopkgtest would set the current working directory of the schroot call, and that the ensuing subprocess call would thus take place in that directory inside the schroot. That is not how it works. If you want to change directories inside the virtual server, you have to use cd. The same is true of, at least, environment variables, which have their own specific handling in the adt_testbed.Testbed methods but have to be passed as strings, and umask. I'm assuming this is because the direct methods don't work with qemu images or LXC containers.
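
To make that concrete, here is a minimal sketch, in Python, of what "passing state as strings" ends up looking like. The helper name and interface here are mine for illustration, not adt_testbed's actual API:

import shlex

def wrap_command(argv, workdir=None, env=None):
    # Render a command as a single shell string that carries its own
    # working directory and environment, since neither persists
    # between calls into the testbed.
    parts = []
    for key, value in (env or {}).items():
        parts.append("export %s=%s" % (key, shlex.quote(value)))
    if workdir:
        parts.append("cd %s" % shlex.quote(workdir))
    parts.append(" ".join(shlex.quote(a) for a in argv))
    return " && ".join(parts)

# wrap_command(["dpkg-buildpackage", "-b"], workdir="/build", env={"LC_ALL": "C"})
# -> 'export LC_ALL=C && cd /build && dpkg-buildpackage -b'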

What this means is that I was thinking about the problem the wrong way: what reprotest needs to do is generate shell scripts, because that is how autopkgtest works. If this goes beyond laying out commands linearly one after another, for instance if it demands conditionals or other nested constructs, the right way to do it is to build an abstract syntax tree representation of the shell script and then convert that to code.
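
As a rough illustration of the AST idea, something as small as this (all names here are made up for the example) would cover linear commands plus the conditional cleanup discussed below:

import shlex

class Cmd:
    # A single command; rendering quotes every argument.
    def __init__(self, *argv):
        self.argv = argv
    def render(self):
        return " ".join(shlex.quote(a) for a in self.argv)

class If:
    # Run `then` only if `test` exits successfully.
    def __init__(self, test, then):
        self.test = test
        self.then = then
    def render(self):
        return "if %s; then %s; fi" % (self.test.render(), self.then.render())

def render_script(nodes):
    return "\n".join(node.render() for node in nodes)

# Only unmount disorderfs if something is actually mounted there:
print(render_script([
    If(Cmd("mountpoint", "-q", "build/"),
       Cmd("fusermount", "-u", "build/")),
]))
# -> if mountpoint -q build/; then fusermount -u build/; fi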

Whether I need more complicated shell scripts depends on my approach to handling state in the containers. I need to know what state persists across separate command executions: if I call adt_testbed.Testbed.execute() twice, what changes, if any, that I make to the container will carry forward from the first call to the second? There are three categories here. First, some properties of a system aren't preserved even from one command execution to the next, like working directory and environment variables. (I thought the working directory would be preserved, but it's not.) The second is state that persists while the testbed is open and is then automatically reverted when it's closed, like files copied into temporary directories on the testbed. The third is state that persists across different sessions on the same container and must be cleaned up by reprotest. It's worth noting that which state falls into which category may vary by container, though for the most part I can either do unnecessary cleanup or issue unnecessary commands to handle the differences. autopkgtest itself has a very different approach to cleanup, as it relies almost entirely on the built-in reversion capabilities of some of its containers. I would prefer to avoid doing the same, partly because I know that some of the modifications I need to make, for instance creating new users or mounting disorderfs, can't be reverted by the faster, simpler containers like schroot.

From discussions with Lunar, I think that the variations that correspond to environment variables (captures_environment, home, locales, path, and timezone) fall into the first category, but because of the special handling for them they don't require sending a separate command. The shell (bash foo) accepts a script or command as an argument, so it also doesn't need a separate command. Setting the working directory and umask require separate commands that have to be joined. On Linux, setarch also accepts a command/script as an argument, so it can be handled like the shell, but there's no unified POSIX protocol for mocking uname, so other OSes will require different approaches. Users, groups, file ordering, host, and domain will require cleanup in all containers except (maybe) qemu. If I want to handle the cleanup in the shell scripts themselves, I need conditionals so that, for instance, the shell script only tries to unmount disorderfs if disorderfs was successfully mounted. This approach would simplify the error-handling problems I've had before, where, when a build crashes, cleanup code run from Python doesn't get run until after the testbed stops accepting commands.

Lunar suggested the Plumbum library to me, and I think I can use it to avoid writing my own shell AST library. It has a method that converts the Python representation of a shell script into a string that can be passed to Testbed.command(). Integrating Plumbum to generate the necessary scripts is where I'm going in the next week.
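
As a guess at how that might look (the exact Plumbum calls reprotest ends up using may differ), composing commands and rendering them to a string goes something like:

from plumbum import local

# Commands are Python objects; binding arguments and piping compose
# larger expressions without any manual quoting.
tar = local["tar"]
sha256sum = local["sha256sum"]
pipeline = tar["-cf", "-", "build/"] | sha256sum

# Rendering the expression as a string yields something that can be
# handed to Testbed.command(), roughly:
#   /bin/tar -cf - build/ | /usr/bin/sha256sum
print(str(pipeline))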

Any feedback on any of this is welcome. I'm also curious what other projects are using autopkgtest code. Holger brought piuparts to my attention, which brings the list up to four that I'm aware of: autopkgtest itself, sbuild, piuparts, and now reprotest.

Categories: Elsewhere

Mike Hommey: Are all integer overflows equal?

Fri, 08/07/2016 - 13:15

Background: I’ve been relearning Rust (more about that in a separate post, some time later), and in doing so, I chose to implement the low-level parts of git (I’ll touch on the why in that separate post I just promised).

Disclaimer: It’s Friday. This is not entirely(?) a serious post.

So, I was looking at Documentation/technical/index-format.txt, and saw:

32-bit number of index entries.

What? The index/staging area can’t handle more than ~4.3 billion files?

There I was, writing Rust code to write out the index.

try!(out.write_u32::<NetworkOrder>(self.entries.len()));

(For people familiar with the byteorder crate and wondering what NetworkOrder is, I have a use byteorder::BigEndian as NetworkOrder)

And the Rust compiler rightfully barfed:

error: mismatched types: expected `u32`, found `usize` [E0308]

And there I was, wondering: “mmmm should I just add as u32 and silently truncate or … hey what does git do?”

And it turns out, git uses an unsigned int to track the number of entries in the first place, so there is no truncation happening.

Then I thought “but what happens when cache_nr reaches the max?”

Well, it turns out there’s only one obvious place where the field is incremented.

What? Holy coffin nails, Batman! No overflow check?

Wait a second, look 3 lines above that:

ALLOC_GROW(istate->cache, istate->cache_nr + 1, istate->cache_alloc);

Yeah, obviously, if you’re incrementing cache_nr, you already have that many entries in memory. So, how big would that array be?

struct cache_entry **cache;

So it’s an array of pointers, assuming 64-bits pointers, that’s … ~34.3 GB. But, all those cache_nr entries are in memory too. How big is a cache entry?

struct cache_entry {
    struct hashmap_entry ent;
    struct stat_data ce_stat_data;
    unsigned int ce_mode;
    unsigned int ce_flags;
    unsigned int ce_namelen;
    unsigned int index; /* for link extension */
    unsigned char sha1[20];
    char name[FLEX_ARRAY]; /* more */
};

So, 4 ints, 20 bytes, and as many bytes as necessary to hold a path. And two inline structs. How big are they?

struct hashmap_entry {
    struct hashmap_entry *next;
    unsigned int hash;
};

struct stat_data {
    struct cache_time sd_ctime;
    struct cache_time sd_mtime;
    unsigned int sd_dev;
    unsigned int sd_ino;
    unsigned int sd_uid;
    unsigned int sd_gid;
    unsigned int sd_size;
};

Woohoo, nested structs.

struct cache_time {
    uint32_t sec;
    uint32_t nsec;
};

So, all in all, we’re looking at 1 + 2 + 2 + 5 + 4 32-bit integers, one 64-bit pointer, 2 × 32 bits of padding, and 20 bytes of sha1, for a total of 92 bytes, not counting the variable size for file paths.

The average path length in mozilla-central, which only has slightly over 140 thousand of them, is 59 characters (including the terminal NUL character).

Let’s conservatively assume our crazy repository would have the same average, making the average cache entry 151 bytes.

But memory allocators usually allocate more than requested. In this particular case, with the default allocator on GNU/Linux, it’s 156 (weirdly enough, it’s 152 on my machine).

156 times 4.3 billion… 670 GB. Plus the 34.3 from the array of pointers: 704.3 GB. Of RAM. Not counting the memory allocator overhead of handling that. Or all the other things git might have in memory as well (which apparently involves a hashmap, too, but I won’t look at that, I promise).
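
For anyone who wants to check the arithmetic, here it is as a few lines of Python:

entries = 2 ** 32        # cache_nr at the point of overflow
allocated = 156          # bytes malloc hands out per 151-byte cache entry

pointer_array = entries * 8           # the array of 64-bit pointers
cache_entries = entries * allocated   # the cache_entry allocations

print(pointer_array / 1e9)                    # ~34.4 GB
print(cache_entries / 1e9)                    # ~670 GB
print((pointer_array + cache_entries) / 1e9)  # ~704.3 GB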

I think one would have run out of memory before hitting that integer overflow.

Interestingly, looking at Documentation/technical/index-format.txt again, the on-disk format appears smaller, with 62 bytes per file instead of 92, so the corresponding index file would be smaller. (And in version 4, paths are prefix-compressed, so paths would be smaller too).

But having an index that large supposes those files are checked out. So let’s say I have an empty ext4 file system as large as possible (which I’m told is 2^60 bytes, or 1.15 billion gigabytes). Creating a small empty ext4 tells me at least 10 inodes are allocated by default. I seem to remember there’s at least one reserved for the journal, and there’s lost+found; there apparently are more. Obviously, on that very large file system, we’d have a git repository. git init with an empty template creates 9 files and directories, so that’s 19 more inodes taken. But git init doesn’t create an index, and doesn’t have any objects. We’d thus have at least one file for our hundreds-of-gigabytes index, and at least 2 who-knows-how-big files for the objects (a pack and its index). How many inodes does that leave us with?

The Linux kernel source tells us the number of inodes in an ext4 file system is stored in a 32-bit integer.

So all in all, if we had an empty very large file system, we’d only be able to store, at best, 2^32 – 22 files… And we wouldn’t even be able to get cache_nr to overflow.

… while following the rules. Because the index can keep files that have been removed, it is actually possible to fill the index without filling the file system. After hours (days? months? years? decades?*) of running

seq 0 4294967296 | while read i; do touch $i; git update-index --add $i; rm $i; done

One should be able to reach the integer overflow. But that’d still require hundreds of gigabytes of disk space and even more RAM.

* At the rate it was possible to add files to the index when I tried (yeah, I tried), for a few minutes, and assuming a constant rate, the estimate is close to 2 years. But the time spent reading and writing the index increases linearly with its size, so the longer it’d run, the longer it’d take.

Ok, it’s actually much faster to do it hundreds of thousands of files at a time, with something like:

seq 0 100000 4294967296 | while read i; do j=$(seq $i $(($i + 99999))); touch $j; git update-index --add $j; rm $j; done

At the rate the first million files were added, still assuming a constant rate, it would take about a month on my machine. Considering reading/writing a list of a million files is a thousand times faster than reading a list of a billion files, assuming linear increase, we’re still talking about decades, and plentiful RAM. Fun fact: after leaving it run for 5 times as much as it had run for the first million files, it hasn’t even done half more…

One could generate the necessary hundreds-of-gigabytes index manually, that wouldn’t be too hard, and assuming it could be done at about 1 GB/s on a good machine with a good SSD, we’d be able to craft a close-to-explosion index within a few minutes. But we’d still lack the RAM to load it.

So, here is the open question: should I report that integer overflow?

Wow, that was some serious procrastination.

Categories: Elsewhere

Michal Čihař: wlc 0.4

Fri, 08/07/2016 - 12:00

wlc 0.4, a command line utility for Weblate, has just been released. This release doesn't bring many changes, but it's still worth announcing.

The most important change is that the development repository has been moved under the WeblateOrg organization on GitHub; you can now find it at https://github.com/WeblateOrg/wlc. Another important piece of news is that the Debian package is currently waiting in the NEW queue and will hopefully soon hit unstable.

wlc is built on the API introduced in Weblate 2.6, which is still in development. Several commands in wlc will not work properly if executed against Weblate 2.6; the first fully supported version will be 2.7 (current git is okay as well, and is now running on both the demo and hosting servers). You can find usage examples in the wlc documentation.

Filed under: Debian English SUSE Weblate | 0 comments

Categories: Elsewhere

Russell Coker: Nexus 6P and Galaxy S5 Mini

Fri, 08/07/2016 - 07:47

Just over a month ago I ordered a new Nexus 6P [1]. I’ve had it for over a month now and it’s time to review it and the Samsung Galaxy S5 Mini I also bought.

Security

The first noteworthy thing about this phone is the fingerprint scanner on the back. The recommended configuration is to use your fingerprint for unlocking the phone which allows a single touch on the scanner to unlock the screen without the need to press any other buttons. To unlock with a pattern or password you need to first press the “power” button to get the phone’s attention.

I have been considering registering a fingerprint from my non-dominant hand to reduce the incidence of accidentally unlocking it when carrying it or fiddling with it.

The phone won’t complete the boot process before being unlocked. This is a good security feature.

Android version 6 doesn’t assign permissions to apps at install time, they have to be enabled at run time (at least for apps that support Android 6). So you get lots of questions while running apps about what they are permitted to do. Unfortunately there’s no “allow for the duration of this session” option.

A new Android feature prevents changing security settings when there is an “overlay running”. The phone instructs you to disable overlay access for the app in question, but that’s not necessary. All that is necessary is for the app to stop using the overlay feature. I use the Twilight app [2] to dim the screen and use redder colors at night. When I want to change settings at night I just have to pause that app and there’s no need to remove the access from it – note that all the web pages and online documentation saying otherwise are wrong.

Another new feature is to not require unlocking while at home. This can be a convenience feature but fingerprint unlocking is so easy that it doesn’t provide much benefit. The downside of enabling this is that if someone stole your phone they could visit your home to get it unlocked. Also police who didn’t have a warrant permitting search of a phone could do so anyway without needing to compel the owner to give up the password.

Design

This is one of the 2 most attractive phones I’ve owned (the other being the sparkly Nexus 4). I think that the general impression of the appearance is positive as there are transparent cases on sale. My phone is white and reminds me of EVE from the movie Wall-E.

Cables

This phone uses the USB Type-C connector, which isn’t news to anyone. What I didn’t realise is that full USB-C requires that connector at both ends, as it’s not permitted to have a data cable with USB-C at the device end and USB-A at the host end. The Nexus 6P ships with a 1M long charging cable that has USB-C at both ends and a ~10cm charging cable with USB-C at one end and type A at the other (for the old batteries and the PCs that don’t have USB-C). I bought some 2M long USB-C to USB-A cables for charging my new phone with my old chargers, but I haven’t yet got a 1M long cable. Sometimes I need a cable that’s longer than 10cm but shorter than 2M.

The USB-C cables are all significantly thicker than older USB cables. Part of that would be due to having many more wires but presumably part of it would be due to having thicker power wires for delivering 3A. I haven’t measured power draw but it does seem to charge faster than older phones.

Overall the process of converting to USB-C is going to be a lot more inconvenient than USB SuperSpeed (which I could basically ignore as non-SuperSpeed connectors worked).

It will be good when laptops with USB-C support become common; it should allow thinner laptops with more ports.

One problem I initially had with my Samsung Galaxy Note 3 was that the Micro-USB SuperSpeed socket on the phone was more fiddly with the Micro-USB charging plug I used. After a while I got used to that but it was still an annoyance. Having a symmetrical plug that can go into the phone either way is a significant convenience.

Calendars and Contacts

I share most phone contacts with my wife and also have another list that is separate. In the past I had used the Samsung contacts system for the contacts that were specific to my phone and a Google account for contacts that are shared between our phones. Now that I’m using a non-Samsung phone I got another Gmail account for the purpose of storing contacts. Fortunately you can get as many Gmail accounts as you want. But it would be nice if Google supported multiple contact lists and multiple calendars on a single account.

Samsung Galaxy S5 Mini

Shortly after buying the Nexus 6P I decided that I spend enough time in pools and hot tubs that having a waterproof phone would be a good idea. Probably most people wouldn’t consider reading email in a hot tub on a cruise ship to be an ideal holiday, but it works for me. The Galaxy S5 Mini seems to be the cheapest new phone that’s waterproof. It is small and has a relatively low resolution screen, but it’s more than adequate for a device that I’ll use for an average of a few hours a week. I don’t plan to get a SIM for it, I’ll just use Wifi from my main phone.

One noteworthy thing is the amount of bloatware on the Samsung. Usually when configuring a new phone I’m so excited about fancy new hardware that I don’t notice it much. But this time buying the new phone wasn’t particularly exciting as I had just bought a phone that’s much better. So I had more time to notice all the annoyances of having to download updates to Samsung apps that I’ll never use. The Samsung device manager facility has been useful for me in the past and the Samsung contact list was useful for keeping a second address book until I got a Nexus phone. But most of the Samsung apps and third-party apps aren’t useful at all.

It’s bad enough having to install all the Google core apps. I’ve never read mail from my Gmail account on my phone. I use Fetchmail to transfer it to an IMAP folder on my personal mail server and I’d rather not have the Gmail app on my Android devices. Having any apps other than the bare minimum seems like a bad idea, more apps in the Android image means larger downloads for an over-the-air update and also more space used in the main partition for updates to apps that you don’t use.

Not So Exciting

In recent times there hasn’t been much potential for new features in phones. All phones have enough RAM and screen space for all common apps. While the S5 Mini has a small screen it’s not that small, I spent many years with desktop PCs that had a similar resolution. So while the S5 Mini was released a couple of years ago that doesn’t matter much for most common use. I wouldn’t want it for my main phone but for a secondary phone it’s quite good.

The Nexus 6P is a very nice phone, but apart from USB-C, the fingerprint reader, and the lack of a stylus there’s not much noticeable difference between that and the Samsung Galaxy Note 3 I was using before.

I’m generally happy with my Nexus 6P, but I think that anyone who chooses to buy a cheaper phone probably isn’t going to be missing a lot.

Related posts:

  1. Samsung Galaxy Note 3 In June last year I bought a Samsung Galaxy Note...
  2. Nexus 4 My wife has had a LG Nexus 4 for about...
  3. CyanogenMod and the Galaxy S Thanks to some advice from Philipp Kern I have now...
Categories: Elsewhere

Steve Kemp: I've been moving and updating websites.

Fri, 08/07/2016 - 05:30

I've spent the past days updating several of my websites to be "responsive". Mostly that means I open the site in Firefox, then press Ctrl-Alt-M to switch to mobile-view. Once I have the mobile-view I then fix the site to look good in the smaller space.

Because my general design skills are poor I've been fixing most sites by moving to bootstrap, and ensuring that I don't use headers/footers that are fixed-position.

Beyond the fixes to appearances I've also started rationalizing the domains, migrating content across to new homes. I've got a provisional theme setup at steve.fi, and I've moved my blog over there too.

The plan for blog-migration went well:

  • Set up a redirect from https://blog.steve.org.uk to https://blog.steve.fi/
  • Replaced the old feed with a CGI script which outputs one post a day, telling visitors to update their feed.
    • This just generates one post, but the UUID of the post has the current date in it. That means it will always be fresh, and always be visible.
  • Updated the template/layout on the new site to use bootstrap.

The plan was originally to set up an HTTP redirect, but I realized that this would mean I'd need to keep the redirect in place forever, as visitors would have no incentive to fix their links, or update their feeds.

By adding the fake RSS feed, pointing to the new location, I am able to assume that eventually people will update, and I can drop the DNS record for blog.steve.org.uk entirely - Google already seems to have updated its spidering, and searching shows the new domain.
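
A minimal sketch of such a feed, as a Python CGI script, could look like the following. This is illustrative rather than the exact script in production; the trick is just that the GUID embeds today's date:

#!/usr/bin/env python3
# Serve a one-item RSS feed whose GUID changes daily, so feed readers
# keep treating the "please update your feed" post as new.
import datetime

today = datetime.date.today().isoformat()

print("Content-Type: application/rss+xml")
print()
print("""<?xml version="1.0"?>
<rss version="2.0">
 <channel>
  <title>This feed has moved</title>
  <link>https://blog.steve.fi/</link>
  <description>Please update your feed reader.</description>
  <item>
   <title>This feed has moved to https://blog.steve.fi/</title>
   <link>https://blog.steve.fi/</link>
   <guid isPermaLink="false">feed-moved-%s</guid>
   <description>Please update your feed to point at the new domain.</description>
  </item>
 </channel>
</rss>""" % today)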

Categories: Elsewhere

Lior Kaplan: First uses of the PHP 5.4 security backports

Fri, 08/07/2016 - 02:07

I recently checked the Debian PHP 5.4 changelog and found out this message (5.4.45-0+deb7u3 and 5.4.45-0+deb7u4):

* most patches taken from https://github.com/kaplanlior/php-src
Thanks a lot to Lior Kaplan for providing them.

I was very pleased to see my work being used, and I hope this will save others time while providing PHP long term support.

Also, while others do similar work (e.g. Remi from RedHat), it seems I’m the only one doing this over Git and with full references (e.g. commit info, CVE info and bug numbers).

Comments and suggestions are always welcome… either mail or even better – a pull request.


Filed under: Debian GNU/Linux, PHP
Categories: Elsewhere

Matthew Garrett: Bluetooth LED bulbs

Fri, 08/07/2016 - 00:38
The best known smart bulb setups (such as the Philips Hue and the Belkin Wemo) are based on Zigbee, a low-energy, low-bandwidth protocol that operates on various unlicensed radio bands. The problem with Zigbee is that basically no home routers or mobile devices have a Zigbee radio, so to communicate with them you need an additional device (usually called a hub or bridge) that can speak Zigbee and also hook up to your existing home network. Requests are sent to the hub (either directly if you're on the same network, or via some external control server if you're on a different network) and it sends appropriate Zigbee commands to the bulbs.

But requiring an additional device adds some expense. People have attempted to solve this in a couple of ways. The first is building direct network connectivity into the bulbs, in the form of adding an 802.11 controller. Go through some sort of setup process[1], the bulb joins your network and you can communicate with it happily. Unfortunately adding wifi costs more than adding Zigbee, both in terms of money and power - wifi bulbs consume noticeably more power when "off" than Zigbee ones.

There's a middle ground. There's a large number of bulbs available from Amazon advertising themselves as Bluetooth, which is true but slightly misleading. They're actually implementing Bluetooth Low Energy, which is part of the Bluetooth 4.0 spec. Implementing this requires both OS and hardware support, so older systems are unable to communicate. Android 4.3 devices tend to have all the necessary features, and modern desktop Linux is also fine as long as you have a Bluetooth 4.0 controller.

Bluetooth is intended as a low power communications protocol. Bluetooth Low Energy (or BLE) is even lower than that, running in a similar power range to Zigbee. Most semi-modern phones can speak it, so it seems like a pretty good choice. Obviously you lose the ability to access the device remotely, but given the track record on this sort of thing that's arguably a benefit. There's a couple of other downsides - the range is worse than Zigbee (but probably still acceptable for any reasonably sized house or apartment), and only one device can be connected to a given BLE server at any one time. That means that if you have the control app open while you're near a bulb, nobody else can control that bulb until you disconnect.

The quality of the bulbs varies a great deal. Some of them are pure RGB bulbs and incapable of producing a convincing white at a reasonable intensity[2]. Some have additional white LEDs but don't support running them at the same time as the colour LEDs, so you have the choice between colour or a fixed (and usually more intense) white. Some allow running the white LEDs at the same time as the RGB ones, which means you can vary the colour temperature of the "white" output.

But while the quality of the bulbs varies, the quality of the apps doesn't really. They're typically all dreadful, competing on features like changing bulb colour in time to music rather than on providing a pleasant user experience. And the whole "Only one person can control the lights at a time" thing doesn't really work so well if you actually live with anyone else. I was dissatisfied.

I'd met Mike Ryan at Kiwicon a couple of years back after watching him demonstrate hacking a BLE skateboard. He offered a couple of good hints for reverse engineering these devices, the first being that Android already does almost everything you need. Hidden in the developer settings is an option marked "Enable Bluetooth HCI snoop log". Turn that on and all Bluetooth traffic (including BLE) is dumped into /sdcard/btsnoop_hci.log. Start the app, make some changes, retrieve the file and check it out using Wireshark. Easy.

Conveniently, BLE is very straightforward when it comes to network protocol. The only thing you have is GATT, the Generic Attribute Profile. Using this you can read and write multiple characteristics. Each packet is limited to a maximum of 20 bytes. Most implementations use a single characteristic for light control, so it's then just a matter of staring at the dumped packets until something jumps out at you. A pretty typical implementation is something like:

0x56,r,g,b,0x00,0xf0,0x00,0xaa

where r, g and b are each just a single byte representing the corresponding red, green or blue intensity. 0x56 presumably indicates a "Set the light to these values" command, 0xaa indicates end of command and 0xf0 indicates that it's a request to set the colour LEDs. Sending 0x0f instead results in the previous byte (0x00 in this example) being interpreted as the intensity of the white LEDs. Unfortunately the bulb I tested that speaks this protocol didn't allow you to drive the white LEDs at the same time as anything else - setting the selection byte to 0xff didn't result in both sets of intensities being interpreted at once. Boo.

You can test this out fairly easily using the gatttool app. Run hcitool lescan to look for the device (remember that it won't show up if anything else is connected to it at the time), then do gatttool -b deviceid -I to get an interactive shell. Type connect to initiate a connection, and once connected send commands by doing char-write-cmd handle value using the handle obtained from your hci dump.
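
Wrapping that up in Python is straightforward. A sketch (the handle 0x0021 is made up; use whatever your HCI dump showed for the light-control characteristic, and note this shells out to gatttool rather than speaking BLE natively):

import subprocess

def set_rgb(device, r, g, b, handle=0x0021):
    # Build a "set the colour LEDs" packet in the format described
    # above and write it to the control characteristic.
    payload = bytes([0x56, r, g, b, 0x00, 0xf0, 0x00, 0xaa])
    subprocess.run([
        "gatttool", "-b", device, "--char-write-req",
        "--handle=0x%04x" % handle,
        "--value=" + payload.hex(),
    ], check=True)

# set_rgb("AA:BB:CC:DD:EE:FF", 255, 0, 0)   # full red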

I did this successfully for various bulbs, but annoyingly hit a problem with one from Tikteck. The leading byte of each packet was clearly a counter, but the rest of the packet appeared to be garbage. For reasons best known to themselves, they've implemented application-level encryption on top of BLE. This was a shame, because they were easily the best of the bulbs I'd used - the white LEDs work in conjunction with the colour ones once you're sufficiently close to white, giving you good intensity and letting you modify the colour temperature. That gave me incentive, but figuring out the protocol took quite some time. Earlier this week, I finally cracked it. I've put a Python implementation on Github. The idea is to tie it into Ulfire running on a central machine with a Bluetooth controller, making it possible for me to control the lights from multiple different apps simultaneously and also integrating with my Echo.

I'd write something about the encryption, but I honestly don't know. Large parts of this make no sense to me whatsoever. I haven't even had any gin in the past two weeks. If anybody can explain how anything that's being done there makes any sense at all[3] that would be appreciated.

[1] typically via the bulb pretending to be an access point, but also these days through a terrifying hack involving spewing UDP multicast packets of varying lengths in order to broadcast the password to associated but unauthenticated devices and good god the future is terrifying

[2] For a given power input, blue LEDs produce more light than other colours. To get white with RGB LEDs you either need to have more red and green LEDs than blue ones (which costs more), or you need to reduce the intensity of the blue ones (which means your headline intensity is lower). Neither is appealing, so most of these bulbs will just give you a blue "white" if you ask for full red, green and blue

[3] Especially the bit where we calculate something from the username and password and then encrypt that using some random numbers as the key, then send 50% of the random numbers and 50% of the encrypted output to the device, because I can't even

Categories: Elsewhere

Lior Kaplan: Anonymous CVE requests

Thu, 07/07/2016 - 19:21

A year ago I blogged about people requesting CVEs without letting upstream know. On the other hand, per requests from Debian, I’m working on improving the PHP upstream CVE request process. For the last few releases this means I ask the security list members which issues they think should have a CVE, and request them in parallel with the release being made (usually in the window between the release being tagged publicly and it actually being announced).

In the last week, I’ve encountered a case where a few CVEs were assigned to old PHP issues without any public notice. The fixes for these issues were published a year ago (August 2015), and I found out about these assignments through warnings published by the distributions (mostly Debian, which I’m close to).

Sometimes things fall between the chairs, and it’s perfectly OK to ask for a CVE to make sure security issues get attention even if time has passed. But after the issues (and fixes) are public, I don’t see a reason to do so without making the request itself public as well. And even if the request isn’t public, at least notify upstream so this info can be added in the right places. Most of these bugs were found when I started to put sequential numbers into the CVE search, after getting a notice from Debian for two of the issues.

  • CVE-2015-8873 for PHP #69793
  • CVE-2015-8874 for PHP #66387
  • CVE-2015-8876 for PHP #70121
  • CVE-2015-8877 for PHP #70064
  • CVE-2015-8878 for PHP #70002
  • CVE-2015-8879 for PHP #69975
  • CVE-2015-8880 for PHP aa8cac57 (Dec 2015)

And while working on processing these issues for PHP, I also noticed they weren’t updated for libGD where appropriate (including recent issues).

I hope this blog post will reach the anonymous people behind these CVE requests, and also the people assigning them. Without transparency and keeping things in synchronization, the idea of having a centralized location for security warnings is not going to accomplish its goals.


Filed under: Debian GNU/Linux, PHP
Categories: Elsewhere

Michal Čihař: uTidylib 0.3

Thu, 07/07/2016 - 18:00

Several years ago I complained about uTidylib not being maintained upstream. Since that time I've occasionally pushed some fixes to my GitHub repository with the uTidylib code, but without any clear intention to take it over.

Time has gone by, there was still no progress, and I started to consider becoming the upstream maintainer myself. I quickly got approval from Cory Dodt, who was the original author of this code; unfortunately he is not the owner of the PyPI entry, and the claim request seems to have had no response (if you know how to get in touch with "cntrlr" or how to take over a PyPI module, please let me know).

Anyway, the number of patches in my repository is big enough to warrant a new release. Additionally, a Debian bug report about supporting the new HTML Tidy library came in, and that pushed me towards releasing version 0.3 of uTidylib.

As you might guess, the amount of changes against the original uTidylib is quite large, to name the most important ones:

Anyway, as I cannot update the PyPI entry, the downloads are currently available only on my website: https://cihar.com/software/utidylib/

Filed under: Debian English uTidylib | 0 comments

Categories: Elsewhere

Joey Hess: twenty years of free software -- part 10 shell-monad

Thu, 07/07/2016 - 15:23

shell-monad is a small project, done over a couple days and not needing many changes since, but I'm covering it separately because it was a bit of a milestone for me.

As I learned Haskell, I noticed that the libraries were excellent and did things to guide their users that libraries in other languages don't do. It starts with types, EDSLs, and carefully constrained interfaces, but goes well beyond that, as far as applying category theory. Using these libraries pushes you toward good solutions.

shell-monad was a first attempt at building such a library. The shell script it generates should always be syntactically valid, and never forgets to quote a shell variable. That's only the basics. It goes further by making it impossible to typo the name of a shell variable or shell function. And it uses phantom types so that the Haskell type checker can check the types of shell variables and functions match up.

So I think shell-monad is pretty neat, and I certainly learned a lot about writing Haskell libraries making it. Including how much I still have to learn!

I have not used shell-monad much, but keep meaning to make propellor and git-annex use it for some of their shell script needs. And ponder porting etckeeper to generate its shell scripts using it.

Next: twenty years of free software -- part 11 concurrent-output

Categories: Elsewhere

Sean Whitton: Kant and tear gas

Thu, 07/07/2016 - 14:15

Over the past year I’ve been refining my understanding of the core claims of Kantian ethics. I’ve realised that I have deeply Kantian intuitions about a lot of issues, and I understand these intuitions better now that I can put them in Kantian terms. Consider two exercises of state power: riot police suppressing protesters by non-lethal means, and soldiers shooting protesters to death. I feel more uncomfortable thinking about the first: there’s something altogether more sinister about it than the second, even though the second is much more sad.

I think that the reason is that non-lethal weaponry is designed to take away people’s agency, and it often achieves this aim by means of emotional manipulation. Riot police use so-called “baton charges” to incite fearful retreat. Protesters have reasoned that in their political situation, they have a duty to resist the incumbent government. Riot police seek to unseat this conviction and cause fear to determine what the protesters will do. In Kantian terms, the riot police fail to respect the moral agency of the protesters by seeking to unseat the moral personality’s determination of what the protester will do.

A controversial example that Kant uses to make this point is the axe murderer case. Kant asks us to imagine that someone bangs on our front door and begs us to hide him in our house, because someone who wishes to kill him is coming up behind him. We do so. When the axe murderer arrives, he goes door-to-door and asks us whether the intended victim is in each house. Kant says that it is morally wrong to lie to the axe murderer and say that the victim is not in your house. How could this be? Surely there is a moral duty to protect the victim from being killed? Indeed there is, but when it comes into conflict with the duty not to lie, that second duty wins out. That’s because respecting the moral agency of individuals is paramount. In this case, we would fail to respect the murderer’s agency if we didn’t allow him to take the decision to murder or not murder the victim; by lying to him we (disrespectfully) bypass his choice as to whether to do it.

It’s obviously crazy to say that we are morally required to give up the victim. Kant gives the wrong answer in this case. However, the case definitely reveals a requirement to respect other people’s view of what they should do, and give them a chance to do it. Similarly it seems like we shouldn’t give semi-automatics to our riot police, but there’s something wrong with a lot of what they do.

During the recent campaigns about Britain’s EU referendum, some criticised Jeremy Corbyn’s campaigning on the grounds that he failed to make an emotional appeal, instead asking people to make their own rational decision when they came to cast their vote. So he lost out to the emotional appeals other people were making. It seems that he was successfully respecting individual agency. You’ve got to give people a chance to live up to their own idea of what they should do.

I’m not sure how to reconcile these various ideas, and I’m not sure what it says about me that I find non-lethal weaponry as uncomfortable as I do.

Categories: Elsewhere

Petter Reinholdtsen: Unlocking HTC Desire HD on Linux using unruu and fastboot

Thu, 07/07/2016 - 11:30

Yesterday, I tried to unlock a HTC Desire HD phone, and it proved to be a slight challenge. Here is the recipe if I ever need to do it again. It all started by me wanting to try the recipe to set up a hardened Android installation from the Tor project blog on a device I had access to. It is an old mobile phone with a broken microphone. The initial idea had been to just install CyanogenMod on it, but I did not quite find time to start on it until a few days ago.

The unlock process is supposed to be simple: (1) Boot into the boot loader (press volume down and power at the same time), (2) select 'fastboot' before (3) connecting the device via USB to a Linux machine, (4) request the device identifier token by running 'fastboot oem get_identifier_token', (5) request the device unlocking key using the HTC developer web site and unlock the phone using the key file emailed to you.

Unfortunately, this only works if you have hboot version 2.00.0029 or newer, and the device I was working on had 2.00.0027. This apparently can be easily fixed by downloading a Windows program and running it on your Windows machine, if you accept the terms Microsoft require you to accept to use Windows - which I do not. So I had to come up with a different approach. I got a lot of help from AndyCap on #nuug, and would not have been able to get this working without him.

First I needed to extract the hboot firmware from the Windows binary for HTC Desire HD, downloaded as 'the RUU' from HTC. For this there is a GitHub project named unruu, using libunshield. The unshield tool did not recognise the file format, but unruu worked and extracted rom.zip, containing the new hboot firmware and a text file describing which devices it would work for.

Next, I needed to get the new firmware into the device. For this I followed some instructions available from HTC1Guru.com, and ran these commands as root on a Linux machine with Debian testing:

adb reboot-bootloader
fastboot oem rebootRUU
fastboot flash zip rom.zip
fastboot flash zip rom.zip
fastboot reboot

The flash command apparently needs to be done twice to take effect, as the first run is just preparation and the second one does the flashing. The adb command is just to get to the boot loader menu, so turning the device on while holding volume down and the power button should work too.

With the new hboot version in place I could start following the instructions on the HTC developer web site. I got the device token like this:

fastboot oem get_identifier_token 2>&1 | sed 's/(bootloader) //'

And once I got the unlock code via email, I could use it like this:

fastboot flash unlocktoken Unlock_code.bin

And with that final step in place, the phone was unlocked and I could start stuffing the software of my own choosing into the device. So far I have only inserted a replacement recovery image to wipe the phone before I start. We will see what happens next. Perhaps I should install Debian on it. :)

Categories: Elsewhere

Daniel Pocock: Can you help with monitoring packages in Debian and Ubuntu?

Thu, 07/07/2016 - 11:14

Debian (and consequently Ubuntu) contains a range of extraordinarily useful monitoring packages.

I've been maintaining several of them at a basic level but as more of my time is taken up by free Real-Time Communications software, I haven't been able to follow the latest upstream releases for all of the other packages I maintain. The versions we are distributing now still serve their purpose well, but as some people would like newer versions, I don't want to stand in the way.

Monitoring packages are for everyone. Even if you are a single user or developer with a single desktop or laptop and no servers, you may still find some of these packages useful.

For example, after doing an apt-get upgrade or dist-upgrade, it can be extremely beneficial to look at your logs in LogAnalyzer with all the errors and warnings colour-coded so you can see at a glance whether your upgrade broke anything. If you are testing new software before a release or trying to troubleshoot erratic behavior, this type of colour-coded feedback can also help you focus on possible problems without the eyestrain of tailing a logfile.

How to help

A good first step is simply looking over the packages maintained by the pkg-monitoring group and discovering whether any of them are useful for your own needs.

You may be familiar with alternatives that exist in Debian; if so, feel free to comment on whether you believe any of these packages should be dropped by creating a wishlist bug against the package concerned.

The next step is joining the pkg-monitoring mailing list. If you are a Debian Developer or Debian Maintainer with upload rights already, you can join the group on alioth. If you are not at that level yet, you are still very welcome to test new versions of the packages and upload them on mentors.debian.net and then join the mentors mailing list to look for a member of the community who can help review your work and sponsor an upload for you.

Each of the packages should have a README.source file in the repository explaining more about how the package is maintained. Familiarity with Git is essential. Note that all of the packages keep their debian/* artifacts in a branch called debian/sid while the master branch tracks the upstream repository.

You can clone the Debian package repositories for any of these projects from alioth and build them yourself, try building packages of new upstream versions and try to investigate any of the bug reports submitted to Debian. Some of the bugs may have already been fixed by upstream changes and can be marked appropriately.

Integrating your monitoring systems

Two particular packages I would like to highlight are ganglia-nagios-bridge and syslog-nagios-bridge. They are not exclusively for Nagios and could also be used with Icinga or other monitoring dashboards. The key benefit of these packages is that all the alerting is consolidated in a single platform, Nagios, which is able to display them in a colour-coded dashboard and intelligently send notifications to people in a manner that is fully configurable. If you haven't integrated your monitoring systems already, these projects provide a simple and lightweight way to start doing so.

Categories: Elsewhere

Ben Hutchings: Debian LTS work, June 2016

Thu, 07/07/2016 - 10:24

I was assigned another 15 hours of work by Freexian's Debian LTS initiative and carried over 5 from last month. I worked a total of 19 hours, carrying over 1.

I spent a week in the Front Desk role and triaged many new security issues for wheezy.

I prepared the Linux 3.2.81 stable update, sent it out for review and finally released it. I then rebased the wheezy-security branch on top of that and added some later security fixes that were not yet suitable for a kernel.org update. I uploaded to wheezy-security and issued DLA-516-1.

I started working on the next Linux stable updates (3.2.82 and the next wheezy LTS update) and on an update for imagemagick, but haven't uploaded anything for them yet.

Categories: Elsewhere

Markus Koschany: My Free Software Activities in June 2016

Thu, 07/07/2016 - 03:06

My monthly report covers what I have been doing for Debian. I write it for Debian’s Long Term Support sponsors but also for the wider free software community in the hope that it might inspire people to get more involved with Debian or free software in general.

Debian Android

Debian Games
  • I packaged CaveExpress and CavePacker for Debian. CaveExpress is a remake of the old Amiga classic Ugh! In this game you control a pedal-powered flying machine and pick up packages from your clients. An interesting aspect of CaveExpress is its physics-based gameplay. The packages must be delivered to a collection point and their movement is quite realistic thanks to the excellent Box2d physics engine. The other game, CavePacker, based on the same engine as CaveExpress, is a Sokoban-like game. Both games feature dozens of levels and if you have nothing better to do, you should definitely check them out.
  • This month I also packaged a new upstream release of Netpanzer. Apparently there is new upstream activity.
  • Blockattack 2.0 was released and is now available in Debian.
  • I also updated the following packages: kball, pathogen, ceferino, slimevolley, pangzero and airstrike.
  • I adopted abe, berusky and berusky-data, updated the packages to use modern debian helpers and also packaged version 1.7 of berusky, a great Sokoban-like game by the way.
  • June also saw a new release of debian-games, several metapackages that make it much easier to install a subset of games or even the finest.
  • I sponsored RC-bug fixes for parsec47, tumiki-fighters, mu-cade and tatan, all prepared by Peter De Wachter, who keeps our D (yes, that’s a language) games alive. But we will face more issues in the post-Stretch future. Apparently the D language people intend to remove parts of their API, and of course all our D-based games are affected. Peter has announced more information about that. I think all these games are pretty unique and real gems. If you know a little D and want to help out, please get involved.
Debian Java

Debian LTS

This was my fifth month as a paid contributor and I have been paid to work 19,75 hours on Debian LTS. In that time I did the following:

  • DLA-501-1. Salvatore Bonaccorso from Debian’s Security Team discovered that the original fix for CVE-2015-7552 (DLA-450-1) was incomplete. I prepared and uploaded a new revision of gdk-pixbuf and issued the DLA.
  • DLA-502-1. Issued a security update for graphicsmagick fixing 1 CVE.
  • DLA-504-1. Issued a security update for libxstream-java fixing 1 CVE which was prepared by Emmanuel Bourg.
  • DLA-505-1. Issued a security update for libpdfbox-java fixing 1 CVE.
  • DLA-508-1. Issued a security update for expat fixing 2 CVE.
  • DLA-511-1. Issued a security update for libtorrent-rasterbar fixing 1 CVE.
  • DLA-526-1. Issued a security update for mysql-connector-java fixing 1 CVE. I also prepared the update for Jessie which is still pending to be reviewed by the Security Team.
  • DLA-528-1. Issued a security update for libcommons-fileupload-java fixing 1 CVE.
  • DLA-529-1. Issued a security update for tomcat7 fixing 1 CVE.
  • DLA-530-1. As previously announced I switched the default Java implementation from OpenJDK 6 to OpenJDK 7.
  • DLA-537-1. Issued a security update for roundcube fixing 1 CVE. I triaged CVE-2016-5103, CVE-2015-2180 and CVE-2015-2181 and marked them as “not-vulnerable”.
  • I triaged 22 CVEs for libarchive and marked two of them as “not-vulnerable”. You can find my preliminary work for libarchive on the wheezy branch in Debian’s git repository. I expect a security update very soon.
  • From 13 June to 19 June I was responsible for Wheezy’s LTS frontdesk. It was a rather calm week on the debian-lts mailing list and in our IRC channel. I triaged CVE-2016-4970 (netty), CVE-2016-3189 (bzip2), CVE-2016-1621 (libvpx) and CVE-2016-4493, CVE-2016-4492, CVE-2016-4491, CVE-2016-4490, CVE-2016-4489, CVE-2016-4488, CVE-2016-4487, CVE-2016-2226, which were all minor issues in developer tools or in the gcc toolchain.
  • I commented on Ola’s question about open security issues in phpmyadmin.
QA uploads
  • I fixed pygccxml, whose problems threatened to get spring removed.
  • I completely overhauled gl-117, fixed four bugs and closed two obsolete ones. gl-117 always reminds me a little of the Falcon series from the early 90s.
Categories: Elsewhere

Bits from Debian: Debian Perl Sprint 2016

Wed, 06/07/2016 - 23:45

Six members of the Debian Perl team met in Zurich over the weekend from May 19 to May 22 to continue the development around Perl for Stretch and to work on QA across 3000+ packages.

The participants had a good time, met friends from local groups and even found some geocaches. Obviously, the sprint was productive this time too:

  • 36 bugs were filed or worked on, 28 uploads were accepted.
  • The plan to get the Perl 5.24 transition into Stretch was confirmed, and a test rebuild server was set up.
  • Cross building XS modules was demoed, and the conditions where it is viable were discussed.
  • Several improvements were made in the team packaging tools, and new features were discussed and drafted.
  • A talk on downstream distribution aimed at CPAN authors was proposed for YAPC::EU 2016.

The full report was posted to the relevant Debian mailing lists.

The participants would like to thank the ETH Zurich for hosting us, and all donors to the Debian project who helped to cover a large part of our expenses.

Categories: Elsewhere

Joey Hess: twenty years of free software -- part 9 small projects

Wed, 06/07/2016 - 17:44

My dad sometimes asks when I'll finish git-annex. The answer is "I don't know" because software like that doesn't have a defined end point; it grows and changes in response to how people use it and how the wider ecosystem develops.

But other software has a well-defined end point and can be finished. Some of my smaller projects that are more or less done include myrepos, electrum-mnemonic, brainfuck-monad, scroll, yesod-lucid, and haskell-mountpoints.

Studies of free software projects have found that the average free software project was written entirely by one developer, is not very large, and is not being updated. That's often taken to mean it's a failed or dead project. But all the projects above look that way, and are not failures, or dead.

It's good to actually finish some software once in a while!

Next: twenty years of free software -- part 10 shell-monad

Categories: Elsewhere
