Feed aggregator

Scarlett Clark: Debian: KDE: Reproducible Builds week 3, Randa Platforms Equals Busy times!

Planet Debian - Mon, 13/06/2016 - 18:41


I am a smidgen late on this post due to travel, sorry!


For this I was able to come up with a patch for kconfig_compiler to encode generated files to utf-8.
Review request is here:

This has been approved and I will be pushing it as soon as I patch the qt5 frameworks version.


WIP: this has been a steep learning curve. According to the notes it was an easy embedded kernel version, but that was not the case! After grueling hours of
trying to sort out randomness in the debug output, I finally narrowed it down to cases where QStringLiteral was used with non-letter characters, e.g. (" <"). These were causing debug symbols to be generated with ( lambda() ), which made the symbol/debug files unreproducible. It is now a case of fixing all of these in the code; using QString::fromUtf8 instead seems to fix it. I am working on a mega patch for upstream and it should be ready early in the week.

This last week I spent a large portion making my way through a mega patch for kxmlgui, when it was suggested to me to write a small Qt app
to test QStringLiteral in isolation, and sure enough two builds were byte-for-byte identical. So this means that QStringLiteral may not be the issue at all. With some
more assistance I am going to expand my test app with several QStringLiterals of varying lengths; we suspect it is a padding issue, which complicates things.

On the KDE front, I have arrived safe and sound in Randa, and aside from some major jetlag and reproducible builds work, I have been quite busy with the KDE CI. I am reworking
my DSL to use friendly YAML files to generate jobs for all platforms (Linux, Android, OSX, Windows, Snappy, Flatpak), and it can easily be extended later.
Major workpoints so far for Randa:

  • I have delegated the windows backend to Hannah
  • Andreas has provided a docker build for Android, and upon initial testing it will work great.
  • I have recruited several nice folks to assist me with my snappy efforts.

Still to do:


  • Add all the nodes to sandbox
  • Finish yaml CI files
  • OSX re-setup with new macmini

Have a great day.

Categories: Elsewhere

Drupal core announcements: Drupal core security release window on Wednesday, June 15, 2016

Planet Drupal - Mon, 13/06/2016 - 18:40
Start: 2016-06-15 00:00 - 23:30 America/Chicago
Organizers: xjm, catch, David_Rothstein, mlhess
Event type: Online meeting (e.g. IRC meeting)

The monthly security release window for Drupal 8 and 7 core will take place on Wednesday, June 15.

This does not mean that a Drupal core security release will necessarily take place on that date for any of the Drupal 8 or 7 branches, only that you should watch for one (and be ready to update your Drupal sites in the event that the Drupal security team decides to make a release).

There will be no bug fix or feature release on this date. The next window for a Drupal core patch (bug fix) release for all branches is Wednesday, July 06. The next scheduled minor (feature) release for Drupal 8 will be on Wednesday, October 5.

Drupal 6 is end-of-life and will not receive further security releases.

For more information on Drupal core release windows, see the documentation on release timing and security releases, and the discussion that led to this policy being implemented.

Categories: Elsewhere

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, May 2016

Planet Debian - Mon, 13/06/2016 - 16:15

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In May, 166 work hours have been dispatched among 9 paid contributors. Their reports are available:

  • Antoine Beaupré did 20h.
  • Ben Hutchings did 10 hours (out of 15 hours allocated, keeping 5 extra hours for June).
  • Brian May did 15 hours.
  • Chris Lamb did 18 hours.
  • Guido Günther did 17.25 hours (out of 8 hours allocated + 9.25 remaining hours).
  • Markus Koschany did 30 hours (out of 31 hours allocated, thus keeping one extra hour for June).
  • Santiago Ruano Rincón did 20 hours (out of 20h allocated + 8 remaining, thus keeping 8 extra hours for June).
  • 8 hours that were initially allocated to Scott Kitterman have been put back in the June pool after he resigned.
  • Thorsten Alteholz did 31 hours.
Evolution of the situation

The number of sponsored hours stayed the same over May but will likely increase a little bit next month, as we have two new Bronze sponsors being processed.

The security tracker currently lists 36 packages with a known CVE and the dla-needed.txt file lists 36 packages awaiting an update.

Despite the higher than usual number of work hours dispatched in May, we still have more open CVEs than we used to have at the end of the squeeze LTS period. So more support is always needed…

Thanks to our sponsors

New sponsors are in bold.


Categories: Elsewhere

jfhovinne deleted branch hotfix/NEXTEUROPA-11356 at ec-europa/platform-dev

Devel - Mon, 13/06/2016 - 14:08
jfhovinne deleted branch hotfix/NEXTEUROPA-11356 at ec-europa/platform-dev Jun 13, 2016
Categories: Networks

jfhovinne pushed to 2.1-support at ec-europa/platform-dev

Devel - Mon, 13/06/2016 - 14:08
Jun 13, 2016 jfhovinne pushed to 2.1-support at ec-europa/platform-dev
  • 5b9056c Merge pull request #646 from ec-europa/hotfix/NEXTEUROPA-11356
  • 0aba21f Update apachesolr-changing_drupal_http_request_timeout_value.patch
  • 2 more commits »
Categories: Networks

jfhovinne merged pull request ec-europa/platform-dev#646

Devel - Mon, 13/06/2016 - 14:08
Jun 13, 2016 jfhovinne merged pull request ec-europa/platform-dev#646 NEXTEUROPA-11356: Adding patch for setting up default timeout for all… 3 commits with 16 additions and 0 deletions
Categories: Networks

2bits: Slow Queries In Drupal Can Often Be Cured By Indexes

Planet Drupal - Mon, 13/06/2016 - 13:44

Recently, we were reviewing the performance of a large site that has a significant portion of its traffic from logged in users. The site was suffering from a high load average during peak times.

We enabled slow query logging on the site for an entire week, using the following in my.cnf:

log_slow_queries               = 1
slow_query_log                 = 1
slow_query_log_file            = /var/log/mysql/slow-query.log
log-queries-not-using-indexes  = 1
long_query_time                = 0.100

Note that the parameter long_query_time can be set to a fraction of a second only on more recent versions of MySQL.

You should not set this value too low, otherwise the server's disk could be tied up logging the queries. Nor should it be too high, or you will miss most slow queries.

We then analyzed the logged queries after a week.

We found that the slow queries, on aggregate, examined a total of 150.18 trillion rows, and returned 838.93 million rows.

Out of the total types of queries analyzed, the top two had a disproportionate share of the total.

So these two queries combined were 63.7% of the total slow queries! That is very high, and if we were able to improve these two queries, it would be a huge win for performance and server resources.
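A quick arithmetic check of that combined figure, using the two per-query shares reported in the sections below (45.3% and 18.4%):

```python
# The two top offenders' shares of the slow query total, as reported
# in the per-query sections of this article.
share_votingapi = 45.3   # Voting API query
share_privatemsg = 18.4  # Privatemsg query

combined = round(share_votingapi + share_privatemsg, 1)
print(combined)  # 63.7
```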

Voting API Slow Query

The first query had to do with Voting API and Userpoints.

It was:

SELECT votingapi_vote.*
FROM votingapi_vote
WHERE  value_type = 'points'
AND tag = 'userpoints_karma'
AND uid = '75979'
AND value = '-1'
AND timestamp > '1464077478'

It hogged 45.3% of the total slow queries, and was called 367,531 times per week. It scanned over 213,000 rows every time it ran!

The query took an aggregate execution time of 90,766 seconds, with an average of 247 milliseconds per execution.
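Assuming the aggregate time is measured in seconds, the numbers are self-consistent:

```python
# 367,531 executions at ~247 ms each should roughly equal the
# aggregate execution time (assumed to be in seconds).
calls = 367_531
avg_ms = 247
aggregate_s = calls * avg_ms / 1000
print(round(aggregate_s))  # 90780, within rounding of the reported 90,766
```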

The solution was simple: create an index on the uid column:

CREATE INDEX votingapi_vote_uid ON votingapi_vote (uid);

After that was done, the query used the index and scanned only one row, and returned instantly.
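The effect is easy to reproduce in miniature. The sketch below uses Python's built-in SQLite rather than the site's MySQL server, with a small fabricated table, purely to illustrate how an index on uid turns a full-table scan into an index search; the table contents and sizes are made up:

```python
import sqlite3

# Tiny fabricated stand-in for the votingapi_vote table (SQLite, not MySQL).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE votingapi_vote (uid INTEGER, value INTEGER, timestamp INTEGER)")
db.executemany("INSERT INTO votingapi_vote VALUES (?, ?, ?)",
               [(i % 1000, -1, 1464000000 + i) for i in range(10000)])

query = "SELECT * FROM votingapi_vote WHERE uid = 42 AND timestamp > 1464000000"

# Without an index, the planner has to scan the whole table.
before = str(db.execute("EXPLAIN QUERY PLAN " + query).fetchall())
print(before)

# Same index as in the article.
db.execute("CREATE INDEX votingapi_vote_uid ON votingapi_vote (uid)")

# With the index, the planner searches via votingapi_vote_uid instead.
after = str(db.execute("EXPLAIN QUERY PLAN " + query).fetchall())
print(after)
```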

Private Messaging Slow Query

The second query had to do with Privatemsg. It was:

SELECT COUNT(pmi.recipient) AS sent_count
FROM pm_message pm
INNER JOIN pm_index pmi ON pm.mid = pmi.mid
WHERE  pm.author = '394106'
AND pm.timestamp > '1463976037'
AND pm.author <> pmi.recipient

This query accounted for 18.4% of the total slow queries, and was called 32,318 times per week. It scanned over 1,350,000 rows on each execution!

The query took an aggregate execution time of 36,842 seconds, with an average of 1.14 seconds (yes, seconds!) per execution.

Again, the solution was simple: create an index on the author column.

CREATE INDEX pm_message_author ON pm_message (author);

Just like the first query, after creating the index, the query used the index and scanned only 10 rows rather than over a million. It returned instantly.

Results After Tuning

As with any analysis, comparison of the before and after data is crucial.

After letting the site run for another week with the top two offending queries tuned, the results were extremely pleasing:

                      Before     After
Total rows examined   150.18 T   34.93 T
Total rows returned   838.93 M   500.65 M

A marked improvement!
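Expressed as relative reductions, the table works out to:

```python
# Relative improvements computed from the before/after totals above.
examined_before, examined_after = 150.18e12, 34.93e12
returned_before, returned_after = 838.93e6, 500.65e6

examined_drop = (examined_before - examined_after) / examined_before * 100
returned_drop = (returned_before - returned_after) / returned_before * 100
print(f"rows examined: {examined_drop:.1f}% fewer")  # 76.7% fewer
print(f"rows returned: {returned_drop:.1f}% fewer")  # 40.3% fewer
```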


With performance, the 80/20 rule applies. There is often low-hanging fruit that can easily be tuned.

Do not try to tune because of something you read somewhere; it may not apply to your site (including this and other articles on our site!).

Rather, you should do proper analysis and reach a diagnosis, based on facts and measurements, as to the cause(s) of the slowness. After that, tuning will provide good results.

Categories: Elsewhere

Mark Brown: We show up

Planet Debian - Mon, 13/06/2016 - 11:50

It’s really common for pitches to managements within companies about Linux kernel upstreaming to focus on cost savings to vendors from getting their code into the kernel, especially in the embedded space. These benefits are definitely real, especially for vendors trying to address the general market or extend the lifetime of their devices, but they are only part of the story. The other big thing that happens as a result of engaging upstream is that this is a big part of how other upstream developers become aware of what sorts of hardware and use cases there are out there.

From this point of view it's often the things that are most difficult to get upstream that are the most valuable to talk to upstream about. Of course, it's not quite that simple, as a track record of engagement on the simpler drivers, and the knowledge and relationships that are built up in that process, make having discussions about harder issues a lot easier. There are engineering and cost benefits that come directly from having code upstream, but it's not just that: the more straightforward upstreaming is also an investment in making it easier to work with the community to solve the more difficult problems.

Fundamentally Linux is made by and for the people and companies who show up and participate in the upstream community. The more ways people and companies do that the better Linux is likely to meet their needs.

Categories: Elsewhere

Cheppers blog: TCPDF module ported to Drupal 8

Planet Drupal - Mon, 13/06/2016 - 11:36

A few months ago, I decided to port the TCPDF module for Drupal 8. My first thought was that it would be an easy task, but I ran into my first problem early, when I tried to pull the TCPDF library into Drupal. 

Categories: Elsewhere

jfhovinne deleted branch HOTFIX/NEXTEUROPA-11451 at ec-europa/platform-dev

Devel - Mon, 13/06/2016 - 11:23
jfhovinne deleted branch HOTFIX/NEXTEUROPA-11451 at ec-europa/platform-dev Jun 13, 2016
Categories: Networks

jfhovinne pushed to 2.1-support at ec-europa/platform-dev

Devel - Mon, 13/06/2016 - 11:23
Jun 13, 2016 jfhovinne pushed to 2.1-support at ec-europa/platform-dev
Categories: Networks

Simon Désaulniers: [GSOC] Week 2 - Report

Planet Debian - Mon, 13/06/2016 - 06:22

I’ve been reworking the code for the queries I introduced in the first week.

What’s done
  • I have worked on value pagination and optimization of announce operations;
  • Fixed bugs like #72 and #73;
  • I've split the Query into Select and Where structures. This change was explained here.
What’s still work in progress
  • Value pagination;
  • Optimizing announce operations;
Categories: Elsewhere

Iustin Pop: Elsa Bike Trophy 2016—my first bike race!

Planet Debian - Mon, 13/06/2016 - 01:09
Elsa Bike Trophy 2016—my first bike race!

So today, after two months of intermittent training using Zwift and some actual outside rides, I did my first bike race. Not of 2016, not of 2000+, but like ever.

Which is strange, as I learned biking very young, and I did like to bike. But as it turned out, even though I didn't like running as a child, I did participate in a number of running events over the years, but no biking ones.

The event

Elsa Bike Trophy is a mountain bike event (cross-country, not downhill or anything crazy); it takes place in Estavayer-le-Lac, and has two courses: one of 60Km with 1'791m altitude gain, and a smaller one of 30Km with 845m altitude gain. I went, of course, for the latter. 845m is more than I ever did in a single ride, so it was good enough for a first try. The web page says that this smaller course "est nerveux, technique et ne laisse que peu de répit" ("is nervous, technical, and leaves hardly any respite"). I chose to think that's a bit of an exaggeration, and that it would be relatively easy (as I'm not too skilled technically).

The atmosphere there was like for the running races, with the exception of bike stuff being sold, and people on very noisy rollers. I'm glad for my trainer which sounds many decibels quieter…

The long race started at 12:00, and the shorter one at 12:20. While waiting for the start I had two concerns in mind: whether I'd be able to do the whole course (endurance), and whether it would be too cold (the weather kept moving towards rain). I also had a small concern about the state of the course, as the weather had not been very nice recently, but only a small one.

And then, after one hour plus of waiting, go!

Racing, with a bit of "swimming"

At first, things went as expected. Starting on paved roads, moving towards the small town exit, a couple of 14% climbs, then more flat roads, then a nice and hard 18% short climb (I'll never again complain about < 10%!), then… entering the woods. It quickly became apparent that the ground in the forest was in much worse state than I had feared. Much worse, as in a few orders of magnitude.

In about 5 minutes after entering the tree cover, my reasonably clean, reasonably light bike became a muddy, heavy monster. And the pace that until then had been quite OK became walking pace, as the first rider who couldn't keep going up (because his wheel slid out of the track) blocked the one behind him, who had to stop, and so on, until we were one line (or two, depending on how wide the trail was) of riders walking their bikes up. While walking your bike up on dry ground is no problem, and hiking through mud with good hiking shoes is also no problem, walking up in biking shoes is a pain. Your foot slides and you waste half of your energy "swimming" in the mud.

Once the climb is over, you get on the bike, and of course the pedals and cleats are full of heavy mud, so it takes a while until you can actually clip in. Here the trail version of SPD was really useful, as I could pedal reasonably well without being clipped in; I just had to be careful not to push too hard.

Then maybe you exit the trail and get on a paved road, but the wheels are so full of mud that you are still very slow (and accelerate very slowly), until they shed enough of the mud to become somewhat more "normal".

After a bit of this "up through mud, flat and shedding mud", I came upon the first real downhill section. I would have been somewhat confident on dry ground, but I got scared and got off my bike. Better safe than sorry was the thing for now.

And after this it was a repetition of the above: climbs, sometimes (rarely) on the bike, most times pushing it; fast flat sections through muddy terrain where any mistake in controlling the bike could send the front wheel flying due to the highly viscous mud; slow flat sections through very liquid mud where it definitely felt like swimming; and the occasional dry section.

My biggest fear, uphill/endurance, was unfounded. The biggest gains I made were on the dry uphills, where I had enough stamina to overtake. On flat ground I mostly kept my position (i.e. neither being overtaken nor overtaking), but on downhill sections I lost lots of time and was overtaken a lot. Still, it was a good run.

And then, after about 20 of the 30 kilometres, I got tired enough of getting off the bike and back on, and also mentally tired and not careful enough, that I stopped getting off the bike on downhills. And the feeling was awesome! It was actually much, much easier to flow through the mud and rocks and roots downhill, even when it was difficult (for me), like 40cm drops (estimated), than doing it on foot, where you slide without control and the bike can come crashing down on you. It was a liberating feeling, like finally having overcome the mud. I was so glad to have done a one-day training course with Swiss Alpine Adventure, as it really helped. Thanks Dave!

Of course, people were still overtaking me, but I also overtook some people (who were on foot; heh, I wasn't the only one, it seems). And since it was easier, I had some more energy, so I was able to push a bit harder on the flats and dry uphill sections.

And then the remaining distance started shrinking, the last downhill was over, I entered the small town through familiar roads, a passer-by cried "one kilometre left", I pushed hard (I mean, as hard as I could after all the effort), and I reached the finish.

Oh, and my other concern, the rain? Yes it did rain somewhat, and I was glad for it (I keep overheating); there was a single moment I felt cold, when exiting a nice cosy forest into a field where the wind was very strong—headwind, of course.

Lessons learned

I did learn a lot in this first event.

  • indoor training sessions only help with endurance (but they are good for that); they don't help with technique and, most importantly, they don't teach you how to handle the bike in inclement weather; biking to work on paved roads doesn't help with that either.
  • nevertheless, indoor training does help with endurance ☺
  • mud guards… before the race, I thought they'd help; during the race, I cursed their extra weight and seeming uselessness; after the race, once I saw how other people looked, I realised that they did indeed help a lot: I was only dirty on my legs, mostly below the knee, but not on my upper body. Unsure whether I will use them again.
  • a dropper seatpost is not needed if your seat is set in between, but it sure would have been easier with one
  • installing your GPS on your handlebar with elastic bands on a section of non-constant diameter is a very bad idea, as it sits in an unstable equilibrium: any move towards the thinner section makes the mount very loose, and you lose time fixing it.

So, how did I do after all? As soon as I reached the finish and recovered my items, among them my phone, I checked the datasport page: I was ranked 59/68 in my category. Damn, I had hoped (and thought) I would do better. Similar percentages in the overall ranking for this distance.

That aside, it was mighty fun. So much fun I'd do it again tomorrow! I forgot the awesome atmosphere of such events, even in the back of the rankings.

And then, after I drove home and opened the datasport page on my workstation, I got very confused: the overall number of participants was different. And then I realised: not everybody had finished the race when I first checked (d'oh)! Final ranking: 59 out of 84 in my category, and 247/364 in the overall 30km rankings. That makes it 70% and 67% respectively, which matches somewhat my usual running results from a few years back (but is a bit worse). It is in any case better than what I originally thought, yay!

Also, Strava activity for some more statistics (note that my Garmin says it was not 800+ meters of altitude…):

I'd embed a Veloviewer nice 3D-map but I can't seem to get the embed option, hmm…

TODO: train more endurance, train more technique, train in more various conditions!

Categories: Elsewhere

Sune Vuorela: Randa day 0

Planet Debian - Mon, 13/06/2016 - 00:16

Sitting on Lake Zurich and reflecting over things was a great way to get started. http://manifesta.org/2015/11/pavillon-of-reflections-for-zurich-in-2016/

After spending a bit of time in a train, I climbed part of a mountain together with Adriaan – up to the snow where I could throw a snowball at him. We also designed a couple of new frameworks on our climbing trip. Maybe they will be presented later.

Categories: Elsewhere

Jeff Geerling's Blog: Hosted Apache Solr — now for Drupal 8!

Planet Drupal - Sun, 12/06/2016 - 23:42

After a few months of testing, I'm happy to announce Hosted Apache Solr now supports Search API Solr with Drupal 8! Both Search API and Search API Solr have been getting closer to stable releases, and more people have been requesting Drupal 8 search cores, so I decided to finish testing and updating support guides this weekend.

Categories: Elsewhere

Mario Lang: A Raspberry Pi Zero in a Handy Tech Active Star 40 Braille Display

Planet Debian - Sun, 12/06/2016 - 11:20

TL;DR: I put a $5 Raspberry Pi Zero, a Bluetooth USB dongle, and the required adapter cable into my new Handy Tech Active Star 40 braille display. An internal USB port provides the power. This has transformed my braille display into an ARM-based, monitorless, Linux laptop that has a keyboard and a braille display. It can be charged/powered via USB so it can also be run from a power bank or a solar charger, thus potentially being able to run for days, rather than just hours, without needing a standard wall-jack.


Some Background on Braille Display Form Factors

Braille displays come in various sizes. There are models tailored for desktop use (with 60 cells or more), models tailored for portable use with a laptop (usually with 40 cells), and, nowadays, there are even models tailored for on-the-go use with a smartphone or similar (with something like 14 or 18 cells).

Back in the old days, braille displays were rather massive. A 40-cell braille display was typically about the size of a 13" laptop. In modern times, manufacturers have managed to reduce the size of the internals such that a 40-cell display can be placed in front of a laptop or keyboard instead of placing the laptop on top of the braille display.

While this is a nice achievement, I personally haven't found it to be very convenient because you now have to place two physically separate devices on your lap. It's OK if you have a real desk, but, at least in my opinion, if you try to use your laptop as its name suggests, it's actually inconvenient to use a small form factor, 40-cell display.

For this reason, I've been waiting for a long-promised new model in the Handy Tech Star series. In 2002, they released the Handy Tech Braille Star 40, which is a 40-cell braille display with enough space to put a laptop directly on top of it. To accommodate larger laptop models, they even built in a little platform at the back that can be pulled out to effectively enlarge the top surface. Handy Tech has now released a new model, the Active Star 40, that has essentially the same layout but modernized internals.

You can still pull out the little platform to increase the space that can be used to put something on top.

But, most conveniently, they've designed in an empty compartment, roughly the size of a modern smartphone, beneath the platform. The original idea was to actually put a smartphone inside, but this has turned out (at least to me) to not be very feasible. Fortunately, they thought about the need for electricity and added a Micro USB cable terminating within the newly created, empty compartment.

My first idea was to put a conventional Raspberry Pi inside. When I received the braille display, however, we immediately noticed that a standard-sized rpi is roughly 3mm too high to fit into the empty compartment.

Fortunately, though, a co-worker noticed that the Raspberry Pi Zero was available for order. The Raspberry Pi Zero is a lot thinner, and fits perfectly inside (actually, I think there's enough space for two, or even three, of them). So we ordered one, along with some accessories like a 64GB SDHC card, a Bluetooth dongle, and a Micro USB adapter cable. The hardware arrived a few days later, and was immediately bootstrapped with the assistance of very helpful friends. It works like a charm!

Technical Details

The backside of the Handy Tech Active Star 40 features two USB host ports that can be used to connect devices such as a keyboard. A small form-factor, USB keyboard with a magnetic clip-on is included. When a USB keyboard is connected, and when the display is used via Bluetooth, the braille display firmware additionally offers the Bluetooth HID profile, and key press/release events received via the USB port are passed through to it.

I use the Bluetooth dongle for all my communication needs. Most importantly, BRLTTY is used as a console screen reader. It talks to the braille display via Bluetooth (more precisely, via an RFCOMM channel).

The keyboard connects through to Linux via the Bluetooth HID profile.

Now, all that is left is network connectivity. To keep the energy consumption as low as possible, I decided to go for Bluetooth PAN. It appears that the tethering mode of my mobile phone works (albeit with a quirk), so I can actually access the internet as long as I have cell phone reception. Additionally, I configured a Bluetooth PAN access point on my desktop machines at home and at work, so I can easily (and somewhat more reliably) get IP connectivity for the rpi when I'm near one of these machines. I plan to configure a classic Raspberry Pi as a mobile Bluetooth access point. It would essentially function as a Bluetooth to ethernet adapter, and should allow me to have network connectivity in places where I don't want to use my phone.

BlueZ 5 and PAN

It was a bit challenging to figure out how to actually configure Bluetooth PAN with BlueZ 5. I found the bt-pan python script (see below) to be the only way so far to configure PAN without a GUI.

It handles both ends of a PAN network, configuring a server and a client. Once instructed to do so (via D-Bus) in client mode, BlueZ will create a new network device - bnep0 - once a connection to a server has been established. Typically, DHCP is used to assign IP addresses for these interfaces. In server mode, BlueZ needs to know the name of a bridge device to which it can add a slave device for each incoming client connection. Configuring an address for the bridge device, as well as running a DHCP server + IP Masquerading on the bridge, is usually all you need to do.

A Bluetooth PAN Access Point with Systemd

I'm using systemd-networkd to configure the bridge device.


[NetDev]
Name=pan
Kind=bridge
ForwardDelaySec=0


[Match]
Name=pan

[Network]
Address=
DHCPServer=yes
IPMasquerade=yes

Now, BlueZ needs to be told to configure a NAP profile. To my surprise, there seems to be no way to do this with stock BlueZ 5.36 utilities. Please correct me if I'm wrong.

Luckily, I found a very nice blog post, as well as an accommodating Python script that performs the required D-Bus calls.

For convenience, I use a Systemd service to invoke the script and to ensure that its dependencies are met.


[Unit]
Description=Bluetooth Personal Area Network
After=bluetooth.service systemd-networkd.service
Requires=systemd-networkd.service
PartOf=bluetooth.service

[Service]
Type=notify
ExecStart=/usr/local/sbin/pan

[Install]
WantedBy=bluetooth.target


#!/bin/sh
# Ugly hack to work around #787480
iptables -F
iptables -t nat -F
iptables -t mangle -F
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
exec /usr/local/sbin/bt-pan --systemd --debug server pan

This last file wouldn't be necessary if IPMasquerade= were supported in Debian right now (see #787480).

After the obligatory systemctl daemon-reload and systemctl restart systemd-networkd, you can start your Bluetooth Personal Area Network with systemctl start pan.

Bluetooth PAN Client with Systemd

Configuring the client is also quite easy to do with Systemd.


[Match]
Name=bnep*

[Network]
DHCP=yes


[Unit]
Description=Bluetooth Personal Area Network client

[Service]
Type=notify
ExecStart=/usr/local/sbin/bt-pan --debug --systemd client %I --wait

Now, after the usual configuration reloading, you should be able to connect to a specific Bluetooth access point with:

systemctl start pan@00:11:22:33:44:55

Pairing via the Command Line

Of course, the server and client-side service configuration require a pre-existing pairing between the server and each of its clients.

On the server, start bluetoothctl and issue the following commands:

power on
agent on
default-agent
scan on
scan off
pair XX:XX:XX:XX:XX:XX
trust XX:XX:XX:XX:XX:XX

Once you've set scan mode to on, wait a few seconds until you see the device you're looking for scroll by. Note its device address, and use it for the pair and (optional) trust commands.

On the client, the sequence is essentially the same except that you don't need to issue the trust command. The server needs to trust a client in order to accept NAP profile connections from it without waiting for manual confirmation by the user.

I'm actually not sure if this is the optimal sequence of commands. It might be enough to just pair the client with the server and issue the trust command on the server, but I haven't tried this yet.

Enabling Use of the Bluetooth HID Profile

Essentially the same as above also needs to be done in order to use the Bluetooth HID profile of the Active Star 40 on Linux. However, instead of agent on, you need to issue the command agent KeyboardOnly. This explicitly tells bluetoothctl that you're specifically looking for a HID profile.

Configuring Bluetooth via the Command Line Feels Vague

While I'm very happy that I actually managed to set all of this up, I must admit that the command-line interface to BlueZ feels a bit incomplete and confusing. I initially thought that agents were only for PIN code entry. Now that I've discovered that "agent KeyboardOnly" is used to enable the HID profile, I'm not sure anymore. I'm surprised that I needed to grab a script from a random git repository in order to be able to set up PAN. I remember, with earlier version of BlueZ, that there was a tool called pand that you could use to do all of this from the command-line. I don't seem to see anything like that for BlueZ 5 anymore. Maybe I'm missing something obvious?


The data rate is roughly 120kB/s, which I consider acceptable for such a low power solution. The 1GHz ARM CPU actually feels sufficiently fast for a console/text-mode person like me. I'll rarely be using much more than ssh and emacs on it anyway.

Console fonts and screen dimensions

The default dimensions of the framebuffer on the Raspberry Pi Zero are a bit strange. fbset reports a screen dimension of 656x416 pixels (with, of course, no monitor connected). With a typical console font of 8x16, I got 82 columns and 26 lines.
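The geometry is just the framebuffer divided by the font's cell size:

```python
# 656x416 framebuffer rendered with an 8x16 pixel console font.
width_px, height_px = 656, 416
font_w, font_h = 8, 16
print(width_px // font_w, height_px // font_h)  # 82 26
```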

With a 40-cell braille display, the 82 columns are very inconvenient. Additionally, as a braille user, I would like to be able to view Unicode braille characters alongside the normal character set on the console. Fortunately, Linux supports 512 glyphs, while most console fonts provide only 256. console-setup can load and combine two 256-glyph fonts at once. So I added the following to /etc/default/console-setup to make the text console a lot more friendly to braille users:

SCREEN_WIDTH=80
SCREEN_HEIGHT=25
FONT="Lat15-Terminus16.psf.gz brl-16x8.psf"


You need console-braille installed for brl-16x8.psf to be available.

Further Projects

There's a 3.5mm audio jack inside the braille display as well. Unfortunately, there are no converters from Mini-HDMI to 3.5mm audio that I know of. It would be very nice to be able to use the sound card that is already built into the Raspberry Pi Zero, but, unfortunately, this doesn't seem possible at the moment. Alternatively, I'm looking at using a Micro USB OTG hub and an additional USB audio adapter to get sound from the Raspberry Pi Zero to the braille display's speakers. Unfortunately, the two USB audio adapters I've tried so far have run hot for some unknown reason. So I have to find some other chipset to see if the problem goes away.

A little nuisance, currently, is that you need to manually power off the Raspberry, wait a few seconds, and then power down the braille display. Turning the braille display off cuts power delivery via the internal USB port. If this is accidentally done too soon then the Raspberry Pi Zero is shut down ungracefully (which is probably not the best way to do it). We're looking into connecting a small, buffering battery to the GPIO pins of the rpi, and into notifying the rpi when external power has dropped. A graceful, software-initiated shutdown can then be performed. You can think of it as being like a mini UPS for Micro USB.

The image

If you are a happy owner of a Handy Tech Active Star 40 and would like to do something similar, I am happy to share my current (Raspbian Stretch based) image. In fact, if there is enough interest by other blind users, we might even consider putting a kit together that makes it as easy as possible for you to get started. Let me know if this could be of interest to you.


Thanks to Dave Mielke for reviewing the text of this posting.

Thanks to Simon Kainz for making the photos for this article.

And I owe a big thank you to my co-workers at Graz University of Technology who have helped me a lot to bootstrap really quickly into the rpi world.


My first tweet about this topic was just five days ago, and apart from the soundcard not working yet, I feel like the project is already almost complete! By the way, I am editing the final version of this blog posting from my newly created monitorless ARM-based Linux laptop via an ssh connection to my home machine.

Categories: Elsewhere

Francois Marier: Cleaning up obsolete config files on Debian and Ubuntu

Planet Debian - Sat, 11/06/2016 - 23:40

As part of regular operating system hygiene, I run a cron job which updates package metadata and looks for obsolete packages and configuration files.

While there is already some easily available information on how to purge unneeded or obsolete packages and how to clean up config files properly in maintainer scripts, the guidance on how to delete obsolete config files is not easy to find and somewhat incomplete.
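A minimal sketch of such a hygiene cron job, assuming a Debian/Ubuntu host with aptitude installed (aptitude's '~o' search pattern matches installed packages that are no longer available from any configured repository). The checks are wrapped in a function so they are easy to call from a cron script; the mail command in the usage comment is just one way to deliver the report:

```shell
#!/bin/sh
# Hygiene checks: obsolete packages and obsolete conffiles.
# Run from cron as root.
hygiene_report() {
  apt-get update -qq
  echo "== Obsolete packages =="
  aptitude search '~o' || true      # '~o': installed but in no repository
  echo "== Obsolete conffiles =="
  dpkg-query -W -f='${Conffiles}\n' | grep ' obsolete$' || true
}

# Example cron usage:
#   hygiene_report | mail -s "apt hygiene $(hostname)" root
```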

These are the obsolete conffiles I started with:

$ dpkg-query -W -f='${Conffiles}\n' | grep 'obsolete$'
 /etc/apparmor.d/abstractions/evince ae2a1e8cf5a7577239e89435a6ceb469 obsolete
 /etc/apparmor.d/tunables/ntpd 5519e4c01535818cb26f2ef9e527f191 obsolete
 /etc/apparmor.d/usr.bin.evince 08a12a7e468e1a70a86555e0070a7167 obsolete
 /etc/apparmor.d/usr.sbin.ntpd a00aa055d1a5feff414bacc89b8c9f6e obsolete
 /etc/bash_completion.d/initramfs-tools 7eeb7184772f3658e7cf446945c096b1 obsolete
 /etc/bash_completion.d/insserv 32975fe14795d6fce1408d5fd22747fd obsolete
 /etc/dbus-1/system.d/com.redhat.NewPrinterNotification.conf 8df3896101328880517f530c11fff877 obsolete
 /etc/dbus-1/system.d/com.redhat.PrinterDriversInstaller.conf d81013f5bfeece9858706aed938e16bb obsolete

To get rid of the /etc/bash_completion.d/ files, I first determined what packages they were registered to:

$ dpkg -S /etc/bash_completion.d/initramfs-tools
initramfs-tools: /etc/bash_completion.d/initramfs-tools
$ dpkg -S /etc/bash_completion.d/insserv
insserv: /etc/bash_completion.d/insserv

and then followed Paul Wise's instructions:

$ rm /etc/bash_completion.d/initramfs-tools /etc/bash_completion.d/insserv
$ apt install --reinstall initramfs-tools insserv
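The remove-and-reinstall dance above can be scripted for all obsolete conffiles at once. This is a hedged sketch, not exactly what I did: review the list before deleting anything, since some conffiles need a purge of the owning package instead of a reinstall. The parsing step is a separate function so it can be checked on sample dpkg-query output:

```shell
# List obsolete conffiles, one path per line.
obsolete_conffiles() {
  # stdin: output of dpkg-query -W -f='${Conffiles}\n'
  # each obsolete entry looks like: " /path md5sum obsolete"
  awk '$NF == "obsolete" {print $1}'
}

# Usage (as root; review the list before acting on it):
#   dpkg-query -W -f='${Conffiles}\n' | obsolete_conffiles |
#   while read -r f; do
#     pkg=$(dpkg -S "$f" | cut -d: -f1)
#     rm "$f" && apt install --reinstall "$pkg"
#   done
```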

For some reason that didn't work for the /etc/dbus-1/system.d/ files and I had to purge and reinstall the relevant package:

$ dpkg -S /etc/dbus-1/system.d/com.redhat.NewPrinterNotification.conf
system-config-printer-common: /etc/dbus-1/system.d/com.redhat.NewPrinterNotification.conf
$ dpkg -S /etc/dbus-1/system.d/com.redhat.PrinterDriversInstaller.conf
system-config-printer-common: /etc/dbus-1/system.d/com.redhat.PrinterDriversInstaller.conf
$ apt purge system-config-printer-common
$ apt install system-config-printer

The files in /etc/apparmor.d/ were even more complicated to deal with because purging the packages that they come from didn't help:

$ dpkg -S /etc/apparmor.d/abstractions/evince
evince: /etc/apparmor.d/abstractions/evince
$ apt purge evince
$ dpkg-query -W -f='${Conffiles}\n' | grep 'obsolete$'
 /etc/apparmor.d/abstractions/evince ae2a1e8cf5a7577239e89435a6ceb469 obsolete
 /etc/apparmor.d/usr.bin.evince 08a12a7e468e1a70a86555e0070a7167 obsolete

I was however able to get rid of them by also purging the apparmor profile packages that are installed on my machine:

$ apt purge apparmor-profiles apparmor-profiles-extra evince ntp
$ apt install apparmor-profiles apparmor-profiles-extra evince ntp

Not sure why I had to do this, but I suspect that these files used to be shipped by one of the apparmor packages, then eventually migrated to the evince and ntp packages directly, and dpkg got confused.

If you're in a similar circumstance, you may want to search for the file you're trying to get rid of on Google; you might end up on http://apt-browse.org/, which could lead you to the old package that used to own this file.

Categories: Elsewhere

Simon Désaulniers: [GSOC] Week 1 - Report

Planet Debian - Sat, 11/06/2016 - 19:06

I have been working on writing a serializable structure for remote filtering of values on the distributed hash table OpenDHT. This structure is called Query.

What’s done

The implementation of the base design, along with other changes, has been made. You can see the evolution on the matter here;

The changes allow creating a Query with an SQL-ish statement like the following:

Query q("SELECT * WHERE id=5");

You can then use this query like so

get(hash, getcb, donecb, filter, Query("SELECT * WHERE id=5"));

I verified the working state of the code with dhtnode. I have also done some tests using our Python benchmark scripts.

What’s next
  • Value pagination;
  • Optimization of put operations by querying for ids before the put, hence avoiding potentially useless traffic.

The Query is the key part of optimizing my initial work on data persistence on the DHT. It will enhance the DHT in more than one aspect. I have to point out that it would not have been possible to do this before the major refactoring we introduced in 0.6.0.

Categories: Elsewhere

Shirish Agarwal: The road to debconf 2016, tourism and arts.

Planet Debian - Sat, 11/06/2016 - 08:55

A longish blog post; please bear with me, as a second part of this blog post will be published a few days from now. My fixed visa finally arrived, yeah! But this story doesn't start here, it starts about a year back. While I have been contributing to Debian in my free time over the years, and sometimes on paid time as well, I had never thought of going overseas: from the experiences of friends and relatives I knew it isn't easy to get all the permissions and paperwork done, to say the least (bureaucracy at work). But last year, when Debconf 15 was being launched, two or three friends of mine, currently living in Germany while doing their Ph.D.s in some computer/web field, goaded me to apply. The first few times I gave some standard excuses, but when they kept at it for a while, just to shut them up I applied to the debconf team for food, accommodation and travel sponsorship.

I didn’t have high hopes, as there obviously are many more talented peers around me who understand FOSS and Debian at a much more fundamental, philosophical as well as technical level than I do. Much to my surprise though, about a month later (around two or three weeks before the event was to take place) I got the bursary/sponsorship for food, accommodation as well as travel. I was unsure that the remaining time was enough to get a visa, hence I declined that time around.

That whole episode gave me the confidence that my application might be accepted if I applied again. Using the previous year's understanding, I decided to give it a shot again, as this would also let me get a feel for visa bureaucracy as well as gain some novice understanding of the factors that go into choosing a flight; believe me, the latter part proved to be pretty confusing. While the visa business seems easy, and the form at least is easy in what it asks, the processing itself can be troublesome. It took the better part of a month to get the visa I needed. The in-between time can be a bit stressful, as you have committed funds for travel, i.e. the airline tickets, and are in limbo, not knowing whether your tickets would be refunded if the visa were cancelled for some reason. A little history is needed, and hopefully it is helpful for anybody applying for a short-stay visa to South Africa.

Exactly a month and a day back, I had applied for the visa. I had applied for 17 days, as the flights within my budget were for those dates only. The visa I got was for 10 days only, ending in the middle of the conference. I tried many avenues to get information and was told that I needed to write a correction letter sharing the correct time period and dates in BOLD, with a heavier weight/point, which is what I did and gave to the VFS office without any further payment on my part. It took a bit more time than the first time around, but the consulate co-operated. I can understand the oversight, as they probably get many visa requests along with special and urgent ones, so such occurrences can happen. I am and was happy that there was a recourse rather than starting from scratch, which probably would have made me more anxious after the first experience.

Apparently I was lucky that I had booked with Qatar Airways, as I later came to know that there are other airlines which don't refund your money even after a visa rejection. I got to know this pretty late in the game, otherwise I wouldn't have been as stressed. As I had committed, I knew I had to go the whole hog, and whatever barriers there were, I have overcome them, at least as far as the visa part of the process is concerned.

Now, after a few days, I will probably start to worry about the actual travel: part nervousness, part excitement. Nervousness because it will be an alien land, and I am obese, so traveling economy on the 787-8 and 777-300ER will be tricky. The 787-8 will probably be a rough ride, as it's a 9 hr. journey and the seats are a mere 18″ wide: the cattle class, as one of our esteemed politicians put it. The blame for this lies with the Boeing 787 rather than anybody else. Hopefully, if there is a next time, I will make better choices.

Anyway, I have selected an aisle seat so that I will be able to walk every hour or so to keep the circulation going in my legs, as leg room in economy is not much, going by my domestic air travel experiences. If I survive the travel, I will see South Africa, try to get some free time to explore it, and try to figure out how they manage to get one and a half times the tourists in a year that we get, even though we are bigger (area-wise) than South Africa. I did find something positive for us as well; it's not all doldrums all around.

Now, I had been thinking about whether I know any South African music or movies. I had explored the djembe while growing up in my teens, but other than that, not much. The only music I have heard is Harry Belafonte. So I hope to bring some Indian movies and music with me, so that people who have not explored Bollywood or Indian classical music can sample some of it; of course, it will be a pale imitation of what the ‘Sawai Gandharva Music Festival‘, for instance, offers. I also hope to hear and collect some music and movies to learn more about South Africa.

Filed under: Miscellenous Tagged: #bollywood, #Debconf Germany 2015, #Debconf South Africa 2016, #Debconf15, #Debconf16, #djembe, #South-African consulate, music, tourism, visa
Categories: Elsewhere

Hideki Yamane: Which compression do Debian packages use?

Planet Debian - Sat, 11/06/2016 - 05:50
gzip: 4576
bzip2: 54
xz: 46250
none: 9
Over 90% of packages use xz. Packages that still use bzip2 should migrate to xz.
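A sketch of how one might reproduce such numbers: a .deb is an ar archive whose data.tar member name reveals the compression (data.tar.xz, data.tar.gz, data.tar.bz2, or plain data.tar). This assumes .deb files are on hand, e.g. in /var/cache/apt/archives, and that the ar tool from binutils is installed; the counting step is a separate function so it can be checked on sample input:

```shell
# Count compression methods from a stream of data.tar member names.
count_compressions() {
  # stdin: one member name per line, e.g. "data.tar.xz" or "data.tar"
  sed -n 's/^data\.tar\.\(.*\)$/\1/p; s/^data\.tar$/none/p' |
    sort | uniq -c | sort -rn
}

# Usage on locally cached packages:
#   for deb in /var/cache/apt/archives/*.deb; do
#     ar t "$deb" | grep '^data\.tar'
#   done | count_compressions
```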
Categories: Elsewhere


Subscribe to jfhovinne aggregator