Planet Debian

Planet Debian - http://planet.debian.org/

Sven Hoexter: Free SSL/TLS snakeoil from wosign.com

Mon, 22/06/2015 - 21:39

I've been a proponent of CaCert.org for a long time and I'm still using those certificates in some places, but lately I gave in and searched for something that validates even on iOS. It's not that I strictly need it; it's more of a favour to make life easier for friends and family.

I turned down startssl.com because I always manage to somehow lose the client certificate for the portal login. Plus I failed to generate several certificates for subdomains within the primary domain. I want to use different keys on purpose, so SANs are not helpful, and neither are wildcard certs, for which you have to pay anyway. Another point against a wildcard cert from startssl is that I'd like to refrain from sending in my scanned papers for verification.

On a side note, I'm also not a fan of extracting random email addresses from whois to send validation codes to. I just don't see why the abuse desk of a registrar should be able to authorize DV certificates for a domain under my control.

So I decided to pay the self-proclaimed leader of the snakeoil industry (Comodo) via cheapsslshop.com. That came to 12 USD for a 3 year Comodo DV certificate. Fair enough for the mail setup I share with a few friends, and the cheapest one I could find at that time. Actually no hassle with logins or verification. It looks a bit like a scam, but the payment is done via 2checkout if I remember correctly, and the certificate got issued via a voucher code by Comodo directly. Drawback: credit card payment.

Now while we're all waiting for letsencrypt.org, I learned about the free offer of wosign.com. The CA is issued by the StartSSL Root CA, so technically we're very close to step one. Besides that, I only had to turn off uBlock Origin, and the rest of the JavaScript worked fine with Iceweasel once I clicked on the validity time selection checkbox. They offer the certificate for up to 3 years, you can paste your own CSR, and you can add up to 100 SANs. The only drawbacks are that it took them about 12 hours to issue the certificate and that the mails look a hell of a lot like spam if you send them through SpamAssassin.
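If you go the paste-your-own-CSR route, generating a fresh key and CSR locally only takes a moment. A minimal sketch with openssl (host name and file names are examples, adjust as needed):

# create a fresh 2048 bit RSA key plus a CSR to paste into the CA's web form
openssl req -new -newkey rsa:2048 -nodes \
  -keyout mail.example.org.key -out mail.example.org.csr \
  -subj "/CN=mail.example.org"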

This now provides a free and validating certificate for sven.stormbind.net, in case you'd like to check out the chain. The validation chain is even one certificate shorter than the chain for the certificate I bought from Comodo. So in case anyone else is waiting for letsencrypt to start, you might want to check out wosign until Mozilla et al are ready.
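To inspect the chain yourself, something along these lines will print it (assuming the certificate is served on port 443):

openssl s_client -connect sven.stormbind.net:443 -showcerts < /dev/null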

From my point of view the only reason to pay one of the major CAs is for the service of running a reliable OCSP system. I also pointed that out here. It's more and more about the service you buy and no longer just money for a few ones and zeroes.

Categories: Elsewhere

Niels Thykier: Introducing dak auto-decruft

Mon, 22/06/2015 - 15:11

Debian now has over 22 000 source packages and 45 500 binary packages. To counter that, the FTP masters and I have created a dak tool to automatically remove packages from unstable! This is also much more efficient than only removing them from testing! :)

 

The primary goal of the auto-decrufter is to remove a regular manual work flow from the FTP masters.  Namely, the removal of the common cases of cruft, such as “Not Built from Source” (NBS) and “Newer Version In Unstable” (NVIU).  With the auto-decrufter in place, such cruft will be automatically removed when there are no reverse dependencies left on any architecture and nothing Build-Depends on it any more.

Despite the implication in the “opening” of this post, this will in fact not substantially reduce the number of packages in unstable. :) Nevertheless, it is still very useful for the FTP masters, the release team and Debian packaging contributors.

The reason why the release team benefits greatly from this tool is that almost every transition generates one piece of “NBS” cruft. Said piece of cruft currently must be removed from unstable before the transition can progress into its final phase. Until recently that removal was 100% manual and done by the FTP masters.

The restrictions on the auto-decrufter mean that we will still need manual decrufts. Notably, the release team will often complete transitions even when some reverse dependencies remain on non-release architectures. Nevertheless, it is definitely an improvement.
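For the curious, the usage looks roughly like this (a sketch based only on the options mentioned in this post; the package name is a placeholder):

# preview what the auto-decrufter would remove, without touching the archive
dak auto-decruft --dry-run
# the manual counterpart for a single removal, first as a no-op check
dak rm --no-action some-package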

 

Omelettes and eggs: As the old saying goes, “You cannot make an omelette without breaking eggs”. Less so when the only “test suite” is production. So here are some of the “broken eggs” caused by the implementation of the auto-decrufter:

  • A window of about 30 minutes during which “dak rm” (without --no-action) would unconditionally crash.
  • A broken dinstall when “dak auto-decruft” was run without “--dry-run” for the first time.
  • A boolean condition inversion causing removals to remove the “override” for partial removals (and retain it for “full” removals).
    • As a side effect, this broke Britney a couple of times because dak now produced some “unexpected” Packages files for unstable.
  • Not to mention the “single digit bug closure” bug.

Of these, the boolean inversion was no doubt the worst. By the time we had it fixed, at least 50 (unique) binary packages had lost their “override”. Fortunately, it was possible to locate these issues using a database query and they have now been fixed.

Before I write any more non-trivial patches for dak, I will probably invest some time setting up a basic test framework for dak first.

 


Filed under: Debian, Release-Team
Categories: Elsewhere

Lunar: Reproducible builds: week 8 in Stretch cycle

Mon, 22/06/2015 - 14:48

What happened about the reproducible builds effort this week:

Toolchain fixes

Andreas Henriksson has improved Johannes Schauer's initial patch for pbuilder, adding support for build profiles.

Packages fixed

The following 12 packages became reproducible due to changes in their build dependencies: collabtive, eric, file-rc, form-history-control, freehep-chartableconverter-plugin, jenkins-winstone, junit, librelaxng-datatype-java, libwildmagic, lightbeam, puppet-lint, tabble.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues but not all of them:

Patches submitted which have not made their way to the archive yet:

  • #788747 on 0xffff by Dhole: allow embedded timestamp to be set externally and set it to the time of the debian/changelog.
  • #788752 on analog by Dhole: allow embedded timestamp to be set externally and set it to the time of the debian/changelog.
  • #788757 on jacktrip by akira: remove $datetime from the documentation footer.
  • #788868 on apophenia by akira: remove $date from the documentation footer.
  • #788920 on orthanc by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
  • #788955 on rivet by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
  • #789040 on liblo by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
  • #789049 on mpqc by akira: remove $datetime from the documentation footer.
  • #789071 on libxkbcommon by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
  • #789073 on libxr by akira: remove $datetime from the documentation footer.
  • #789076 on lvtk by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
  • #789087 on lmdb by akira: pass HTML_TIMESTAMP=NO to Doxygen.
  • #789184 on openigtlink by akira: remove $datetime from the documentation footer.
  • #789264 on openscenegraph by akira: pass HTML_TIMESTAMP=NO to Doxygen.
  • #789308 on trigger-rally-data by Mattia Rizzolo: call dh_fixperms even when overriding dh_fixperms.
  • #789396 on libsidplayfp by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
  • #789399 on psocksxx by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
  • #789405 on qdjango by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
  • #789406 on qof by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
  • #789428 on qsapecng by akira: pass HTML_TIMESTAMP=NO to Doxygen.
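Most of the Doxygen-related patches above boil down to one setting. A sketch of the change (HTML_TIMESTAMP is a standard Doxygen option; appending works because later assignments win, and the configuration file name varies per package):

# disable the embedded generation date in Doxygen's HTML footers
echo "HTML_TIMESTAMP = NO" >> Doxyfile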

reproducible.debian.net

Bugs with the ftbfs usertag are now visible on the bug graphs. This explains the recent spike. (h01ger)

Andreas Beckmann suggested a way to test building packages using the “funny paths” that one can get when they contain the full Debian package version string.

debbindiff development

Lunar started an important refactoring, introducing abstractions for containers and files in order to make file type identification more flexible, enable fuzzy matching, and allow parallel processing.

Documentation update

Ximin Luo detailed the proposal to standardize environment variables that pass a reference source date to tools that need one (e.g. documentation generators).

Package reviews

41 obsolete reviews have been removed, 168 added and 36 updated this week.

Some more issues affecting packages failing to build from source have been identified.

Meetings

Minutes have been posted for the Tuesday June 16th meeting.

The next meeting is scheduled for Tuesday June 23rd at 17:00 UTC.

Presentations

Lunar presented the project in French during Pas Sage en Seine in Paris. Video and slides are available.

Categories: Elsewhere

Enrico Zini: debtags-rewrite-python3

Sun, 21/06/2015 - 18:04
debtags rewritten in python3

In my long quest towards closing #540218, I have uploaded a new libept to experimental. Then I tried to build debtags in a sid+experimental chroot, and the result runs, but has libc's free() print existential warnings about whatevers.

At a quick glance, there are now things around like a new libapt, gcc 5 with ABI changes, and who knows what else. I estimated how much time it would take me to debug something like that, and used that time instead to rewrite debtags in python3. It took 8 hours: 5 of pleasant programming and the usual tax of another 3 of utter frustration packaging the results. I guess I came out ahead compared to the risk of spending an unspecified number of hours of pure frustration.

So from now on debtags is going to be a pure python3 package, depending only on python3-apt and python3-debian. 700 lines of Python instead of several C++ files built on 4 layers of libraries. Hopefully, this is the last of the big headaches I get from hacking on this package. Also, one less package using libept.

Categories: Elsewhere

Steve Kemp: We're all about storing objects

Sun, 21/06/2015 - 02:00

Recently I've been experimenting with camlistore, which is yet another object storage system.

Camlistore gains immediate points because it is written in Go, and is a project initiated by Brad Fitzpatrick, the creator of Perlbal, memcached, and of course LiveJournal.

Camlistore is designed exactly how I'd like an object storage system to be - each server allows you to:

  • Upload a chunk of data, getting an ID in return.
  • Download a chunk of data, by ID.
  • Iterate over all available IDs.

It should be noted that more is possible - there's a pretty web UI, for example - but I'm simplifying. Do your own homework :)

With those primitives you can have a client library upload a file once; then, in the background, a bunch of dumb servers can decide amongst themselves "Hey, I have data with ID:33333 - Do you?". If nobody else does, they can upload a second copy.
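If you squint, those primitives map naturally onto plain HTTP. A purely illustrative sketch (the host and endpoint names here are invented for this example, not Camlistore's actual wire protocol):

# upload a chunk of data; the server would answer with its content-addressed ID
curl -T chunk.dat http://storage-node:8080/upload
# download a chunk of data, by ID
curl http://storage-node:8080/blob/ID
# iterate over all available IDs
curl http://storage-node:8080/enumerate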

In short this kind of system allows the replication to be decoupled from the storage. The obvious risk is obvious though: if you upload a file the chunks might live on a host that dies 20 minutes later, just before the content was replicated. That risk is minimal, but valid.

There is also the risk that sudden rashes of uploads leave the system consuming all the internal bandwidth, constantly comparing chunk-IDs to see whether data that has already been copied numerous times needs replicating again, or trying to play "catch-up" if the new content is larger than the replica bandwidth. I guess it should be possible to detect those conditions, but they're things to be concerned about.

Anyway, the biggest downside with camlistore is the lack of documentation about rebalancing, replication, or anything other than simple single-server setups. Some people have blogged about it, and I got it working between two nodes, but I didn't feel confident it was as robust as I wanted it to be.

I have a strong belief that Camlistore will become a project of joy and wonder, but it isn't quite there yet. I certainly don't want to stop watching it :)

On to the more personal .. I'm all about the object storage these days. Right now most of my objects are packed in a collection of boxes. On the 6th of next month a shipping container will come pick them up and take them to Finland.

For pretty much 20 days in a row we've been taking things to the skip, or the local charity-shops. I expect that by the time we've relocated the amount of possessions we'll maintain will be at least a fifth of our current levels.

We're working on the general rule of thumb: "If it is possible to replace an item we will not take it". That means chess-sets, mirrors, etc, will not be carried. DVDs, for example, have been slashed brutally such that we're only transferring 40 out of a starting collection of 500+.

Only personal, one-off, unique, or "significant" items will be transported. This includes things like personal photographs, family items, and similar. Clothes? Well I need to take one jacket, but more can be bought. The only place I put my foot down was books. Yes I'm a kindle-user these days, but I spent many years tracking down some rare volumes, and though it would be possible to repeat that effort I just don't want to.

I've also decided that I'm carrying my complete toolbox. Some of the tools I took with me when I left home at 18 have stayed with me for the past 20+ years. I don't need this specific crowbar, or axe, but I'm damned if I'm going to lose them now. So they stay. Object storage - some objects are more important than they should be!

Categories: Elsewhere

Joachim Breitner: Running circle-packing in the Browser, now using GHCJS

Sat, 20/06/2015 - 22:50

Quite a while ago, I wrote a small Haskell library called circle-packing to pack circles in a tight arrangement. Back then, I used the Haskell to JavaScript compiler fay to create a pretty online demo of that library, and shortly after, I created the identical demo using haste (another Haskell to JavaScript compiler).

The main competitor of these two compilers, and the most promising one, is GHCJS. Back then, it was too annoying to install. But after two years, things have changed, and it only takes a few simple commands to get GHCJS running, so I finally created the circle packing demo in a GHCJS variant.

Quick summary: Cabal integration is very good (like haste, but unlike fay), interfacing with JavaScript is nice and easy (like fay, but unlike haste), and a quick check seems to indicate that it is faster than either of the two. I should note that I did not update the other two demos, so they represent the state of fay and haste back then, respectively.

With GHCJS now available at my fingertips, maybe I will produce some more Haskell to be run in your browser. For example, I could port FrakView, a GUI program to render, explore and explain iterated function systems, from GTK to HTML.

Categories: Elsewhere

Russell Coker: BTRFS Status June 2015

Sat, 20/06/2015 - 06:47

The version of btrfs-tools in Debian/Jessie is incapable of creating a filesystem that can be mounted by the kernel in Debian/Wheezy. If you want to use a BTRFS filesystem on Jessie and Wheezy (which isn’t uncommon with removable devices) the only options are to use the Wheezy version of mkfs.btrfs or to use a Jessie kernel on Wheezy. I recently got bitten by this issue when I created a BTRFS filesystem on a removable device with a lot of important data (which is why I wanted metadata duplication and checksums) and had to read it on a server running Wheezy. Fortunately KVM in Wheezy works really well so I created a virtual machine to read the disk. Setting up a new KVM isn’t that difficult, but it’s not something I want to do while a client is anxiously waiting for their data.

BTRFS has been working well for me, apart from the Jessie/Wheezy compatibility issue (which was an annoyance but didn't stop me doing what I wanted). I haven't written a BTRFS status report for a while because everything has been OK and there has been nothing exciting to report.

I regularly get errors from the cron jobs that run a balance, supposedly due to running out of free space. I have the cron jobs because of past problems with BTRFS running out of metadata space. In spite of the jobs often failing, the systems keep working, so I'm not too worried at the moment. I think this is a bug, but there are many more important bugs.

Linux kernel version 3.19 was the first version to have working support for RAID-5 recovery. This means version 3.19 was the first version to have usable RAID-5 (I think there is no point in even having RAID-5 without recovery). It wouldn't be prudent to trust your important data to a new feature in a filesystem, so at this stage if I needed a very large scratch space then BTRFS RAID-5 might be a viable option, but for anything else I wouldn't use it. BTRFS has still had little performance optimisation; while this doesn't matter much for SSDs and single-disk filesystems, for a RAID-5 of hard drives it would probably hurt a lot. Maybe BTRFS RAID-5 would be good for a scratch array of SSDs. The reports of problems with RAID-5 don't surprise me at all.

I have a BTRFS RAID-1 filesystem on 2*4TB disks which is giving poor performance on metadata; simple operations like “ls -l” on a directory with ~200 subdirectories take many seconds to run. I suspect that part of the problem is that the filesystem is written to by cron jobs, with files accumulating over more than a year. The “btrfs filesystem” command (see btrfs-filesystem(8)) allows defragmenting files and directory trees, but unfortunately it doesn't support recursively defragmenting directories without also defragmenting the files. I really wish there was a way to get BTRFS to put all metadata on SSD and all data on hard drives. Sander suggested the following command on the BTRFS mailing list to defragment directories:

find / -xdev -type d -execdir btrfs filesystem defrag -c {} +

Below is the output of “zfs list -t snapshot” on a server I run, it’s often handy to know how much space is used by snapshots, but unfortunately BTRFS has no support for this.

NAME                        USED  AVAIL  REFER  MOUNTPOINT
hetz0/be0-mail@2015-03-10  2.88G      -   387G  -
hetz0/be0-mail@2015-03-11  1.12G      -   388G  -
hetz0/be0-mail@2015-03-12  1.11G      -   388G  -
hetz0/be0-mail@2015-03-13  1.19G      -   388G  -

Hugo pointed out on the BTRFS mailing list that the following command will give the amount of space used for snapshots. $SNAPSHOT is the name of a snapshot and $LASTGEN is the generation number of the previous snapshot you want to compare with.

btrfs subvolume find-new $SNAPSHOT $LASTGEN | awk '{total = total + $7}END{print total}'

One upside of the BTRFS implementation in this regard is that the above btrfs command, without being piped through awk, shows you the names of files that are being written and the amounts of data written to them. By casually examining this output I discovered that the most-written files in my home directory were under the “.cache” directory (which wasn't exactly a surprise).

Now I am configuring workstations with a separate subvolume for ~/.cache for the main user. This means that ~/.cache changes don’t get stored in the hourly snapshots and less disk space is used for snapshots.
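A sketch of that setup, assuming the home directory lives on the BTRFS filesystem and the user is called "user" (move the old cache aside first, while the user is logged out):

# replace the plain directory with a nested subvolume; snapshots of the
# parent subvolume will then no longer include ~/.cache
mv /home/user/.cache /home/user/.cache.old
btrfs subvolume create /home/user/.cache
chown user:user /home/user/.cache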

Conclusion

My observation is that things are going quite well with BTRFS. It's more than 6 months since I had a noteworthy problem, which is pretty good for a filesystem that's still under active development. But I still run many systems which could benefit from the data integrity features of ZFS and BTRFS, yet don't have the resources to run ZFS and need more reliability than I can expect from an unattended BTRFS system.

At this time the only servers I run with BTRFS are located within a reasonable drive from my home (not the servers in Germany and the US) and are easily accessible (not the embedded systems). ZFS is working well for some of the servers in Germany. Eventually I'll probably run ZFS on all the hosted servers in Germany and the US; I expect that will happen before I'm comfortable running BTRFS on such systems. For the embedded systems I will just take the risk of data loss/corruption for the next few years.

Related posts:

  1. BTRFS Status Dec 2014 My last problem with BTRFS was in August [1]. BTRFS...
  2. BTRFS Status March 2014 I’m currently using BTRFS on most systems that I can...
  3. BTRFS Status July 2014 My last BTRFS status report was in April [1], it...
Categories: Elsewhere

Norbert Preining: Localizing a WordPress Blog

Sat, 20/06/2015 - 02:09

There are many translation plugins available for WordPress, and most of them deal with translations of articles. This might be of interest for others, but not for me. If you have a blog with visitors from various language backgrounds, because you are living abroad or writing in several languages, you might feel tempted to provide visitors with a localized “environment”, meaning that as much as possible is translated into the native language of the visitor, without actually translating content - but allowing you to.

In my case I am writing mostly in English and Japanese, but sometimes (in former times) in Italian and now and then in my mother tongue, German. Visitors to my site are from all over the world, but at least for Japanese visitors I wanted to provide a localized environment. This post describes how to get as much as possible of your blog translated, and by that I mean not the actual articles, because that is the easy part and most translation plugins handle it fine, but the things around the articles (categories, tags, headers, …).

Starting point and aims

My starting point was a blog where I had already added language as an extra taxonomy, and had tagged all articles with a language. But I didn't have any other translation plugin installed or in use. Furthermore, I am using a child theme of the main theme in use (that is always a good idea anyway!). And of course, the theme you are using should be prepared for translation, that is, most literal strings in the theme source code should be wrapped in __( ... ) or _e( ... ) calls. And by the way, if you don't have the language taxonomy, don't worry, it will be added automatically.

One more thing: the following descriptions are not for the very beginner. I expect a certain fluency with WordPress (for example, knowing where themes and plugins keep their files), and PHP programming experience is needed for some of the steps.

With this starting point my aims were quite clear:

  • allow for translation of articles
  • translate as much as possible of the surroundings
  • auto-selection of language either depending on article or on browser language of visitor
  • by default show all articles independent of selected language
  • if possible, keep the database as clean as possible

Translation plugins

There is a huge bunch of translation plugins, localization plugins, and internationalization plugins out there, and it is hard to select one. I don't claim that what I propose here is the optimal solution, just one that I was pointed at by a colleague, namely utilizing the xili-language plugin.

Installation and initial setup

Not much to say here, just follow the usual procedure (search, install, activate), followed by the initial setup of xili-language. If you haven’t had a language taxonomy by now, you can add languages from the preference page of xili-language, first tab. After having added some languages you should have something similar to the above screen shot. Having defined your languages, you can assign a language to your articles, but for now nothing has actually changed on the blog pages.

As I already mentioned, I assume that you are using a child theme. In this case you should consult the fourth tab of the xili-language settings page, called Managing language files, where on the right you should set things up so that translations in the child theme override the ones in the main theme, see the screen shot on the right. I just mention here that there is another xili plugin, xili-dictionary, that can do a lot of things for you when it comes to translation, but I couldn't figure out its operation mode, so I uninstalled that plugin again and used normal .po/.mo files as described in the next section.

Adding translations – po and mo files

Translations are handled in the normal (at least for the Unix world) gettext format. Matthias wrote about this in his blog. In principle you have to:

  • create a directory languages in your child theme folder
  • create there a .po file named local-LL.po or local-LL_DD.po, where LL and LL_DD match the values in the ISO Names field of the list of defined languages (see above)
  • convert the .po files to .mo files using msgfmt local-LL.po -o local-LL.mo

The contents of the po files are described in Matthias' blog, and in the following, when I say add a translation, I mean adding a stanza

msgid "some string"
msgstr "translation of some string"

to the .po file, and not forgetting to recompile it to a .mo file.
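Putting the pieces together, a minimal round trip might look like this (the child theme directory name is just an example; the header stanza keeps msgfmt from complaining):

cd wp-content/themes/my-child-theme/languages
cat > local-ja.po <<'EOF'
msgid ""
msgstr "Content-Type: text/plain; charset=UTF-8\n"

msgid "some string"
msgstr "translation of some string"
EOF
msgfmt local-ja.po -o local-ja.mo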

So let us go through a list of changes I made to translate various pieces of the blog appearance:

Translation of categories

This is the easiest part: simply throw the names of your categories into the respective local-LL_DD.po file, and be done. In my case I used local-ja.po, which besides other categories contains stanzas like:

msgid "Travel"
msgstr "旅行"

Translation of widget titles

In most cases the widget titles are already automatically translated, if the plugin/widget author cared for it, meaning that he called the widget_title filter on the title. If this does not happen, please report it to the widget/plugin author. I have done this for example for the simple links plugin, which I use for various elements of the side bar. The author was extremely responsive and the fix is already in the latest release – big thanks!

Translation of tags

This is a bit of a problem, as the tags appear in various places on my blog: next to the title line and in the footer of each post, as well as in the tag cloud in the side bar.

Furthermore, I want to translate tags instead of having related tag groups as provided by xili tidy tags plugin, so we have to deal with the various appearances of tags one by one:

Tags on the main page – shown by the theme

This is the easier part: in my case I already had a customized content.php and content-single.php in my child theme folder. If not, you need to copy the ones from the parent theme and change them to translate tags. Since this depends on the specific theme, I cannot give detailed advice, but if you see something like:

$tags_list = get_the_tag_list( '', __( ', ', 'mistylake' ) );

(here the get_the_tag_list is the important part), then you can replace this by the following code:

$posttags = get_the_tags();
$first = 1;
$tag_list = '';
if ($posttags) {
    foreach ($posttags as $tag) {
        if ($first == 1) {
            $first = 0;
        } else {
            $tag_list = $tag_list . __( ', ', 'mistylake' );
        }
        $tag_list = $tag_list . '<a href="' . esc_url( home_url( '/tag/' . $tag->slug ) ) . '">' . __( $tag->name, 'mistylake' ) . '</a>';
    }
}

(there are for sure simpler ways …) This code loops over the tags and translates them using the __ function. Note that the second parameter must be the text domain of the parent theme.

If you have done this right and the web site is still running (I recommend testing it on a test installation – I got white pages many times due to PHP programming errors), and of course if you have actual translations available and are looking at a localized version of the web site, then the list of tags as shown by your theme should be translated.

Tag cloud widget

This one is tricky: the tag cloud widget comes with WordPress by default, but doesn't translate the tags. I tried a few variants (e.g. creating a new widget as an extension of the original tag cloud widget, only changing the respective functions), but that didn't work out at all. I finally resorted to a trick: reading the code of the original widget, I saw that it applies the tag_cloud_sort filter to the array of tags. That allows us to hook into the tag cloud creation and translate the tags.

You have to add the following code to your child theme’s functions.php:

function translate_instead_of_sort($tags) {
    foreach ( (array) $tags as $tag ) {
        $tag->name = __( $tag->name, 'mistylake' );
    }
    return $tags;
}
add_action('tag_cloud_sort', 'translate_instead_of_sort');

(again, don't forget to change the text domain in the __(.., ..) call!) There might be some more things one could do, like changing the priority so the filter runs after the sorting, or sorting directly in the filter, but I haven't played around with that. Using the above code and translating several of the tags, the tag cloud now looks like the screenshot on the right – I know, it could use some tweaking. Also, the untranslated tags are now all sorted before the translated ones, something one could probably address with the priority of the filter.

Having done the above, my blog page when Japanese is selected is now mostly in Japanese, with of course the exception of the actual articles, which are in a variety of languages.

Open problems

There are a few things I haven't managed to translate so far, and they are mostly, but not only, related to the Jetpack plugin:

  • translation of the calendar – it is strange that although this is a standard widget of WordPress, the translation somehow does not work there
  • translation of the meta text entries (Log in, RSS feed, …) – interestingly, even adding the translation of these strings did not get them translated
  • translation of the simple links text fields – I haven't invested time here yet
  • translation of (Jetpack) subscribe to this blog widget

I have a few ideas how to tackle this: with Jetpack the biggest problem seems to be that all the strings are translated in a different text domain. So one should be able to add some code to functions.php to override/add translations in the jetpack text domain. But somehow it didn't work out in my case. The same goes for things that are in the WordPress core and use the translation functions without a text domain – I guess the translation function will then use the main WordPress translation files/text domain.

Conclusion

The good thing about the xili-language plugin is that it does not change the actual posts (some plugins save the translations in the post text), and it is otherwise not too intrusive IMHO. Still, it falls short of allowing one to translate various parts of the blog, including the widget areas.

I am not sure whether there are better plugins for this usage scenario, I would be surprised if not, but all the plugins I have seen were doing a bit too much on the article translation side and not enough on the translation of the surroundings side.

In any case, I would like to see more separation between the functionality of localization (translating the interface) and translation (translating the content). But at the moment I don’t have enough impetus to write my own plugin for this.

If you have any suggestions for improvement, please let me know!

Enjoy.

Categories: Elsewhere

Bálint Réczey: Debian is preparing the transition to FFmpeg!

Thu, 18/06/2015 - 13:56

Ending an era of shipping Libav as the default, the Debian Multimedia Team is working out the last details of switching to FFmpeg. If you would like to read more about the reasons, please read the rationale behind the change on the dedicated wiki page. If you feel like it, be welcome to test the new ffmpeg packages or join the discussion starting here. (Warning, the thread is loooong!)

 

Categories: Elsewhere

DebConf team: Striving for more diversity at DebConf15 (Posted by DebConf Team)

Wed, 17/06/2015 - 12:09

DebConf is not just for Debian Developers, we welcome all members of our community active in different areas, like translation, documentation, artwork, testing, specialized derivatives, and many other ways that help make Debian better.

In fact, we would like to open DebConf to an even broader audience, and we strongly believe that more diversity at DebConf and in the Debian community will significantly help us towards our goal of becoming the Universal Operating System.

The DebConf team is proud to announce that we have started designing a specific diversity sponsorship programme to attract people to DebConf that would otherwise not consider attending our conference or not be able to join us.

In order to apply for this special sponsorship, please write an email to outreach@debian.org before July 6th about your interest in Debian and your sponsorship needs (accommodation, travel). Please include a sentence or two about why you are applying for a Diversity Sponsorship. You can also nominate people you think should be considered for this sponsorship programme.

Please feel free to send this announcement on to groups or individuals that could be interested in this sponsorship programme.

And we’re also looking forward to your feedback. We’re just getting started and you can help shape these efforts.

Categories: Elsewhere

C.J. Adams-Collier: Trip Report: UW signing-party

Wed, 17/06/2015 - 01:28

Dear Debian Users,

I met last night with a friend from many years ago and a number of students of cryptography. I was disappointed to see the prevalence of black hat, anti-government hackers at the event. I was hoping that civilized humanity had come to agree that using cryptography for deception, harm to others and plausible deniability is bad, m’kay? When one speaks of the government as “they,” nobody’s going to get anywhere really quick. Let’s take responsibility for the upkeep of the environment in which we find ourselves, please.

Despite what I perceived as a negative focus of the presentation, it was good to meet with peers in the Seattle area. I was very pleasantly surprised to find that better than half of the attendees were not male, that many of the socioeconomic classes of the city were represented, as were those of various ethnic backgrounds. I am really quite proud of the progress of our State University, even if I’m not always in agreement with the content that they’re polluting our kids’ brains with. I guess I should roll up my sleeves and get busy, eh?

V/R,

C.J.

Categories: Elsewhere

Norbert Preining: ptex2pdf release 0.8

Wed, 17/06/2015 - 00:38

I have uploaded ptex2pdf to CTAN. The changes are mainly about how files are searched and how extensions are treated. ptex2pdf now follows the TeX standard, that is:

  • first check if the passed-in filename can be found with libkpathsea
  • if not, try the filename with .tex appended
  • if that also cannot be found, try the filename with .ltx appended.

This also made finding the .dvi file more robust.
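In practice this means you can pass a bare basename and let ptex2pdf do the lookup. A small sketch (mydoc is a placeholder; -u -l selects upLaTeX mode, per the ptex2pdf documentation):

# tries "mydoc" as passed in, then mydoc.tex, then mydoc.ltx
ptex2pdf -u -l mydoc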

The files should be in today's TeX Live, and can be downloaded directly from here: ptex2pdf-0.8.zip, or only the lua script: ptex2pdf.lua.

For more details, see the main description page: /software-projects/ptex2pdf (in Japanese) or see the github project.

Categories: Elsewhere

Julien Danjou: Timezones and Python

Tue, 16/06/2015 - 18:15

Recently, I've been fighting with the never-ending issue of timezones. I never thought I would have plunged into this rabbit hole, but hacking on OpenStack and Gnocchi, I fell into that trap easily, thanks to Python.

“Why you really, really, should never ever deal with timezones”

To get a glimpse of the complexity of timezones, I recommend that you watch Tom Scott's video on the subject. It's fun and it summarizes remarkably well the nightmare that timezones are and why you should stop thinking that you're smart.

The importance of timezones in applications

Once you've heard what Tom says, I think it gets pretty clear that a timestamp without any timezone attached does not give any useful information. It should be considered irrelevant and useless. Without the necessary context given by the timezone, you cannot infer what point in time your application is really referring to.

That means your application should never handle timestamps with no timezone information. It should try to guess, or raise an error, if no timezone is provided in any input.

Of course, you can infer that having no timezone information means UTC. This sounds very handy, but can also be dangerous in certain applications or languages – such as Python, as we'll see.

Indeed, in certain applications, converting timestamps to UTC and losing the timezone information is a terrible idea. Imagine that a user creates a recurring event every Wednesday at 10:00 in their local timezone, say CET. If you convert that to UTC, the event will end up being stored as every Wednesday at 09:00.

Now imagine that the CET timezone switches from UTC+01:00 to UTC+02:00: your application will compute that the event starts at 11:00 CET every Wednesday. Which is wrong, because as the user told you, the event starts at 10:00 CET, whatever the definition of CET is. Not at 11:00 CET. So CET means CET, not necessarily UTC+1.

As for endpoints like REST APIs, something I deal with daily, all timestamps should include timezone information. It's nearly impossible to know what timezone the timestamps are in otherwise: UTC? Server local? User local? No way to know.

Python design & defect

Python comes with a timestamp object named datetime.datetime. It can store date and time precise to the microsecond, and is qualified as timezone "aware" or "unaware", depending on whether it embeds timezone information or not.

To build such an object based on the current time, one can use datetime.datetime.utcnow() to retrieve the date and time for the UTC timezone, and datetime.datetime.now() to retrieve the date and time for the current timezone, whatever it is.

>>> import datetime
>>> datetime.datetime.utcnow()
datetime.datetime(2015, 6, 15, 13, 24, 48, 27631)
>>> datetime.datetime.now()
datetime.datetime(2015, 6, 15, 15, 24, 52, 276161)


As you can notice, none of these results contains timezone information. Indeed, the Python datetime API always returns unaware datetime objects, which is very unfortunate. As soon as you get one of these objects, there is no way to know what the timezone is, therefore these objects are pretty "useless" on their own.

Armin Ronacher proposes that an application always consider unaware datetime objects from Python to be UTC. As we just saw, that statement cannot be true for objects returned by datetime.datetime.now(), so I would not advise doing so. datetime objects with no timezone should be considered a "bug" in the application.

Recommendations

My recommendation list comes down to:

  1. Always use aware datetime object, i.e. with timezone information. That makes sure you can compare them directly (aware and unaware datetime objects are not comparable) and will return them correctly to users. Leverage pytz to have timezone objects.
  2. Use ISO 8601 as input and output string format. Use datetime.datetime.isoformat() to return timestamps as strings formatted that way, which includes the timezone information.

In Python, that's equivalent to having:

>>> import datetime
>>> import pytz
>>> def utcnow():
...     return datetime.datetime.now(tz=pytz.utc)
>>> utcnow()
datetime.datetime(2015, 6, 15, 14, 45, 19, 182703, tzinfo=<UTC>)
>>> utcnow().isoformat()
'2015-06-15T14:45:21.982600+00:00'


If you need to parse strings containing ISO 8601 formatted timestamps, you can rely on the iso8601 module, which returns timestamps with correct timezone information. This makes timestamps directly comparable:

>>> import iso8601
>>> iso8601.parse_date(utcnow().isoformat())
datetime.datetime(2015, 6, 15, 14, 46, 43, 945813, tzinfo=<FixedOffset '+00:00' datetime.timedelta(0)>)
>>> iso8601.parse_date(utcnow().isoformat()) < utcnow()
True


If you need to store those timestamps, the same rule should apply. If you rely on MongoDB, it assumes that all the timestamps are in UTC, so be careful when storing them – you will have to normalize the timestamps to UTC.

For MySQL, nothing is assumed; it's up to the application to insert them in a timezone that makes sense to it. Obviously, if you have multiple applications accessing the same database with different data sources, this can end up being a nightmare.

PostgreSQL has a recommended special data type called timestamp with time zone, which can store the associated timezone and do all the computation for you. That's obviously the recommended way to store them. That does not mean you should not use UTC in most cases; it just means you can be sure that the timestamps are stored in UTC since that's written in the database, and you can check whether any other application inserted timestamps with a different timezone.

OpenStack status

As a side note, I've improved the OpenStack situation recently by changing the oslo.utils.timeutils module to deprecate some useless and dangerous functions. I've also added support for returning timezone-aware objects when using the oslo_utils.timeutils.utcnow() function. Unfortunately it's not possible to make that the default for backward compatibility reasons, but it's there nevertheless, and it's advised to use it. Thanks to my colleague Victor for the help!

Have a nice day, whatever your timezone is!

Categories: Elsewhere

Simon Josefsson: SSH Host Certificates with YubiKey NEO

Tue, 16/06/2015 - 14:05

If you manage a bunch of server machines, you will undoubtedly have run into the following OpenSSH question:

The authenticity of host 'host.example.org (1.2.3.4)' can't be established.
RSA key fingerprint is 1b:9b:b8:5e:74:b1:31:19:35:48:48:ba:7d:d0:01:f5.
Are you sure you want to continue connecting (yes/no)?

If the server is a single-user machine, where you are the only person expected to login on it, answering “yes” once and then using the ~/.ssh/known_hosts file to record the key fingerprint will (sort-of) work and protect you against future man-in-the-middle attacks. I say sort-of, since if you want to access the server from multiple machines, you will need to sync the known_hosts file somehow. And once your organization grows larger, and you aren’t the only person that needs to login, having a policy that everyone just answers “yes” on first connection on all their machines is bad. The risk that someone is able to successfully MITM attack you grows every time someone types “yes” to these prompts.

Setting up one (or more) SSH Certificate Authority (CA) to create SSH Host Certificates, and having your users trust this CA, will allow you and your users to automatically trust the fingerprint of the host through the indirection of the SSH Host CA. I was surprised (but probably shouldn't have been) to find that deploying this is straightforward. Even setting this up with hardware-backed keys, stored on a YubiKey NEO, is easy. Below I will explain how to set this up for a hypothetical organization where two persons (sysadmins) are responsible for installing and configuring machines.

I’m going to assume that you already have a couple of hosts up and running and that they run the OpenSSH daemon, so they have a /etc/ssh/ssh_host_rsa_key* public/private keypair, and that you have one YubiKey NEO with the PIV applet and that the NEO is in CCID mode. I don’t believe it matters, but I’m running a combination of Debian and Ubuntu machines. The Yubico PIV tool is used to configure the YubiKey NEO, and I will be using OpenSC‘s PKCS#11 library to connect OpenSSH with the YubiKey NEO. Let’s install some tools:

apt-get install yubikey-personalization yubico-piv-tool opensc-pkcs11 pcscd

Every person responsible for signing SSH Host Certificates in your organization needs a YubiKey NEO. For my example, there will only be two persons, but the number could be larger. Each one of them will have to go through the following process.

The first step is to prepare the NEO. First, mode-switch it to CCID using a device configuration tool like yubikey-personalization.

ykpersonalize -m1

Then prepare the PIV applet in the YubiKey NEO. This is covered by the YubiKey NEO PIV Introduction, but I'll reproduce the commands below. Do this on a disconnected machine, saving all generated files on one or more secure media, and store those in a safe.

user=simon
key=`dd if=/dev/random bs=1 count=24 2>/dev/null | hexdump -v -e '/1 "%02X"'`
echo $key > ssh-$user-key.txt
pin=`dd if=/dev/random bs=1 count=6 2>/dev/null | hexdump -v -e '/1 "%u"'|cut -c1-6`
echo $pin > ssh-$user-pin.txt
puk=`dd if=/dev/random bs=1 count=6 2>/dev/null | hexdump -v -e '/1 "%u"'|cut -c1-8`
echo $puk > ssh-$user-puk.txt
yubico-piv-tool -a set-mgm-key -n $key
yubico-piv-tool -k $key -a change-pin -P 123456 -N $pin
yubico-piv-tool -k $key -a change-puk -P 12345678 -N $puk

Then generate an RSA private key for the SSH Host CA, and generate a dummy X.509 certificate for that key. The only use for the X.509 certificate is to make PIV/PKCS#11 happy; they want to be able to extract the public key from the smartcard, and do that through the X.509 certificate.

openssl genrsa -out ssh-$user-ca-key.pem 2048
openssl req -new -x509 -batch -key ssh-$user-ca-key.pem -out ssh-$user-ca-crt.pem

You import the key and certificate to the PIV applet as follows:

yubico-piv-tool -k $key -a import-key -s 9c < ssh-$user-ca-key.pem
yubico-piv-tool -k $key -a import-certificate -s 9c < ssh-$user-ca-crt.pem

You now have an SSH Host CA ready to go! The first thing you want to do is to extract the public key of the CA, and you use OpenSSH's ssh-keygen for this, specifying OpenSC's PKCS#11 module.

ssh-keygen -D /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so -e > ssh-$user-ca-key.pub

If you happen to use YubiKey NEO with OpenPGP using gpg-agent/scdaemon, you may get the following error message:

no slots
cannot read public key from pkcs11

The reason is that scdaemon exclusively locks the smartcard, so no other application can access it. You need to kill scdaemon, which can be done as follows:

gpg-connect-agent "SCD KILLSCD" "SCD BYE" /bye

The output from ssh-keygen may look like this:

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCp+gbwBHova/OnWMj99A6HbeMAGE7eP3S9lKm4/fk86Qd9bzzNNz2TKHM7V1IMEj0GxeiagDC9FMVIcbg5OaSDkuT0wGzLAJWgY2Fn3AksgA6cjA3fYQCKw0Kq4/ySFX+Zb+A8zhJgCkMWT0ZB0ZEWi4zFbG4D/q6IvCAZBtdRKkj8nJtT5l3D3TGPXCWa2A2pptGVDgs+0FYbHX0ynD0KfB4PmtR4fVQyGJjJ0MbF7fXFzQVcWiBtui8WR/Np9tvYLUJHkAXY/FjLOZf9ye0jLgP1yE10+ihe7BCxkM79GU9BsyRgRt3oArawUuU6tLgkaMN8kZPKAdq0wxNauFtH

Now all users in your organization need to add a line to their ~/.ssh/known_hosts as follows:

@cert-authority *.example.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCp+gbwBHova/OnWMj99A6HbeMAGE7eP3S9lKm4/fk86Qd9bzzNNz2TKHM7V1IMEj0GxeiagDC9FMVIcbg5OaSDkuT0wGzLAJWgY2Fn3AksgA6cjA3fYQCKw0Kq4/ySFX+Zb+A8zhJgCkMWT0ZB0ZEWi4zFbG4D/q6IvCAZBtdRKkj8nJtT5l3D3TGPXCWa2A2pptGVDgs+0FYbHX0ynD0KfB4PmtR4fVQyGJjJ0MbF7fXFzQVcWiBtui8WR/Np9tvYLUJHkAXY/FjLOZf9ye0jLgP1yE10+ihe7BCxkM79GU9BsyRgRt3oArawUuU6tLgkaMN8kZPKAdq0wxNauFtH

Each sysadmin needs to go through this process, and each user needs to add one line for each sysadmin. While you could put the same key/certificate on multiple YubiKey NEOs, to allow users to only have to put one line into their file, dealing with revocation becomes a bit more complicated if you do that. If you have multiple CA keys in use at the same time, you can roll over to new CA keys without disturbing production. Users may also have different policies for different machines, so that not all sysadmins have the power to create host keys for all machines in your organization.
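With two sysadmins, each user's ~/.ssh/known_hosts thus ends up with one line per CA, of this shape (keys abbreviated here for readability):

@cert-authority *.example.com ssh-rsa AAAA...key-of-first-sysadmin...
@cert-authority *.example.com ssh-rsa AAAA...key-of-second-sysadmin...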

The CA setup is now complete; however, it isn't doing anything on its own. We need to sign some host keys using the CA, and configure the hosts' sshd to use them. What you could do is something like this, for every host host.example.com that you want to create keys for:

h=host.example.com
scp root@$h:/etc/ssh/ssh_host_rsa_key.pub .
gpg-connect-agent "SCD KILLSCD" "SCD BYE" /bye
ssh-keygen -D /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so -s ssh-$user-ca-key.pub -I $h -h -n $h -V +52w ssh_host_rsa_key.pub
scp ssh_host_rsa_key-cert.pub root@$h:/etc/ssh/

The ssh-keygen command will use OpenSC's PKCS#11 library to talk to the PIV applet on the NEO, and it will prompt you for the PIN. Enter the PIN that you set above. The output of the command would be something like this:

Enter PIN for 'PIV_II (PIV Card Holder pin)':
Signed host key ssh_host_rsa_key-cert.pub: id "host.example.com" serial 0 for host.example.com valid from 2015-06-16T13:39:00 to 2016-06-14T13:40:58

The host now has an SSH Host Certificate installed. To use it, you must make sure that /etc/ssh/sshd_config has the following line:

HostCertificate /etc/ssh/ssh_host_rsa_key-cert.pub

You need to restart sshd to apply the configuration change. If you now try to connect to the host, you will likely still use the known_hosts fingerprint approach. So remove the fingerprint from your machine:

ssh-keygen -R $h

Now if you attempt to ssh to the host, using the -v parameter to ssh, you will see the following:

debug1: Server host key: RSA-CERT 1b:9b:b8:5e:74:b1:31:19:35:48:48:ba:7d:d0:01:f5
debug1: Host 'host.example.com' is known and matches the RSA-CERT host certificate.

Success!

One aspect that may warrant further discussion is the choice of host keys. Here I only created host certificates for the hosts' RSA keys. You could create host certificates for the DSA, ECDSA and Ed25519 keys as well. The reason I did not do that was that in this organization, we all used GnuPG's gpg-agent/scdaemon with the YubiKey NEO's OpenPGP Card Applet with RSA keys for user authentication. So only the host RSA key is relevant.

Revoking a YubiKey NEO key is implemented by asking users to drop the corresponding line for one of the sysadmins, and regenerating the host certificates for the hosts that that sysadmin had created host certificates for. This is one reason users should have at least two CAs for your organization that they trust for signing host certificates, so they can migrate away from one of them to the other without interrupting operations.

Categories: Elsewhere

Lunar: Reproducible builds: week 7 in Stretch cycle

Mon, 15/06/2015 - 19:33

What happened about the reproducible builds effort for this week:

Presentations

On June 7th, Reiner Herrmann presented the project at the Gulaschprogrammiernacht 15 in Karlsruhe, Germany. Video and audio recordings in German are available, and so are the slides in English.

Toolchain fixes
  • Joachim Breitner uploaded ghc/7.8.4-9 which uses a hash of the command line instead of the pid when calculating a “random” directory name.
  • Lunar uploaded mozilla-devscripts/0.42 which now properly sets the timezone. Patch by Reiner Herrmann.
  • Dmitry Shachnev uploaded python-qt4/4.11.4+dfsg-1 which now outputs the list of imported modules in a stable order. The issue has been fixed upstream. Original patch by Reiner Herrmann.
  • Norbert Preining uploaded tex-common/6.00 which tries to ensure reproducible builds in files generated by dh_installtex.
  • Barry Warsaw uploaded wheel/0.24.0-2 which makes the output deterministic. Barry has submitted the fixes upstream based on patches by Reiner Herrmann.

Daniel Kahn Gillmor's report on help2man started a discussion with Brendan O'Dea and Ximin Luo about standardizing a common environment variable that would provide a replacement for an embedded build date. After various proposals and research by Ximin about date handling in several programming languages, the best solution seems to be to define SOURCE_DATE_EPOCH with a value suitable for gmtime(3).
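To illustrate the idea: a package build could export the variable once, derived from the last debian/changelog entry, and every tool in the toolchain would format its embedded dates from that value instead of the current time. A sketch, assuming the proposal gets adopted:

# seconds since the epoch of the latest debian/changelog entry
SOURCE_DATE_EPOCH=$(date --utc --date="$(dpkg-parsechangelog --show-field Date)" +%s)
export SOURCE_DATE_EPOCH
# a tool honouring the proposal would then render dates with gmtime(3), e.g.:
date --utc --date="@$SOURCE_DATE_EPOCH"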

  • Martin Borgert wondered if Sphinx could be changed in a way that would avoid having to tweak debian/rules in packages using it to produce HTML documentation.

Daniel Kahn Gillmor opened a new report about icont producing unreproducible binaries.

Packages fixed

The following 32 packages became reproducible due to changes in their build dependencies: agda, alex, c2hs, clutter-1.0, colorediffs-extension, cpphs, darcs-monitor, dispmua, haskell-curl, haskell-glfw, haskell-glib, haskell-gluraw, haskell-glut, haskell-gnutls, haskell-gsasl, haskell-hfuse, haskell-hledger-interest, haskell-hslua, haskell-hsqml, haskell-hssyck, haskell-libxml-sax, haskell-openglraw, haskell-readline, haskell-terminfo, haskell-x11, jarjar-maven-plugin, kxml2, libcgi-struct-xs-perl, libobject-id-perl, maven-docck-plugin, parboiled, pegdown.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues but not all of them:

Patches submitted which did not make their way to the archive yet:

reproducible.debian.net

A new variation to better notice when a package captures the environment has been introduced. (h01ger)

The test on Debian packages works by building the package twice in a short time frame. But sometimes, a mirror push can happen between the first and the second build, resulting in a package built in a different build environment. This situation is now properly detected and will run a third build automatically. (h01ger)

OpenWrt, the distribution specialized in embedded devices like small routers, is now being tested for reproducibility. The situation looks very good for their packages, which seem mostly affected by timestamps in the tarballs. System images will require more work on debbindiff to be better understood. (h01ger)

debbindiff development

Reiner Herrmann added support for decompiling Java .class files and .ipk package files (used by OpenWrt). This is now available in version 22, released on 2015-06-14.

Documentation update

Stephen Kitt documented the new --insert-timestamp option, available since binutils-mingw-w64 version 6.2, which inserts a ready-made date in PE binaries built with mingw-w64.

Package reviews

195 obsolete reviews have been removed, 65 added and 126 updated this week.

New identified issues:

Misc.

Holger Levsen reported an issue with the locales-all package that Provides: locales but is actually missing some of the files provided by locales.

Coreboot upstream has been quick to react after the announcement of the tests set up the week before. Patrick Georgi has fixed all issues in a couple of days and all Coreboot images are now reproducible (without a payload). SeaBIOS is one of the most frequently used payloads on PC hardware and can now be made reproducible too.

Paul Kocialkowski wrote to the mailing list asking for help on getting U-Boot tested for reproducibility.

Lunar had a chat with maintainers of Open Build Service to better understand the difference between their system and what we are doing for Debian.

Categories: Elsewhere

Petter Reinholdtsen: Graphing the Norwegian company ownership structure

Mon, 15/06/2015 - 14:00

It is a bit of work to figure out the ownership structure of companies in Norway. The information is publicly available, but one needs to recursively look up ownership for all owners to figure out the complete ownership graph of a given set of companies. To save me the work in the future, I wrote a script to do this automatically, outputting the ownership structure in the Graphviz/dotty format. The data source is web scraping from Proff, because I failed to find a useful source directly from the official keepers of the ownership data, Brønnøysundsregistrene.

To get an ownership graph for a set of companies, fetch the code from git and run it using the organisation number. I'm using the Norwegian newspaper Dagbladet as an example here, as its ownership structure is very simple:

% time ./bin/eierskap-dotty 958033540 > dagbladet.dot

real    0m2.841s
user    0m0.184s
sys     0m0.036s
%

The script accepts several organisation numbers on the command line, allowing a cluster of companies to be graphed in the same image. The resulting dot file for the example above looks like this. The edges are labeled with the ownership percentage, and the nodes use the organisation number as their name and the company name as the label:

digraph ownership {
  rankdir = LR;
  "Aller Holding A/s" -> "910119877" [label="100%"]
  "910119877" -> "998689015" [label="100%"]
  "998689015" -> "958033540" [label="99%"]
  "974530600" -> "958033540" [label="1%"]
  "958033540" [label="AS DAGBLADET"]
  "998689015" [label="Berner Media Holding AS"]
  "974530600" [label="Dagbladets Stiftelse"]
  "910119877" [label="Aller Media AS"]
}

To view the ownership graph, run "dotty dagbladet.dot" or convert it to a PNG using "dot -T png dagbladet.dot > dagbladet.png". The result can be seen below:

Note that I suspect the "Aller Holding A/S" entry to be incorrect data in the official ownership register, as that name is not registered in the official company register for Norway. The ownership register is sensitive to typos and there seems to be no strict checking of the ownership links.

Let me know if you improve the script or find better data sources. The code is licensed according to GPL 2 or newer.

Update 2015-06-15: Since the initial post I've been told that "Aller Holding A/S" is a Danish company, which explains why it did not have a Norwegian organisation number. I've also been told that there is a web services API available from Brønnøysundsregistrene, for those willing to accept the terms or pay the price.

Categories: Elsewhere

Alessio Treglia: How to have a successful OpenStack project

Mon, 15/06/2015 - 10:30

It’s no secret that OpenStack is becoming the de-facto standard for private cloud and a way for telecom operators to differentiate themselves from big names such as Amazon or Google.
OpenStack has already been adopted in some specific projects, but wide adoption in enterprises is only starting now, mostly because people simply find it difficult to understand. VMware is still the obvious point of comparison, but OpenStack and cloud are different: while cloud implies virtualization, virtualization is not cloud.

Cloud is a huge shift for your organization and will forever change the way you work on IT projects, improving your IT dramatically and cutting costs.

In order to get the best out of OpenStack, you need a deep understanding of how cloud works. Moreover, you need to understand the whole picture beyond the software itself to bring new levels of agility, flexibility, and cost savings to your business.

Giuseppe Paterno’, a leading European consultant recently awarded by HP, wrote OpenStack Explained to guide you through the OpenStack technology and reveal his secret ingredient for a successful project. You can download the ebook for a small donation that provides emergency and reconstruction aid for Nepal. Your donation is certified by ZEWO, the Swiss federal agency that ensures that funds go to a real charity project.

… but hurry up, the ebook is a limited edition and the offer ends in July 2015.

Donate & Download here: https://life-changer.helvetas.ch/openstack

Categories: Elsewhere

Norbert Preining: Kobo firmware 3.16.0 mega update (KSM, nickel patch, ssh, fonts)

Sun, 14/06/2015 - 02:44

As with all the previous versions, I have prepared a mega update for the Kobo firmware 3.16.0, including all my favorite patches and features. Please see the details for 3.15.0, 3.13.1 and 3.12.1. As before, the following addons are included: Kobo Start Menu, koreader, coolreader, pbchess, ssh access, custom dictionaries, plus, as a new treat, some side-loaded fonts. What I dropped this time is most of the kobohack part, as its libraries seem to be getting outdated, but I kept the dropbear ssh server from the kobohack package.

So what are all these items:

  • firmware (thread): the basic software of the device, shipped by Kobo
  • Kobo Start Menu (V07, update 4 thread): a menu that pops up before the reading software (nickel) starts, allowing you to start alternative readers (like koreader) etc.
  • KOreader (koreader-nightly-20150608, thread): an alternative document reader that supports epub, azw, pdf, djvu and many more formats
  • pbchess and CoolReader (2015.5, thread): a chess program and another alternative reader, bundled together with several other games
  • kobohack (web site): I only use the ssh server
  • ssh access (old post): makes a full computer of your device by allowing you to log into it via ssh
  • custom dictionaries (thread): in particular Japanese-English
  • side-loaded fonts: GentiumBasic and GentiumBookBasic, Verdana, DroidSerif, and Charter-eInk

Please see this post for details.

Here is the combined image: Kobo-3.16.0-combined/KoboRoot.tgz

Update procedure

First install the normal firmware obtained as described above. Then install the KoboRoot.tgz linked above.

WARNINGS

Here are a few things you need to do after you have installed the mega update:

  • ssh passwords and telnet disabling need to be done! This is an important step; please see this old post and follow the instructions from step 9 onwards.

The combined KoboRoot.tgz is provided without warranty. If you need to reset your device, don’t blame me!

Categories: Elsewhere

Tomasz Buchert: Tagging unreplied messages with notmuch

Sun, 14/06/2015 - 01:00

Some people are very bad at responding to e-mails. Or maybe they don’t check their mailboxes as often as I do, who knows. Anyway, sometimes I want to ping somebody about an e-mail that I sent some time ago. Until now, I did this by going through the list of my sent e-mails and resending messages that had not been replied to. However, that was somewhat inefficient.

As a solution, I coded a post-new hook for notmuch that tags all unreplied messages. The implementation is rather short and straightforward (see the GitHub repo). It marks all replied messages with response and everything else with noresponse. The precise definition of a replied message is: a message whose ID is mentioned in at least one In-Reply-To header in your mailbox.
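For illustration, here is my own minimal sketch of such a hook using the python-notmuch bindings. This is not Tomasz's actual implementation, just the two passes that the description implies:

#!/usr/bin/env python
# Minimal sketch of a post-new hook tagging replied/unreplied messages.
# Illustration only, not the actual implementation; for simplicity it
# ignores In-Reply-To headers that list more than one message ID.
import notmuch

db = notmuch.Database(mode=notmuch.Database.MODE.READ_WRITE)

# Pass 1: collect every message ID referenced by an In-Reply-To header.
replied_to = set()
for msg in notmuch.Query(db, '*').search_messages():
    irt = msg.get_header('In-Reply-To')  # empty string when absent
    if irt:
        replied_to.add(irt.strip().strip('<>'))

# Pass 2: tag each message according to whether anything replied to it.
for msg in notmuch.Query(db, '*').search_messages():
    if msg.get_message_id() in replied_to:
        msg.add_tag('response')
    else:
        msg.add_tag('noresponse')

db.close()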

To solve my initial problem, I also tag my sent but unreplied messages with noack, so that I can easily obtain the list of people to ping eventually. I also have a backlog tag which groups e-mails sent to me that I haven’t replied to yet.

Feel free to use it if you find it useful.

Categories: Elsewhere

Craig Small: Linux 4.0 ate my docker images

Sun, 14/06/2015 - 00:45

I have previously written about the gitlab CI runners that use docker. Yesterday I made some changes to procps and pushed them to gitlab, which then started the CI. This morning I checked and it said the build failed – ok, so that’s not terribly unusual. The output from the runner was:

gitlab-ci-multi-runner 0.3.3 (dbaf96f)
Using Docker executor with image csmall/testdebian ...
Pulling docker image csmall/testdebian ...
Build failed with Error: image csmall/testdebian: not found

Hmm, I know I have that image, it must just be the runner. So, let’s see what images I have:

$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE

Now, I know I have images; I had about 10 or so of them, so where did they go? I even looked in the /var/lib/docker directories and can see the json configs. What have you done with my images, docker?

Storage Drivers

The first hint came from stackexchange, where someone had lost their AUFS images and needed to load the aufs kernel module. Now I know there are two places or methods where docker stores its images, called aufs and devicemapper. There is some debate around which one is better and, to be honest, for what I do I don’t much care; I just want it to work.

The kernel version is significant. It seems the default storage driver was AUFS, and this requires the aufs.ko kernel module. Linux 4.0 (the version shipped with Debian) does NOT have that module, or at least I couldn’t find it.

For new images, this isn’t a problem. Docker will just create the new images using devicemapper and everyone is happy. The problem is when you have old aufs images, like I do. I want those images.

Rescue the Images

I’m not sure if this is the best or most correct way of recovering your images, but it worked for me. I got the idea from someone who wanted to switch from aufs to devicemapper images for other reasons.

You first need to reboot and, at the grub prompt, select a 3.x kernel that has aufs support. Then, when the system comes up, you should see all your images, like this:

$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
csmall/testdebian   latest              6979033105a4        5 weeks ago         369.4 MB
gcc                 5.1                 b063030b23b8        5 weeks ago         1.225 GB
gcc                 5.1.0               b063030b23b8        5 weeks ago         1.225 GB
gcc                 latest              b063030b23b8        5 weeks ago         1.225 GB
ruby                2.1                 236bf35223e7        6 weeks ago         779.8 MB
ruby                2.1.6               236bf35223e7        6 weeks ago         779.8 MB
debian              jessie              41b730702607        6 weeks ago         125.1 MB
debian              latest              41b730702607        6 weeks ago         125.1 MB
debian              8                   41b730702607        6 weeks ago         125.1 MB
debian              8.0                 41b730702607        6 weeks ago         125.1 MB
busybox             buildroot-2014.02   8c2e06607696        8 weeks ago         2.433 MB
busybox             latest              8c2e06607696        8 weeks ago         2.433 MB

What a relief to see this! Work out which images you need to transfer over. In my case it was just the csmall/testdebian one. You need to save each image to a tar file:

$ docker save csmall/testdebian > csmall-testdebian.tar.gz

Once you have all the images you want, reboot back to your 4.x kernel. You then need to load each image back into docker:

$ docker load < csmall-testdebian.tar.gz

and then test to see it’s there:

$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
csmall/testdebian   latest              6979033105a4        5 weeks ago         368.3 MB

The size of my image is slightly different. I’m not too sure why this is the case but assume it has to do with the storage types. My CI builds now run; they’re failing because my test program is trying to link with the library (it shouldn’t be), but at least it’s not a docker problem.

Categories: Elsewhere
