Planet Debian

Planet Debian - http://planet.debian.org/

Vincent Sanders: To a child, often the box a toy came in is more appealing than the toy itself.

Mon, 16/02/2015 - 01:31
I think Allen Klein might not have been referring to me when he said that, but I do seem to like creating boxes for my toys.

My Lenovo laptop has an Ultrabay, a slot that makes it easy to swap optical and hard drives. It allows me to carry around additional storage and, provided I remember to pack the drive, access optical media.
Over time I have acquired several additional hard drives housed in Ultrabay caddies. Generally I only need to access one at a time but increasingly I want to have more than one available.

Lenovo used to sell docking stations with multiple Ultrabays, but since the Series 3 was introduced this is no longer the case, as the docks have been reduced to port replicators.

One solution is to buy a SATA to USB converter which lets you use a drive externally. However, once you have more than one drive this becomes somewhat untidy, not to mention that all those unhoused drives on your desk become something of a hazard.

Recently, after another close call, I decided what I needed was a proper external enclosure to house all my drives. After some extensive googling I found nothing suitable ready to buy. Most normal people would give up at this point; I appear to be an abnormal person, so I got the CAD package out.

A few hours of design and a load of laser cutting later, I came up with a four-bay enclosure that now houses all my Ultrabay caddies.

The design evolved slightly to accommodate the features of some older caddies and to allow a pencil to be used to eject the drives (I put a square hole in the back).

The completed unit uses about £10 of plastic and takes 30 minutes to lasercut.

The only issue with the enclosure as manufactured is that Makespace ran out of black plastic stock and I had to use transparent to finish, so it is not in classic black as Lenovo intended.

As usual all the design files are publicly available from my design repo.

Antonio Terceiro: rmail: reviving upstream maintenance

Sun, 15/02/2015 - 16:37

It is always fun to write new stuff, and be able to show off that shiny new piece of code that just came out of your brilliance and/or restless effort. But the world does not spin based just on shiny things; for free software to continue making the world work, we also need the dusty, and maybe a little rusty, things that keep our systems together. Someone needs to make sure the rust does not take over, and that these venerable but useful pieces of code keep it together as the ecosystem around them evolves. As you know, Someone is probably the busiest person there is, so often you will have to take Someone’s job for yourself.

rmail is a Ruby library able to parse, modify, and generate MIME mail messages. While handling transitions of Ruby interpreters in Debian, it was one of the packages we always had to fix for new Ruby versions, to the point where the Debian package has accumulated quite a few patches. The situation became ridiculous.

We considered dropping it from the Debian archive, but that would mean either also dropping feed2imap and sup, or porting both to another mail library.

Since doing this type of port is always painful, I decided instead to do something about the sorry state in which rmail was on the upstream side.

The reasons why it was not properly maintained upstream do not matter: people lose interest, move on to other projects, are not active users anymore; that is normal in free software projects, and instead of blaming upstream maintainers in any way we need to thank them for writing us free software in the first place, and step up to fix the stuff we use.

I got in touch with the people listed as owner for the package on rubygems.org, and got owner permission, which means I can now publish new versions myself.

With that, I cloned the repository where the original author had imported the latest code uploaded to rubygems and had started to receive contributions, but that repository had been inactive for more than one year. It had already received some contributions from the sup developers which never made it into a new rmail release, so the sup people started using their own fork called “rmail-sup”.

In my repository, I imported all the patches from the Debian package that still made sense, did a bunch of updates, mainly to modernize the build system, and made a 1.1.0 release to rubygems.org. This release is pretty much compatible with 1.0.0, but since I did not test it with Ruby versions older than the one on my work laptop (2.1.5), I bumped the minor version number as a warning to prospective users still on older Ruby versions.

In this release, the test suite passes 100% clean, which always gives my mind a lot of comfort:

$ rake
/usr/bin/ruby2.1 -I"lib:." -I"/usr/lib/ruby/vendor_ruby" "/usr/lib/ruby/vendor_ruby/rake/rake_test_loader.rb" "test/test*.rb"
Loaded suite /usr/lib/ruby/vendor_ruby/rake/rake_test_loader
Started
...............................................................................
...............................................................................
........
Finished in 2.096916712 seconds.

166 tests, 24213 assertions, 0 failures, 0 errors, 0 pendings, 0 omissions, 0 notifications
100% passed

79.16 tests/s, 11546.95 assertions/s

And in the new release I have just uploaded to the Debian experimental suite (1.1.0-1), I was able to drop all of the patches and just use the upstream source as is.

So that’s it: if you use rmail for anything, consider testing version 1.1.0-1 from Debian experimental, or 1.1.0 from rubygems.org if you are into that, and report any bugs to the GitHub repository (https://github.com/terceiro/rmail). My only commitment for now is to keep it working, but if you want to add new features I will definitely review and merge them.
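
If you just want a quick smoke test of the new gem, something like this should do (a rough sketch; message.eml stands for whatever mail file you have lying around):

$ gem install rmail -v 1.1.0
$ ruby -rrmail -e 'mail = RMail::Parser.read($stdin); puts mail.header["subject"]' < message.eml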


Jingjie Jiang: Less is More

Sun, 15/02/2015 - 15:13
Things haven’t gone well recently.

I have been working on refactoring stuff for the past two or three weeks. At first, I found it much more exciting and interesting than merely fixing minor bugs. So I worked on most parts of the codebase, and made lots of changes.

It took me much more time and energy to do all this stuff. But sadly, all that extra energy resulted in a more broken codebase. It seems all these efforts were a waste, since the code can no longer be merged. It was very frustrating.

Get back on track

Well, I guess such trials and failures are just inevitable on the way towards becoming an experienced developer. Despite all the divergences and dismay I have gone through, my internship must get back on track.

I am now more realistic. My first and foremost task is to GET THINGS DONE by focusing on small and doable changes. Although the improvement may seem LESS, it means MORE to have a not-so-perfect finished product (in my view) rather than a to-be-perfect mess.

Journey continues.

Sergio Talens-Oliag: Retooling

Sun, 15/02/2015 - 10:45

I haven't blogged for a long time, but I've decided that I'm going to try to write again, at least about technical stuff.

My plan was to blog about the projects I've been working on lately, the main one being the setup of the latest version of Kolab with the systems we already have at work, but I'll do that in the next few days.

Today I'm just going to make a list of the tools I use on a daily basis and my plans to start using additional ones in the near future.

Shells, Terminals and Text Editors

I do almost all my work on Z Shell sessions running inside tmux; for terminal emulation I use gnome-terminal on X, VX ConnectBot on Android systems and iTerm2 on Mac OS X.

For text editing I've been using Vim for a long time (even on Mobile devices) and while I'm aware I don't know half of the things it can do, what I know is good enough for my day to day needs.

In the past I also used Emacs as a programming editor and as my main tool to write HTML, SGML and XML, but since I haven't really needed an IDE in a while and I mainly use lightweight markup languages, I haven't used it for a long time (I briefly tried Org mode, but for some reason I ended up leaving it).

Documentation formats and tools

For a long time now I've been an advocate of lightweight markup languages; I started out with LaTeX and Lout, then moved to SGML/XML formats (LinuxDoc and DocBook) and finally to plain-text-based formats.

I started using wiki formats (parsewiki) and soon moved to reStructuredText; I also use other markup languages like Markdown (for this blog, aka ikiwiki) and tried MultiMarkdown as a replacement for reStructuredText for general use, but as I never liked Markdown syntax, I didn't like an extended version of it either.

While I've been using reStructuredText for a long time, I recently found Asciidoctor and the AsciiDoc format, and I guess I'll be using it instead of rst whenever I can (I still need to try the slide backends and conversions to ODT, but if that works I guess I'll write all my new documents using AsciiDoc).
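
Getting started with it seems simple enough; if I'm not wrong, rendering a document to HTML should be as simple as this (the file name is just an example):

$ gem install asciidoctor
$ asciidoctor notes.adoc    # writes notes.html next to the source file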

Programming languages

I'm not a developer, but I read and patch a lot of free software code written in a lot of different programming languages (I wouldn't be able to write whole programs in most of them, but thanks to Stack Overflow I'm usually able to fix what I need).

Anyway, I'm able to program in some languages; I write a lot of shell scripts and I go for Python and C when I need something more complicated.

In the near future I plan to read about JavaScript programming and Node.js (I'll probably need it at work) and I have already started looking at Haskell (I guess it was time to learn about functional programming, and after reading about it, it looks like Haskell is the way to go for me).

Version Control

For a long time I've been a Subversion user, at least for my own projects, but it seems that everything has moved to git now and I have finally started to use it (I even opened a GitHub account). I plan to move all my personal Subversion repositories at home and at work to git, including moving all my Debian packages from svn-buildpackage to git-buildpackage.
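
For the repository moves I'll probably start with git-svn; a first conversion attempt should look more or less like this (a sketch with placeholder paths, and without a proper author mapping yet):

$ git svn clone file:///home/user/svn/myproject --stdlayout myproject
$ cd myproject
$ git remote add origin git@github.com:user/myproject.git
$ git push -u origin master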

Further Reading

With the previous plans in mind, I've started reading a couple of interesting books:

  • Learn You a Haskell by Miran Lipovača (http://learnyouahaskell.com/)
  • Pro Git written by Scott Chacon and Ben Straub (http://git-scm.com/book/en/v2)

Now I just need to get enough time to finish reading them ... ;)


Clint Adams: Now with Stripe and Twitter too

Sun, 15/02/2015 - 06:17

I just had the best Valentine's Day ever.


Matthew Palmer: The Vicious Circle of Documentation

Sun, 15/02/2015 - 01:00

Ever worked at a company (or on a codebase, or whatever) where it seemed like, no matter what the question was, the answer was written down somewhere you could easily find it? Most people haven’t, sadly, but they do exist, and I can assure you that it is an absolute pleasure.

On the other hand, practically everyone has experienced completely undocumented systems and processes, where knowledge is shared by word-of-mouth, or lost every time someone quits.

Why are there so many more undocumented systems than documented ones out there, and how can we cause more well-documented systems to exist? The answer isn’t “people are lazy”, and the solution is simple – though not easy.

Why Johnny Doesn’t Read

When someone needs to know something, they might go look for some documentation, or they might ask someone else or just guess wildly. The behaviour “look for documentation” is often reinforced negatively, by the result “documentation doesn’t exist”.

At the same time, the behaviours “ask someone” and “guess wildly” are positively reinforced, by the results “I get my question answered” and/or “at least I can get on with my work”. Over time, people optimise their behaviour by skipping the “look for documentation” step, and just go straight to asking other people (or guessing wildly).

Why Johnny Doesn’t Write

When someone writes documentation, they’re hoping that people will read it and not have to ask them questions in order to be productive and do the right thing. Hence, the behaviour “write documentation” is negatively reinforced by the results “I still get asked questions”, and “nobody does things the right way around here, dammit!”

Worse, though, is that there is very little positive reinforcement for the author: when someone does read the docs, and thus doesn’t ask a question, the author almost certainly doesn’t know they dodged a bullet. Similarly, when someone does things the right way, it’s unlikely that anyone will notice. It’s only the mistakes that catch the attention.

Given that the experience of writing documentation tends to skew towards the negative, it’s not surprising that eventually, the time spent writing documentation is reallocated to other, more utility-producing activities.

Death Spiral

The combination of these two situations is self-reinforcing. While a suitably motivated reader might start by strictly looking for documentation, or an author might initially be enthused enough to always fully document their work, over time the “reflex” will be for readers to just go ask someone, because “there’s never any documentation!”, and for authors to not write documentation because “nobody bothers to read what I write anyway!”.

It is important to recognise that this iterative feedback loop is the “natural state” of the reader/author ecosystem, resulting in something akin to thermodynamic entropy. To avoid the system descending into chaos, energy needs to be constantly applied to keep the system in order.

The Solution

Effective methods for avoiding the vicious circle can be derived from the things that cause it. Change the forces that apply themselves to readers and authors, and they will behave differently.

On the reader’s side, the most effective way to encourage people to read documentation is for it to consistently exist. This means that those in control of a project or system mustn’t consider something “done” until the documentation is in a good state. Patches shouldn’t be landed, and releases shouldn’t be made, unless the documentation is altered to match the functional changes being made. Yes, this requires discipline, which is just a form of energy application to prevent entropic decay.

Writing documentation should be an explicit and well-understood part of somebody’s job description. Whoever is responsible for documentation needs to be given the time to do it properly. Writing well takes time and mental energy, and that time needs to be factored into the plans. Never forget that skimping on documentation, like short-changing QA or customer support, is a false economy that will cost more in the long term than it saves in the short term.

Even if the documentation exists, though, some people are going to tend towards asking people rather than consulting the documentation. This isn’t a moral failing on their part, but only happens when they believe that asking someone is more beneficial to them than going to the documentation. To change the behaviour, you need to change the belief.

You could change the belief by increasing the “cost” of asking. You could fire (or hellban) anyone who ever asks a question that is answered in the documentation. But you shouldn’t. You could yell “RTFM!” at everyone who asks a question. Thankfully that’s one acronym that’s falling out of favour.

Alternately, you can reduce the “cost” of getting the answer from the documentation. Possibly the largest single productivity boost for programmers, for example, has been the existence of Google. Whatever your problem, there’s a pretty good chance that a search or two will find a solution. For your private documentation, you probably don’t have the power of Google available, but decent full-text search systems are available. Use them.

Finally, authors would benefit from more positive reinforcement. If you find good documentation, let the author know! It requires a lot of effort (comparatively) to look up an author’s contact details and send them a nice e-mail. The “like” button is a more low-energy way of achieving a similar outcome – you click the button, and the author gets a warm, fuzzy feeling. If your internal documentation system doesn’t have some way to “close the loop” and let readers easily give authors a bit of kudos, fix it so it does.

Heck, even if authors just know that a page they wrote was loaded N times in the past week, that’s better than the current situation, in which deafening silence persists, punctuated by the occasional plaintive cry of “Hey, do you know how to…?”.

Do you have any other ideas for how to encourage readers to read, and for authors to write?


John Goerzen: Willis Goerzen – a good reason to live in Kansas

Sat, 14/02/2015 - 23:55

From time to time, people ask me, with a bit of a disbelieving look on their face, “Tell me again why you chose to move to Kansas?” I can explain something about how people really care about their neighbors out here, how connections through time to a place are strong, how the people are hard-working, achieve great things, and would rather not talk about their achievements too much. But none of this really conveys it.

This week, as I got word that my great uncle Willis Goerzen passed away, it occurred to me that the reason I live in Kansas is simple: people like Willis.

Willis was a man that, through and through, simply cared. For everyone. He had hugs ready anytime. When I used to see him in church every Sunday, I’d usually hear his loud voice saying, “Well John!” Then a hug, then, “How are you doing?” When I was going through a tough time in life, hugs from Willis and Thelma were deeply meaningful. I could see how deeply he cared in his moist eyes, the way he sought me out to offer words of comfort, reassurance, compassion, and strength.

Willis didn’t just defy the stereotypes on men having to hide their emotions; he also did so by being just gut-honest. Americans often ask, in sort of a greeting, “How are you?” and usually get an answer like “fine”. If I asked Willis “How are you?”, I might hear “great!” or “it’s hard” or “pretty terrible.” In a place where old-fashioned stoicism is still so common, this was so refreshing. Willis and I could have deep, heart-to-heart conversations or friendly ones.

Willis also loved to work. He worked on a farm, in construction, and then for many years doing plumbing and heating work. When he retired, he just kept on doing it. Not for the money, but because he wanted to. I remember calling him up one time about 10 years ago, asking if he was interested in helping me with a heating project. His response: “I’ll hitch up the horses and be right there!” (Of course, he had no horses anymore.) When I had a project to renovate what had been my grandpa’s farmhouse (that was Willis’s brother), he did all the plumbing work. He told me, “John, it’s great to be retired. I can still do what I love to do, but since I’m so cheap, I don’t have to be fast. My old knees can move at their own speed.” He did everything so precisely, built it so sturdy, that I used to joke that if a tornado struck the house, the house would be a pile of rubble but the ductwork would still be fine.

One of his biggest frustrations about ill health was being unable to work, and in fact he had a project going before cancer started to get the best of him. He was quite distraught that, for the first time in his life, he didn’t properly finish a job.

Willis installed a three-zone system (using automated dampers to send heat or cool from a single furnace/AC into only the parts of the house where it was needed) for me. He had never done that before. The night Willis and his friend Bob came over to finish the setup was one to remember. The two guys, both in their 70s, were figuring it all out, and their excitement was catching. By the time the evening was over, I certainly was more excited about thermostats than I ever had been in my life.

I heard a story about him once – he was removing some sort of noxious substance from someone’s house. I forget what it was — whatever it was, it had pretty bad long-term health effects. His comment: “Look, I’m old. It’s not going to be this that does me in.” And he was right.

In his last few years, Willis started up a project that only Willis would dream up. He invited people to bring him all their old and broken down appliances and metal junk – air conditioners, dehumidifiers, you name it. He carefully took them apart, stripped them down, and took the metals into a metal salvage yard. He then donated all the money he got to a charity that helped the poor, and it was nearly $5000.

Willis had a sense of humor about him that he somehow deployed at those perfect moments when you least expected it. Back in 2006, before I had moved into the house that had been grandpa’s, there was a fire there. I lost two barns (one was the big old red one with lots of character) and a chicken house. When I got out there to see what had happened, Willis was already there. It was quite the disappointment for me. Willis asked me if grandpa’s old manure spreader was still in the chicken house. (Cattle manure is sometimes used as a fertilizer.) This old manure spreader was horse-drawn. I told him it was, and so it had burned up. So Willis put his arm around me, and said, “John, do you know what we always used to call a manure spreader?” “Nope.” “Shit-slinger!” That was so surprising I couldn’t help but break out laughing. Willis was the only person that got me to laugh that day.

In his last few years, Willis battled several health ailments. When he was in a nursing home for a while due to complications from knee surgery, I’d drop by to visit. And lately as he was declining, I tried to drop in at his house to visit with Willis and Thelma as much as possible. Willis was always so appreciative of those visits. He always tried to get in a hug if he could, even if Thelma and I had to hold on to him when he stood up. He would say sometimes, “John, you are so good to come here and visit with me.” And he’d add, “I love you.” As did I.

Sometimes when Willis was feeling down about not being able to work more, or not finishing a project, I told him how he was an inspiration to me, and to many others. And I reminded him that I visited with him because I wanted to, and being able to do that meant as much to me as it did to him. I’m not sure if he ever could quite believe how deeply true that was, because his humble nature was a part of who he was.

My last visit earlier last week was mostly with Thelma. Willis was not able to be very alert, but I held his hand and made sure to tell him that I love and care for him that time. I’m not sure if he was able to hear, but I am sure that he didn’t need to. Willis left behind a community of hundreds of people that love him and had their lives touched by his kind and inspirational presence.


Steinar H. Gunderson: Running Cisco management software in KVM

Sat, 14/02/2015 - 14:00

In terms of wireless, Cisco primarily makes hardware (access points), but they also have a relatively wide range of associated support software: In particular, WLC (Wireless Controller, the thing that all your Cisco APs talk to for centralized configuration/authentication/load balancing/etc.), PI (Prime Infrastructure, logging/management/inventory), and MSE (Mobility Services Engine, physical positioning management).

You can buy all of these as hardware appliances, but they're also lately available as virtual appliances—after all, they're just Linux machines with software on. (You will need a Cisco support contract to download them; then you will get a free 30-day trial if you don't have a license.) But of course, since this is ENTERPRISE and it is NETWORKING, it flat-out assumes that your virtualization solution of choice is VMware ESXi. Not VMware Player, not VirtualBox, not KVM, not Hyper-V. Of course you already have an ESXi box with ~24 GB spare RAM, no?

Well, I didn't, and I wanted to try all three of these anyway. First of all, I should add that this is quite obviously not supported by Cisco. You will not get any support, and you will not be able to buy a license against such VMs; it's only for learning and evaluation. So here's the hackery needed to get it to run under KVM/QEMU:

vWLC is easy; supposedly it's even sort-of supported. Just untar the .vmdk image, convert the inner image with qemu-img to qcow2, and run. Tada.

MSE is harder. You can install it (assuming you give it exactly 8192 MB of RAM; no more, no less), but half-way through the process, it will freeze in strange ways. What's going on under the hood is that it tries to get the number of CPUs, and this fails in the default CPU map KVM presents through SMBIOS. The magic incantation you need to add is

-smbios type=1,product="VMware Virtual Platform" -smbios type=4,sock_pfx="Proc"

And add -cpu host for good measure.
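
Putting the MSE bits together, the full invocation ends up looking roughly like this (the disk image name and the networking part are just examples, adjust to taste):

qemu-system-x86_64 -enable-kvm -m 8192 -cpu host \
  -smbios type=1,product="VMware Virtual Platform" \
  -smbios type=4,sock_pfx="Proc" \
  -drive file=mse.qcow2,format=qcow2,if=ide \
  -netdev user,id=net0 -device e1000,netdev=net0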

PI, on the other hand, took quite a while. I will not go into details, but it turns out what you need is to modify the emulated BIOS from SeaBIOS to simulate the ESXi Option ROM:

( cat pc-bios/bios-256k.bin ; dd if=/dev/zero bs=655511 count=1 ; printf "VMware-56 4d 45 81 db 1a 63 8d-a9 45 65 c1 af f3 a1 a1\0" ; dd if=/dev/zero bs=130866 count=1 ) > mybios.bin

and then start with -bios mybios.bin.

Now, after all of this is done and the installation has finished, you can even get a root shell from the appliance, upgrade the kernel (find a more modern CentOS kernel from somewhere), modify the initrd script to load virtio_blk and virtio_pci, and then use a virtio disk instead of IDE. (virtio networking works out-of-the-box.)

Easy, huh?


Wouter Verhelst: Docker

Sat, 14/02/2015 - 10:58

... is the new hype these days. Everyone seems to want to be part of it; even Microsoft wants to allow Docker to run on its platform. How they visualise that is slightly beyond me, seeing as Docker is mostly a case of "run a bunch of LXC instances", which by their definition can't happen on Windows. Presumably they'll just run a lot more VMs, then, which is a possible workaround. Or maybe Docker for Windows will be the same in concept, but not in implementation. I guess the future will tell.

As I understand the premise, the idea of Docker is that getting software to run on "all" distributions is a Hard Problem[TM], so in a Docker thing you just define that this particular stuff is meant to run on top of this and this and that environment, and Docker then compartmentalises everything for you. It should make things easier to maintain, and that's a good thing.

I'm not a fan. If the problem that Docker tries to fix is "making software run on all platforms is hard", then Docker's "solution" is "I give up, it's not possible". That's sad. Sure, having a platform which manages your virtualisation for you, without having to manually create virtual machines (or having to write software to do so) is great. And sure, compartmentalising software so that every application runs in its own space can help towards security, manageability, and a whole bunch of other advantages.

But having an environment which says "if you want to run this application, I'll set up a chroot with distribution X for you; if you want to run this other application, I'll set up a chroot with distribution Y for you; and if you want to run yet another application here, I'll set up a chroot with distribution Z for you" will, in the end, get you a situation where, if there's another bug in libc6 or libssl, you now have a nightmare trying to track down all the different versions in all the docker instances to make sure they're all fixed. And while it may work perfectly well on the open Internet, if you're on a corporate network with a paranoid firewall and proxy, downloading packages from public mirrors is harder than just creating a local mirror instead. Which you now have to do not only for your local distribution of choice, but also for the distributions of choice of all the developers of the software you're trying to use. Which may result in more work than just trying to massage the software in question to actually bloody well work, dammit.

I'm sure Docker has a solution for some or all of the problems it introduces, and I'm not saying it doesn't work in practice. I'm sure it does fix some part of the "Making software run on all platforms is hard" problem, and so I might even end up using it at some point. But from an aesthetic point of view, I don't think Docker is a good system.

I'm not very fond of giving up.


Junichi Uekawa: February.

Sat, 14/02/2015 - 02:12
February. Wow.


Jonathan Carter: Debconf 2016 to be hosted in Cape Town

Fri, 13/02/2015 - 22:35

Long story short, we put in a bid to host Debconf 16 in Cape Town, and we got it!

Back at Debconf 12 (Nicaragua), many people asked me when we would be hosting a Debconf in South Africa. I just laughed and said “Who knows, maybe some day”. During the conference I talked to Stefano Rivera (tumbleweed), who said that many people had asked him too. We came to the conclusion that we both really, really wanted to do it but just didn’t have enough time at that stage. I wanted to get to a point where I could take 6 months off for it and suggested that we prepare a bid for 2019. Stefano thought that this was quite funny; I think at some point we managed to get that estimate down to 2017-2018.

That date crept back even more with great people like Allison Randal and Bernelle Verster joining our team, along with other locals Graham Inggs, Raoul Snyman, Adrianna Pińska, Nigel Kukard, Simon Cross, Marc Welz, Neill Muller and Jan Groenewald, and our international mentors such as Nattie Mayer-Hutchings, Martin Krafft and Hannes von Haugwitz. Now, we’re having that Debconf next year. It’s almost hard to believe, not sure how I’ll sleep tonight, we’ve waited so long for this and we’ve got a mountain of work ahead of us, but we’ve got a strong team and I think Debconf 2016 attendees are in for a treat!

Since I happened to live close to Montréal back in 2012, I supported the idea of a Debconf bid for Montréal first, and then for Cape Town afterwards. Little did I know then that the two cities would be the only two cities bidding against each other 3 years later. I think both cities are superb locations to host a Debconf, and I’m supporting Montréal’s bid for 2017.

Want to get involved? We have a mailing list and IRC channel: #debconf16-capetown on oftc. Thanks again for all the great support from everyone involved so far!


Richard Hartmann: DC16.za

Fri, 13/02/2015 - 21:59

Here's to a happy, successful, and overall quite awesome DebConf16 in Cape Town, South Africa.

As a very welcome surprise, the Montreal team is already planning a mini-DC and already has a strong bid for DC17.

Update: Well, that was quick...


Richard Hartmann: Release Critical Bug report for Week 07

Fri, 13/02/2015 - 21:16

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1071 (Including 192 bugs affecting key packages)
    • Affecting Jessie: 147 (key packages: 110) That's the number we need to get down to zero before the release. They can be split in two big categories:
      • Affecting Jessie and unstable: 106 (key packages: 82) Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 25 bugs are tagged 'patch'. (key packages: 23) Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 4 bugs are marked as done, but still affect unstable. (key packages: 0) This can happen due to missing builds on some architectures, for example. Help investigate!
        • 77 bugs are neither tagged patch, nor marked done. (key packages: 59) Help make a first step towards resolution!
      • Affecting Jessie only: 41 (key packages: 28) Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 11 bugs are in packages that are unblocked by the release team. (key packages: 6)
        • 30 bugs are in packages that are not unblocked. (key packages: 22)

How do we compare to the Squeeze and Wheezy release cycles?

Week  Squeeze       Wheezy         Jessie
43    284 (213+71)  468 (332+136)  319 (240+79)
44    261 (201+60)  408 (265+143)  274 (224+50)
45    261 (205+56)  425 (291+134)  295 (229+66)
46    271 (200+71)  401 (258+143)  427 (313+114)
47    283 (209+74)  366 (221+145)  342 (260+82)
48    256 (177+79)  378 (230+148)  274 (189+85)
49    256 (180+76)  360 (216+155)  226 (147+79)
50    204 (148+56)  339 (195+144)  ???
51    178 (124+54)  323 (190+133)  189 (134+55)
52    115 (78+37)   289 (190+99)   147 (112+35)
1     93 (60+33)    287 (171+116)  140 (104+36)
2     82 (46+36)    271 (162+109)  157 (124+33)
3     25 (15+10)    249 (165+84)   172 (128+44)
4     14 (8+6)      244 (176+68)   187 (132+55)
5     2 (0+2)       224 (132+92)   175 (124+51)
6     release!      212 (129+83)   161 (109+52)
7     release+1     194 (128+66)   147 (106+41)
8     release+2     206 (144+62)
9     release+3     174 (105+69)
10    release+4     120 (72+48)
11    release+5     115 (74+41)
12    release+6     93 (47+46)
13    release+7     50 (24+26)
14    release+8     51 (32+19)
15    release+9     39 (32+7)
16    release+10    20 (12+8)
17    release+11    24 (19+5)
18    release+12    2 (2+0)

Graphical overview of bug stats thanks to azhag:


Olivier Berger: Testing the RuneStone interactive Python courses server in docker

Fri, 13/02/2015 - 15:36

I’ve been working on setting up a Docker container environment allowing to test the RuneStone Interactive server.

RuneStone Interactive allows the publication of courses containing interactive Python examples, and while most of the content is static (the Python examples are run inside a Python interpreter implemented in JavaScript, hence locally in the JS VM of the Web browser), the tool also offers an environment for monitoring the progress of learners in a course, which is dynamic and is queried by the browser over AJAX APIs.

That's the part which I wanted to be able to operate for test purposes. As it is a web2py application, it's not exactly obvious how to gather all the dependencies and run it locally. Well, in fact it is, but I want to understand the architecture of the tool to be able to understand the deployment constraints, so making a Docker image will help with this.

The result is the following:

Now, it's easier to test the writing of a new course (yet another container on top of the previous one), and directly test for real.
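
For the record, spinning up such a test instance should be a single command along these lines (the image name here is hypothetical, and 8000 is just web2py's usual default port):

$ docker run -d -p 8000:8000 --name runestone-test runestone-server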


Daniel Leidert: Motion picture capturing: Debian + motion + Logitech C910 - part II

Fri, 13/02/2015 - 12:40

In my recent attempt to set up a motion detection camera I was disappointed that my camera, which should be able to record at 30 fps in 720p mode, only reached 10 fps using the motion software. Now I've got a bit further. This seems to be an issue with the format used by motion. I've checked the output of v4l2-ctl ...

$ v4l2-ctl -d /dev/video1 --list-formats-ext
[..]
ioctl: VIDIOC_ENUM_FMT
Index : 0
Type : Video Capture
Pixel Format: 'YUYV'
Name : YUV 4:2:2 (YUYV)
[..]
Size: Discrete 1280x720
Interval: Discrete 0.100s (10.000 fps)

Interval: Discrete 0.133s (7.500 fps)
Interval: Discrete 0.200s (5.000 fps)
[..]

Index : 1
Type : Video Capture
Pixel Format: 'MJPG' (compressed)
Name : MJPEG
[..]
Size: Discrete 1280x720
Interval: Discrete 0.033s (30.000 fps)

Interval: Discrete 0.042s (24.000 fps)
Interval: Discrete 0.050s (20.000 fps)
Interval: Discrete 0.067s (15.000 fps)
Interval: Discrete 0.100s (10.000 fps)
Interval: Discrete 0.133s (7.500 fps)
Interval: Discrete 0.200s (5.000 fps)
[..]

... and motion:

$ motion
[..]
[1] [NTC] [VID] v4l2_set_pix_format: Config palette index 17 (YU12) doesn't work.
[1] [NTC] [VID] v4l2_set_pix_format: Supported palettes:
[1] [NTC] [VID] v4l2_set_pix_format: (0) YUYV (YUV 4:2:2 (YUYV))
[1] [NTC] [VID] v4l2_set_pix_format: 0 - YUV 4:2:2 (YUYV) (compressed : 0) (0x56595559)
[1] [NTC] [VID] v4l2_set_pix_format: (1) MJPG (MJPEG)
[1] [NTC] [VID] v4l2_set_pix_format: 1 - MJPEG (compressed : 1) (0x47504a4d)

[1] [NTC] [VID] v4l2_set_pix_format Selected palette YUYV
[1] [NTC] [VID] v4l2_do_set_pix_format: Testing palette YUYV (1280x720)
[1] [NTC] [VID] v4l2_do_set_pix_format: Using palette YUYV (1280x720) bytesperlines 2560 sizeimage 1843200 colorspace 00000008
[..]

Ok, so both formats YUYV and MJPG are supported and recognized, and I can choose either via the v4l2_palette configuration variable, citing motion.conf:

# v4l2_palette allows to choose preferable palette to be use by motion
# to capture from those supported by your videodevice. (default: 17)
# E.g. if your videodevice supports both V4L2_PIX_FMT_SBGGR8 and
# V4L2_PIX_FMT_MJPEG then motion will by default use V4L2_PIX_FMT_MJPEG.
# Setting v4l2_palette to 2 forces motion to use V4L2_PIX_FMT_SBGGR8
# instead.
#
# Values :
# V4L2_PIX_FMT_SN9C10X : 0 'S910'
# V4L2_PIX_FMT_SBGGR16 : 1 'BYR2'
# V4L2_PIX_FMT_SBGGR8 : 2 'BA81'
# V4L2_PIX_FMT_SPCA561 : 3 'S561'
# V4L2_PIX_FMT_SGBRG8 : 4 'GBRG'
# V4L2_PIX_FMT_SGRBG8 : 5 'GRBG'
# V4L2_PIX_FMT_PAC207 : 6 'P207'
# V4L2_PIX_FMT_PJPG : 7 'PJPG'
# V4L2_PIX_FMT_MJPEG : 8 'MJPEG'
# V4L2_PIX_FMT_JPEG : 9 'JPEG'
# V4L2_PIX_FMT_RGB24 : 10 'RGB3'
# V4L2_PIX_FMT_SPCA501 : 11 'S501'
# V4L2_PIX_FMT_SPCA505 : 12 'S505'
# V4L2_PIX_FMT_SPCA508 : 13 'S508'
# V4L2_PIX_FMT_UYVY : 14 'UYVY'
# V4L2_PIX_FMT_YUYV : 15 'YUYV'
# V4L2_PIX_FMT_YUV422P : 16 '422P'
# V4L2_PIX_FMT_YUV420 : 17 'YU12'
#
v4l2_palette 17

Now motion uses YUYV as default mode as shown by its output. So it seems that all I have to do is to choose MJPEG in my motion.conf:

v4l2_palette 8

Testing again ...

$ motion
[..]
[1] [NTC] [VID] vid_v4lx_start: Using V4L2
[1] [NTC] [ALL] image_ring_resize: Resizing pre_capture buffer to 1 items
[1] [NTC] [VID] v4l2_set_control: setting control "Brightness" to 25 (ret 0 )
Corrupt JPEG data: 5 extraneous bytes before marker 0xd6
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
[1] [NTC] [VID] v4l2_set_control: setting control "Brightness" to 14 (ret 0 )
Corrupt JPEG data: 1 extraneous bytes before marker 0xd5
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
[1] [NTC] [VID] v4l2_set_control: setting control "Brightness" to 36 (ret 0 )
Corrupt JPEG data: 3 extraneous bytes before marker 0xd2
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
[1] [NTC] [VID] v4l2_set_control: setting control "Brightness" to 58 (ret 0 )
Corrupt JPEG data: 1 extraneous bytes before marker 0xd7
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
[1] [NTC] [VID] v4l2_set_control: setting control "Brightness" to 80 (ret 0 )
Corrupt JPEG data: 4 extraneous bytes before marker 0xd7
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
[1] [ERR] [ALL] motion_init: Error capturing first image
[1] [NTC] [ALL] image_ring_resize: Resizing pre_capture buffer to 16 items
Corrupt JPEG data: 4 extraneous bytes before marker 0xd1
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
Corrupt JPEG data: 11 extraneous bytes before marker 0xd1
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
Corrupt JPEG data: 3 extraneous bytes before marker 0xd4
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
Corrupt JPEG data: 7 extraneous bytes before marker 0xd1
[..]

... and another issue is turning up :( The output above goes on and on and on and there is no video capturing. According to $searchengine the above happens to a lot of people. I just found one often-suggested fix: pre-load v4l2convert.so from libv4l-0:

$ LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libv4l/v4l2convert.so motion

But the problem persists and I'm out of ideas :( So atm it looks like I cannot use the MJPEG format and don't get 30 fps at 1280x720 pixels. During writing I then discovered a solution by good old trial-and-error: Leaving the v4l2_palette variable at its default value 17 (YU12) and pre-loading v4l2convert.so makes use of YU12 and the framerate at least rises to 24 fps:

$ LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libv4l/v4l2convert.so motion
[..]
[1] [NTC] [VID] v4l2_do_set_pix_format: Testing palette YU12 (1280x720)
[1] [NTC] [VID] v4l2_do_set_pix_format: Using palette YU12 (1280x720) bytesperlines 1280 sizeimage 1382400 colorspace 00000008
[..]
[1] [NTC] [EVT] event_new_video FPS 24
[..]

Finally! :) The results are nice. It would maybe even be a good idea to limit the framerate a bit, to e.g. 20. So that is a tested configuration for the Logitech C910 running at a resolution of 1280x720 pixels:

v4l2_palette 17
width 1280
height 720
framerate 20
minimum_frame_time 0
pre_capture 10 # 0.5 seconds pre-recording
post_capture 50 # 2.5 seconds after-recording
auto_brightness on
ffmpeg_variable_bitrate 2 # best quality

Now all this made me curious which framerate is possible at a resolution of 1920x1080 pixels and how the results look. Although I get 24 fps too, the resulting movie suffers from jumps every few frames. So here I got pretty good results with a more conservative setting. When increasing the framerate - tested up to 15 fps with good results - pre_capture needed to be decreased accordingly, to values between 1..3, to minimize jumps:

v4l2_palette 17
width 1920
height 1080
framerate 12
minimum_frame_time 0
pre_capture 6 # 0.5 seconds pre-recording
post_capture 30 # 2.5 seconds after-recording
auto_brightness on
ffmpeg_variable_bitrate 2 # best quality

Both configurations lead to satisfying results. Of course the latter will easily fill your hard drive :)

TODO

I guess the results can be optimized further by playing around with ffmpeg_bps and ffmpeg_variable_bitrate. Maybe then it is possible to record without jumps at higher framerates too(?). I also didn't test the various norm settings (PAL, NTSC, etc.).
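
A fixed-bitrate test would probably start from something like this (untested, and the bitrate value is just a guess):

ffmpeg_variable_bitrate 0 # disable variable bitrate
ffmpeg_bps 4000000 # try a fixed bitrate of about 4 Mbit/s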


Steve McIntyre: Linaro VLANd v0.2

Fri, 13/02/2015 - 07:53

I've been working on this for too long without really talking about it, so let's fix that now!

VLANd is a simple (hah!) python program intended to make it easy to manage port-based VLAN setups across multiple switches in a network. It is designed to be vendor-agnostic, with a clean pluggable driver API to allow for a wide range of different switches to be controlled together.

There's more information in the README file. I've just released v0.2, with a lot of changes included since the last release:

  • Massive numbers of bugfixes and code cleanups
  • Improve how we talk to the Cisco switches - disable paging on long output
  • Switch from "print" to "logging.foo" for messages, and add logfile support
  • Improved test suite coverage, and added core test scripts for the lab environment

I've demonstrated this code today in Hong Kong at the Linaro Connect event, and now I'm going on vacation for 4 weeks. Australia here I come! :-)


Neil Williams: OpenTAC mailing list

Fri, 13/02/2015 - 03:10

After the OpenTAC session at Linaro Connect, we do now have a mailing list to support any and all discussions related to OpenTAC. Thanks to Daniel Silverstone for the list.

List archive: http://listmaster.pepperfish.net/pipermail/opentac-vero-apparatus.com

More information on OpenTAC: http://wiki.vero-apparatus.com/OpenTAC


Richard Hartmann: A Dance with Dragons

Thu, 12/02/2015 - 22:51

Yesterday, I went to the Federal Office for Information Security (BSI) on an invitation to their "expert round-table on SDN".

While the initial mix of industry attendees was of.. varied technical knowledge.. I was pleasantly surprised by the level of preparation by the BSI. None of them were networkers, but they did have a clear agenda and a pretty good idea of what they wanted to know.

During the first round-table, they went through

  • This is our idea of what we think SDN is
  • Is SDN a fad or here to stay?
  • What does the industry think about SDN?
  • What are the current, future, and potential benefits of SDN?
  • What are the current, future, and potential risks of SDN?
  • How can SDN improve the security of critical infrastructure?
  • How can you ensure that the whole stack from hardware through data plane to control plane can be trusted?
  • How can critical parts of the SDN stack be developed in, or strongly influenced from, players in Germany or at least Europe?

Yes, some of those questions are rather basic and/or generic, but that was on purpose. The mix of clear expectations and open-ended questions was quite effective at getting at what they wanted to know.

During lunch, we touched on the more general topic of how to reach and interact with technical audiences, with regard to both networks and software. The obvious answer for initial contact regarding networks was DENOG, which they didn't know about.

With software, the answer is not quite as simple. My suggestion was to engage in a positive way and thus build trust over time. Their clear advantage is that, contrary to most other services, their raison d'être is purely defensive and non-military so they can focus on audits, support of key pieces of software, and, most important of all, talk about their results. No idea if they will actually pursue this, but here's to hoping; we could all use more government players on the good side.


Daniel Leidert: Motion picture capturing: Debian + motion + Logitech C910

Thu, 12/02/2015 - 20:02

Winter time is a good time for some nature observation. Yesterday I had a woodpecker (picture) in front of my kitchen window. During recent weeks there were long-tailed tits, a wren and other rarely seen birds. So I thought it might be a good idea to capture some of these events :) I still own a Logitech C910 USB camera which allows HD video capturing up to 1080p. So I checked the web for some software that would begin video capturing in case of motion detection and found motion, already available for Debian users. So I gave it a try. I tested all available resolutions of the camera together with the capturing results. I found that the resulting framerate of both the live stream and the captured video depends highly on the resolution and a few configuration options. Below is a summary of my tests and the results I've achieved so far.

Logitech C910 HD camera

Just a bit of data regarding the camera. AFAIK it allows for smooth video streams up to 720p.


$ dmesg
[..]
usb 7-3: new high-speed USB device number 5 using ehci-pci
usb 7-3: New USB device found, idVendor=046d, idProduct=0821
usb 7-3: New USB device strings: Mfr=0, Product=0, SerialNumber=1
usb 7-3: SerialNumber: 91CF80A0
usb 7-3: current rate 0 is different from the runtime rate 16000
usb 7-3: current rate 0 is different from the runtime rate 32000
uvcvideo: Found UVC 1.00 device (046d:0821)
input: UVC Camera (046d:0821) as /devices/pci0000:00/0000:00:1a.7/usb7/7-3/7-3:1.2/input/input17

$ lsusb
[..]
Bus 007 Device 005: ID 046d:0821 Logitech, Inc. HD Webcam C910
[..]

$ v4l2-ctl -V -d /dev/video1
Format Video Capture:
Width/Height : 1280/720
Pixel Format : 'YUYV'
Field : None
Bytes per Line: 2560
Size Image : 1843200
Colorspace : SRGB

Also the uvcvideo kernel module is loaded and the user in question is part of the video group.

Installation and start

Installation of the software is as easy as always:

apt-get install motion

It is possible to run the software as a service. But for testing, I copied /etc/motion/motion.conf to ~/.motion/motion.conf, fixed its permissions (you cannot copy the file as a normal user - it's not world-readable) and disabled the daemon mode.


daemon off
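
Copying the file and fixing its permissions boils down to something like this (the chown invocation is just one way to do it, adjust to your setup):

$ mkdir -p ~/.motion
$ sudo cp /etc/motion/motion.conf ~/.motion/motion.conf
$ sudo chown $USER: ~/.motion/motion.conf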

Note that in my case the correct device is /dev/video1, because the laptop has a built-in camera, which is /dev/video0. Also the target directory should be writeable by my user:


videodevice /dev/video1
target_dir ~/Videos

Then running motion from the command line ...


motion
[..]
[0] [NTC] [ALL] motion_startup: Motion 3.2.12+git20140228 Started
[..]
[1] [NTC] [ALL] motion_init: Thread 1 started , motion detection Enabled
[0] [NTC] [ALL] main: Thread 1 is device: /dev/video1 input -1
[1] [NTC] [VID] v4l2_get_capability:
------------------------
cap.driver: "uvcvideo"
cap.card: "UVC Camera (046d:0821)"
cap.bus_info: "usb-0000:00:1a.7-1"
cap.capabilities=0x84000001
------------------------
[1] [NTC] [VID] v4l2_get_capability: - VIDEO_CAPTURE
[1] [NTC] [VID] v4l2_get_capability: - STREAMING
[1] [NTC] [VID] v4l2_select_input: name = "Camera 1", type 0x00000002, status 00000000
[1] [NTC] [VID] v4l2_select_input: - CAMERA
[..]
[1] [NTC] [ALL] image_ring_resize: Resizing pre_capture buffer to 1 items

... will begin to capture motion detection events and also output a live stream. CTRL+C will stop it again.

Live stream

The live stream is available by pointing the browser to localhost:8081. However, the stream seems to run at 1 fps (frames per second), and indeed it does. The stream quality improves with this configuration:


stream_motion on
stream_maxrate 100

The first option is responsible for the stream running at only one fps when there is no motion detection event. Otherwise the framerate increases to its maximum value, which is either the one given by stream_maxrate or the camera limit. The quality of the stream picture can also be increased a bit further by raising the stream_quality value. Because I need neither the stream nor the control feed, I disabled both:


stream_port 0
webcontrol_port 0

Picture capturing

By default there is video and picture capturing if a motion event is detected. I'm not interested in these pictures, so I turned them off:

output_pictures off

FYI: If you want a good picture quality, then the value of quality should very probably be increased.

Video capturing

This is the really interesting part :) Of course if I want to "shoot" some birds (with the camera), then a small image of, say, 320x240 pixels is not enough. The camera allows for a capture resolution up to 1920x1080 pixels (1080p). It is advertised for smooth video streams up to 720p (1280x720 pixels). So I tried the following resolutions: 320x240, 640x480, 800x600, 640x360 (360p), 1280x720 (720p) and 1920x1080 (1080p). These are easily configured by the width and height variables. For example the following configures motion for 1280x720 pixels (720p):


width 1280
height 720

The result was really disappointing. No event is captured with more than 20 fps. At higher resolutions the framerate drops even further, and at the highest resolution of 1920x1080 pixels the framerate is only two(!) fps. Also, every created video runs much too fast, and even faster when increasing the framerate variable. Of course its default value of 2 (fps) is not enough for smooth videos. AFAIK the C910 can run at 30 fps at 1280x720 pixels. So increasing the value of framerate, the maximum framerate recorded, is a must-do. (If you wanna test yourself, check the log output for the value following event_new_video FPS.)

The solution to the issue that videos run too fast, however, is to increase the pre_capture value, the number of pre-captured (buffered) pictures from before motion was detected. Even small values like 3..5 result in a distinct improvement of the situation. Increasing the value further didn't have any effect, though. So the values below should get almost the most out of the camera and result in videos at normal speed.


framerate 100
pre_capture 5

Videos at 1280x720 pixels are still captured at 10 fps and I don't know why. Running guvcview, the same camera allows for 30 fps at this resolution (even 60 fps at lower resolutions). However, even if the framerate could be higher, the resulting video runs smoothly. Still, the quality is just moderate (or, to be honest, still disappointing). It looks "pixelated". Only static pictures are sharp. It took me a while to fix this too, because at first I thought the reason was the camera or missing hardware support. It is not :) The reason is that ffmpeg is configured to produce a moderate(?)-quality video. The relevant variables are ffmpeg_bps and ffmpeg_variable_bitrate. I got the best results by just changing the latter:


ffmpeg_variable_bitrate 2

Finally the resulting video quality is promising. I'll start with this configuration when setting up an observation camera for the bird feeding ground.

There is one more tweak for me. I got even better results by enabling the auto_brightness feature.


auto_brightness on

Complete configuration

So the complete configuration looks like this (only the options changed from the original config file are shown):


daemon off
videodevice /dev/video1
width 1280
height 720
framerate 100
auto_brightness on
ffmpeg_variable_bitrate 2
target_dir /home/user/Videos
stream_port 0 #8081
stream_motion on
stream_maxrate 100
webcontrol_port 0 #8080

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, January 2015

Thu, 12/02/2015 - 19:24

Like every month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In January, 48 work hours were split equally among 4 paid contributors. Their reports are available:

Evolution of the situation

During the last month, the number of paid work hours has made a noticeable jump: we’re now at 58 hours per month. At this rate, we would need 3 more months to reach our minimal goal of funding the equivalent of a half-time position. Unfortunately, the number of new sponsors currently in the pipeline is not likely to be enough to produce a similar rise next month.

So, as usual, we are looking for more sponsors.

In terms of security updates waiting to be handled, the situation looks a bit worse than last month: the dla-needed.txt file lists 37 packages awaiting an update (7 more than last month), the list of open vulnerabilities in Squeeze shows about 63 affected packages in total (7 more than last month).

The increase is not too worrying, but the waiting time before an issue is dealt with is sometimes more problematic. To be able to deal with all incoming issues in a timely manner, the LTS team needs more resources: some months will have more issues than usual, some issues will take longer to handle than others, etc.

Thanks to our sponsors

The new sponsors of the month are in bold.


