Planet Debian

Planet Debian - http://planet.debian.org/

Chris Lamb: Sprezzatura

Wed, 21/01/2015 - 11:31

Wolf Hall on Twitter et al:

He says, "Majesty, we were talking of Castiglione's book. You have found time to read it?"

"Indeed. He extols sprezzatura. The art of doing everything gracefully and well, without the appearance of effort. A quality princes should cultivate."

"Yes. But besides sprezzatura one must exhibit at all times a dignified public restraint..."

Categories: Elsewhere

Enrico Zini: miniscreen

Wed, 21/01/2015 - 11:13
Playing with python, terminfo and command output

I am experimenting with showing progress on the terminal for a subcommand that is being run, showing what is happening without scrolling away the output of the main program, and I came up with this little toy. It shows the last X lines of a subcommand's output, then gets rid of everything once the command has ended.

Usability-wise, it feels like a tease to me: it looks like I'm being shown all sorts of information, which is then taken away before I manage to make sense of it. However, I find it cute enough to share:

#!/usr/bin/env python3
# coding: utf-8
# Copyright 2015 Enrico Zini <enrico@enricozini.org>. Licensed under the terms
# of the GNU General Public License, version 2 or any later version.
import argparse
import collections
import contextlib
import curses
import fcntl
import logging
import os
import select
import shlex
import shutil
import subprocess
import sys


def stream_output(proc):
    """
    Take a subprocess.Popen object and generate its output, line by line,
    annotated with "stdout" or "stderr". At process termination it generates
    one last element: ("result", return_code) with the return code of the
    process.
    """
    fds = [proc.stdout, proc.stderr]
    bufs = [b"", b""]
    types = ["stdout", "stderr"]
    # Set both pipes as non-blocking
    for fd in fds:
        fcntl.fcntl(fd, fcntl.F_SETFL, os.O_NONBLOCK)
    # Multiplex stdout and stderr with different prefixes
    while len(fds) > 0:
        s = select.select(fds, (), ())
        for fd in s[0]:
            idx = fds.index(fd)
            buf = fd.read()
            if len(buf) == 0:
                # Pipe closed: flush the partial line, then drop this stream
                # from all three parallel lists so the indices stay aligned
                fds.pop(idx)
                if len(bufs[idx]) != 0:
                    yield types[idx], bufs[idx]
                bufs.pop(idx)
                types.pop(idx)
            else:
                bufs[idx] += buf
                lines = bufs[idx].split(b"\n")
                bufs[idx] = lines.pop()
                for l in lines:
                    yield types[idx], l
    res = proc.wait()
    yield "result", res


@contextlib.contextmanager
def miniscreen(has_fancyterm, name, maxlines=3, silent=False):
    """
    Show the output of a process scrolling in a portion of the screen.

    has_fancyterm: true if the terminal supports fancy features; if false,
                   just write lines to standard output
    name: name of the process being run, to use as a header
    maxlines: maximum height of the miniscreen
    silent: do nothing whatsoever, used to disable this without needing to
            change the code structure

    Usage:
        with miniscreen(True, "my process", 5) as print_line:
            for i in range(10):
                print_line(("stdout", "stderr")[i % 2], "Line #{}".format(i))
    """
    if not silent and has_fancyterm:
        # Discover all the terminal control sequences that we need
        output_normal = str(curses.tigetstr("sgr0"), "ascii")
        output_up = str(curses.tigetstr("cuu1"), "ascii")
        output_clreol = str(curses.tigetstr("el"), "ascii")
        cols, lines = shutil.get_terminal_size()
        output_width = cols
        fg_color = (curses.tigetstr("setaf")
                    or curses.tigetstr("setf")
                    or "")
        sys.stdout.write(str(curses.tparm(fg_color, 6), "ascii"))

        output_lines = collections.deque(maxlen=maxlines)

        def print_lines():
            """
            Print the lines in our buffer, then move back to the beginning
            """
            sys.stdout.write("{} progress:".format(name))
            sys.stdout.write(output_clreol)
            for msg in output_lines:
                sys.stdout.write("\n")
                sys.stdout.write(msg)
                sys.stdout.write(output_clreol)
            sys.stdout.write(output_up * len(output_lines))
            sys.stdout.write("\r")

        try:
            print_lines()

            def _progress_line(type, line):
                """
                Print a new line to the miniscreen
                """
                # Add the new line to our output buffer
                msg = "{} {}".format("." if type == "stdout" else "!", line)
                if len(msg) > output_width - 4:
                    msg = msg[:output_width - 4] + "..."
                output_lines.append(msg)
                # Update the miniscreen
                print_lines()

            yield _progress_line

            # Clear the miniscreen by filling our ring buffer with empty
            # lines then printing them out
            for i in range(maxlines):
                output_lines.append("")
            print_lines()
        finally:
            sys.stdout.write(output_normal)
    elif not silent:
        def _progress_line(type, line):
            print("{}: {}".format(type, line))
        yield _progress_line
    else:
        def _progress_line(type, line):
            pass
        yield _progress_line


def run_command_fancy(name, cmd, env=None, logfd=None, fancy=True, debug=False):
    quoted_cmd = " ".join(shlex.quote(x) for x in cmd)
    log.info("%s running command %s", name, quoted_cmd)
    if logfd:
        print("runcmd:", quoted_cmd, file=logfd)
    # Run the script itself on an empty environment, so that what was
    # documented is exactly what was run
    proc = subprocess.Popen(cmd,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE,
                            env=env)
    with miniscreen(fancy, name, silent=debug) as progress:
        stderr = []
        for type, val in stream_output(proc):
            if type == "stdout":
                val = val.decode("utf-8")
                if logfd:
                    print("stdout:", val, file=logfd)
                log.debug("%s stdout: %s", name, val)
                progress(type, val)
            elif type == "stderr":
                val = val.decode("utf-8")
                if logfd:
                    print("stderr:", val, file=logfd)
                stderr.append(val)
                log.debug("%s stderr: %s", name, val)
                progress(type, val)
            elif type == "result":
                if logfd:
                    print("retval:", val, file=logfd)
                log.debug("%s retval: %d", name, val)
                retval = val
    if retval != 0:
        lastlines = min(len(stderr), 5)
        log.error("%s exited with code %s", name, retval)
        log.error("Last %d lines of standard error:", lastlines)
        for line in stderr[-lastlines:]:
            log.error("%s: %s", name, line)
    return retval


parser = argparse.ArgumentParser(
    description="run a command showing only a portion of its output")
parser.add_argument("--logfile", action="store",
                    help="specify a file where the full execution log will be written")
parser.add_argument("--debug", action="store_true",
                    help="debugging output on the terminal")
parser.add_argument("--verbose", action="store_true",
                    help="verbose output on the terminal")
parser.add_argument("command", nargs="*",
                    help="command to run")
args = parser.parse_args()

if args.debug:
    loglevel = logging.DEBUG
elif args.verbose:
    loglevel = logging.INFO
else:
    loglevel = logging.WARN
logging.basicConfig(level=loglevel, stream=sys.stderr)
log = logging.getLogger()

fancy = False
if not args.debug and sys.stdout.isatty():
    curses.setupterm()
    if curses.tigetnum("colors") > 0:
        fancy = True

if args.logfile:
    logfd = open(args.logfile, "wt")
else:
    logfd = None

retval = run_command_fancy("miniscreen example", args.command, logfd=logfd)

sys.exit(retval)
Categories: Elsewhere

Jonathan McDowell: Moving to Jekyll

Wed, 21/01/2015 - 11:00

I’ve been meaning to move away from Movable Type for a while; they no longer provide the “Open Source” variant, I’ve had some issues with the commenting side of things (more the fault of spammers than Movable Type itself) and there are a few minor niggles that I wanted to resolve. Nothing has been particularly pressing me to move and I haven’t been blogging as much so while I’ve been keeping an eye open for a replacement I haven’t exerted a lot of energy into the process. I have a little bit of time at present so I asked around on IRC for suggestions. One was ikiwiki, which I use as part of helping maintain the SPI website (and think is fantastic for that), the other was Jekyll. Both are available as part of Debian Jessie.

Jekyll looked a bit fancier out of the box (I’m no web designer so pre-canned themes help me a lot), so I decided to spend some time investigating it a bit more. I’d found a Movable Type to ikiwiki converter which provided a starting point for exporting from the SQLite3 DB I was using for MT. Most of my posts are in markdown, the rest (mostly from my Blosxom days) are plain HTML, so there wasn’t any need to do any conversion on the actual content. A minor amount of poking convinced Jekyll to use the same URL format (permalink: /:year/:month/:title.html in the _config.yml did what I wanted) and I had to do a few bits of fix up for some images that had been uploaded into MT, but overall fairly simple stuff.
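The shape of such a conversion can be sketched in a few lines of Python. This is my own illustrative sketch, not Jonathan's actual script: the mt_entry table and entry_* column names are assumptions about the Movable Type schema, and slugify() is a made-up helper.

```python
import os
import re
import sqlite3


def slugify(title):
    # Lower-case the title and keep only alphanumeric runs, dash-separated
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")


def export_posts(db_path, outdir="_posts"):
    """Write one Jekyll post (YAML front matter + body) per MT entry.

    Jekyll expects files named _posts/YYYY-MM-DD-title.ext; the front
    matter block between the --- markers carries the metadata.
    """
    os.makedirs(outdir, exist_ok=True)
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT entry_title, entry_text, date(entry_created_on) FROM mt_entry")
    written = []
    for title, body, created in rows:
        fname = "{}-{}.md".format(created, slugify(title))
        with open(os.path.join(outdir, fname), "w") as f:
            f.write('---\ntitle: "{}"\n---\n\n{}\n'.format(title, body))
        written.append(fname)
    return written
```

Committing each generated file separately would then give the per-post git history mentioned at the end of the post.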

Next I had to think about comments. My initial thought was to just ignore them for the moment; they weren’t really working on the MT install that well so it’s not a huge loss. I then decided I should at least see what the options were. Google+ has the ability to embed in your site, so I had a play with that. It worked well enough but I didn’t really want to force commenters into the Google ecosystem. Next up was Disqus, which I’ve seen used in various places. It seems to allow logins via various 3rd parties, can cope with threading and deals with the despamming. It was easy enough to integrate to play with, and while I was doing so I discovered that it could cope with importing comments. So I tweaked my conversion script to generate a WXR based file of the comments. This then imported easily into Disqus (and also I double checked that the export system worked).

I’m sure the use of a third party to handle comments will put some people off, but given the ability to export I’m confident if I really feel like dealing with despamming comments again at some point I can switch to something locally hosted. I do wish it didn’t require Javascript, but again it’s a trade off I’m willing to make at present.

Anyway. Thanks to Tollef for the pointer (and others who made various suggestions). Hopefully I haven’t broken (or produced a slew of “new” posts for) any of the feed readers pointed at my site (but you should update to use feed.xml rather than any of the others - I may remove them in the future once I see usage has died down).

(In the off chance it’s useful to someone else the conversion script I ended up with is available. There’s a built in Jekyll importer that may be a better move, but I liked ending up with a git repository containing a commit for each post.)

Categories: Elsewhere

Jo Shields: mono-project.com Linux packages, January 2015 edition

Wed, 21/01/2015 - 02:26

The latest version of Mono has been released (actually, it happened a week ago, but it took me a while to get all sorts of exciting new features bug-checked and shipshape).

Stable packages

This release covers Mono 3.12, and MonoDevelop 5.7. These are built for all the same targets as last time, with a few caveats (MonoDevelop does not include F# or ASP.NET MVC 4 support). ARM packages will be added in a few weeks’ time, when I get the new ARM build farm working at Xamarin’s Boston office.

Ahead-of-time support

This probably seems silly since upstream Mono has included it for years, but Mono on Debian has never shipped with AOT’d mscorlib.dll or mcs.exe, for awkward package-management reasons. Mono 3.12 fixes this, and will AOT these assemblies – optimized for your computer – on installation. If you can suggest any other assemblies to add to the list, we now support a simple manifest structure so any assembly can be arbitrarily AOT’d on installation.

Goodbye Mozroots!

I am very pleased to announce that as of this release, Mono users on Linux no longer need to run “mozroots” to get SSL working. A new command, “cert-sync”, has been added to this release, which synchronizes the Mono SSL certificate store against your OS certificate store – and this tool has been integrated into the packaging system for all mono-project.com packages, so it is used automatically. Just make sure the ca-certificates-mono package is installed on Debian/Ubuntu (it’s always bundled on RPM-based distributions) to take advantage! It should be installed by default on fresh installs. If you want to invoke the tool manually (e.g. you installed via make install, not packages) use

cert-sync /path/to/ca-bundle.crt

On Debian systems, that’s

cert-sync /etc/ssl/certs/ca-certificates.crt

and on Red Hat derivatives it’s

cert-sync /etc/pki/tls/certs/ca-bundle.crt

Your distribution might use a different path, if it’s not derived from one of those.

Windows installer back from the dead

Thanks to help from Alex Koeplinger, I’ve brought the Windows installer back from the dead. The last release on the website was for 3.2.3 (it’s actually not this version at all – it’s complicated…), so now the Windows installer has parity with the Linux and OSX versions. The Windows installer (should!) bundles everything the Mac version does – F#, PCL facades, IronWhatever, etc, along with Boehm and SGen builds of the Mono runtime done with Visual Studio 2013.

An EXPERIMENTAL OH MY GOD DON’T USE THIS IN PRODUCTION 64-bit installer is in the works, when I have the time to try and make a 64-build of Gtk#.

Categories: Elsewhere

Dimitri John Ledkov: Python 3 ports of launchpadlib & ubuntu-dev-tools (library) are available

Wed, 21/01/2015 - 01:06
I'm happy to announce that Python 3 ports of launchpadlib & ubuntu-dev-tools (library) are available for consumption.

These are 1.10.3 & 0.155 respectively.

This means that everyone should start porting their reports, tools, and scriptage to python3.

ubuntu-dev-tools has the library portion ported to python3, as I did not dare to switch individual scripts to python3 without thorough interactive testing. Please help out porting those and/or file bug reports against the python3 port. Feel free to subscribe me to the bug reports on launchpad.

For the time being, I believe some things will not be easy to port to python3 because of the elephant in the room - bzrlib. For some things like lp-shell, it should be easy to move away from bzrlib, as only non-vcs things are used there. For other things the current suggestion is to fork out to the bzr binary or a python2 process. I wonder whether a minimal usable python3-bzrlib wrapper around the python2 bzrlib is possible to satisfy the needs of basic and common scripts.
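The "fork out to the bzr binary" idea is straightforward to sketch from python3. The run_tool() helper below is hypothetical, not part of launchpadlib or ubuntu-dev-tools:

```python
import subprocess


def run_tool(args, cwd="."):
    """Run an external command and return its decoded, stripped stdout.

    A python3 script can use this to shell out to the python2-only bzr
    binary instead of importing bzrlib directly.
    """
    out = subprocess.check_output(args, cwd=cwd)
    return out.decode("utf-8").strip()


# For example (requires bzr on the PATH):
#     revno = run_tool(["bzr", "revno", "."])
```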

On a side note, launchpadlib & lazr.restfulclient have out of the box proxy support enabled. This makes things like add-apt-repository work behind networks with such setup. I think a few people will be happy about that.

All of these goodies are available in Ubuntu 15.04 (Vivid Vervet) or Debian Experimental (and/or NEW queue).
Categories: Elsewhere

Jonathan Wiltshire: Never too late for bug-squashing

Tue, 20/01/2015 - 23:20

With over a hundred RC bugs still outstanding for Jessie, there’s never been a better time to host a bug-squashing party in your local area. Here’s how I do it.

  1. At home is fine, if you don’t mind guests. You don’t need to seek out a sponsor and borrow or hire office space. If there isn’t room for couch-surfers, the project can help towards travel and accommodation expenses. My address isn’t secret, but I still don’t announce it – it’s fine to share it only with the attendees once you know who they are.
  2. You need a good work area. There should be room for people to sit and work comfortably – a dining room table and chairs is ideal. It should be quiet and free from distractions. A local mirror is handy, but a good internet connection is essential.
  3. Hungry hackers eat lots of snacks. This past weekend saw five of us get through 15 litres of soft drinks, two loaves of bread, half a kilo of cheese, two litres of soup, 22 bags of crisps, 12 jam tarts, two pints of milk, two packs of chocolate cake bars, and a large bag of biscuits (and we went out for breakfast and supper). Make sure there is plenty available before your attendees arrive, along with a good supply of tea and coffee.
  4. Have a work plan. Pick a shortlist of RC bugs to suit attendees’ strengths, or work on a particular packaging group’s bugs, or have a theme, or something. Make sure there’s a common purpose and you don’t just end up being a bunch of people round a table.
  5. Be an exemplary host. As the host you’re allowed to squash fewer bugs and instead make sure your guests are comfortable, know where the bathroom is, aren’t going hungry, etc. It’s an acceptable trade-off. (The reverse is true: if you’re attending, be an exemplary guest – and don’t spend the party reading news sites.)

Now, go host a BSP of your own, and let’s release!

Never too late for bug-squashing is a post from: jwiltshire.org.uk | Flattr

Categories: Elsewhere

Sven Hoexter: Heads up: possible changes in fonts-lyx

Tue, 20/01/2015 - 21:02

Today the super nice upstream developers of LyX reached out to me (and pelle@), as the former and still part-time lyx package maintainers, to inform us of an ongoing discussion in http://www.lyx.org/trac/ticket/9229. The current approach to fixing this bug might result in a name change of all fonts shipped in fonts-lyx with the next LyX release.

Why is it relevant for people not using LyX?

For some historic reasons beyond my knowledge, the LyX project ships a bunch of math symbol fonts converted to ttf files. They moved from a separate source package into the lyx source package, and are currently delivered via the fonts-lyx package.

Over time a bunch of other packages picked this font package up as a dependency. Among them also rather popular packages like icedove, which results in a rather fancy popcon graph. Drawback as usual is that changes might have a visible impact in places where you do not expect them.

So if you've some clue about fonts, or depend on fonts-lyx in some way, you might want to follow that issue cited above and/or get in contact with the LyX developers.

If you've some spare time, feel also invited to contribute to the lyx packaging in Debian. It really deserves a lot more love than the little it gets today from the brave Nick Andrik, Per and myself.

Categories: Elsewhere

Daniel Pocock: Quantifying the performance of the Microserver

Tue, 20/01/2015 - 20:53

In my earlier blog about choosing a storage controller, I mentioned that the Microserver's on-board AMD SB820M SATA controller doesn't quite let the SSDs perform at their best.

Just how bad is it?

I did run some tests with the fio benchmarking utility.

Let's have a look at those random writes; they simulate the workload of synchronous NFS write operations:

rand-write: (groupid=3, jobs=1): err= 0: pid=1979
  write: io=1024.0MB, bw=22621KB/s, iops=5655, runt= 46355msec

Now compare it to the HP Z800 on my desk, it has the Crucial CT512MX100SSD1 on a built-in LSI SAS 1068E controller:

rand-write: (groupid=3, jobs=1): err= 0: pid=21103
  write: io=1024.0MB, bw=81002KB/s, iops=20250, runt= 12945msec

and then there is the Thinkpad with OCZ-NOCTI mSATA SSD:

rand-write: (groupid=3, jobs=1): err= 0: pid=30185
  write: io=1024.0MB, bw=106088KB/s, iops=26522, runt= 9884msec

That's right, the HP workstation is four times faster than the Microserver, but the Thinkpad whips both of them.
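As a quick sanity check, the bandwidth and IOPS figures above agree with each other if fio was issuing 4 KB random writes (I'm inferring the block size; it isn't shown in the snippets):

```python
# (bw in KB/s, iops) pairs from the three fio runs above
results = {
    "microserver": (22621, 5655),
    "z800": (81002, 20250),
    "thinkpad": (106088, 26522),
}

for name, (bw, iops) in results.items():
    # bw divided by 4 KB per operation should reproduce the reported IOPS
    assert abs(bw / 4 - iops) < 1, name

speedup = results["z800"][1] / results["microserver"][1]
print("Z800 vs Microserver: {:.1f}x".format(speedup))
```

20250 / 5655 is about 3.6, so "four times faster" is a fair round number.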

I don't know how much I can expect of the PCI bus in the Microserver but I suspect that any storage controller will help me get some gain here.

Categories: Elsewhere

Sven Hoexter: python-ipcalc bumped from 0.3 to 1.1.3

Tue, 20/01/2015 - 20:16

I've helped a friend to get started with Debian packaging and he has now adopted python-ipcalc. Since I've no prior experience with packaging of Python modules and there were five years of upstream development in between, I've uploaded to experimental to give it some exposure.

So if you still use the python-ipcalc package, which is part of all current Debian releases and the upcoming jessie release, please check out the package from experimental. I think the only reverse dependency within Debian is sshfp, which of course also requires some testing.

Categories: Elsewhere

Raphael Geissert: Edit Debian, with iceweasel

Tue, 20/01/2015 - 08:00
Soon after publishing the chromium/chrome extension that allows you to edit Debian online, Moez Bouhlel sent a pull request to the extension's git repository: all the changes needed to make a firefox extension!

After another session of browser extensions discovery, I merged the commits and generated the xpi. So now you can go download the Debian online editing firefox extension and hack the world, the Debian world.

Install it and start contributing to Debian from your browser. There's no excuse now.

Categories: Elsewhere

Daniel Pocock: jSMPP project update, 2.1.1 and 2.2.1 releases

Mon, 19/01/2015 - 22:29

The jSMPP project on Github stopped processing pull requests over a year ago and appeared to be needing some help.

I've recently started hosting it under https://github.com/opentelecoms-org/jsmpp and tried to merge some of the backlog of pull requests myself.

There have been new releases:

  • 2.1.1 works in any project already using 2.1.0. It introduces bug fixes only.
  • 2.2.1 introduces some new features, API changes, and bigger bug fixes.

The new versions are easily accessible for Maven users through the central repository service.

Apache Camel has already updated to use 2.1.1.

Thanks to all those people who have contributed to this project throughout its history.

Categories: Elsewhere

Daniel Pocock: Storage controllers for small Linux NFS networks

Mon, 19/01/2015 - 14:59

While contemplating the disk capacity upgrade for my Microserver at home, I've also been thinking about adding a proper storage controller.

Currently I just use the built-in controller in the Microserver. It is an AMD SB820M SATA controller. It is a bottleneck for the SSD IOPS.

On the disks, I prefer to use software RAID (such as md or BtrFs) and not become dependent on the metadata format of any specific RAID controller. The RAID controllers don't offer the checksumming feature that is available in BtrFs and ZFS.

The use case is NFS for a small number of workstations. NFS synchronous writes block the client while the server ensures data really goes onto the disk. This creates a performance bottleneck. It is actually slower than if clients are writing directly to their local disks through the local OS caches.

SSDs on an NFS server offer some benefit because they can complete write operations more quickly and the NFS server can then tell the client the operation is complete. The more performant solution (albeit with a slight risk of data corruption) is to use a storage controller with a non-volatile (battery-backed or flash-protected) write cache.
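The effect is easy to observe locally. This toy benchmark (my own illustration, not from the post) compares buffered writes against writes that wait for the disk via fsync(), which is roughly what an NFS server must do for each synchronous write before replying to the client:

```python
import os
import tempfile
import time


def write_blocks(path, n, sync):
    """Write n 4 KB blocks; with sync=True, fsync() after each one."""
    buf = b"x" * 4096
    t0 = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(n):
            f.write(buf)
            if sync:
                f.flush()
                os.fsync(f.fileno())
    return time.monotonic() - t0


with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "testfile")
    buffered = write_blocks(path, 200, sync=False)
    synced = write_blocks(path, 200, sync=True)
    print("buffered: {:.4f}s  fsync each block: {:.4f}s".format(buffered, synced))
```

On spinning disks the fsync variant is typically orders of magnitude slower; a non-volatile write cache lets the controller acknowledge the fsync before the data reaches the platters.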

Many RAID controllers have non-volatile write caches. Some online discussions of BtrFs and ZFS have suggested staying away from full RAID controllers though, amongst other things, to avoid the complexities of RAID controllers adding their metadata to the drives.

This brings me to the first challenge though: are there suitable storage controllers that have a non-volatile write cache but without having other RAID features?

Or a second possibility: out of the various RAID controllers that are available, do any provide first-class JBOD support?

Observations

I looked at specs and documentation for various RAID controllers and identified some of the following challenges:

Next steps

Are there other options to look at, for example, alternatives to NFS?

If I just add in a non-RAID HBA to enable faster IO to the SSDs will this be enough to make a noticeable difference on the small number of NFS clients I'm using?

Or is it inevitable that I will have to go with one of the solutions that involves putting a vendor's volume metadata onto JBOD volumes? If I do go that way, which of the vendors' metadata formats are most likely to be recognized by free software utilities in the future if I ever connect the disk to a generic non-RAID HBA?

Thanks to all those people who provided comments about choosing drives for this type of NAS usage.

Related reading
Categories: Elsewhere

Jonathan Wiltshire: Alcester BSP, day three

Sun, 18/01/2015 - 23:27

We have had a rather more successful weekend than I feared, as you can see from our log on the wiki page. Steve reproduced and wrote patches for several installer/bootloader bugs, and Neil and I spent significant time in a maze of twisty zope packages (we managed to provide more diagnostics on the bug, even if we couldn't resolve it). Ben and Adam have ploughed through a mixture of bugs and maintenance work.

I wrongly assumed we would only be able to touch a handful of bugs, since they are now mostly quite difficult, so it was rather pleasant to recap our progress this evening and see that it’s not all bad after all.

Alcester BSP, day three is a post from: jwiltshire.org.uk | Flattr

Categories: Elsewhere

Gregor Herrmann: RC bugs 2014/51-2015/03

Sun, 18/01/2015 - 22:41

I have to admit that I was a bit lazy when it comes to working on RC bugs in the last weeks. here's my not-so-stellar summary:

  • #729220 – pdl: "pdl: problems upgrading from wheezy due to triggers"
    investigate (unsuccessfully), later fixed by maintainer
  • #772868 – gxine: "gxine: Trigger cycle causes dpkg to fail processing"
    switch trigger from "interest" to "interest-noawait", upload to DELAYED/2
  • #774584 – rtpproxy: "rtpproxy: Deamon does not start as init script points to wrong executable path"
    adjust path in init script, upload to DELAYED/2
  • #774791 – src:xine-ui: "xine-ui: Creates dpkg trigger cycle via libxine2-ffmpeg, libxine2-misc-plugins or libxine2-x"
    add trigger patch from Michael Gilbert, upload to DELAYED/2
  • #774862 – ciderwebmail: "ciderwebmail: unhandled symlink to directory conversion: /usr/share/ciderwebmail/root/static/images/mimeicons"
    use dpkg-maintscript-helper to fix symlink_to_dir conversion (pkg-perl)
  • #774867 – lirc-x: "lirc-x: unhandled symlink to directory conversion: /usr/share/doc/PACKAGE"
    use dpkg-maintscript-helper to fix symlink_to_dir conversion, upload to DELAYED/2
  • #775640 – src:libarchive-zip-perl: "libarchive-zip-perl: FTBFS in jessie: Tests failures"
    start to investigate (pkg-perl)
Categories: Elsewhere

Mark Brown: Heating the Internet of Things

Sun, 18/01/2015 - 22:23

Internet of Things seems to be trendy these days: people like the shiny apps for controlling things, and there are typically claims that the devices will perform better than their predecessors by offloading things to the cloud. But this makes some people worry about potential security issues, and it's not always clear that internet usage actually delivers benefits over something local. One of the more widely deployed applications is smart thermostats for central heating, which is something I've been playing with. I'm using Tado; there are also at least Nest and Hive who do similar things, all relying on being connected to the internet for operation.

The main thing I've noticed is that the temperature regulation in my flat is better. My previous thermostat allowed the temperature to vary by a couple of degrees around the target in winter, which got noticeable; with this one the temperature generally seems to vary by a fraction of a degree at most. That does use the internet connection to get the outside temperature, though I'm fairly sure that most of the improvement is just a better algorithm (the thermostat monitors how quickly the flat heats up when heating and uses this to decide when to turn off, rather than waiting for the temperature to hit the target and then seeing it rise further as the radiators cool down) and performance would still be substantially improved without it.
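That anticipation logic is simple to sketch. This is a toy model of my own with made-up numbers, not Tado's actual algorithm:

```python
def should_heat(current, target, heating_rate, coast_minutes=10):
    """Keep heating only while the expected coasting overshoot stays
    below the target.

    heating_rate is in degrees per minute, measured while heating;
    coast_minutes approximates residual radiator heat after switch-off.
    """
    return current + heating_rate * coast_minutes < target


# A naive thermostat heats until current >= target and then overshoots;
# this one switches off early so the radiators coast up to the target.
assert should_heat(18.0, 21.0, 0.05)       # far from target: keep heating
assert not should_heat(20.8, 21.0, 0.05)   # coasting covers the last 0.2 degrees
```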

The other thing that these systems deliver which does benefit much more from the internet connection is that it’s easy to control them remotely. This in turn makes it a lot easier to do things like turn the heating off when it’s not needed – you can do it remotely, and you can turn the heating back on without being in the flat so that you don’t need to remember to turn it off before you leave or come home to a cold building. The smarter ones do this automatically based on location detection from smartphones so you don’t need to think about it.

For example, when I started this post I was sitting in a coffee shop, so the heating had been turned off based on me taking my phone with me, and as a result the temperature had gone down a bit. By the time I got home the flat was back up to normal temperature, all without any meaningful intervention or visible difference on my part. This is particularly attractive for me given that I work from home – I can't easily set a schedule to turn the heating off during the day like someone who works in an office, so the heating would be on a lot of the time. Tado and Nest will to varying extents try to do this automatically; I don't know about Hive. The Tado one at least works very well; I can't speak to the others.

I’ve not had a bill for a full winter yet but I’m fairly sure looking at the meter that between the two features I’m saving a substantial amount of energy (and hence money and/or the environment depending on what you care about) and I’m also seeing a more constant temperature within the flat, my guess would be that most of the saving is coming from the heating being turned off when I leave the flat. For me at least this means that having the thermostat internet connected is worthwhile.

Categories: Elsewhere

Dirk Eddelbuettel: Running UBSAN tests via clang with Rocker

Sun, 18/01/2015 - 18:37

Every now and then we get reports from CRAN about our packages failing a test there. A challenging one concerns UBSAN, or Undefined Behaviour Sanitizer. For background on UBSAN, see this RedHat blog post for gcc and this one from LLVM about clang.

I had written briefly about this before in a blog post introducing the sanitizers package for tests, as well as the corresponding package page for sanitizers, which clearly predates our follow-up Rocker.org repo / project described in this initial announcement and when we became the official R container for Docker.

Rocker had support for SAN testing, but UBSAN was not working yet. So following a recent CRAN report against our RcppAnnoy package, I was unable to replicate the error and asked for help on r-devel in this thread.

Martyn Plummer and Jan van der Laan kindly sent their configurations in the same thread and off-list; Jeff Horner did so too following an initial tweet offering help. None of these worked for me, but further trials eventually led me to the (already mentioned above) RedHat blog post with its mention of -fno-sanitize-recover to actually have an error abort a test. Which, coupled with the settings used by Martyn, were what worked for me: clang-3.5 -fsanitize=undefined -fno-sanitize=float-divide-by-zero,vptr,function -fno-sanitize-recover.

This is now part of the updated Dockerfile of the R-devel-SAN-Clang repo behind the r-devel-ubsan-clang container. It contains these settings, as well as a new support script check.r for littler---which enables testing right out of the box.

Here is a complete example:

docker                              # run Docker (any recent version, I use 1.2.0)
  run                               # launch a container
  --rm                              # remove Docker temporary objects when done
  -ti                               # use a terminal and interactive mode
  -v $(pwd):/mnt                    # mount the current dir. as /mnt in the container
  rocker/r-devel-ubsan-clang        # using the rocker/r-devel-ubsan-clang container
  check.r                           # launch the check.r command from littler (in container)
  --setwd /mnt                      # with a setwd() to the /mnt directory
  --install-deps                    # installing all package dependencies before the test
  RcppAnnoy_0.0.5.tar.gz            # and test this tarball

I know. It's a mouthful. But it really is merely the standard practice of running Docker to launch a single command. And while I frequently make this the /bin/bash command (hence the -ti options I always use) to work and explore interactively, here we do one better thanks to the (pretty useful so far) check.r script I wrote over the last two days.

check.r does about the same as R CMD check. If you look inside R CMD check you will see a call to a (non-exported) function from the (R base-internal) tools package. We call the same function here. But to make things more interesting we also first install the package we test to really ensure we have all build-dependencies from CRAN met. (And we plan to extend check.r to support additional apt-get calls in case other libraries etc are needed.) We use the dependencies=TRUE option to have R smartly install Suggests: as well, but only one level deep (see help(install.packages) for details). With that prerequisite out of the way, the test can proceed as if we had done R CMD check (and an additional R CMD INSTALL as well). The result for this (known-bad) package:

edd@max:~/git$ docker run --rm -ti -v $(pwd):/mnt rocker/r-devel-ubsan-clang check.r --setwd /mnt --install-deps RcppAnnoy_0.0.5.tar.gz
also installing the dependencies ‘Rcpp’, ‘BH’, ‘RUnit’

trying URL 'http://cran.rstudio.com/src/contrib/Rcpp_0.11.3.tar.gz'
Content type 'application/x-gzip' length 2169583 bytes (2.1 MB)
opened URL
==================================================
downloaded 2.1 MB

trying URL 'http://cran.rstudio.com/src/contrib/BH_1.55.0-3.tar.gz'
Content type 'application/x-gzip' length 7860141 bytes (7.5 MB)
opened URL
==================================================
downloaded 7.5 MB

trying URL 'http://cran.rstudio.com/src/contrib/RUnit_0.4.28.tar.gz'
Content type 'application/x-gzip' length 322486 bytes (314 KB)
opened URL
==================================================
downloaded 314 KB

trying URL 'http://cran.rstudio.com/src/contrib/RcppAnnoy_0.0.4.tar.gz'
Content type 'application/x-gzip' length 25777 bytes (25 KB)
opened URL
==================================================
downloaded 25 KB

* installing *source* package ‘Rcpp’ ...
** package ‘Rcpp’ successfully unpacked and MD5 sums checked
** libs
clang++-3.5 -fsanitize=undefined -fno-sanitize=float-divide-by-zero,vptr,function -fno-sanitize-recover -I/usr/local/lib/R/include -DNDEBUG -I../inst/include/ -I/usr/local/include -fpic -pipe -Wall -pedantic -g -c Date.cpp -o Date.o
[...]
* checking examples ... OK
* checking for unstated dependencies in ‘tests’ ... OK
* checking tests ...
  Running ‘runUnitTests.R’ ERROR
Running the tests in ‘tests/runUnitTests.R’ failed.
Last 13 lines of output:
  + if (getErrors(tests)$nFail > 0) {
  +   stop("TEST FAILED!")
  + }
  + if (getErrors(tests)$nErr > 0) {
  +   stop("TEST HAD ERRORS!")
  + }
  + if (getErrors(tests)$nTestFunc < 1) {
  +   stop("NO TEST FUNCTIONS RUN!")
  + }
  + }
  Executing test function test01getNNsByVector ...
  ../inst/include/annoylib.h:532:40: runtime error: index 3 out of bounds for type 'int const[2]'
* checking PDF version of manual ... OK
* DONE
Status: 1 ERROR, 2 WARNINGs, 1 NOTE
See ‘/tmp/RcppAnnoy/..Rcheck/00check.log’ for details.
root@a7687c014e55:/tmp/RcppAnnoy#

The log shows that thanks to check.r, we first download and then install the required packages Rcpp, BH, RUnit and RcppAnnoy itself (in the CRAN release). Rcpp is installed first; we then cut out the middle until we get to ... the failure we set out to confirm.

Now having a tool to confirm the error, we can work on improved code.

One such fix currently under inspection in a non-release version 0.0.5.1 then passes with the exact same invocation (but pointing at RcppAnnoy_0.0.5.1.tar.gz):

edd@max:~/git$ docker run --rm -ti -v $(pwd):/mnt rocker/r-devel-ubsan-clang check.r --setwd /mnt --install-deps RcppAnnoy_0.0.5.1.tar.gz
also installing the dependencies ‘Rcpp’, ‘BH’, ‘RUnit’
[...]
* checking examples ... OK
* checking for unstated dependencies in ‘tests’ ... OK
* checking tests ...
  Running ‘runUnitTests.R’ OK
* checking PDF version of manual ... OK
* DONE
Status: 1 WARNING
See ‘/mnt/RcppAnnoy.Rcheck/00check.log’ for details.
edd@max:~/git$

This proceeds the same way, from the same pristine, clean container for testing. It first installs the four required packages, and then proceeds to test the new and improved tarball, which now passes the test that failed above. Good.

So we now have an "appliance" container anybody can download for free from the Docker Hub, and deploy as we did here in order to have a fully automated, one-command setup for testing for UBSAN errors.

UBSAN is a very powerful tool. We are only beginning to deploy it. There are many more useful configuration settings. I would love to hear from anyone who would like to work on building this out via the R-devel-SAN-Clang GitHub repo. Improvements to the littler scripts are similarly welcome (and I plan on releasing an updated littler package "soon").

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: Elsewhere

EvolvisForge blog: Debian/m68k hacking weekend cleanup

Sun, 18/01/2015 - 17:16

OK, time to clean up ↳ tarent so people can work again tomorrow.

Not much to clean though (the participants were nice and cleaned up after themselves ☺), so it’s mostly putting stuff back to where it belongs. Oh, and drinking more of the cool Belgian beer Geert (Linux upstream) brought ☻

We were productive, reporting and fixing kernel bugs, fixing hardware, swapping and partitioning discs, upgrading software, getting buildds (mostly Amiga) back to work, trying X11 (kdrive) on a bare metal Atari Falcon (and finding a window manager that works with it), etc. – I hope someone else writes a report; for now we have a photo and a screenshot (made with trusty xwd). Watch the debian-68k mailing list archives for things to come.

I think that, issues with electric cars aside, everyone liked the food places too ;-)

Categories: Elsewhere

Andreas Metzler: Another new toy

Sun, 18/01/2015 - 16:35

Given that snow is still a little sparse for snowboarding and the weather could be improved on, I have made myself a late Christmas present:

It is a rather sporty rodel (Torggler TS 120 Tourenrodel Spezial 2014/15, 9kg weight, with fast (non stainless) "racing rails" and 22° angle of the runners) but not a competition model. I wish I had bought this years ago. It is a lot more comfortable than a classic sled ("Davoser Schlitten"), since one is sitting in instead of on top of the sled, somewhat like in a hammock. Being able to steer without putting a foot into the snow has the nice side effect that the snow stays on the ground instead of ending up in my face. Obviously it is also faster, which is a huge improvement even for recreational riding, since it makes the difference between riding the sledge and pulling it on flattish stretches. Strongly recommended.

FWIW I ordered this via rodelfuehrer.de (they started with a guidebook of luge tracks, which translates to "Rodelführer"), where I would happily order again.

Categories: Elsewhere

Chris Lamb: Adjusting a backing track with SoX

Sun, 18/01/2015 - 13:28

Earlier today I came across some classical sheet music that included a "playalong" CD, just like a regular recording except it omits the solo cello part. After a quick listen it became clear there were two problems:

  • The recording was made at A=442, rather than the more standard A=440.
  • The tempi of the movements were not to my taste, either too fast or too slow.

SoX, the "Swiss Army knife of sound processing programs", can easily adjust the latter, but to remedy the former it must be provided with a dimensionless "cent" unit—i.e. 1/100th of a semitone—rather than the 442Hz and 440Hz reference frequencies.

First, we calculate the cent difference: 1200 × log2(442/440) ≈ 7.85 cents.
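A quick way to check the value from the shell (Python 3 assumed to be available):

```shell
# 1200 * log2(f1/f2) gives the offset in cents between two reference
# frequencies; for A=442 vs A=440 this is ~7.85 cents.
python3 -c 'import math; print(round(1200 * math.log2(442/440), 2))'
# → 7.85
```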

Next, we rip the material from the CD:

$ sudo apt-get install ripit flac
[..]
$ ripit --coder 2 --eject --nointeraction
[..]

And finally we adjust the tempo and pitch:

$ apt-get install sox libsox-fmt-mp3
[..]
$ sox 01.flac 01.mp3 pitch -7.85 tempo 1.00 # (Tuning notes)
$ sox 02.flac 02.mp3 pitch -7.85 tempo 0.95 # Too fast!
$ sox 03.flac 03.mp3 pitch -7.85 tempo 1.01 # Close..
$ sox 04.flac 04.mp3 pitch -7.85 tempo 1.03 # Too slow!
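The per-track invocations above could also be batched in a small loop; a dry-run sketch (it prints the commands rather than running them, since the tempo factors are per-movement judgement calls):

```shell
# Dry run: emit one sox command per movement, with the constant
# -7.85 cent pitch correction and a per-movement tempo factor
# (values copied from above). Drop the "echo" to actually convert.
pitch="-7.85"
for spec in "01 1.00" "02 0.95" "03 1.01" "04 1.03"; do
    set -- $spec                    # $1 = track number, $2 = tempo factor
    echo sox "$1.flac" "$1.mp3" pitch "$pitch" tempo "$2"
done
```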

(I'm converting to MP3 at the same time, as it'll be more convenient on my phone.)

Categories: Elsewhere

Ian Campbell: Using Grub 2 as a bootloader for Xen PV guests on Debian Jessie

Sun, 18/01/2015 - 10:23

I recently wrote a blog post on using grub 2 as a Xen PV bootloader for work. See Using Grub 2 as a bootloader for Xen PV guests over on https://blog.xenproject.org.

Rather than repeat the whole thing here I'll just briefly cover the stuff which is of interest for Debian users (if you want all full background and the stuff on building grub from source etc then see the original post).

TL;DR: With Jessie, install grub-xen-host in your domain 0 and grub-xen in your PV guests. Then, in your guest configuration, write either of the following, depending on whether you want a 32- or 64-bit PV guest:

kernel = "/usr/lib/grub-xen/grub-i386-xen.bin"

or

kernel = "/usr/lib/grub-xen/grub-x86_64-xen.bin"

(instead of bootloader = ... or any other kernel = ...; also omit ramdisk = ... and any command-line related stuff such as root = ..., extra = ..., or cmdline = ...) and your guests will boot using Grub 2, much like on native.
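Putting this together, a minimal guest configuration might look like the sketch below; only the kernel= line comes from the instructions above, while the name, memory, disk and vif values are illustrative placeholders for your own setup:

```shell
# Write a hypothetical xl config for a 64-bit PV guest booting via
# pvgrub2; everything except kernel= is a placeholder assumption.
cat > jessie-guest.cfg <<'EOF'
name   = "jessie-guest"
memory = 1024
vcpus  = 2
kernel = "/usr/lib/grub-xen/grub-x86_64-xen.bin"
disk   = [ "phy:/dev/vg0/jessie-guest,xvda,w" ]
vif    = [ "bridge=xenbr0" ]
EOF
# Note: no bootloader=, ramdisk=, root=, extra= or cmdline= lines.
```

The guest would then be started as usual (e.g. xl create jessie-guest.cfg), and pvgrub2 picks up the guest's own grub.cfg much as grub does on native hardware.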

In slightly more detail:

The forthcoming Debian 8.0 (Jessie) release will contain support for both host and guest pvgrub2. This was added in version 2.02~beta2-17 of the package (bits were present before then, but -17 ties it all together).

The package grub-xen-host contains grub binaries configured for the host; these will attempt to chainload an in-guest grub image (following the Xen x86 PV Bootloader Protocol) and fall back to searching for a grub.cfg in the guest filesystems. grub-xen-host is Recommended by the Xen meta-packages in Debian or can be installed by hand.

The package grub-xen-bin contains the grub binaries for both the i386-xen and x86_64-xen platforms, while the grub-xen package integrates this into the running system by providing the actual pvgrub2 image (i.e. running grub-install at the appropriate times to create an image tailored to the system) and integrating with the kernel packages (i.e. running update-grub at the right times), so grub-xen is the package which should be installed in Debian guests.

At this time the grub-xen package is not installed in a guest automatically so it will need to be done manually (something which perhaps could be addressed for Stretch).

Categories: Elsewhere
